Theresa May wants to tackle online extremism. Here’s how to do it | Charles Arthur

‘Facebook and Google benefit through having really engaged viewers; unfortunately, sometimes those viewers are looking at terrorist content.’ Photograph: Saul Loeb/AFP/Getty Images

Something must be done. In particular, something must be done about Facebook and YouTube (and to a lesser extent Twitter). That has become the reflexive rallying cry of UK ministers whenever there is a terrorist attack, or when one is thwarted. Theresa May will tell G7 leaders as much at a summit in Sicily today.

On the same day, the security minister, Ben Wallace, told the BBC’s Today programme: “We think they could all do more. We need to have the tools to make them, where we need to, remove material quicker.”

Wallace also implied that governments find the encryption used in messaging apps such as WhatsApp frustrating – that it makes it “almost impossible for us to actually lift the lid on these people” – but that argument would have more force if we didn’t know that the Manchester killer had been reported to the police multiple times by people worried about his attitude. The lid had already been lifted, if only someone had had the time to investigate. Ministers complaining about encryption are wishing for a genie to go back in the bottle. It won’t.

It’s the role of Facebook and Google-owned YouTube, though, that is in the spotlight just now. (Facebook owns WhatsApp, which it bought in 2014 for $19bn.) It’s hard to disagree that the content on their social networks plays a role in leading vulnerable people into patterns of thinking that aren’t good for society.

We all know about the rows over Facebook and “fake news” during the Brexit referendum and the US election. There are questions over precisely what role Facebook will be found to have played in the imminent UK election. But Facebook and YouTube also have an effect more generally.

Jamie Bartlett’s excellent book The Dark Net points out that you can pick any rabbit hole of human behaviour – anorexia encouragement, self-harm, obesity – and you’ll find like-minded groups coalescing around a few locations online that will not only encourage that behaviour but make it seem quite normal, simply by existing. And both Facebook and Google (and especially YouTube) reinforce the spiral with algorithms designed to maximise the time people spend on the site seeing adverts, rather than to protect their users’ mental health.

The problem with extremist propaganda online is exactly the same: groups form and tell each other that what they’re doing is normal. The only thing that differs from an anorexia encouragement group is the outcome.

To Facebook and Google, those outcomes are what an economist would call an “externality” – something that arises from their actions, but which they don’t pay for. If you were designing a social network or a video site to minimise the chances of people becoming radicalised, you wouldn’t deploy the algorithms currently in use, which can go from Sesame Street to footage of someone giving birth in a few clicks; the “recommended videos” and autoplay elements of YouTube are both intended to make you spend more time on the site, seeing more ads. Facebook, too, keeps optimising its news feed to “engage” you more, while it also slips in more ads.
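To make that incentive concrete, here is a deliberately crude, hypothetical sketch – not YouTube’s or Facebook’s actual code, and with every field, weight and number invented for illustration – of what ranking “for engagement” looks like. Notice that nothing in the objective asks whether an item is harmful; it only asks how long it will keep you watching.

```python
# Toy illustration only: a hypothetical engagement-maximising ranker.
# The item fields, numbers and scoring formula are invented for this sketch;
# real recommendation systems are vastly more complex, but share the shape
# of the objective: rank by predicted engagement, not by societal benefit.

from dataclasses import dataclass

@dataclass
class Candidate:
    title: str
    predicted_click_probability: float  # 0.0 - 1.0
    predicted_watch_minutes: float      # expected minutes viewed

def engagement_score(c: Candidate) -> float:
    # The objective is time-on-site and ad exposure; nothing here
    # measures whether the content radicalises or harms the viewer.
    return c.predicted_click_probability * c.predicted_watch_minutes

def recommend(candidates: list[Candidate], k: int = 3) -> list[Candidate]:
    # Autoplay / "recommended videos": surface whatever keeps you watching longest.
    return sorted(candidates, key=engagement_score, reverse=True)[:k]

if __name__ == "__main__":
    feed = [
        Candidate("Nursery rhymes compilation", 0.30, 4.0),
        Candidate("Mildly conspiratorial 'explainer'", 0.45, 12.0),
        Candidate("Extreme content flagged by moderators", 0.20, 25.0),
    ]
    for item in recommend(feed):
        print(f"{engagement_score(item):6.2f}  {item.title}")
```

Putting societal benefit “slightly ahead of maximising profit”, as argued below, would mean subtracting some penalty for predicted harm from that score – a small change to the objective, but one that costs watch time and ad revenue, which is precisely why it doesn’t happen by default.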

The need to make a profit by monetising people’s time has created a perverse incentive for these networks. They benefit through having really engaged viewers; unfortunately, sometimes those viewers are looking at terrorist content. And then everyone has a problem.

Google’s YouTube was hit by advertiser withdrawals earlier this year as that connection came into sharp focus – even though the Guardian had pointed out the problem back in 2012. It’s only when their bottom line is threatened that these companies will act, but removing content isn’t necessarily feasible. The testimony of low-paid Facebook moderators (one said: “You’d go into work at 9am every morning, turn on your computer and watch someone having their head cut off”) demonstrates how difficult it is to moderate the content generated by a billion users every day. If even a fraction of a per cent is overlooked, that amounts to thousands of pieces of content every day.

Wallace and Theresa May might think the easy solution is just to censor YouTube or Facebook, or block WhatsApp at the border; but they should consider which other countries do this: Turkey, Pakistan, China. Do we want to be anything like them?

It’s not even as if censorship has prevented terrorism in any of those countries. (China suppresses news about it, but it happens all the same.) A simpler threat might be to withdraw government advertising from those sites, and then demand oversight of where adverts appear. It would gum up the process terribly, and you’d quickly see some change.

And what really needs to change? The algorithms – the code that chooses what video or what content we will see next. It’s no good pretending that it doesn’t affect what people think of the world they live in. If nobody were ever recommended those videos, Wallace’s complaints about the need to remove radicalising content more quickly would carry far less weight. Facebook and Google have to accept that they influence the world, for good and bad; the next step is to put societal benefit slightly ahead of maximising profit. We live in a world ruled by recommendation algorithms. It’s time to make those rulers more beneficent.