Whose job is it to stop the livestreaming of mass murder?

Photograph: Akhtar Soomro/Reuters

When a soldier in Thailand killed 29 people and injured more than 50 others last weekend, his bloody rampage was reportedly broadcast live to Facebook for almost five hours before it was taken down.

The attack happened almost a year after the Christchurch shooter livestreamed 17 minutes of his attack on two mosques that left 51 people dead and 50 injured.


The latest incident has revived questions about who should be responsible for removing harmful content from the internet: the networks that host the content, the companies that protect those networks, or governments of the countries where the content is viewed.

Australia’s communications minister, Paul Fletcher, wrote in an opinion piece this week that it was “frankly pretty surprising that a government needs to request that measures be in place to protect against the livestreaming of murder”.

Australia is preparing to introduce an online safety act, which will create rules around terrorist-related material, as well as cyber-abuse, image-based abuse and other kinds of harmful content.

But while the question of whether to take down a livestream of murder is an obvious one, decisions about other kinds of take-down requests can be fraught.

“Some of those requests are kind of scary,” says John Graham-Cumming, the chief technology officer of US web security company Cloudflare. “In Spain you have Catalonia trying to be independent, and the Spanish government saying ‘that is sedition, can you remove it?’”

‘I woke up and decided to kick them off the internet’

Cloudflare doesn’t host content itself; rather, it protects sites that do from distributed denial of service (DDoS) attacks that could take them offline. Yet Cloudflare has found itself at the centre of debates about what sort of content is acceptable online, and whether tech companies should be making those decisions.

After American woman Heather Heyer was killed in 2017 while counter-protesting a Nazi rally in Charlottesville, Virginia, Cloudflare came under pressure to stop providing protection for the neo-Nazi website the Daily Stormer.

Cloudflare CEO Matthew Prince ultimately pulled protection for the website, but not without reflecting on the implications of his decision.

Let me be clear: this was an arbitrary decision. It was different than what I’d talked with our senior team about yesterday. I woke up this morning in a bad mood and decided to kick them off the Internet … It was a decision I could make because I’m the CEO of a major Internet infrastructure company.

Having made that decision, we now need to talk about why it is so dangerous. I’ll be posting something on our blog later today. Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet. No one should have that power.

After the El Paso shooting in 2019, Cloudflare again debated whether to stop providing services to 8chan, the forum where the perpetrators of both the El Paso and Christchurch shootings had posted about their plans. Again the company decided to accede to demands that it cut ties, and 8chan was taken offline.

Six months on, Graham-Cumming tells Guardian Australia the company would like to see a legal framework in each jurisdiction that sets out what a company’s obligations are – particularly for companies that don’t host the content themselves.

Countries are beginning to legislate

After the Christchurch shooting, Australia quickly passed laws that could result in company executives being jailed for three years, and the companies fined up to 10% of global revenue, for failing to quickly remove material when alerted by the eSafety commissioner.

In the UK, the government plans to give the regulator Ofcom the power to fine social media companies that fail to remove harmful content.

The online safety act the Australian government is consulting on will give the eSafety commissioner the power to:

  • direct internet providers to block domains containing terrorism material “in an online crisis event”

  • ask search engine providers to de-rank websites that provide access to harmful material

  • force sites to remove cyber abuse or image-based abuse of adults within 24 hours

It will also allow the minister to set, via legislative instrument, a set of online safety expectations that social media companies will need to comply with.

While this will make things clearer for tech companies, it doesn’t spell the end of their headaches.

Global internet versus local policing

As Graham-Cumming points out, once one government has a law in place, other governments can make similar demands.

“If the law in Australia says we have to hand over all our [encryption] keys then, for example, China or Saudi Arabia or Russia or Brazil or India or Germany could say ‘well you did it for Australia, how are we different from Australia?’” he said.

“There is this tension between this sense of global internet, and then local policing.”


Graham-Cumming says the world is still getting to grips with what role tech companies should play in determining what should be allowed online.

“We are in the middle of this massive change in the world where everything has gone online – good and bad – and as a society and as governments [we] don’t yet know what the answer is,” he says.

“I think what has happened is some quarter of the public is saying to technology companies: ‘you decide for me’. And that’s an unusual situation where private companies are being asked to make public policy like that.”