Facebook’s chilling revelations show users can’t police the social network

People pay tribute to the victims of the New Zealand mosque attack at an interfaith vigil attended by hundreds of people at Finsbury Park on March 15, 2019 in London, England. (Photo by Guy Smallman / Getty Images)

Facebook revealed Tuesday that roughly 200 people watched a gunman’s live video of a New Zealand mosque attack last week but failed to report it to the social network.

The first user report of the video came 29 minutes after the broadcast began, and 12 minutes after the live stream ended. Meanwhile, the thousands of content moderators Facebook employs as contractors also failed to flag the video while the terror was still unfolding.

Facebook’s assessment, made in the wake of attacks that killed 50 people, points to the limited tools at the company’s disposal and to the inherent flaws of a content moderation system that depends on users and contract workers.

“Why would 200 people view that and not report it? I think one of the challenges here is, as a society, where does the responsibility lie,” said Adam Hadley, director at Tech Against Terrorism, a UN-affiliated organization that works to tackle terrorist use of the internet. “To what extent are users responsible for their own behavior?”

Increased backlash

Social media companies have grappled with that question for years in the face of increased backlash over their inability to keep up with offensive content posted online. In Facebook’s case, the spread of misinformation on its WhatsApp messaging service led to the brutal murder of dozens of people in India last year, prompting the company to limit the number of people and groups users can forward a message to. In Myanmar, the platform was forced to admit its security policies were insufficient after a human rights group concluded that military officials had used Facebook to incite genocide against the Rohingya Muslim minority, violence that led to the largest forced human migration in recent history.

While Facebook has since put some 30,000 people, many of them content moderators, to work on safety and security, and vowed to spend $3.7 billion on the effort, the carefully orchestrated way the shooter disseminated his attack in Christchurch, New Zealand, last week points to persistent flaws in the platform’s content-removal system.

The shooter teased the horror on the message board 8chan and posted links to a 74-page manifesto on Twitter. By the time Facebook scrambled to remove the gruesome content, the video had been viewed 4,000 times and copied onto countless other platforms. In a statement, Chris Sonderby, Facebook’s vice president and deputy general counsel, said the platform removed 1.5 million videos of the attack globally in the first 24 hours alone.

Ultimately, the live-stream, first broadcast on Facebook, was copied and shared millions of times across other social media platforms and at least one file-sharing site.

YouTube, which has faced its own challenges with violent extremism and disinformation, said uploads related to the shooting came in far more rapidly and in greater volume than after previous mass shootings. In an interview with the Washington Post, YouTube Chief Product Officer Neal Mohan said users were uploading a video related to the attacks every second in the hours following the shooting.

“This was a tragedy that was almost designed for the purpose of going viral,” Mohan told the Washington Post. “The incident has shown that, especially in the case of more viral videos like this one, there’s more work to be done.”

Tech Against Terrorism

In the aftermath of the shooting, Hadley’s organization, Tech Against Terrorism, has worked closely with the Global Internet Forum to Counter Terrorism, an organization set up by Facebook, Twitter, Google, and Microsoft to push back against extremist content, to map out a coordinated response. The group has added digital fingerprints of more than 800 “visually distinct” videos to a collective database, along with associated URLs.

The group’s member companies have also invested significantly in deep learning to help detect unwanted content. Hadley says 99.5% of violent videos are now taken down automatically, but that content from some terror groups is easier to detect than content from others.

“Content created by ISIS is often very obvious to identify because it has a logo in it, it has a specific type of script or sound. It’s highly branded,” Hadley said. “A lot of the far-right extremist content is very subtle.”

Hadley adds that governments also need to take responsibility, in part by providing a clearer definition of what constitutes extremist content online and by giving tech companies the legal tools to go after those who violate the law. Countries like Germany have put the onus on social media sites, implementing a law last year that requires platforms to remove hate speech within 24 hours of it being reported.

New Zealand has begun its own crackdown — arresting and charging an 18-year-old for reportedly sharing a live-stream of the shooting and posting a photograph of one of the victims with the message “target acquired.”

Australian telecommunications firm Telstra announced it would temporarily block sites that hosted content related to the attacks.

Australian Prime Minister Scott Morrison is looking to take it a step further, calling for a global conversation at the upcoming G20 summit in Japan about the obligations he says tech firms have to prevent violent acts and protect their users. In a letter addressed to Japanese Prime Minister Shinzo Abe, the current G20 president, Morrison also called for more transparency, saying the public is “entitled to know in detail” how the platforms are managing content.

“It is unacceptable to treat the internet as an ungoverned space,” Morrison wrote. “It is imperative that the community works together to ensure that the technology firms meet their moral obligation to protect the communities which they serve and from which they profit.”

Akiko Fujita is an anchor and reporter for Yahoo Finance. Follow her on Twitter at @AkikoFujita

