Facebook reviews live stream policy after Christchurch attack

Facebook has released more details of its response to the Christchurch terrorist attack, saying it did not deal with the attacker’s live stream as quickly as it could have because the video was not reported as depicting suicide.

The company said streams that were flagged by users while live were prioritised for accelerated review, as were any recently live streams that were reported for suicide content.

Facebook said it received the first user report about the Christchurch stream 12 minutes after the broadcast ended, and because the report cited reasons other than suicide the video was handled “according to different procedures”.

Guy Rosen, Facebook’s head of integrity, wrote in a blogpost: “We are re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review.”

Rosen said training AI to recognise such videos would require “many thousands of examples of content … something which is difficult as these events are thankfully rare”.

He added: “Another challenge is to automatically discern this content from visually similar, innocuous content – for example if thousands of videos from livestreamed video games are flagged by our systems, our reviewers could miss the important real-world videos where we could alert first responders to get help on the ground.”

The comments are a rare admission of the weaknesses of AI moderation from a company that regularly touts AI as an imminent solution to many of its problems. In 2017 Mark Zuckerberg explicitly pointed to automatic moderation as a way to “help provide a better approach” to responding to “terribly tragic events – like suicides, some livestreamed – that perhaps could have been prevented if someone had realised what was happening and reported them sooner”.

He said at the time: “We are researching systems that can look at photos and videos to flag content our team should review. This is still very early in development, but we have started to have it look at some content and it already generates about one-third of all reports to the team that reviews content for our community.”

Rosen also provided more information about the circumstances that led to more than 1.5 million attempts to re-upload the live video, around a fifth of which were successful. He said the broad circulation was the result of a number of factors, including “coordination by bad actors to distribute copies of the video to as many people as possible”. There were also “media channels, including TV news channels and online websites,” that broadcast the video themselves, and other individuals around the world who “reshared copies they got through many different apps and services”.

On Tuesday Facebook and YouTube defended their responses to the Christchurch live stream. YouTube told the Guardian it was struck by the “unprecedented … scale and speed” with which new videos were uploaded to its platform in the 24 hours after the attack.

For the first 18 hours, videos remained easily discoverable, with obvious search terms surfacing explicit footage among the top 50 or so results, until YouTube took a number of steps on Friday morning, San Francisco time, to clamp down on redistribution.