YouTube says machines are better at finding extremist videos than humans

YouTube said it would have 10,000 members of staff who work on policing and reviewing offensive content - REUTERS

Violent extremist videos on YouTube are taken down more quickly when machines intervene, the video site has claimed in its most detailed report on video removals to date.

The claim follows a pledge to get more humans reviewing harmful clips, with YouTube boss Susan Wojcicki promising that by the end of 2018, 10,000 people would be tasked with reducing the time it takes to remove offensive material.

YouTube, which is owned by Google, removed 8 million videos between October and December 2017, 6.7 million of which were flagged for review by machines rather than humans, the report on community guidelines and transparency claimed.

This marks the first time YouTube has revealed exactly how many videos it removes for violating its policies, following a backlash over how it polices its guidelines.

Most of these videos were spam or adult videos that someone had attempted to upload repeatedly, YouTube added. 

The report appeared to counter concerns that too few humans are monitoring videos flagged by users - flags that can relate to child abuse, terrorism, criminal acts or adult material.

Following a spate of terror attacks in the UK in 2017, YouTube faced significant pressure from MPs to put more people in charge of watching and removing videos that incited extremist behaviour. However, YouTube suggested that machine learning is particularly useful not just in low-risk, high-volume areas such as spam, but also in high-risk, low-volume areas such as terrorism.

In 2017, the video site promised that 10,000 people would be charged with ensuring videos are removed quickly - a mix of content reviewers, who watch clips that have been “flagged” as inappropriate by viewers, along with engineers, policymakers and legal teams.

YouTube chief executive Susan Wojcicki - Credit: Bloomberg

But it is also fine-tuning algorithms that can detect minute details, which are applied not only to policy violations but also to copyright infringement.

YouTube uses a variety of techniques to automatically detect offensive videos, including hashing, which acts like a digital fingerprint so it can spot when a video has already appeared online. It also uses object recognition, and is shown offending videos so it can detect patterns in them and spot similar material in new uploads. This, combined with analysis of video titles, the details of the person - or bot - that uploaded the clip, and location-specific metadata, allows a machine to make an informed choice about whether a video should be removed.
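To illustrate the "digital fingerprint" idea described above, here is a minimal Python sketch of hash-based re-upload detection. It is an illustrative assumption rather than YouTube's actual system: the exact SHA-256 file hash and the KNOWN_BAD_HASHES set are hypothetical, and a production system would use perceptual fingerprints that survive re-encoding and trimming.

```python
import hashlib

# Hypothetical index of fingerprints for videos that were already removed.
# In a real system this would be a large shared store of perceptual hashes,
# not exact file hashes; an exact hash is used here only for illustration.
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def fingerprint(path: str) -> str:
    """Return a SHA-256 digest of the file's bytes - a crude digital fingerprint."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MB chunks so large video files are not loaded into memory at once.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_reupload(path: str) -> bool:
    """Flag an upload whose fingerprint matches a video that was previously removed."""
    return fingerprint(path) in KNOWN_BAD_HASHES

if __name__ == "__main__":
    # Hypothetical upload; prints True only if its hash matches a known removed video.
    print(is_known_reupload("upload.mp4"))
```

An exact-match hash like this only catches byte-identical re-uploads, which is why the signals mentioned in the article - object recognition, titles, uploader details and metadata - are combined before a removal decision is made.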

At the beginning of 2017, 8pc of the videos flagged and removed for violent extremism were taken down with fewer than 10 views. YouTube introduced machine learning flagging in June 2017 and now, more than half of the videos YouTube removes for violent extremism have fewer than 10 views, the report found.

A YouTube spokesman said: “Deploying machine learning actually means more people reviewing content, not fewer. Our systems rely on human review to assess whether content violates our policies.”
