New tech helps Facebook remove 99% of IS posts before users flag them up

Facebook claims it is making significant advances in the fight against online extremism.

The tech giant has revealed that 99% of all Islamic State and al Qaeda-related content is being removed from its platform before users flag it up.

In a review of its counter-extremism efforts, Facebook executives acknowledged there is more to do, but said the company was actively investing in the latest advances in artificial intelligence (AI) to help identify and take down extremist content more quickly.

The UK Government said it welcomed the progress Facebook has made, but added that all technology companies had further to go in tackling online extremism.

Facebook has also revealed that 83% of Islamic State and al Qaeda content posted on its platform was identified and removed within an hour.

In a blog post ahead of the EU Counter Terrorism Forum on 6 December, Facebook executives Monika Bickert and Brian Fishman detailed the efforts their company was making.

They wrote: "We invest in efforts to prevent terrorist content from ever hitting our site. But when it does, we are working to quickly find it and remove it from our platform.

"We've historically relied on people, our content reviewers to assess potentially violating content and remove it.

"But we've begun to use artificial intelligence to supplement these efforts, in order to more quickly and accurately identify terrorist content.

"In figuring out what's effective, we face the challenges that any company faces in developing technology that can work across different types of media and different types of content.

"For instance, a solution that works for photos will not necessarily help with videos or text.

"A solution that works for recognising terrorist iconography in images will not necessarily distinguish between a terrorist sharing that image to recruit and a news organisation sharing the same image to educate the public."

Although Islamic State is coming under extreme pressure from coalition forces, its members and supporters are continuing to pump out massive amounts of extremist material - propaganda that finds its way across towns and cities and into ordinary homes, through the internet and social media sites.

Even though Facebook claims it is taking down the vast majority of this offending material very quickly, Sky News researchers were able to isolate numerous extremist links on the platform after a relatively simple search.

Raffaello Pantucci, an extremism expert at the Royal United Services Institute, said that despite the investment by tech companies in identifying and removing such material, the terror threat is constantly evolving, making those efforts much more difficult.

"It's a hugely complicated picture that they're looking at.

"The other aspect, which is very difficult for them, is to understand how the relationship between terrorist activity and the internet is evolving.

"How it's moving in some ways, from not just passive viewing of material, to interaction with this material online.

"In some ways, that's where the heart of the threat now lies and that's even harder for tech companies to identify, because there, you're not even necessarily looking at specific radical material, you're more looking at radical contacts and they may be very hard to discern from everything else."

A Home Office spokesman said Facebook's progress was positive.

"This is good news that artificial intelligence and other automation methods are being deployed more widely to tackle terrorist propaganda.

"This government has been at the forefront of the drive for companies to take a more proactive approach to terrorist and extremist content on their platforms.

"There is still more to do but this is promising and we look forward to more updates from the Global Internet Forum to Counter Terrorism.

"We know that terrorist recruiters are deliberately misusing the internet to try and radicalise vulnerable people.

"The scale of the challenge is significant and we are clear that internet companies ultimately need to stop this material from being made available to users in the first place."

Dr Usama Hassan, a counter-extremism expert at the Quilliam Foundation, said the fight against online extremism is likely to be a protracted battle.

He added: "We have to recognise that, as has happened before, when one social media platform succeeds in that battle, extremists and terrorists will often find a new technology platform to work on.

"More are springing up all of the time of course - and so the battle actually shifts to another battle ground if you like.

"I think we have to accept that reality, that will go on but it is a very important battle worth fighting."

After the 2013 murder of Fusilier Lee Rigby in London, Facebook was heavily criticised for failing to pass on to the authorities details of threats against soldiers made by one of the killers, Michael Adebowale, on his Facebook page.

The social media giant has revealed that over the past year, it has actively worked with law enforcement agencies on real-time investigations, helping disrupt a number of terrorist attacks.

Earlier this week, Facebook also revealed that it has begun using AI to identify users who are expressing suicidal thoughts - allowing them to receive help from the emergency services.