Charity reports rise in takedowns of child abuse imagery

The IWF processed 250,000 reports in 2018, up from 50,000 five years ago. Photograph: Dominic Lipinski/PA

A charity tasked with removing child abuse imagery from the internet has warned of a “horrifying” increase in the amount of material it has had to take down over the past year.

The Internet Watch Foundation, which acts as a de facto watchdog for online child abuse in the UK, said it removed more than 100,000 webpages showing the sexual abuse and sexual torture of children in 2018, an increase of one-third over the year before.

The IWF usually publishes its annual findings in April but brought forward the release this year because it felt the increase was too significant to withhold.

The charity’s chief executive, Susan Hargreaves, said: “These 105,047 webpages each contained up to thousands of images and videos showing the sexual abuse of children. It amounted to millions of horrific images. Virtually all (more than 99%) were hosted outside of the UK.

“It is shocking and deeply upsetting that these images should have been created in the first place. We have set ourselves an ambitious programme of work for 2019. By getting better at finding and combating this material, we offer real hope to the victims whose images are shared online.”

The IWF said four in 10 of the webpages it flagged for removal in 2018 displayed the sexual abuse of children aged 10 and younger, with infants and babies featuring more than 1,300 times.

Five years ago the IWF processed just over 50,000 reports, acting on a quarter of them. Last year there were more than 250,000 reports, and a higher proportion of them were found to be actionable.

Sajid Javid, the home secretary, called on internet companies to take on more of the work themselves.

“The horrifying amount of online child sexual abuse material removed by the IWF shows the true scale of the vile threat we are facing. This is why I have made tackling it one of my personal missions,” he said. “I welcome [the IWF’s] impressive work and have been encouraged by the progress being made by the tech companies in the fight against online predators. But I want the web giants to do more to make their platforms safe.”

When the IWF flags an image depicting child abuse for removal, a record of the file, known as a hash, is created. The hash can then be matched automatically against future attempts to upload the same picture to social networks and photo-sharing sites.
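The matching described above can be illustrated with a minimal sketch. Note the assumptions: this example uses a plain cryptographic hash (SHA-256) for exact-duplicate matching, whereas production systems typically use perceptual hashes that also survive resizing and re-compression; the `known_hashes` set and the sample byte strings are hypothetical.

```python
import hashlib

def file_hash(data: bytes) -> str:
    # Compute a SHA-256 digest of the file's raw bytes.
    # A cryptographic hash only matches byte-for-byte copies;
    # real deployments use perceptual hashing to catch edited copies.
    return hashlib.sha256(data).hexdigest()

# Hypothetical database of hashes of previously flagged files.
known_hashes = {file_hash(b"previously flagged image bytes")}

def is_known(data: bytes) -> bool:
    # An upload is blocked if its hash appears in the flagged set.
    return file_hash(data) in known_hashes

print(is_known(b"previously flagged image bytes"))  # exact duplicate: True
print(is_known(b"some new image bytes"))            # unseen content: False
```

Because only digests are stored and compared, a platform can screen uploads against the flagged set without ever holding or redistributing the original imagery.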

The IWF says it has created more than 320,000 unique hashes since it began using the technology. The charity also maintains a blacklist of URLs containing child abuse imagery, which is distributed to British internet service providers, which in turn block the listed sites on a voluntary basis.

The charity also relies on the manual work of 13 human reviewers, who operate under a memorandum of understanding with the Association of Chief Police Officers, which protects them from the legal consequences of accessing child abuse images.

Catherine, one of the IWF’s analysts, said: “We’ve seen a huge rise in child abuse imagery captured by webcams this year. On commercial sites, where an offender could be making a profit from the material, the ages of the children appear to be getting younger. This certainly makes you more aware of online safety and that’s a message I’m happy to share.”

People who are worried about a sexual image or video online of someone who may be under 18 can make an anonymous report on the IWF’s website.