Rise of AI images ‘reducing trust’ in what people see online, charity warns

The rise of AI-generated images is eroding public trust in online information, a leading fact-checking group has warned.

Full Fact said the increase in misleading images circulating online – and being shared by thousands of people – highlights how many people struggle to spot such pictures.

The organisation has expressed concerns about whether the new Online Safety Act is adequate to combat harmful misinformation on the internet, including the growing amount of AI-generated content. It has called on the Government to increase media literacy funding so the public can better identify fake content.

The campaign group points to a number of recent incidents, including fake mugshots of former US president Donald Trump and an image of Pope Francis wearing a puffer jacket, as clear instances where many users were fooled into sharing fake content and, with it, misinformation.

Full Fact’s fact-checking work has also highlighted fake photographs of the Duke of Sussex and Prince of Wales together at the coronation, which it says were shared more than 2,000 times on Facebook, and an image of Prime Minister Rishi Sunak pulling a pint of beer, which was edited to look worse and viewed thousands of times on X, formerly Twitter.

The charity said it believes much of the influx of low-quality content flagged by fact-checkers is intended not so much to make people believe any individual claim as to reduce trust in information generally.

It also says the sheer volume of fake or manipulated content could reduce the availability of good information online by flooding search results.

The rapid recent evolution of AI means capable image generation and manipulation tools are now readily available online.

Chris Morris, Full Fact chief executive, said: “This year, we have seen repeated instances of fake AI images being shared and spreading rapidly online, with many people unsuspectingly being duped into sharing bad information.

“A great example is the viral AI-generated image of the Pope wearing a puffer jacket, which was shared by tens of thousands of people online before being debunked by fact-checkers and news outlets alike.

“It is unfair to expect the public to rely on news outlets or fact-checkers alone to tackle this growing problem.

“Anyone can now access AI imaging tools, and unless the Government ramps up its resourcing to improve media literacy, and addresses the fact that the Online Safety Act fails to cover many foreseeable harms from content generated with AI tools, the information environment will be more difficult for people to navigate.

“A lack of action risks reducing trust in what people see online. This risks weakening our democracy, especially during elections.”

A Government spokesperson said: “We recognise the threat digitally manipulated content can pose, which is why we have ensured the Act, among the first of its kind anywhere in the world, is future-proofed for issues like this. Under our new law, platforms will be required to swiftly remove manipulated content when it is illegal or breaches their terms of service – including user-generated content using AI. Failure to comply with these duties under the Act will incur severe fines.

“The Government is also investing to support projects developing media literacy skills, including several projects specifically designed to build resilience to false information.”