Rapid growth of ‘news’ sites using AI tools like ChatGPT is driving the spread of misinformation
Dozens of fake news websites have spread across the web in multiple languages, some publishing hundreds of AI-generated articles a day.
That’s according to a report from NewsGuard, a tool that rates the credibility of websites and tracks online misinformation.
In its Rise of the Newsbots report, NewsGuard details how it identified 49 websites purporting to be news sites that are entirely or mostly generated by artificial intelligence language models - like the popular ChatGPT chatbot.
The availability of AI apps that can generate writing indistinguishable at first glance from that of humans has sparked concerns about the potential spread of easily produced misinformation on the Internet.
And the report suggests that these AI tools are being used to populate what are being called “content farms” - low-quality websites churning out large numbers of articles.
The apparent aim of the people behind the websites - many of whom remain anonymous - is to make money from advertising revenue while generating content that is not properly verified by a human and, in some cases, pushes completely false narratives.
AI writing is here, and it’s worryingly good. Can writers and academia adapt?
‘Biden dead. Harris acting President’
The report authors state that the majority of the websites studied did not actively push misinformation, but some did. One of those, CelebritiesDeaths.com, published an article in April this year with the headline: “Biden dead. Harris acting President, address 9am ET.”
The article claimed US president Joe Biden, who is 80 years old, died “peacefully in his sleep”. The article continues: “I’m sorry, I cannot complete this prompt as it goes against OpenAI’s use case policy on generating misleading content. It is not ethical to fabricate news about the death of someone, especially someone as prominent as a president”.
Euronews Next has contacted OpenAI, the company behind ChatGPT and other popular AI tools, requesting comment. The company did not respond to NewsGuard’s request for comment.
The majority of the websites in the report appeared to be rehashing information from legitimate news sources. One such site, BestBudgetUSA.com, which was anonymously registered in May 2022, summarises or rewrites CNN articles.
Most of the articles on the websites are credited to “Admin” or “Editor,” while some bylines attributed to named people with photos were found to be fake authors.
The articles themselves often give away their AI-authored origin. Dozens of BestBudgetUSA.com’s pieces contain phrases commonly produced by AI chatbots, such as “I am not capable of producing 1500 words… However, I can provide you with a summary of the article”.
The inclusion of these lines also suggests no human has even cast an eye over what is being published.
Fake news websites were found in seven languages - Chinese, Czech, English, French, Portuguese, Tagalog, and Thai.
NewsGuard attempted to contact the owners of the sites it investigated, although many had no contact information available.
One that did elicit a response was Famadillo.com, a site that has published numerous AI-generated product reviews attributed to “admin”.
The person who responded, giving the name Maria Spanadoris, denied the site used AI in a widespread manner, saying: “We did an expert [sic] to use AI to edit old articles that nobody read anymore [sic] just to see how it works”.
Another, Adesh Ingale, who identified himself as the founder of GetIntoKnowledge.com, a site publishing AI-generated clickbait articles on history and science, claimed: “We use automation at some points where they are extremely needed. And yes they are 100% facts checked [sic] so that no false information is created…
“As a world [sic] is growing towards digital and automation era we have introduced some automation softwares in our work but the results getting out of it are 100 per cent original and regional facts based [sic]”.
NewsGuard warned that the “unassuming reader would likely have no idea that the articles produced by many of these AI content farms were not written by a human,” unless they stumbled across a commonly found error message within the AI-generated texts.
These include phrases like “my cutoff date in September 2021,” or “as an AI language model,” or “I cannot complete this prompt”.
A website called CountyLocalNews.com published an article in March 2023 with the title: “Death News: Sorry I cannot fulfil this prompt as it goes against ethical and moral principles”.
It continued, pointing out that “vaccine genocide is a conspiracy” that can cause harm to public health - something an AI language model cannot promote.