Government told to 'slow down' AI plans as it's already being used to create child sex abuse images
Children's charity the NSPCC has urged the Government to put the brakes on its artificial intelligence (AI) action plans until a statutory duty of care is established to safeguard children from the technology's potential risks.
The charity has raised concerns about generative AI being used to produce illegal child abuse images, and is urging the Government to write concrete safeguards into legislation to regulate AI effectively and protect children. The NSPCC also revealed that a significant majority (78%) of the public would prioritise more stringent safety checks on new generative AI tools, even if this meant delaying their release.
A recent study commissioned by the NSPCC found that a staggering 89% of respondents had some degree of concern about AI and child safety. The charity has been receiving reports about AI from children via Childline since 2019, further underscoring the need for decisive action.
The call to action comes on the heels of the Prime Minister's recent announcement of plans to bolster the UK's AI industry, promote its use in daily life, and harness its potential to drive economic growth. An international conference, the AI Action Summit, is due to take place in Paris next month.
The NSPCC’s chief executive, Chris Sherwood, said: “Generative AI is a double-edged sword. On the one hand it provides opportunities for innovation, creativity and productivity that young people can benefit from; on the other it is having a devastating and corrosive impact on their lives.
“We can’t continue with the status quo where tech platforms ‘move fast and break things’ instead of prioritising children’s safety. For too long, unregulated social media platforms have exposed children to appalling harms that could have been prevented.
“Now the Government must learn from these mistakes, move quickly to put safeguards in place and regulate generative AI, before it spirals out of control and damages more young lives. The NSPCC and the majority of the public want tech companies to do the right thing for children and make sure the development of AI doesn’t race ahead of child safety.
“We have the blueprints needed to ensure this technology has children’s wellbeing at its heart, now both Government and tech companies must take the urgent action needed to make generative AI safe for children and young people.”
Derek Ray-Hill, interim chief executive at the Internet Watch Foundation, which seeks out and helps remove child sexual abuse imagery from the internet, said existing laws, as well as future AI legislation, must be made robust enough to ensure children are protected from being exploited by the technology.
“Artificial intelligence is one of the biggest threats facing children online in a generation, and the public is rightly concerned about its impact,” he said. “While the technology has huge capacity for good, at the moment it is just too easy for criminals to use AI to generate sexually explicit content of children – potentially in limitless numbers, even incorporating imagery of real children. The potential for harm is unimaginable.
“AI companies must prioritise the protection of children and the prevention of AI abuse imagery above any thought of profit. It is vital that models are assessed before they go to market, and rigorous risk mitigation strategies must be in place, with protections built into closed-source models from the outset.”