Labour vows to force firms developing powerful AI to meet requirements
Labour has said it would urgently introduce binding requirements for companies developing powerful artificial intelligence (AI) after Rishi Sunak said he would not “rush” to regulate the technology.
The party has promised that, if it wins the next general election, it will require firms to report before training models above a certain capability threshold and to carry out safety tests strengthened by independent oversight.
The Prime Minister has said that mitigating the risks of AI should be a global priority, but the Government will not “rush to regulate” and does not want to be “alarmist” about the issue.
The Government’s white paper on AI proposes five “principles” such as “safety” and “accountability” for companies to adhere to, but these will not initially be put on a statutory footing.
Peter Kyle MP, shadow technology secretary, said: “AI has the potential to transform the world and deliver life-changing benefits for working people. From delivering earlier cancer diagnosis, to relieving traffic congestion, AI can be a force for good.
“But to secure these benefits we must get on top of the risks and build public trust. It is not good enough for our ‘inaction man’ Prime Minister to say he will not rush to take action, having told the public that there are national security risks which could end our way of life.”
“The AI summit was an opportunity for the UK to lead the global debate on how we regulate this powerful new technology for good. Instead the Prime Minister has been left behind by the US and EU, who are moving ahead with real safeguards on the technology.”
It came as the Prime Minister announced that governments and tech companies had reached an agreement for new AI models to undergo safety checks before release.
In a press conference at the close of the UK’s AI Safety Summit at Bletchley Park – the home of Britain’s codebreaking efforts in the Second World War – Mr Sunak acknowledged that “binding” rules are likely to be needed for the technology.
But he added that now is the time to move quickly without legislation, with the agreement – which recognises that governments and companies have a role to play in ensuring external vetting of AI – serving as an example.
A spokesperson for the Department for Science, Innovation and Technology said: “As the Prime Minister closed the world’s first ever AI Safety Summit, countries and companies developing frontier AI have agreed a ground-breaking plan on AI safety testing, while our newly announced Safety Institute will act as a global hub on AI safety, leading on vital research into the capabilities and risks of this fast-moving technology.
“This builds on the Bletchley Declaration signed yesterday by 28 countries from across the globe including the US, EU and China, agreeing the opportunities, risks and need for international action.”