Government urged to address AI ‘risks’ to avoid ‘spooking’ public
The Government must address the risks associated with artificial intelligence (AI) – including potential threats to national security and the perpetuation of “unacceptable” societal biases – to ensure the public is not “spooked” by the technology, MPs have said.
The Science, Innovation and Technology Committee (SITC) said there are “many opportunities” for AI to be beneficial, but the technology also presents “many risks to long-established and cherished rights”.
Overcoming these is vital to securing public safety and confidence in the technology, as well as to positioning the UK "as an AI governance leader".
The SITC opened its inquiry into how AI should be regulated in October, examining its impact on society and the economy.
It said that while AI has been debated since "at least" the 1950s, it is ChatGPT, launched last November, "that has sparked a global conversation".
SITC chairman Greg Clark said: “Artificial intelligence is already transforming the way we live our lives and seems certain to undergo explosive growth in its impact on our society and economy.
“AI is full of opportunities, but also contains many important risks to long-established and cherished rights – ranging from personal privacy to national security – that people will expect policymakers to guard against.”
Mr Clark said the challenges identified by the committee “must be addressed” if “public confidence in AI is to be secured”.
The 12 major challenges outlined in the SITC report are:
– Bias – AI introducing or perpetuating “unacceptable” societal biases
– Privacy – AI allowing people to be identified or sharing personal information
– Misrepresentation – the generation of material by AI that “deliberately misrepresents someone’s behaviour, opinions or character”
– Access to data – AI requires large datasets which are held by few organisations
– Access to compute – powerful AI requires significant computer power, which is limited
– ‘Black box’ challenge – AI cannot always explain why it produces a particular result, which is an issue for transparency
– Open source challenges – requiring code to be openly available could promote transparency, but allowing it to be proprietary may concentrate market power
– Intellectual property and copyright – Some tools use other people’s content
– Liability – If AI is used by third parties to cause harm, policy must establish who bears liability
– Employment – AI will disrupt jobs
– International co-ordination – the development of AI governance frameworks must be international
– Existential challenges – some people think AI is a “major threat” to human life and governance must provide protections for national security
Mr Clark said no single risk in the document takes priority over the others, and they "all have to be addressed together".
“It’s not the case if you just deal with one, or half of them, that everyone can relax,” he added.
In March, a white paper outlining a “pro-innovation approach to AI regulation” was presented to Parliament by Michelle Donelan, the Secretary of State for Science, Innovation and Technology.
The document included five principles on AI – safety, security and robustness; fairness; transparency and explainability; accountability and governance; and contestability and redress.
However, Mr Clark said things have moved on from five months ago and the challenges outlined by the SITC are more "concrete".
“The challenges we’ve laid out are much more concrete and the Government needs to address them,” he added.
“It’s a challenge for the Government, but it’s very important that the development of the technology doesn’t outpace the development of policy thinking, to make sure that we can benefit and we’re not harmed by it.
“You need to drive the policy thinking at the same time as the tech development. If the public lose confidence and are spooked by AI, then there will be a reaction standing in the way of some of the benefits.”
The SITC also warned that legislation must be presented to Parliament during its next session and ahead of the general election, which is expected to take place in 2024.
It added that delays “would risk the UK, despite the Government’s good intentions, falling behind other jurisdictions”, such as the USA and European Union.
The Global AI Safety Summit – which is being held at Bletchley Park in November – is a "golden opportunity" for AI governance, according to the SITC.
However, Mr Clark added: “If the Government’s ambitions are to be realised and its approach is to go beyond talks, it may well need to move with greater urgency in enacting the legislative powers it says will be needed.”
The SITC will publish its final recommendations on AI policy “in due course”.
A Government spokesperson said: “AI has enormous potential to change every aspect of our lives, and we owe it to our children and our grandchildren to harness that potential safely and responsibly.
“That’s why the UK is bringing together global leaders and experts for the world’s first major global summit on AI safety in November – driving targeted, rapid international action on the guardrails needed to support innovation while tackling risks and avoiding harms.
“Our AI Regulation White Paper sets out a proportionate and adaptable approach to regulation in the UK, while our Foundation Model Taskforce is focused on ensuring the safe development of AI models with an initial investment of £100 million – more funding dedicated to AI safety than any other government in the world.”