EU agrees ‘historic’ deal with world’s first laws to regulate AI
The world’s first comprehensive laws to regulate artificial intelligence have been agreed in a landmark deal after a marathon 37-hour negotiation between the European Parliament and EU member states.
The agreement was described as “historic” by Thierry Breton, the European Commissioner responsible for a suite of laws in Europe that will also govern social media and search engines, covering giants such as X, TikTok and Google.
Breton said 100 people had been in a room for almost three days to seal the deal. He said it was “worth the few hours of sleep” to make the “historic” deal.
Carme Artigas, Spain’s secretary of state for AI, who facilitated the negotiations, said France and Germany supported the text, amid reports that tech companies in those countries were fighting for a lighter touch approach to foster innovation among small companies.
Historic!
The EU becomes the very first continent to set clear rules for the use of AI 🇪🇺
The #AIAct is much more than a rulebook — it's a launchpad for EU startups and researchers to lead the global AI race.
The best is yet to come! 👍 pic.twitter.com/W9rths31MU

— Thierry Breton (@ThierryBreton) December 8, 2023
The agreement puts the EU ahead of the US, China and the UK in the race to regulate artificial intelligence and protect the public from risks that many fear the rapidly developing technology carries, including potential threats to life.
Officials provided few details on what exactly will make it into the eventual law, which would not take effect until 2025 at the earliest.
The political agreement between the European Parliament and EU member states on new laws to regulate AI was a hard-fought battle, with clashes over foundation models designed for general rather than specific purposes.
But there were also protracted negotiations over AI-driven surveillance, which could be used by the police, employers or retailers to film members of the public in real time and recognise emotional stress.
The European Parliament secured a ban on the use of real-time surveillance and biometric technologies, including emotion recognition, but with three exceptions, according to Breton.
It means police would be able to use the invasive technologies only in the event of an unforeseen threat of a terrorist attack, the need to search for victims, and the prosecution of serious crime.
MEP Brando Benifei, who co-led the parliament’s negotiating team with Dragoș Tudorache, the Romanian MEP who has led the European Parliament’s four-year battle to regulate AI, said they also secured a guarantee that “independent authorities” would have to give permission for “predictive policing”, to guard against abuse by police and to protect the presumption of innocence.
“We had one objective to deliver a legislation that would ensure that the ecosystem of AI in Europe will develop with a human-centric approach respecting fundamental rights, human values, building trust, building consciousness of how we can get the best out of this AI revolution that is happening before our eyes,” he told reporters at a press conference held after midnight in Brussels.
Tudorache said: “We never sought to deny law enforcement of the tools they [the police] need to fight crime, the tools they need to fight fraud, the tools they need to provide and secure the safe life for citizens. But we did want – and what we did achieve – is a ban on AI technology that will determine or predetermine who might commit a crime.”
The foundation of the agreement is a risk-based tiered system where the highest level of regulation applies to those machines that pose the highest risk to health, safety and human rights.
In the original text it was envisaged this would include all systems with more than 10,000 business users.
The highest risk category is now defined by the amount of computing power used to train the model, measured in floating point operations (FLOPs).
Sources say only one existing model, GPT-4, would currently fall within this new definition.
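To make the compute-based threshold concrete, here is a minimal back-of-the-envelope sketch (not from the article): it assumes the widely used approximation that training compute is roughly 6 × parameters × training tokens, and a hypothetical cutoff of 10^25 FLOPs, a figure reported in connection with the act; the model sizes used are illustrative only.

```python
# Rough illustration of how a training-compute threshold could be checked.
# Assumptions (not from the article): compute ≈ 6 * parameters * tokens,
# and a cutoff of 1e25 total floating point operations.

ASSUMED_THRESHOLD_FLOPS = 1e25  # hypothetical highest-tier cutoff


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * parameters * training_tokens


def in_highest_tier(parameters: float, training_tokens: float) -> bool:
    """Would a model of this scale exceed the assumed compute cutoff?"""
    return estimated_training_flops(parameters, training_tokens) >= ASSUMED_THRESHOLD_FLOPS


if __name__ == "__main__":
    # Illustrative model: 1 trillion parameters trained on 10 trillion tokens.
    flops = estimated_training_flops(1e12, 10e12)
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    print("Exceeds assumed threshold:", in_highest_tier(1e12, 10e12))
```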
The lower tier of regulation still places major obligations on AI services, including basic rules about disclosure of the data used to teach the machine to do anything from writing a newspaper article to diagnosing cancer.
Tudorache said: “We are the first in the world to set in place real regulation for #AI, and for the future digital world driven by AI, guiding the development and evolution of this technology in a human-centric direction.”
Previously he has said that the EU was determined not to make the mistakes of the past, when tech giants such as Facebook were allowed to grow into multi-billion dollar corporations with no obligation to regulate content on their platforms including interference in elections, child sex abuse and hate speech.
Strong and comprehensive regulation from the EU could “set a powerful example for many governments considering regulation,” said Anu Bradford, a Columbia Law School professor who is an expert on the EU and digital regulation. Other countries “may not copy every provision but will likely emulate many aspects of it”.
AI companies that will have to comply with the EU’s rules will also probably extend some of those obligations to markets outside the continent, Bradford told the AP. “After all, it is not efficient to re-train separate models for different markets,” she said.