EU and US lawmakers move to draft AI Code of Conduct fast
The European Union has used a transatlantic trade and technology talking shop to commit to moving fast and producing a draft Code of Conduct for artificial intelligence, working with US counterparts and in the hope that governments elsewhere -- including Indonesia and India -- will want to get involved.
What's planned is a set of voluntary standards for applying AI, intended to bridge the gap ahead of legislation being passed to regulate uses of the tech in respective countries and regions around the world.
Whether AI giants will agree to abide by what will be voluntary (non-legally binding) standards remains to be seen. But the movers and shakers in this space can expect to be encouraged to do so by lawmakers on both sides of the Atlantic -- and soon, with the EU calling for the Code to be drafted within weeks. (And, well, given the rising clamour from tech industry CEOs for AI regulation it would be pretty hypocritical for leaders in the field to turn their noses up at a voluntary Code.)
Speaking at the close of a panel session on generative AI held at the fourth meeting of the US-EU Trade & Tech Council (TTC), which took place in Sweden this week -- with the panel hearing from stakeholders including Anthropic CEO Dario Amodei and Microsoft president Brad Smith -- the European Union's EVP Margrethe Vestager, who heads up the bloc's competition and digital strategy, signalled the bloc intends to get to work stat.
"We will be very encouraged to take it from here. To produce a draft. To invite global partners to come on board. To cover as many as possible," she said. "And we will make this a question of absolute urgency to have such an AI Code of Conduct for a voluntary signup."
The TTC was established back in 2021, in the wake of the Trump presidency as EU and US lawmakers sought to repair trust and find ways to cooperate on tech governance and trade issues.
Vestager described generative AI as a "seismic change" and "categorical shift" that she said demands a regulatory response in real-time.
"Now technology is accelerating to a completely different degree than what we've seen before," she said. "So obviously, something needs to be done to get the most of this new technology... We're talking about technology that develops by the month so what we have concluded here at this TTC is that we should take an initiative to get as many other countries on board on an AI Code of Conduct for businesses voluntarily to sign up."
While Vestager couched industry input as "very welcome", she indicated that -- from the EU side at least -- the intent is for lawmakers to draw up safety provisions and companies to agree to get on board and apply the standards, rather than having companies drive things by suggesting a bare minimum on standards (and/or seeking to reframe AI safety to focus on existential future threats rather than extant harms) and lawmakers swallowing the bait.
"We will not forget that there are other sorts of artificial intelligence, obviously," she went on. "There are a number of things that need to be done. But the thing is that we need to show that democracy is up to speed because legislative procedures they must take their time; that is the nature of legislation. But this is a way for democracies to answer in real time a question that is really, really in our face right now. And I find this very encouraging to do it and I'm looking very much forward to work with as many as possible in depth and very fast."
The EU is ahead of the regulatory curve on AI since it already has draft legislation on the table. But the risk-based framework the Commission presented, back in April 2021, is still winding its way through the bloc's co-legislative process -- looping in lawmakers in the European Council and Parliament (and, for a sense of how live that process is, parliamentarians recently proposed amendments targeted at generative AI) -- so there's no immediate prospect of those hard rules applying to generative or any other type of AI.
Even with the most optimistic outlook for the EU adopting the AI Act, Vestager suggested today that it would be two or three years before those hard rules bite. Hence the bloc's urgency for stop-gap measures.
"... a pace like no other"
Also present at the TTC meeting was Gina Raimondo, US secretary of commerce, who indicated the Biden administration's willingness to engage with discussion toward shaping a voluntary AI Code of Conduct -- although she kept her cards close to her chest on what kind of standards the US might be comfortable pushing onto what are predominantly US AI giants.
"[AI] is coming at a pace like no other technology," observed Raimondo. "Like other technologies, we are already seeing issues with data privacy, misuse, what happens when the models get into the hands of malign actors, misinformation. Unlike other technology, the rate of the pace of innovation is at a breakneck pace, which is different and a hockey stick that doesn't exist in other technologies.
"In that respect, I think the TTC could play an incredibly relevant role because it will take a little bit of time for the US Congress or the parliament or other regulatory agencies to catch up. Whereas the risk for some of AI is today. And so we are committed to making sure that the TTC provides a forum for stakeholder engagement, engagement of the private sector, engagement of our companies, to figure out what can we do in the here and now to mitigate the risks of AI but also not to stifle the innovation. And that is a real challenge"
"As we figure out the benefits of AI, I hope we're all really eyes wide open about the costs and do the analysis of whether we should do it," she also warned. "I think if we all are honest with ourselves about other technologies, including social media, we probably wish we had not done things even though we could've. You know, we could have but should have we? And so let's work together to get this right, because the stakes are a whole lot higher."
As well as high-level lawmakers, the panel discussion heard from a handful of industry and civil society groups, chipping in with perspectives on the imperative for, and/or challenge of, regulating such a fast-moving field of technology.
Anthropic's Amodei heaped praise on the transatlantic conversation taking place around AI rule-making. Which likely signals relief that the US is actively involving itself in standards-making which might otherwise be exclusively driven by Brussels.
The bulk of his remarks sounded a sceptical note over how to ensure AI systems are truly safe prior to release -- implying we don't yet have techniques for achieving reliable guardrails around such shape-shifting tools. He also suggested there should be a joint commitment from the US and EU to fund the development of "standards and evaluation" for AI -- rather than calling for any algorithmic auditing in the here and now.
"When I think about the rate at which this technology is bringing new sources of power into the world, combined with the resurgent threat from autocracies that we're seeing over the last year, it seems to me that it's all the more important that we work together to prevent [AI] harms and defend our shared democratic values. And the TTC seems like a critical for forum for doing that," he said early in his allotted time before going on to predict that developments in AI would continue to come at a steady clip and setting out some of his major concerns -- including highlighting "measurement" for AI safety as a challenge.
"What we're going to be able to do in one to four years are things that seem impossible now. This is, I would say, if there's a central fact about the field of AI to know, this is the central fact to know. And though there will be many positive opportunities to come from this, I worry greatly about risks -- particularly in the domain of cybersecurity, biology, things like disinformation, where I think there's the potential for great destruction," he said. "In the longer term, I worry even about the risks of truly autonomous systems. That's a little further out.
"On measurement, I think we're very used to -- when we think about regulating technologies like automobiles or aeroplanes -- we're measuring safety as a secure field; you have a given set of tests you can run to tell if the system is safe. AI is much more of a wild west than that. You can ask an AI system to do anything at all in natural language and it can answer in any way it chooses to answer.
"You might try to ask a system 10 different ways whether it can conduct, say, a dangerous cyber attack and find that it won't. But you forgot to ask it an 11th way that would have shown this dangerous behaviour. A phrase I like to use is 'no one knows what an AI system is capable of until it's deployed to a million people'. And of course, this is a bad thing, right? We don't want to deploy these things in this cowboy-ish way. And so this difficulty of detecting dangerous capabilities is a huge impediment to mitigating them."
The contribution looked intended to lobby against any hard testing of AI capabilities being included in the forthcoming Code of Conduct -- by seeking to kick the can down the road.
"This difficulty of detecting dangerous capabilities a huge impediment to mitigating them," he suggested, while conceding that "some kind of standards or evaluation are a crucial prerequisite for effective AI regulation" but also further muddying the water by saying "both sides of the Atlantic have an interest in developing this science".
"US and EU have a long tradition of collaborating on [standards and evaluation] which we could extend and then maybe more radically, a commitment to adopt an eventual set of common standards and evaluations as a sort of raw material for the rules of the road in AI," he added, gazing into the eventual distance.
Microsoft's Smith used his four minutes' speaking time to urge regulators to "move forward the innovation and safety standards together" -- also amping up the AI hype by lauding the potential for AI to "do good for the world" and "save people's lives", such as by detecting or curing cancer or enhancing disaster response capabilities, while conceding safety needs focus with an affirmation that "we do need to be clear eyed about the risks".
He also welcomed the prospect of transatlantic cooperation on AI standards. But pressed for lawmakers to shoot for broader international coordination on things like product development processes -- which he suggested would help drive forward on both AI safety standards and innovation.
"Certain things benefit enormously from international coordination, especially when it comes to product development processes. We're not going to advance safety or innovation if there's different approaches to, say, how our red team should work in the safety product process for developing a new AI model," he said.
"Other things there's more room for divergence and there will be some because the world -- even the countries that share common values -- we'll have some differences. And there's areas around licensing or usage where one can manage with that divergence. But in short, there's a lot that we will benefit from learning now and then putting into practice."
Towards external audits?
No one from OpenAI was speaking during the TTC panel but Vestager had a videoconference meeting with CEO Sam Altman in the afternoon.
In a read-out of the meeting, the Commission said the pair shared ideas for the voluntary AI code of conduct that was launched at the TTC -- with discussion touching on how to tackle misinformation; transparency issues, including ensuring users are made aware if they communicate with AI; how to ensure verification (red teaming) and external audits; how to ensure monitoring and feedback loops; and the issue of ensuring compliance while avoiding barriers for startups and SMEs.
The Commission added that there was "a strong overall agreement to advance on the voluntary code of conduct as fast as possible and with G7 and other key partners, as a stopgap measure until regulation is in place", adding there would be "a continued engagement on the AI Act as the legislative process progresses".
In a subsequent tweet Vestager said discussions with OpenAI's Altman and Anthropic's Amodei had featured talk of external audits, watermarking and "feedback loops".
Watermarking, external audits, feedback loops - just some of the ideas discussed with @AnthropicAI and @sama @OpenAI for the #AI #CodeOfConduct launched today at the #TTC in #Luleå @SecRaimondo Looking forward to discussing with international partners. pic.twitter.com/wV08KDNs3h
— Margrethe Vestager (@vestager) May 31, 2023
In recent days Altman has ruffled feathers in Brussels with some flat-footed lobbying in which he seemingly threatened to pull his tool out of the region if provisions in the EU's AI Act targeting generative AI aren't watered down.
He then quickly withdrew the threat after the bloc's internal market commissioner tweeted a public dressing down at OpenAI, accusing the company of attempting to blackmail lawmakers. So it will be interesting to see how enthusiastically (or otherwise) Altman engages with the substance of the Code of Conduct for AI.
(For its part, Google has previously indicated it wants to work with the EU on stop-gap AI standards -- as part of a so-called "AI Pact" which appears to be an EU initiative separate from the Code of Conduct; per a Commission spokesperson, the AI Pact is focused on getting companies to agree to front-load the implementation of key AI Act provisions on a voluntary basis, whereas the Code aims to promote guardrails for the use of generative AI or "advanced GPAI" (general purpose AI) models on a global level.)
While AI giants have been relatively reluctant to focus on current AI risks and how they might be reined in, preferring talk of far-flung fears of non-existent "superintelligent" AIs, the TTC meeting also heard from Dr. Gemma Galdon-Clavell, founder and CEO of Eticas Consulting -- a business which runs algorithmic audits for customers to encourage accountability around uses of AI and algorithmic technology -- who was eager to school the panel in current-gen accountability techniques.
"I am convinced [algorithmic auditing] is going to be the main tool to understand quantify and mitigate harms in AI," she said. "We ourselves are hoping to be the first auditing unicorn that puts the tools [on the table] that maximise engineering possibilities while taking into account fundamental rights and societal values."
She described the EU's recently adopted overhaul of ecommerce and marketplace rules, aka the Digital Services Act (DSA), as a pioneering piece of legislation in this regard, on account of the law's push to require transparency from very large online platforms over how their algorithms work -- and she predicted algorithmic audits will become the go-to AI safety tool in the coming years.
"The good news is that audits are informally becoming the consensus on one of the potential ways of regulating AI," she argued. "We have audit in the wording of the DSA, an absolute pioneer in Europe. We have audits in the New York City regulation of the use of AI hiring systems. The very recent NTIA consultation process is focused on audit. The FTC keeps asking about what are the standards and the thresholds that need to be used in audits. So there's an emerging consensus that audits -- call them inspection mechanisms... validations... the name doesn't really matter -- the thing is that we need inspection mechanisms that allow us to land the concerns of the policymakers, while understanding the technologies that are making change possible."
"I am convinced, in a few years from now -- not many years, three, five years from now -- we'll be amazed there was a time where we released AI systems without audits. We will not believe it," she added, comparing the current wild west of unregulated AI safety and transparency to the 19th Century when a consumer could walk into a pharmacy and buy cocaine.
Despite spotlighting a regulatory trajectory she suggested is headed towards auditing, Galdon-Clavell was also critical of a tendency for policymakers to latch onto talk of far-flung theoretical harms -- what she dubbed "science fiction debates" -- which she skewered as a distraction from addressing current AI-driven harms, warning: "There's an opportunity cost when we talk about science fiction in the long distance impact. What happens to the current impacts that we are seeing right now? How do we protect people today? This generation from the harms that we already understand and know are happening around those systems?"
She also urged lawmakers to get on with passing legislation with "teeth". "The industry needs to do better and right now the incentives to do better are not there," she emphasized. "Our clients come to us to be audited because they think that's what they need to do. And not because they are forced by anyone. And those that don't audit have no incentive to start doing it if they don't want to.
"The regulation that needs to come needs to have teeth. So my plea to the TTC would be to please start listening to the emerging consensus that's already there. It is transatlantic, it is very much global. There's things on the table they can already help us protect right now -- tomorrow, today -- the current generation that is suffering the negative impacts of some of these new technological developments."
Another speaker on the panel, Alexandra Reeve Givens, the president & CEO of the US-based human rights-focused not-for-profit the Center for Democracy and Technology, also urged policymakers to focus their attention on "real" AI risks that she said are "manifesting already".
"We're seeing already the professional, reputational and potential physical harms when people rely on generated text results as accurate, unaware of the likelihood of hallucinations or fabricated results," she warned. "There's the risk that generative AI tools will supercharge fraud, as tools make it easier to quickly generate personalised scams or to trick people by impersonating a familiar voice. There are risks of deep fakes that misrepresent public figures in a way that threatens elections, national security, or general public order. And there are risks of fake images being used to harass, exploit and extort people. None of these harms are new but they're made cheaper, faster and more effective by the ease and accessibility of generative AI tools."
She also urged lawmakers not to limit their focus only to generative AI and ignore harms being generated by less viral flavours of AI -- which she said are "directly impacting people's rights, freedoms and access to opportunity today" -- such as AI tools being used to determine who gets a job or receives public benefits, or the use of AI surveillance tools by law enforcement.
"Policymakers cannot lose sight of those core issues, even as they expand their focus to generative AI," she said, calling for any AI assessment initiatives to be similarly comprehensive and address a full-spectrum of real-world harms.
"Policymakers must be crystal clear that efforts to evaluate and manage AI risk must meaningfully address a full spectrum of real world harms. What I mean by that is policymakers must ensure AI audits and assessments are rigorous, comprehensive, and escape issues of capture. Policymakers must address the danger that frameworks for measuring risks often address only those harms that can be easily measured, which privileges economic and physical harms over equally important harms to people's privacy, dignity and the right not to be stereotyped or maligned."
"As policymakers in the US and the EU look to industry efforts or technically oriented standards bodies to consider questions of measuring and managing risk they must ensure these fundamental rights-based concerns are appropriately addressed," she added.
Reeve Givens also called for the transparency plans in the TTC roadmap -- which calls for joint tracking of emerging AI risks and incidents of harm -- to be expanded to tackle information asymmetries and establish a common foundation for regulators to work from, and for civil society voices and marginalized communities, who may face disproportionate risk and harm from AI outputs, to be involved in any processes to draw up standards.
This report was updated with additional detail about the difference between the EU's AI Pact initiative and the AI Code of Conduct