Could AI disrupt 50 elections this year? Here’s what experts say
Multiple large technology companies have signed a pact to stop AI tools being used to disrupt elections, as the technology evolves to the point where it poses a significant threat.
At the Munich Security Conference, executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok signed an accord committing them to work to stop generative AI tools being used to disrupt elections.
The accord targets images, audio and video “that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can lawfully vote.”
But how might the technology be used in elections - and what can be done to mitigate such attacks?
Has generative AI already been used in election attacks?
In January this year, voters in New Hampshire received an audio message in the voice of Joe Biden telling them to ‘save’ their votes, and advising: "Your vote makes a difference in November, not this Tuesday."
The voice was not Joe Biden’s, but an AI-generated fake, thought to have been made using tools from AI start-up ElevenLabs.
ElevenLabs’ software allows users to create a highly convincing fake of any voice based on ten minutes of uploaded audio.
ElevenLabs later suspended the account believed to have created the fake message.
Microsoft has said that nation-states such as Iran, North Korea, China and Russia are now starting to use generative AI for hacking.
Could it really make a difference?
Research by George Washington University suggests that AI-driven disinformation attacks will become a daily occurrence by this summer, and could affect election results in up to 50 countries this year.
The researchers based their conclusion on previous cyber and automated attacks.
Professor Neil Johnson said: “Everybody is talking about the dangers of AI, but until our study there was no science of this threat.
“You cannot win a battle without a deep understanding of the battlefield.”
Why is generative AI particularly dangerous?
The danger of generative AI comes from its speed and ease of use, and from the fact that there are no regulations to prevent its misuse, says Simon Bain, AI expert and CEO of OmniIndex, who previously developed e-voting systems.
Bain told Yahoo News: "Generative AI tools can quickly and easily produce fake content designed to influence elections through disinformation campaigns.
"This includes both expensive campaigns attacking a candidate or party over a period of months and deepfakes made for free online by individuals with no specific motivation other than their own entertainment.
"What is alarming is that we can do nothing to stop this content being made today.
"Why? Because the AI boom has happened so quickly that there are simply no regulations or restraints in place to contain it – despite all the high-profile talk about what these regulations might be!"
What can individuals do?
Bain says that taking an ‘old school’ approach to verifying stories, and refraining from sharing them until you know they are true, can help.
Bain told Yahoo News: "There are a number of longstanding practices and pieces of technology in place to help determine if [content] is real (though clever AI can fake details and make this more difficult).
"These include reverse image searching to see where an image has previously been posted online and if it originated from a legitimate source, metadata analysis to see when, where and how the content was made, and simple story verification where the people involved in the content are asked if it is them!"