US regulators declare AI-generated robocalls illegal

Calls made with artificial intelligence (AI) generated voices are now illegal in the United States, following a government agency's unanimous decision.

The US Federal Communications Commission (FCC) said on Thursday that such calls have surged in recent years and can mislead consumers by imitating familiar voices.

The ruling comes as authorities in the US state of New Hampshire advanced their investigation into AI-generated robocalls that mimicked US President Joe Biden’s voice to stop people from voting in the state's first-in-the-nation primary last month.

Effective immediately, the regulation empowers the FCC to fine companies that use AI voices in their calls or block the service providers that carry them.

It also opens the door for call recipients to file lawsuits and gives state attorneys general a new mechanism to crack down on violators, according to the FCC.

"Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters. We’re putting the fraudsters behind these robocalls on notice,” the agency's chairwoman, Jessica Rosenworcel, said in a statement.

US state authorities "will now have new tools to crack down on these scams and ensure the public is protected from fraud and misinformation," she added.

Under US consumer protection law, telemarketers generally cannot use automated dialers or artificial or prerecorded voice messages to call mobile phones, and they cannot make such calls to landlines without prior written consent from the call recipient.

The new ruling classifies AI-generated voices in robocalls as “artificial”, making them subject to the same restrictions, the FCC said.

Those who break the law can face steep fines, with a maximum of more than $23,000 (€21,351) per call, the FCC said.

The agency has previously used the consumer law to clamp down on robocalls interfering in elections, including imposing a $5 million (€4.6 million) fine on two conservative hoaxers for falsely warning people in predominantly Black areas that voting by mail could heighten their risk of arrest, debt collection and forced vaccination.

The law also gives call recipients the right to take legal action and potentially recover up to $1,500 (€1,392) in damages for each unwanted call.

'Technology will get better'

Josh Lawson, director of AI and democracy at the Aspen Institute, said that even with the FCC’s ruling, voters should expect personalised spam targeting them by phone, text and social media.

"The true dark hats tend to disregard the stakes and they know what they’re doing is unlawful," he said. "We have to understand that bad actors are going to continue to rattle the cages and push the limits."

Kathleen Carley, a Carnegie Mellon professor who specialises in computational disinformation, said that to detect AI abuse of voice technology, one needs to be able to clearly identify that the audio was AI-generated.

She said that is now possible "because the technology for generating these calls has existed for a while. It’s well understood and it makes standard mistakes. But that technology will get better.”

Sophisticated generative AI tools, from voice-cloning software to image generators, are already in use in elections in the US and around the world.