Political ads on Instagram and Facebook can use deepfake technology, Meta says – but they must say so

Social Media Teens Whistleblower (Copyright 2022 The Associated Press. All rights reserved)

Ads on Instagram and Facebook can use artificial intelligence technology to create photos, videos and audio of events that don’t actually exist, Meta has said.

But advertisers running ads about political or social issues must make clear that such content is not real, Meta said. When they do, Meta will add a small note to the ad indicating that it was created with artificial intelligence.

Meta said that it was introducing the new policy “to help people understand when a social issue, election, or political advertisement on Facebook or Instagram has been digitally created or altered, including through the use of AI”. The policy will take effect worldwide in the new year, it said.

The new policy will require advertisers to make clear if their political ads contain an image, video or audio that looks real but was digitally created or altered to make it look like someone is saying something they didn’t say, to show a person or event that does not actually exist, or to pose as a depiction of a real event that is actually fake.

Advertisers will not need to disclose content that is digitally created or altered in ways that “are inconsequential or immaterial to the claim, assertion, or issue raised in the ad”, Meta said. It gave examples such as using technology to adjust the size of an image or sharpen it, but noted that even those edits could still be problematic if they change the claim in the ad.

But it also said that such fake videos, images and audio will still be allowed on the site. Instead of removing them, Meta will “add information on the ad when an advertiser discloses in the advertising flow that the content is digitally created or altered”, it said, and that same information will appear in Meta’s Ad Library.

It said that it would give further information about that process later. It did not say how advertisers will flag such ads, what will be shown to users when they are flagged, or how advertisers who fail to flag them will be punished.

Meta did say that it would remove any ads that violate its policies, whether they are created by artificial intelligence or by real people. If its fact checkers decide that a piece of content has been “altered”, it will stop that content from being run as an ad, the company said.

“In the New Year, advertisers who run ads about social issues, elections & politics with Meta will have to disclose if image or sound has been created or altered digitally, including with AI, to show real people doing or saying things they haven’t done or said,” said Nick Clegg, Meta’s president for global affairs, in a series of tweets announcing the new policy.

“This builds on Meta’s industry leading transparency measures for political ads. These advertisers are required to complete an authorisation process and include a ‘Paid for by’ disclaimer on their ads, which are then stored in our public Ad Library for 7 years.”