Experts worry X's new policy on AI-generated adult content will lead to more deepfake porn

X owner Elon Musk with the former and current X logos. (Jaap Arriens/NurPhoto)

In a video posted by a Megan Thee Stallion fan on June 9, the rapper is seen trying to hold back tears and collect herself while onstage in Tampa, Fla. Earlier that day, the 29-year-old had addressed the “hurt” she felt after a sexually explicit, AI-generated video that allegedly showed her likeness circulated on X.

Copies of the deepfake — a term for videos, images or audio fabricated with AI — appear to have been taken down. Neither X nor owner Elon Musk has made a public statement regarding the video.

Megan Thee Stallion’s deepfake comes on the heels of X’s policy change that now allows “adult nudity or sexual behavior” content on the platform as long as it is “consensually produced and distributed.” The policy update, announced at the end of May, specifically said the rules apply to adult “AI-generated photographic or animated content.”

However, AI experts who spoke to Yahoo News argue that X's language is too vague to effectively police what is or isn’t allowed on the platform. They question how X will be able to determine whether the content was “consensually produced,” especially if the source material for the AI-generated explicit content is a photo or video someone put up of themselves or, in the case of Megan Thee Stallion, is a public figure.

“We’re seeing people’s innocent photos are being manipulated into AI-generated nude photos and they’re being blackmailed for that,” Yaron Litwin, digital safety expert and chief marketing officer at Canopy, a technology company that works to protect children from inappropriate content online, told Yahoo News.

Even though the nude images aren’t real, Litwin argues that they still leave "this horrible mark on the victims.”

X has been in the news over the last few months for several instances where celebrities were the victims of pornographic deepfakes. AI-generated suggestive images of Taylor Swift went viral in January, garnering millions of views before the platform intervened. That same month, 17-year-old Marvel star Xochitl Gomez begged for sexual deepfakes with her face to be taken off X. Countless others have been victims of nonconsensual explicit deepfakes — many of whom aren’t famous and don’t have the financial or legal resources to stop their spread.

Now that X has said it will allow some explicit AI content, the question of whether the platform is equipped to prevent the spread of nonconsensual deepfakes has become even more pressing. The platform has 550 million total monthly active users and a content moderation team of fewer than 2,000 employees, according to an April analysis report by Social Media Today. It’s the smallest content moderation team of any social media company; each moderator has to keep track of almost 300,000 users.

“Every piece of content that is removed or flagged, there are many, many others which fly under the radar,” Henry Ajder, an AI expert and tech adviser to companies like Adobe and Meta, told Yahoo News. “Twitter doesn’t have the best track record of moderating effectively against the violative kind of content that this new update still prohibits.”

X is not the only social platform grappling with how to promptly label and flag AI-generated content, but the platform formerly known as Twitter does appear to be the only one whose policy now seems to encourage the creation of sexually explicit deepfakes.

“It’s really important to be very, very explicit when saying that digital sexual abuse is sexual abuse,” Ajder said. “People who are creating deepfake, nonconsensual pornography should be spoken about in the same breath as people who are physically, sexually abusing or harassing people.”

X has said the policy update was based on the company’s belief that “sexual expression, whether visual or written, can be a legitimate form of artistic expression.” Some have argued the update is at least a step in the right direction that could help adult content creators. They note that other social media platforms with stricter rules, like Instagram, have been accused of censoring sex workers’ voices and making it impossible for them to make money, whether by shadow banning their profiles or by shutting down their accounts entirely.

Litwin suggests that the new policy may be motivated by the potential business growth that comes with allowing this type of content. X posts do not require age verification unless they are specifically flagged, in which case users who are under 18 or who don’t have their birthday on their profile won’t be able to see them. (X users can also adjust their content settings to avoid seeing any adult content.)

“I think what AI is going to create is just an abundant amount of porn — it’s very cheap and easy and quick and you can satisfy any type of porn that someone would be interested in in very little time,” Litwin said. “There’s a lot of money in porn and I think [X is] trying to tap into that.”

Some sex workers on the platform have argued that the policy update is a “PR stunt” and that porn has been a part of X for years. Before Musk took over X, leaked internal documents from October 2022 showed that X, then known as Twitter, was losing active users but seeing growing interest in “not safe for work” content. Musk has also toyed with the idea of revamping X’s communities feature, with a focus on adult content, as well as making the “likes” tab on user profiles private.

“There are ethical questions and there are meaningful societal questions about whether AI-generated pornography in high volumes on one of the biggest social media platforms in the world — is that something we should be treating neutrally?” Ajder asked.