The White House called the circulation of explicit images generated by artificial intelligence (AI) of pop superstar Taylor Swift “alarming” as critics cite the incident as the latest example of the growing risks of deepfakes.
“We are alarmed by the reports of the circulation of the … false images,” press secretary Karine Jean-Pierre told reporters Friday.
“While social media companies make their own independent decisions about content management, we believe they have an important role to play in enforcing their own rules to prevent the spread of misinformation and non-consensual, intimate imagery of real people,” Jean-Pierre continued.
She added, “Sadly, though, too often we know that lax enforcement disproportionately impacts women and they also impact girls, sadly, who are the overwhelming targets of online harassment and also abuse.”
Fake sexually explicit images of Swift circulated across the internet this week, leading to backlash among her fans and renewing calls from federal lawmakers for social media companies to enforce their rules.
It has also spurred fresh conversation about the potential risks associated with artificial intelligence and AI-generated content, often called deepfakes.
President Biden in October signed a sweeping executive order on artificial intelligence focused on harnessing the emerging technology while managing its risks.
The order included several new actions addressing areas such as safety, privacy, worker protections, and innovation.