Deepfakes impact all of us – not just Taylor Swift
Chances are you’ve read about the latest in online misogyny, but let’s recap. Last week, X was flooded with non-consensual deepfake porn of Taylor Swift, which trended for hours before it was taken down. One set of explicit images racked up over 45 million views before being removed from the former Twitter.
It’s not the first time deepfakes have hit headlines. We’ve seen celebrities from Tom Cruise (a joke video showed ‘him’ tripping over a rug) to Emma Watson (doctored audio made it sound as though she supported Nazi ideology) targeted by this AI-generated, increasingly accessible technology. A video of Ukraine’s President, Volodymyr Zelensky, appearing to surrender to Russia did the rounds in 2022, too. That same year, I investigated how easily deepfake technology can be manipulated and weaponised against women – exposing how men are paying for their colleagues, relatives and even girlfriends to be superimposed into porn. Somehow, it’s taken until now for people to listen up.
Following the Swift storm on X, Joe Biden’s press secretary described the trend as “alarming”, and US lawmakers are at last discussing the possibility of a DEFIANCE (Disrupt Explicit Forged Images and Non-Consensual Edits) Act, which would punish non-consensual deepfakers. Here in the UK, some laws are already in place: in late 2023, after pressure campaigns from the likes of Not Your Porn and Georgia Harrison, the government included deepfakes in its Online Safety Act. Sharing non-consensual deepfake porn is now illegal, and perpetrators can be punished with up to two years in jail.
There’s still a gaping loophole that needs to be closed, though: it’s legal to create this sort of content in the first place, leaving space for excuses and defences such as ‘it was taken from my phone and distributed without my knowledge’. As for the laws around sharing non-consensual deepfakes that aren’t intimate, but which may still damage a person’s reputation or cause distress, the rules remain murky.
But why did it have to take a digital assault of this scale on a major celebrity for the world to sit up, take this cybercrime seriously and have this much-needed global discussion? After being targeted, deepfake survivors often report trauma, humiliation and broken trust akin to what follows an in-person assault.
The sad attack on Swift shouldn’t have come as a surprise. Experts have long warned that deepfake abuse was never an ‘if’ but a ‘when’. The BBC documentary Deepfake Porn: Could You Be Next?, which I consulted on, warned that deepfake porn sites now see millions of visitors per month, and a 2023 report found the spread of deepfakes has increased by 550%.
It’s sadly not a shock that it hasn’t been taken seriously until now. One often-cited study, carried out in 2018 by Sensity AI, a fraud detection company, found that over 90% of deepfake content online is non-consensual porn featuring women. As we know, women are typically the primary targets of this kind of online harassment, yet they are still barely protected by safety legislation. In the last twelve months, searches for the term 'deepfake porn' have risen by 22% too (as per Cosmopolitan UK data sourced via SEMrush).
Alongside actors and TikTok stars, female gamers and Twitch streamers are increasingly being deepfaked without their consent. One, Pokimane (real name Imane Anys), recently left the streaming platform after a decade, partly because she was a victim of deepfake porn and partly because of the other harmful ideology allowed to run rife in that space. She later explained on her podcast that her decision was down to "the rise of so much manosphere, red pill bullshit… I feel like that stuff has flourished within the male-dominated live-streaming sphere”.
Positive steps (on the surface, at least) are being taken to counteract this: EU laws are set to change, with AI creators soon needing to watermark their creations to make them more easily traceable, and in China you already have to obtain consent before deepfaking someone. But having a law and seeing it properly implemented are two very different things.
Platforms like X, Instagram, Facebook, Reddit, Telegram and 4chan (where the Swift content is believed to have originated) must also join the fight and invest more in moderating content – and swiftly removing it where needed. If we have the technology to superimpose women into sexual scenarios without their consent, to put a man on the moon and to have the likes of ChatGPT write entire scripts, how can we not also have the smarts to put barriers in place to protect people?
It raises the question of whether the problem for Big Tech is a lack of desire rather than a lack of resources or ability. Explicit photos of Swift gained hundreds of thousands of likes and shares before X stepped in, suspended the accounts behind them and placed a temporary block on searches for ‘Taylor Swift’. The incident saw activity on X increase, and that’s surely something those in charge of the platform won’t want to lose out on. When contacted by Cosmopolitan UK for comment, X’s press line responded with “Busy now, please check back later” (which I suppose is an upgrade on the poo emoji it used to auto-reply with).
What we need alongside law changes and better reaction times from platforms is a shift in our culture: is it really any wonder that digital sexual assaults on women are on the up in an era where the manosphere is thriving, women and girls are reporting rapes at an alarming rate and our trust in the authorities is at an all-time low? Deepfakes are the tip of a very shitty iceberg; let’s not let this conversation melt, but keep pushing it forward until real change is delivered.
Follow Jennifer on X and Instagram