Assault in the Metaverse: is it the start of a dark new future?


It sounds like something straight out of a nightmare: a 16-year-old girl puts on a VR headset and logs onto one of her favourite games. Mere minutes later, she’s been sexually assaulted by several anonymous avatars, and left traumatised. Now, UK police are investigating what is thought to be their first-ever case of rape in the Metaverse, and are calling for new legislation to tackle a growing problem.

Though the whole scenario might sound like dystopian sci-fi, this is now a very real issue – and the case could in fact be the first of many, as more and more people move into the Metaverse.

Described by Forbes as “shared, 3D virtual spaces linked into a perceived virtual universe”, the Metaverse has come to mean anywhere that people can hang out online in a 3D space.

Think games like Fortnite and Minecraft, or even social media spaces like Instagram Live or Twitch. They’re frequented by hundreds of millions of users – and though Meta’s own app Horizon Worlds has only around 200,000 monthly users, the case spells trouble for this budding industry.

The alleged attack itself was brazen: it took place in an online ‘room’ populated with other users, all potential witnesses. That brazenness is a testament to how commonplace sexual assault seems to be in this online space – a problem that is leaving legal and moral systems struggling to keep up.

Nina Patel can relate. The author and journalist published a blog post on Medium in which she recounted being attacked “within 60 seconds of joining” Meta VR app Horizon Worlds.

“I was verbally and sexually harassed – three to four male avatars, with male voices, essentially, but virtually gang raped my avatar and took photos,” she wrote. “As I tried to get away they yelled – ‘don’t pretend you didn’t love it’ and ‘go rub yourself off to the photo.’”

Mark Zuckerberg in an early iteration of the Metaverse (Facebook)

She hasn’t been on the app since, she tells me now, and calls the entire experience a “nightmare”.

Though the internet has always had a problem with social spaces and improper conduct (just look at children’s online game Habbo Hotel, which let virtual avatars interact anonymously and which a 2012 Channel 4 investigation found was "full of pornographic sexual chat"), the Metaverse is a relatively new construct, picked up and championed by Facebook founder Mark Zuckerberg as the future of human interaction.

In 2019, he launched a VR world called Facebook Horizon, where social media users could work, play and socialise in an entirely digital space. In 2021, Facebook rebranded as Meta and announced a commitment to building a workable Metaverse – something the company has spent billions of dollars on making a reality as competitors hurry to catch up. Unfortunately, that also means Meta’s own Metaverse serves as a testing ground for issues that will likely spread with the Metaverse itself.

“We believe the metaverse will be the successor to the mobile internet, we’ll be able to feel present – like we’re right there with people no matter how far apart we actually are,” Zuckerberg said at the time.

But things quickly started going wrong. In December 2021, Meta launched Horizon Worlds, an open-access social platform where people could log on remotely in the guise of anonymous ‘avatars’ and interact with strangers online.

What wasn’t publicised at the time was that on November 26, a beta tester had already reported being groped by a stranger on the platform. Vivek Sharma, Horizon’s vice president, called the incident “absolutely unfortunate”, and an internal enquiry was conducted.

However, the same thing has happened multiple times since, seemingly with impunity: users making inappropriate or lewd gestures with their avatars, or interacting with other people’s avatars in unwelcome ways. Meta has repeatedly vowed to improve the way in which it and its users deal with this... and yet it keeps happening. Research conducted in 2021 by the Center for Countering Digital Hate found 100 potential violations of Meta’s VR policies in a timespan of just 11 hours and 30 minutes. In addition to sexual harassment, the report noted that racism, bullying, threats and “content mocking the 9/11 terror attacks” had been registered on the platform.

Currently, Meta has a series of safeguards in place, including voice and world filtering software (which limits what underage users see) and parental supervision controls. Most significantly, there is the “Safe Zone”, which users can activate if they’re feeling unsafe, and which prevents other avatars from interacting with them in any way. However, attacks often happen so quickly that the person doesn’t have time to switch it on.

Of course, it’s not ‘real’ – no physical injuries are sustained – but that doesn’t mean it feels any less real emotionally. In fact, realism is kind of the point.

“Since day one, [the Metaverse] has been designed to be as real as possible,” Patel says. “And we're at a point now with technology that it does feel very real, because since day one, it was designed to replace reality… every piece of the technology has been designed to authenticate human interaction and human communication.”

And yet, though the technology has evolved, the laws around it have not.

A virtual meeting via Meta Quest Pro (Meta)

“We're very quick to innovate,” Patel adds. “But as we knew and saw with the internet, we are very slow to regulate… so you know, we're here in 2024, finally dealing with some of the issues of social media.

“Whether we want to call it the metaverse or spatial computing, we are going to be engaging with each other and digital representations of each other. And we will need to set up laws and policies and regulation about how we choose to interact with each other in these spaces... putting our children at risk every time they engage with the digital world is no longer acceptable.

“It's fundamentally impinging on their human rights as children to be free to play, to learn, to grow, to explore, in environments that are free from harm,” she continues. “And we are not doing that; we never have done that with technology. And now is the time because the impact of getting this wrong is going to be significant.”

In 2022, campaign group Ekō published a report that highlighted how Meta was failing to address inappropriate behaviour on its platforms – specifically on Horizon Worlds, which in 2023 lowered its age rating from 18 to 13 (though, Meta has stressed, controls have been put in place that prevent underage users from seeing mature content).

According to the report, a researcher logged on and within an hour had been graphically sexually assaulted by a user in a private room “all while another user in the room watched and passed around a vodka bottle.”

The report also cited several other cases, including that of Chanelle Siggins, who was assaulted in Meta-owned app Population One, where a user “simulated groping and ejaculating onto her avatar.”

“It is already seriously lagging behind with content moderation on its metaverse platforms,” the report concluded. “With just 300,000 users, it is remarkable how quickly Horizon Worlds has become a breeding ground for harmful content.”

Meta declined to comment for this article, but following the attack on Patel, the company told Business Insider via email that “Horizon Venues should be safe, and we are committed to building it that way. We will continue to make improvements as we learn more about how people interact in these spaces, especially when it comes to helping people report things easily and reliably."

Could things be about to change? The case mentioned earlier, involving the 16-year-old girl – thought to be the first case of virtual sexual assault investigated by police – may be heard this year, but who knows where it all goes from here.

Rather than a breakthrough in law governing online spaces, it could instead create a knotty new problem for the UK legal system, and a very 21st-century ethical quandary, as the concept of the Metaverse expands beyond Meta itself and into new areas. The next generation of children is set to spend an estimated 10 years in VR over the course of their lifetimes, which adds up to three hours a day: how can we protect them?

More importantly, the question is this: does sexual assault still count as assault if it happened online?

“The general principle is that whatever is illegal in offline spaces is illegal in online spaces,” says Professor Emma Barrett, a specialist in Psychology, Security and Trust at the University of Manchester. “And that argument doesn't really hold up.”

In 2020, Barrett was part of a team commissioned by GCHQ to examine how child exploitation and abuse could evolve with the widespread adoption of immersive ‘eXtended Reality’ (XR) technologies. Four years later, she says, the law is only now playing catch-up when it comes to these kinds of cases.


“The interactions that people have within these kinds of Metaverses… everything is not automatically recorded,” she says.

“These are ephemeral interactions… can you be in court and have a witness, who was someone else in that virtual room with you? Then the defence barrister is going to have a field day saying, ‘Well, you know, that avatar was in there. But was that really you?’

“There are lots of these kinds of challenges, with how you actually gather evidence in virtual reality, that we really haven't thought through either. There’s no point having laws if you can't actually bring anyone to justice with them.”

This is obvious in the case of Siggins, who later told the New York Times that her attacker “shrugged as if to say: ‘I don’t know what to tell you. It’s the metaverse – I’ll do what I want.’”

As technology becomes increasingly sophisticated, so too do the dangers become more acute: Barrett cites the growth of haptic (sensory feedback) technologies as a new area of concern.

Haptic controllers vibrate according to on-screen actions that impact the player’s character (for instance, when a gun ‘fires’ in their hand), but new forms in development, like “full-body haptic technologies”, could register where a user is touched by another. Fascinating – but you don’t need much foresight to see the very obvious potential for serious harm.

It’s true that the Metaverse, into which Meta has ploughed billions of dollars, is used by a relatively small section of the population. But the concept is not just the purview of Mark Zuckerberg: many big tech companies are increasingly racing to create their own versions of what it might look like.

Apple has just released its brand-new Vision Pro headset (for a whopping $3,500), which is widely expected to herald the start of its own metaverse, while both Snapchat and Google are working on AR glasses that promise users a seamless way to interact with both the physical and digital world.

Meta runs Facebook as well as online app Horizon Worlds (Brett Jordan / Unsplash)

How will these companies manage to moderate their users’ behaviour – and who will actually do the moderating? “There is more effort, certainly, from the big players in thinking about, within social virtual reality spaces, how do you empower users to block other users or report them? That historically has been really difficult and cumbersome [for users] to do, and not very effective,” Barrett says.

“There's a lot of discussion about automated content moderation. To be honest, I think there's quite a lot of bullshit around that. Although automated moderation works when you've got verbal or textual data, it's really difficult when you've got behavioural data, to try and interpret a particular gesture as being harassing or unwanted. So yeah, there's a whole load of issues.”

With more and more cooks stirring the broth, will anything change? Vicky Wyatt, who worked on the Ekō report, is sceptical. “The thing about companies like Facebook, they're obviously very profit driven models,” she says. “The basis of their business model is to sell advertising... but there are ways in which you could construct the metaverse that build community and build connection. It is just that these really powerful big tech companies are able to dominate these markets.”

When Patel published her piece on Medium, the comments below the article were scathing. “don’t choose a female avatar,” one wrote; another added “don’t be stupid, it wasn’t real”. But it is real – and in a world that is increasingly lived online, these issues are becoming urgent.

“We are seamlessly moving to the virtual world. And through digital mediums, all of our infrastructures are connected to the metaverse through the Internet of Things,” Patel says: mixed realities, blockchain, cryptocurrencies – they’re all part of this.

“The technology is coming, whether we want to call it the metaverse or spatial computing, we are going to be engaging with each other and digital representations of each other,” she adds. “This is not science fiction anymore, that this is our future, [and] we need to fundamentally answer the question, ‘Is it real or is it not?’ That's what we need to come to terms with.”