‘Don’t Pretend You Don’t Love It’—The Dark, Misogynistic Underside of AI
From biased chatbots to sexist and aggressive avatars, AI already comprises a whole set of tools capable of harassing and harming women.
I don’t know about you, but almost every conversation I have with friends and family these days seems to end up at AI. In most cases the talk is related to jobs and how AI might render certain roles obsolete. (My job, as a journalist, and other jobs predominantly held by women, including administrative positions, very much included.)
No question all that is worrying, but I can’t help but wonder if we’re missing the bigger issue here. What if the tech frontier we’re speeding toward—from large language models that have been shown to be inherently biased to virtual reality, augmented reality and beyond—is actually ushering in a new level of sexism and even violence against women?
AI-generated misogyny
You’ve probably already heard about the many ways in which AI can propagate racism and gender stereotypes. A UNESCO study published last year found “unequivocal evidence of bias against women in content generated by Large Language Models.” When asked to generate stories, Meta’s large language model, Llama 2, was four times more likely to describe women as working in domestic roles than men.
UNESCO director general Audrey Azoulay called on governments to enforce clear regulatory frameworks requiring private companies to monitor and evaluate systemic biases in AI. But the trouble goes beyond bias: At the end of September, it emerged that a YouTube channel featuring AI-generated videos of women being shot in the head, called Woman Shot AI, had garnered 175,000 views since launching in June of this year. It was taken down only after 404 Media asked YouTube to comment on its existence.
“This is a clear example of what happens when we do not sufficiently regulate AI,” says Clare McGlynn, a law professor at Durham University. “There is an evident lack of safety by design. Despite all the rhetoric from tech companies, they simply do not care enough to take steps to ensure this sort of video cannot be generated.”
This is one extreme example of how AI can be misused. But it certainly isn’t the only one. From across the spectrum of AI services come stories of the technology being harnessed to play out (often misogynistic) fantasies. One recurring theme: Men enacting their sexual desires on “subservient” women.
“I recently had a conversation with someone who's developing a chatbot to talk about anxiety, and for his pilot testing he used a female generated voice meant to sound comforting and warm,” says Dr. Nejra van Zalk, an associate professor in design psychology at Imperial College London. “It turned out that the men who signed up just wanted to hear the voice talk dirty to them.”
Dr. van Zalk has expressed concerns about the speed at which these tools have entered the mainstream: “The technology has been released very quickly, without consultation and we don’t know how to ensure it’s not misused. I think it’s going to be a nightmare.”
Some might say that it’s already a nightmare.
Is sexism written into AI’s DNA?
Let’s consider for a moment how AI has been trained. The big tech companies have been feeding their large language models pre-existing, often historical, content from across the internet. “They’re trained on large libraries of human writing which mostly reflect our past; there’s a much smaller proportion taken from times and cultures where women are considered equals,” says Bríd-Áine Parnell, a Ph.D. candidate at the University of Edinburgh whose focus is on designing responsible natural language processing. “In systems like ChatGPT and Google Gemini, which are essentially mirrors, it’s easy to see how this data could create a world representation in which men dominate.”
Perhaps even more concerning: The content AI is trained on is not just biased but also skewed toward extremes—gleaning information from the conversational cesspool that is the internet, including social media sites and message boards like Reddit.
Add to that online news, which is often about war and violent crime, and you can see that AI models are learning “a distorted version of the real world, where relatively violent events and extreme views are seemingly more common than they really are,” says Parnell.
Could AI encourage misogynistic behavior IRL?
Given its authoritative tone and how effortless it is to use, it’s easy to see how AI might have an outsized influence on people’s actions and decisions in the real world. Recently, Mark Zuckerberg spoke about users turning to Meta’s AI as a sounding board on everything from relationship advice to work issues. Meanwhile, a recent survey by the U.S. nonprofit Sentio found that 49% of respondents who reported mental health challenges and used AI were turning to large language model chatbots (like ChatGPT, Claude, or Gemini) for therapeutic support, making AI chatbots the most widely used “therapists” in the U.S.
While this might help to clear the backlog of people waiting to access mental health support, we already know that there is no way to ensure that AI “therapists” deliver the right messages to vulnerable people. Case in point: People have taken their own lives after acting on advice from chatbots; one chatbot even allegedly encouraged a Texas teen to murder his parents because they were imposing limits on his screen time. It’s not unreasonable to wonder, then, what might happen if an AI trained on historical data with an anti-woman bent ends up encouraging someone to harm a woman they’re seeking advice about.
“I think AI might actually suggest courses of action that are totally inappropriate,” warned Professor Dame Til Wykes, head of mental health and psychological sciences at King’s College London, in reference to a chatbot on an eating disorder support site that was suspended in 2023 after giving dangerous advice.

An ugly virtual reality
And then there’s the metaverse—the virtual world underpinning services like Roblox, the game-creation platform that attracts scores of children. These immersive environments have also drawn unwanted sexual behavior toward women.
Psychotherapist Nina Jane Patel discovered this herself when, as part of her Ph.D. research into the psychological and physiological responses of children and young people using virtual reality, she decided to try Mark Zuckerberg’s metaverse platform Horizon Worlds.
“I walked into the ‘lobby’ and within a minute, I had four male avatars making very explicit sexual comments, and then gestures, and then taking photos,” she tells me. Patel asked them to stop. They responded by saying “Don’t pretend you don’t love it!” and “Go rub yourself off ….”
“It was shocking and alarming,” she says. “I responded as though it was happening to me in the physical room. My heart rate increased and I started feeling very uncomfortable, so much so that I suddenly forgot how to use my controllers.”
Why not take the headset off, you may ask? VR isn’t real, after all. But VR headsets are designed to make you feel as if you are, physically, in a different world, courtesy of haptic feedback—buzzing or vibrations that produce real physical sensations. Indeed, research from UCL has found that the more vivid our imagination, the more likely our brains are to treat our thoughts as real. Apply this to VR and you can see how experiences in the virtual universe could become extremely distressing.
“If we continue down the route of allowing people to engage in behaviors virtually that are illegal in the physical world, we’re going to be causing a lot of damage and trauma,” warns Patel.

New language for a new world
The kind of behavior experienced by Patel now has a name: “meta rape,” a term coined by McGlynn, the Durham law professor, in an April 2025 article in the “Oxford Journal of Legal Studies.”
She proposed the new terminology to refer to sexual violence in the metaverse (i.e., the virtual universe, not Meta, the company) because, as McGlynn put it, “Discussions of virtual rape make it sound otherworldly—as if it’s not about real life.” McGlynn wants to break down the distinction between the digital and real worlds because both “impact people in the here and now.”
In early 2024, for instance, in the first case of its kind, British police investigated the “virtual rape” of a 16-year-old in the metaverse and found that the teenager had suffered the “same psychological and emotional trauma” as someone who had been physically assaulted.
In response to such “unwanted interactions,” Meta introduced a personal-boundary feature in 2022 that, when activated, stops strangers from coming within four feet of users. “[That] kind of behavior has no place on our platform,” a spokesman for Meta told The Daily Mail. But measures like these may not go far enough. “These are a first line of defense,” says Patel. “But often, you’ll see that people turn them off so that they can engage with high fives and other interactive features. This sets us up for another victim-blaming scenario, as in, ‘You should have had the bubble/safe zone on.’ But it shouldn’t be the responsibility of the potential victim to prevent abuse.”
While there are limited figures on the extent of these kinds of behaviors, plenty of anecdotal evidence suggests they are not rare. Stories of lewd, aggressive comments aimed at women, non-consensual touching, taunting by large groups of men, and image-based sexual abuse are rife. “It’s difficult to know how many incidents there have been; we just don’t have that information,” says McGlynn. “What we do know is that children can wander into the metaverse and there’s almost no way to control who they are engaging with and what that engagement looks like, which makes it a high-risk activity.”

Feminist AI fights back
Eva Blum-Dumontet is head of movement building and policy at Chayn, a nonprofit that creates online resources and services for survivors of abuse. She is currently leading the launch of Survivor AI—a free, anonymous tool that produces formal takedown letters to websites hosting non-consensual intimate images or videos of survivors.
"Generative AI has led to an explosion in online harms for women and girls,” she explains. “But AI doesn’t have to be all doom and gloom for women and gender minorities. Feminist AI is also possible and can be a tool to challenge the very issues that AI creates.”
A number of organizations are investing in the idea of feminist AI as a buffer zone and a way to push back against the status quo, including the Feminist Generative AI Lab, a joint research center between Delft University of Technology and Erasmus University Rotterdam. Its aim: to apply feminist AI principles in generative AI design and development.
But what exactly is feminist AI? Think of it as an intentional approach to designing, building and deploying AI that is “rooted in feminist principles, such as equity, inclusivity, transparency, and social justice,” explains Blum-Dumontet. Then there are specific products and services with a feminist bent, including GRIT (Gender Rights in Tech), a South African company that has created a trauma-informed AI chatbot specifically for survivors of gender-based violence, a problem rampant in that country.
But there’s still a long way to go. Getting politicians and policymakers to focus on the safety of women in this digital frontier is difficult, says McGlynn: “I worry whether the next pandemic will be the metaverse pandemic, and we’ll all wake up one morning thinking, ‘Why didn’t we regulate this space!?’”


