‘Because it Feels So Urgent’
We are on the edge of an AI revolution that is going to transform our society. What are we doing about it?

Laura Bates has been calling out misogyny since at least 2012, when she launched the Everyday Sexism Project. Back then, in the comparatively innocent days of the internet, she invited women to share their experiences of being catcalled in the street, groped, or ignored simply because they are women, along with other indignities.
How the digital world has changed.
In her latest book, “The New Age of Sexism” (out in the U.K., coming soon in the U.S.), Bates turns her lens onto the world of AI and emerging technologies.
What she found was bleak: Everywhere she looked, misogyny and racism were being hardcoded into our digital future, and, she says, not nearly enough is being done to stop it. Bates is clear-eyed about who is to blame. (Short answer: billionaires.) She is furious, and thinks we should all be furious, too. But she also has ideas about what governments, tech companies, and parents can do to put a stop to AI’s offensive hardwiring.
Bates discussed the dark places her research took her, and also where she finds hope for change.
Your new book, “The New Age of Sexism,” looks at the ways in which misogyny is being embedded in tech and the ways in which technology reveals where misogyny exists and how it operates. Why this book now?
Because it feels so urgent. We are at a precipice. We are on the edge of an AI revolution that is going to transform our society in ways we can’t quite yet grasp.
This is a unique opportunity to act, to shape what that new world will look like, who it will affect, and how. If we don’t seize this moment for regulation before these [technologies] are rolled out, then we will then be in a mess where we are, for decades, trying to gradually unpick the foundations. If the foundations of that future world are systemically misogynistic and racist, it will be really difficult to row back from that later.
It’s not an easy book to read. Was there anything that surprised you as you were reporting the book?
In the U.S., AI tools have been shown to downgrade the number of Black patients identified for needing extra care by more than half. That particular tool affects 200 million patients. That was shocking.
A significant proportion of big companies are using [these technologies] in their recruitment processes, even though those tools have actively been shown to discriminate against women and marginalized groups.
There were also elements of the ways in which technology is being used to enable sexual abuse, stalking in particular, that are really scary as well.

You open the book with a chapter about deepfakes. Women are being victimized en masse by deepfake pornography and nobody’s talking about it as an urgent issue.
Yes, there’s a sense that the risk from deepfakes comes from the threat to male politicians and the potential for democratic erosion, which is, of course, an important future threat to think about.
But it’s so frustrating when you see, for example, the Europol report on how we should police and tackle deepfakes: it’s 22 pages long, mentions women once, and devotes just two small paragraphs to sexual deepfakes, when the reality is that 98% of all deepfakes are pornographic in nature and 99% of those feature women, including women in politics.
Thirty female politicians in the U.K. had hundreds of these images and videos made of them and put on a website where they were viewed 12.6 million times. There’s a threat to democracy that’s happening now.
What are big tech companies doing to protect women and minorities?
There has always been a dramatic gap between the things that these big tech companies say and the things that they do when it comes to safety.
There was a study recently that found that if you start a new TikTok account in the name of a teenage boy, it takes less than 30 minutes before the first piece of extreme misogynistic content will be promoted into your feed. You don’t have to go looking for it. It will come to you. And yet, I’m quite sure that TikTok would say, there is no place for misogyny on our platform.

Is that the responsibility of government or does it have to come from within these tech companies themselves?
It’s never going to come from within tech companies themselves. We have seen repeated cases where tech companies have been aware of a problem, and aware of a potential fix, and have chosen not to act, essentially because it would negatively impact engagement. Engagement is the holy grail of every tech company, because engagement is what leads to advertising revenue and profit.
The only way that this is going to change is if we have regulation that comes from governments, who are ideally using a global framework for regulation.
That doesn’t mean stopping innovation or progress. It just means bringing tech in line with, frankly, every other sector where the idea of common sense regulation at the point of release of products to the public is considered completely normal.
The only reason that we balk at this is because the tech sector has always played by different rules.
Are there hints of any further legislative discussion around regulation in the U.K.?
There are meaningful conversations happening around deepfake abuse specifically. But it’s very disappointing that earlier this year at the AI [Action] Summit in Paris, when around 60 countries signed an agreement saying that AI should be safe and ethical, the U.S. government said that it would refuse to sign it, [and] the U.K. government swiftly followed suit.
There was a period when the U.K. government indicated that it would be prepared to consider watering down some of the terms of the Online Safety Act, in order to secure a favorable trade deal with the U.S.
Trump is currently pushing for a kind of 10-year moratorium on all state regulation of AI in the U.S. It’s a worrying trend.
You work with parents, with teachers, with children. Can you talk a bit about the conversations you have in schools with young people?
When I’m working in schools with teachers and parents, one of the challenges is that we are currently experiencing a unique moment in history, where a generation of non-digital natives are parenting and educating a generation of digital natives.
When we talk about online porn, for example, we are not talking about an online version of what might have been the FHM [magazine] or Nuts or Zoo centerfolds of those adults’ teenage lives. We are talking about an online landscape in which pornography is frequently depicting sex as something violent and abusive that is done by men to women, whether they like it or not. When I work in schools, it’s not unusual to hear teenagers say things like “rape is a compliment really,” “it’s not rape if she enjoys it,” “it’s not rape if he’s your boyfriend.”
The backlash against the idea that we should talk to young people about these issues is ludicrous, in light of the Children’s Commissioner’s findings that the average age at which young people first see online porn is 13. We know that young people are bombarded by extreme content online every day, many of them before the age of 12.
Are we going to stick our heads in the sand and refuse to deal with that? Or are we going to give them such a robust, wider understanding, in an age-appropriate way, of what healthy relationships and boundaries look like, so that we take some of the power away from misinformation that they will inevitably come across online?

Do you feel like things have changed for the better in any way since you started your work?
There are, of course, glimmers of hope and there have been positive changes. There’s always been a sense in which progress is followed by backlash and [it’s] two steps forward and one step back. But never before in history has that backlash been facilitated by immensely powerful algorithms. We’re seeing opinion polling suggesting that the most outdated, regressive, conservative attitudes towards women and their bodies are most likely to be held by the youngest cohort of men surveyed.
You say towards the end of this book that writing it made you angry. But what gives you hope about the future?
Well, I think optimism and hope are linked to the anger. Women’s anger has been stigmatized for centuries. But we are on the brink of something akin to the Industrial Revolution. We should all be absolutely furious that it is being hijacked by this incredibly small number of unelected billionaires, using it as their personal playground to enrich themselves further, at enormous cost to the planet [and] to the most vulnerable communities.
If enough of us get angry enough, I feel hopeful that there is still power in democracy, in collective protest, in calling on our elected representatives to recognize that they have the power to act and that we want them to regulate now before it’s too late.
And seeing the efforts of incredible frontline feminist organizations, academics, campaigners, young women who are fighting so hard to give voice to that anger in a very righteous way, that gives me hope.
This is an excerpt from an interview on the Prospect Podcast. It has been edited for length and clarity. Listen to the full episode here or watch the episode on YouTube. Reprinted with kind permission of Prospect.