A New Movement Wants to Silence Women In Public Life. AI Is Enabling It.

Women activists and journalists are being targeted in increasingly scary ways, forcing many to self-censor, as AI supercharges levels of online abuse.


Women in public life — notably journalists, media professionals, human rights defenders, and activists — are facing increasing levels of online abuse, which AI is facilitating and making more damaging.

According to a new report, more than one in 10 women in public life has experienced the non-consensual sharing of personal images, including intimate or sexual content, while nearly one in three has received unsolicited sexual advances through digital messaging.

Just as troubling, there are clear indications that this kind of harassment is designed to silence women in public life while undermining their professional credibility and personal reputation. And the tactics are working: More than 40% report self-censoring to avoid such abuse. 

The report, “Tipping point: Online violence impacts, manifestations and redress in the AI age,” was commissioned by UN Women as part of a broader series examining how online violence is constraining women’s participation in public life in the AI era.

The report was produced in partnership with TheNerve’s Information Integrity Initiative, a digital forensics lab established by the Nobel Laureate Maria Ressa, and City St. George’s, University of London. It was based on answers to a 2025 survey of 641 women-identifying respondents from 119 countries.

We asked two of the study’s authors, Julie Posetti and Nabeelah Shabbir, to tell us more. Their responses have been condensed and lightly edited for clarity.

The report is entitled “Tipping Point.” In what sense are we at a tipping point, and what are the implications?

Posetti: We have moved from raising the alarm about the chilling effect of online violence on women journalists to a point where this threat has spilled into offline harm. It has also imperiled women in public life more broadly — from politicians and activists to human rights defenders and climate scientists. 

Generative AI is supercharging these threats, with the latest manifestations of online violence including ‘nudification’ at scale, and deepfake ‘virtual rape.’ Perpetrators have never been able to humiliate, harass, and endanger their targets more easily, cheaply or quickly.

All of this is happening against a backdrop of gender rights being rolled back and democratic erosion. Misogyny is being weaponized by political leaders from the West to the Global South, as a feature of the authoritarian playbook. 

So, we’re at the tipping point, and we risk sliding irreversibly into a world in which women’s subjugation and erasure is [normalized]. Online violence creates the enabling environment for this rights rollback.

Are there countries where women are being particularly affected, and if so, does that track with media freedom more broadly?

Shabbir: Our research indicates there is a global [strategy] behind technology-facilitated gender-based violence. Women-identifying respondents in public life, whether in Uganda, the U.S., Costa Rica or India, are targeted online on a similar basis. Looking at emblematic cases of [online violence against] women journalists in Mexico, Lebanon, Qatar or South Africa shows us, too, what respondents broadly report: They have generally experienced what their peers in other countries have experienced.

What surprised you about the findings?

Shabbir: 12% of [women in public life] — in law, journalism, activism and human rights defense — reported experiencing the nonconsensual sharing of personal images, including sexually explicit or intimate image-based material.

There is a general consensus that [online violence against women in public life] is more prevalent and more visible than it has ever been. It’s also why we highlight the problem of self-censorship: In the context of online violence incidents, 45% of women journalists and media workers say they self-censor today versus 30% in 2020 — a 50% increase.

What can and should be done?

Posetti: One area for focus is law enforcement. While twice as many women journalists reported online violence incidents to the police in 2025 (22%) as in 2020, 27% of respondents who reported to law enforcement agencies said they faced reluctance or refusal from police to investigate their cases. Additionally, 24% reported experiencing treatment from law enforcement that they perceived as victim blaming, including being subjected to pointed questions such as, “What did you do or say to trigger the abuse?”

Linked to this is the fact that law enforcement tends to hold survivors responsible for protecting themselves. The same number of respondents (24%) said they were made to feel responsible for shielding themselves against further victimization by removing themselves from social media; avoiding speaking publicly about controversial issues; moving into less visible roles at work; or taking leave from their respective careers. [All of this] further entrenches efforts to silence women in public life and renders women less visible.

We need more effective education and training of law enforcement and judicial actors to help [take] action in these kinds of cases. We need the political will to effectively regulate Big Tech companies that propagate this violence. 

Claire Cozens is a London-based writer and editor, and a recovering foreign correspondent. She's reported from more than a dozen countries and lived in New Delhi, Beijing, Kathmandu and Paris.