It has been observed that whenever new technology is introduced, bad actors are often the first to exploit it. Or, more accurately, they are the first to use the new technology to exploit others.
In late December, X (formerly Twitter) introduced a new image and video edit tool for Grok, the platform’s proprietary AI interface. Predictably, it did not take long for bad actors to use the tool to sexualize images of women and children.
Almost immediately, requests for Grok to digitally alter photos to make the subjects appear nearly nude surged on the social media site, in what Reuters has termed a “mass digital undressing spree.” Reuters reports, “A review of public requests sent to Grok over a single 10-minute-long period at midday U.S. Eastern Time on Friday tallied 102 attempts by X users to use Grok to digitally edit photographs of people so that they would appear to be wearing bikinis. The majority of those targeted were young women.” Reuters also identified several cases where Grok created sexualized images of children.
These non-consensual images take on a life of their own as Grok fills the comments under the innocuous original photo with realistic-looking, quasi-pornographic images depicting your face, or your child’s face, on a near-naked body. Even if you delete your original post, the AI-generated images won’t go away, and those images can now be seen by parents, teachers, employers, classmates, or co-workers.
This was both foreseeable and preventable.
In August, a coalition of consumer protection, privacy, and online safety organizations sent a letter to state Attorneys General and the Federal Trade Commission warning that Grok was poised to “unleash a torrent of obviously nonconsensual deepfakes.”
Tech-industry defenders are, of course, dismissive of concerns about this appalling misuse of AI. Everyone knows these are AI-generated images, so why does it matter?
Those who have been victimized by deep-fake pornography and online abuse feel very differently about it.
One young woman in the UK told the BBC she felt “dehumanized and reduced into a sexual stereotype” after Grok was used to digitally remove her clothing. “While it wasn’t me that was in states of undress, it looked like me and it felt like me and it felt as violating as if someone had actually posted a nude or a bikini picture of me,” she said.
It doesn’t take much imagination to see where this could lead.
In recent months there has been considerable maneuvering by tech lobbyists to shut down attempts to regulate AI. They typically cite the potential “chilling effect” regulation could have on free speech.
Imagine you are a young, politically engaged woman who uses X to advocate on issues of conscience or to express your own deeply held convictions on issues ranging from the right to life to protecting girls’ sports. Your opponents can, with impunity, use Grok to flood your page with realistic-looking, AI-generated, quasi-pornographic images: images designed to shame or humiliate you, images that threaten to damage your reputation, relationships, livelihood, or future employment prospects. Images designed to intimidate and bully you into posting less frequently, reducing your audience reach, or perhaps even deleting your account entirely.
Surely this also has a chilling effect on speech?
Theodore Roosevelt said, “Men can never escape being governed. Either they must govern themselves or they must submit to being governed by others.”
More than those in any other industry, tech companies must be keenly aware of the likelihood of their products being used in the creation and distribution of pornographic content. This has been the pattern with every technological advancement since the printing press.
If AI companies want to avoid being governed by others, they must show they are capable of governing themselves—by building and enforcing common sense protections that prevent foreseeable abuse.
If they refuse to do so, they should not be surprised when lawmakers, regulators, and the public conclude that someone else must.