Apple and Google App Stores Driving Deepfake Nudes Crisis in Schools

When smartphones entered the hands of teenagers, they also introduced a new and devastating phenomenon: sexting. By the late 2000s, news reports were warning parents that intimate images of high school students were spreading rapidly through schools, often without the consent of the person in the photo, and that teens who stored or shared these images could be charged with possession and distribution of child pornography.

In some instances, a girl would send an intimate photo of herself to a boy, who would share it with his friends, who would share it with theirs, and so on. In other instances, the images were shared as an act of revenge. Often, the hurt and humiliation caused by the dissemination of these images would force the girl to change schools. Too often, it would lead to suicide.

We now face an even greater crisis because of “nudify” apps that are readily available in the Apple and Google app stores. Classmates, often teenage boys, are using these apps to transform ordinary photos pulled from social media into explicit images, which then spread rapidly through group chats and school networks.

What makes these incidents fundamentally different from earlier forms of sexting or image sharing is that the subject’s participation is no longer required at all. The images are fabricated entirely without consent, often without the victim’s knowledge until after they have circulated.

A recent case in Beverly Hills, California, shows how real the risk is and how young the victims can be. In February 2024, administrators at Beverly Vista Middle School discovered that students had used readily available AI tools to create fake nude images of their classmates by grafting real students’ faces onto AI‑generated nude bodies. The images were then shared among peers. Sixteen eighth‑grade students were identified as victims; eighth graders are typically just 13 years old. None of the children involved had taken or shared intimate photos of themselves.

This is not an isolated episode. A recent investigation by WIRED, conducted with the research firm Indicator, documented AI‑generated nude images of students affecting nearly 90 schools and 600 students across at least 28 countries since 2023, though the reporters acknowledge that the true scale of deepfake sexual abuse taking place in schools is likely much higher.

One survey by the United Nations children’s agency UNICEF estimates that 1.2 million children had sexual deepfakes created of them last year. One in five young people in Spain told Save the Children researchers that deepfake nudes had been created of them. The child protection group Thorn found that one in eight teens knows someone who has been targeted, and in 2024, 15 percent of students surveyed by the Center for Democracy and Technology said they knew about AI-generated deepfakes linked to their school.

This crisis is being fueled by technology companies that claim to prohibit such tools. According to a January investigation by the Tech Transparency Project, Apple and Google not only host dozens of “nudify” and undressing apps; their app‑store search and advertising systems also frequently steer users toward them. Searches for terms like “nudify,” “undress,” and “deepnude” in the app stores produced multiple apps capable of digitally removing clothing from photos. This is happening despite platform policies that ban non‑consensual sexual imagery and prohibit sexual exploitation of minors.

The “nudify” apps identified by the TTP investigation have been downloaded 483 million times and have generated more than $122 million in lifetime revenue, according to data from app analytics firm AppMagic.

The Tech Transparency Project investigation also shows that app‑store safeguards fail at the point of entry, where prevention would be most effective, allowing tools capable of generating child sexual abuse material to remain easily accessible. One of the most alarming findings of the investigation is that 31 of the apps included in the analysis were rated suitable for minors.

The consequences of this failure are not abstract. In the case of the Beverly Hills middle school, parents and students described a chilling aftermath: ordinary photos suddenly felt dangerous, and simply attending school felt unsafe. The case underscores a stark reality of AI‑enabled abuse: sexual exploitation no longer depends on a child’s choices, behavior, or online activity. Visibility alone is enough.

The result is a system in which innocent victims bear the consequences while the infrastructure enabling the harm remains largely intact.