Teen girls in the US confront an epidemic of deepfake nudes in schools


WESTFIELD, New Jersey - Westfield Public Schools held a regular board meeting in late March at the local high school, a red brick complex with a scoreboard outside proudly welcoming visitors to the “Home of the Blue Devils” sports teams.

But it was not business as usual for Ms Dorota Mani.

In October, some 10th-grade girls at Westfield High School – including Ms Mani’s 14-year-old daughter, Francesca – alerted administrators that boys in their class had used artificial intelligence (AI) software to fabricate sexually explicit images of them and were circulating the faked pictures.

Five months later, the Manis and other families say, the district has done little to publicly address the doctored images or update school policies to hinder exploitative AI use.

“It seems as though the Westfield High School administration and the district are engaging in a masterclass of making this incident vanish into thin air,” Ms Mani, the founder of a local pre-school, admonished board members during the meeting.

In a statement, the school district said it had opened an “immediate investigation” upon learning about the incident, had immediately notified and consulted with police, and had provided group counselling to the sophomore class.

“All school districts are grappling with the challenges and impact of artificial intelligence and other technology available to students at any time and anywhere,” Mr Raymond Gonzalez, superintendent of Westfield Public Schools, said in the statement.

Blindsided in 2023 by the sudden popularity of AI-powered chatbots such as ChatGPT, schools across the US scrambled to contain the text-generating bots in an effort to forestall student cheating. Now a more alarming AI image-generating phenomenon is shaking schools.

Boys in several states have used widely available “nudification” apps to pervert real, identifiable photos of their clothed female classmates, shown attending events including school proms, into graphic, convincing-looking images of the girls with exposed AI-generated breasts and genitalia.

In some cases, boys shared the faked images in the school lunchroom, on the school bus or through group chats on platforms such as Snapchat and Instagram, according to school and police reports.

Such digitally altered images – known as deepfakes or deepnudes – can have devastating consequences for the victims.

Child sexual exploitation experts say the use of non-consensual, AI-generated images to harass, humiliate and bully young women can harm their mental health, reputations and physical safety as well as pose risks to their college and career prospects.

In March, the FBI warned that it is illegal to distribute computer-generated child sexual abuse material, including realistic-looking AI-generated images of identifiable minors engaging in sexually explicit conduct.

Yet the student use of exploitative AI apps in schools is so new that some districts seem less prepared to address it than others. That can make safeguards precarious for students.

At Beverly Vista Middle School in Beverly Hills, California, administrators contacted police in February after learning that five boys had created and shared AI-generated explicit images of female classmates.

Two weeks later, the school board approved the expulsion of five students, according to district documents. (The district said California’s education code prohibited it from confirming whether the expelled students were the students who had manufactured the images.)

Mr Michael Bregy, superintendent of the Beverly Hills Unified School District, said he and other school leaders wanted to set a national precedent that schools must not permit pupils to create and circulate sexually explicit images of their peers.

“That’s extreme bullying when it comes to schools,” Mr Bregy said, noting that the explicit images were “disturbing and violative” to girls and their families. “It’s something we will absolutely not tolerate here.”

Schools in the small, affluent communities of Beverly Hills and Westfield were among the first to publicly acknowledge deepfake incidents.

The details of the cases – described in district communications with parents, school board meetings, legislative hearings and court filings – illustrate the variability of school responses.

The Westfield incident began last summer when a male high school student sent an Instagram friend request to a 15-year-old female classmate who had a private account, according to a lawsuit against the boy and his parents brought by the girl and her family. The Manis said they are not involved with the lawsuit.

After she accepted the request, the male student copied photos of her and several other female schoolmates from their social media accounts, court documents say.

Then he used an AI app to fabricate sexually explicit, “fully identifiable” images of the girls and shared them with schoolmates via a Snapchat group, court documents say.

Westfield High began to investigate in late October.

While administrators quietly took some boys aside to question them, Francesca said, they called her and other 10th-grade girls who had been subjected to the deepfakes to the school office by announcing their names over the school intercom.

That week, Ms Mary Asfendis, principal of Westfield High, e-mailed parents alerting them to “a situation that resulted in widespread misinformation”.

The e-mail went on to describe the deepfakes as a “very serious incident”.

It also said that, despite student concern about possible image-sharing, the school believed that “any created images have been deleted and are not being circulated”.

Ms Mani said Westfield administrators had told her that the district suspended the male student accused of fabricating the images for one or two days.

Soon after, she and Francesca began publicly speaking out about the incident, urging school districts, state lawmakers and Congress to enact laws and policies specifically prohibiting explicit deepfakes.

“We have to start updating our school policy,” Francesca, now 15, said in a recent interview. “Because if the school had AI policies, then students like me would have been protected.”

Parents including Ms Mani also lodged harassment complaints with Westfield High last autumn over the explicit images.

During the March meeting, Ms Mani told school board members that the high school had yet to provide parents with an official report on the incident.

Westfield Public Schools said it could not comment on any disciplinary actions for reasons of student confidentiality.

In a statement, Mr Gonzalez, the Westfield superintendent, said the district was strengthening its efforts “by educating our students and establishing clear guidelines to ensure that these new technologies are used responsibly”.

Beverly Hills schools have taken a stauncher public stance.

When administrators learned in February that eighth-grade boys at Beverly Vista Middle School had created explicit images of 12- and 13-year-old female classmates, they quickly sent a message – subject line: Appalling Misuse Of Artificial Intelligence – to all district parents, staff and middle and high school students.

The message urged community members to share information with the school to help ensure that students’ “disturbing and inappropriate” use of AI “stops immediately”.

It also warned that the district was prepared to institute severe punishment. “Any student found to be creating, disseminating, or in possession of AI-generated images of this nature will face disciplinary actions,” including a recommendation for expulsion, the message said.

Mr Bregy, the Beverly Hills district superintendent, said schools and lawmakers needed to act quickly because the abuse of AI was making students feel unsafe in schools.

“You hear a lot about physical safety in schools,” he said. “But what you’re not hearing about is this invasion of students’ personal, emotional safety.” NYTIMES
