AI Deepfake Cyberbullying: A Growing Threat to Schools
Students are using artificial intelligence to turn benign photos of their peers into sexually explicit deepfakes, and the problem is growing for schools.
For victims, the fallout from the spread of the altered images and videos can be a nightmare.
The problem came into sharp focus for schools this fall, when AI-generated nude photos spread through a Louisiana middle school. Two boys were eventually charged, but not before one of the victims was disciplined for fighting a boy she believed was responsible for the images of her and her friends.
“While the ability to modify images has been available for decades, the rise of A.I. has made it simpler for anyone to modify or generate such images with little to no training or experience,” Lafourche Parish Sheriff Craig Webre said in a news release. “This incident brings to light an important issue that every parent should discuss with their kids.”
The Growing Impact of AI-Generated Deepfakes in Schools
Here are the key takeaways from the AP’s reporting on the spread of AI-generated nude images and how schools are responding.
More states adopt laws to combat AI deepfakes.
The prosecution stemming from the deepfakes at the Louisiana middle school is believed to be the first under the state’s new statute, according to Republican state Sen. Patrick Connick, who wrote the legislation.
The law is one of many nationwide that target AI deepfakes. According to the National Conference of State Legislatures, at least half of the states passed laws in 2025 addressing the use of generative AI to create realistic-looking images and realistic-sounding audio. Some of those laws deal with content that simulates child sexual abuse.
Students have also been expelled in states such as California and prosecuted in Florida and Pennsylvania. In Texas, a fifth-grade teacher was accused of using AI to create child pornography depicting his students.
As technology advances, deepfakes become simpler to produce.
Deepfakes were initially used to demean celebrities and political figures. Until recently, making them look believable required some technical skill, according to Sergio Alexander, an assistant professor at Texas Christian University who has written about the problem.
He said, “You don’t need to have any technical knowledge at all because you can do it on an app and download it on social media.”
He called the scale of the problem staggering. The number of AI-generated images of child sexual abuse reported to the National Center for Missing & Exploited Children’s CyberTipline rose from 4,700 in 2023 to 440,000 in just the first half of 2025.
Experts worry that schools aren’t doing enough.
Sameer Hinduja, co-director of the Cyberbullying Research Center, urges schools to update their policies to address deepfakes and to explain them clearly. That way, he said, “students don’t think that the staff, the educators, are completely indifferent, which might make them feel like they can act with impunity.”
Many parents assume schools are handling the problem when in fact they are not, he said.
“So many of them are just so naive and so ignorant,” said Hinduja, a professor in Florida Atlantic University’s School of Criminology and Criminal Justice. “You’ve heard about the ostrich syndrome. They sort of bury their heads in the sand, hoping that this isn’t happening among their youth.”
AI deepfakes can cause especially severe trauma.
Unlike a harsh text or a rumor, Alexander said, an AI deepfake involves a video or image that often goes viral and keeps resurfacing, creating a traumatizing cycle. Many victims experience anxiety and depression, he said.
“They literally shut down because it seems like there’s no way they can even prove that this is not real—because it does look 100% real,” he said.
Parents are urged to speak with their children.
Alexander said parents can broach the subject by casually asking their children whether they have seen any funny fake videos online.
“Take a moment to laugh at some of them, like Bigfoot racing after hikers,” he said. Parents can then ask their children, “Have you considered what it would be like if you were in this video, even the funny one?” From there, parents can ask whether a classmate has ever made a fake video, even a harmless one.
“I’m sure they’ll claim to know someone, based on the numbers,” he said.
Children should know they can talk to their parents about things like deepfakes without getting in trouble, said Laura Tierney, founder and CEO of The Social Institute, which teaches responsible social media use and has helped schools develop policies. Many kids fear their parents will overreact or take away their phones, she said.
Her advice follows the acronym SHIELD. The “S” stands for “stop”: don’t forward the image. “H” means “huddle” with a responsible adult. “I” is a reminder to “inform” any social media platforms where the image is being shared. “E” means gather “evidence,” such as who is spreading the image, but without downloading anything. “L” stands for “limit” access to social media. And “D” is a reminder to “direct” victims to help.
“I believe that the fact that that acronym consists of six steps indicates that this problem is extremely complex,” she said.