AI-generated deepfakes pose new risks for students and teachers. Media scholar Dr Michael Dezuanni and the eSafety Commissioner outline why media literacy and proactive policies are vital for educators.
In Australian schools, technology has long been both a tool for learning and a source of anxiety. Now, with the rise of artificial intelligence–generated deepfakes, the challenges are becoming more complex, more urgent, and more personal.
Deepfakes – AI-manipulated images, audio or video – have grown beyond the realm of political misinformation and celebrity hoaxes. Increasingly, they are emerging in classrooms, playgrounds, and social media networks frequented by young people. From altered nudes targeting students to fabricated videos designed to bully or humiliate, the issue is no longer hypothetical.
As the eSafety Commissioner has warned, reports of deepfake incidents involving young people have more than doubled in the past 18 months. Schools are being urged to treat this as a serious safety matter and to report cases promptly.
“I’m calling on schools to report allegations of a criminal nature, including deepfake abuse of under-aged students, to police and to make sure their communities are aware that eSafety is on standby to remove this material quickly,” eSafety Commissioner Julie Inman Grant said.
“It is clear from what is already in the public domain, and from what we are hearing directly from the education sector, that this is not always happening.
“Our Toolkit for Schools helps schools prepare for and manage deepfake incidents, taking into account the distress and lasting harms these can cause to those targeted.
“It also encourages schools to openly communicate their online safety policies and procedures, and the potential for serious consequences, including criminal charges in some instances, for perpetrators who may be creating synthetic child sexual abuse material,” Ms Inman Grant said.
But what can schools do when truth itself is so easily distorted?
Dr Michael Dezuanni, Professor in the School of Communication at Queensland University of Technology (QUT), has been tracking the intersection of young people, learning and digital media for decades. His insights come not only from research but from lived experience: before entering academia, he spent nearly 14 years teaching English and Media Studies in Queensland secondary schools.
Today he works at QUT’s Digital Media Research Centre, where his projects focus on how digital technologies shape learning, literacy, and participation in everyday life. His work ranges from studying early childhood media practices to exploring how young people engage in online book communities such as TikTok’s BookTok. He remains a registered teacher, still connected to schools through research partnerships.

“My research is still very much about learning and education, but always in relation to the technologies in people’s lives,” he says. “It’s about how people learn with media, how they produce it, and how they critically reflect on it. Deepfakes sit right at the intersection of those issues.”
For Dr Dezuanni, deepfakes cannot be separated from the broader field of media literacy. “There are two key areas: one is misinformation, where manipulated media is used to push misleading narratives into the world. The other is sexual exploitation – students using apps to create nude or sexualised images of their classmates. Both are incredibly concerning, and both require education as well as regulation.”
While the term ‘deepfake’ may dominate headlines, Dr Dezuanni is not convinced it’s widely used by students themselves.
“They’re more likely to talk about the specific apps, or just refer to it as something you can ‘do with AI’,” he says. “Even before AI, there were ways of creating misrepresentative, sexualised content of classmates. Unfortunately, those practices are already familiar to young people, whether or not they know the word ‘deepfake’.”
He is concerned that schools often don’t act until a scandal forces them to. “What I see is schools ending up in crisis mode, dealing with a deepfake scandal or online abuse rather than building resilience in advance. Too often, technology policies are just lists of restrictions. A student signs a form, and if they break the rules, that’s it. We need policies that are far more proactive and that involve staff, parents and students in conversations about what kind of digital community we want to be,” he says.
That, he believes, means embedding media literacy throughout the curriculum. “I’m a strong proponent of introducing media literacy in the very early years,” he says. “We know children are being handed phones and tablets from a young age. Even if parents worry about screen time, the reality is that kids are growing up with devices. Schools need to be responsive to that.”
But this doesn’t mean overburdening teachers, he adds. “Teachers already have so much on their plates. But often it’s not about adding something entirely new – it’s about tweaking what’s already happening. You can build media literacy into existing lessons, whether it’s in English, history, or even science. What matters is cultivating that critical attitude and ethical framework over time.
“Too many young people are the victims of a form of sexualised online abuse,” Dr Dezuanni says. “That may or may not be part of bullying, but it is harmful in itself. Helping young people understand why that’s so problematic, and building ethical frameworks from a young age, is crucial.”
This is where the ethical side of media literacy matters as much as technical skills. “If young people are seeing ‘nudified’ content of classmates, they should be reporting that to an adult,” he says. “That’s really what it comes down to. They need to understand that this is abuse, not a joke.”
The challenge, however, is that students often hesitate to speak up. As the eSafety Commissioner notes, many worry they’ll lose device privileges if they report.
New data reveals that reports to eSafety’s image-based abuse scheme about digitally altered intimate images, including deepfakes, from people under the age of 18 have more than doubled in the past 18 months, compared with the total number of reports received in the previous seven years. Four out of five of these reports involved the targeting of females.
While the rapid rise in reports is cause for concern, the reality may be worse, Ms Inman Grant warned.
“We suspect what is being reported to us is not the whole picture,” she said.
“Anecdotally, we have heard from school leaders and education sector representatives that deepfake incidents are occurring more frequently, particularly as children are easily able to access and misuse nudify apps in school settings,” Ms Inman Grant said.
“With just one photo, these apps can nudify the image with the power of AI in seconds. Alarmingly, we have seen these apps used to humiliate, bully and sexually extort children in the school yard and beyond. There have also been reports that some of these images have been traded among school children in exchange for money.”
So what can teachers do in practice? Dr Dezuanni says one approach is to teach students to verify information by consulting multiple sources, a strategy he describes as ‘lateral reading’.
“Fact-checking often gets talked about in relation to news,” Dr Dezuanni says. “But it’s really about any situation where you’re unsure about the information in front of you. If a student sees a post that doesn’t seem right – whether it’s about a celebrity or a political issue – they should learn to look at other reliable sources to verify it.”
He acknowledges the limits of this method: “If we’re talking about a deepfake nude of a classmate, lateral reading doesn’t apply. In those cases, the priority is ethical decision-making. Young people need to understand what they’re seeing is abusive, and the appropriate response is to tell an adult.”
For Dr Dezuanni, this underlines the need to teach both skills and values.
“Media literacy is about cultivating critical dispositions,” he says. “It’s about being able to step back and say: this doesn’t feel right. It’s about knowing when to fact-check, but also knowing when something crosses an ethical line and must be reported.”
Not all uses of AI and synthetic media are harmful. In fact, many are highly creative, offering students opportunities for self-expression and storytelling. This duality complicates the picture for educators.
“Teachers have always had to balance young people’s enjoyment of media with the need to help them think critically about risks and harms,” Dr Dezuanni says. “Media literacy is not about taking the fun away. Young people learn a lot from gaming, social media, and creative play online. We need to recognise those positives while also preparing them to manage risks.”
His research into online reading communities illustrates this. “On TikTok, you have this ‘BookTok’ corner where young people are enthusiastically sharing their love of reading. As an English teacher, I find that incredibly exciting,” he says. “It shows that digital platforms can foster deep engagement, not just distraction.”
For schools, the task is to hold both truths: celebrating the creativity digital tools allow, while acknowledging the risks of manipulation and misuse.
Confidence is often the missing ingredient for educators. While curriculum bodies like ACARA have introduced resources such as the Curriculum connection: Media consumers and creators guide, Dr Dezuanni says teachers need more concrete tools.
“That resource is terrific in outlining content descriptors and elaborations,” he explains. “But what we lack are the specific classroom materials and lesson plans that teachers can use straight away – especially when the topics are controversial or sensitive.”
He points to the Common Sense Media Digital Citizenship curriculum, widely used in the United States, as a strong model. “We trialled it here with students from Prep through Year 7, and it was very effective in helping teachers introduce these issues at age-appropriate levels.”
Teacher training is another gap, he says. “We don’t focus enough on this in pre-service teacher education. Most teachers enter the profession without training in how to teach media literacy or digital citizenship. We need much stronger emphasis both in teacher preparation and in ongoing professional learning.”
Ultimately, addressing deepfakes in schools requires more than detection software or punitive discipline. It calls for a shared commitment across educators, parents, policymakers and students themselves.
As Dr Dezuanni puts it: “If we want young people to understand the risks of deepfakes – or to resist the temptation to misuse the technology – we can’t wait until a crisis occurs. We need to be building these skills, these ethical frameworks, from the earliest years.”
Responding to deepfakes in schools: practical guidance
Set expectations early
- Build on existing digital literacy, respectful relationships, and consent education by including conversations about deepfakes and emerging technologies. Make sure this includes clear guidance on the use of AI tools and image-based abuse.
- Talk to students about how consent applies in online spaces – and how technology can be used to fake or manipulate images. Help them understand that even when content is fabricated, it can still cause real harm.
Have a response plan
- Use eSafety’s new guide for responding to deepfakes, part of its Toolkit for Schools, to help manage incidents. This resource is designed to work alongside your existing school and education sector policies and procedures.
- Prioritise the wellbeing of affected young people or staff above all other considerations.
- Establish clear steps for responding to deepfake incidents. Make sure all staff know how to support students, record what has happened, and work with families, police, and eSafety.
- Ensure your school’s wellbeing and leadership teams are prepared and confident in handling these situations with care, consistency, and sensitivity.
Support student wellbeing
- Recognise that these incidents can be traumatic. Make sure students who are affected have access to safe reporting channels and wellbeing support.
Engage the school community
- Communicate with parents and carers about deepfake risks and school policies. Use newsletters, assemblies, or parent forums to raise awareness and reinforce prevention.
Encourage students to be ‘upstanders’
- Help them know how to report abuse, support peers, and model respectful behaviour online.
Join the eSafety Commissioner’s webinar, AI assisted image-based abuse: Navigating the deepfake threat, on Wednesday 4 March 2026, 3:45pm-4:30pm AEDT.