eSafety Commissioner Ms Julie Inman Grant has issued an urgent call for schools to report deepfake incidents to appropriate authorities as the rapid proliferation of ‘nudify’ apps online takes a growing toll on communities around Australia.
The Commissioner has written to education ministers urging them to ensure schools adhere to state and territory child protection legislation and mandatory reporting obligations.

To help address the threat of AI-generated abuse in Australian classrooms, reports of which have steadily increased over the past 18 months, eSafety has released an updated Toolkit for Schools including a step-by-step guide for dealing with deepfake incidents.
The guide strongly encourages educators to prioritise the wellbeing of children and targeted staff, and to report any potential criminal offence to local police.
“I’m calling on schools to report allegations of a criminal nature, including deepfake abuse of underage students, to police and to make sure their communities are aware that eSafety is on standby to remove this material quickly,” Ms Inman Grant said.
“It is clear from what is already in the public domain, and from what we are hearing directly from the education sector, that this is not always happening.”
“Our response guide helps schools prepare for and manage deepfake incidents, taking into account the distress and lasting harms these can cause to those targeted.
“It also encourages schools to openly communicate their online safety policies and procedures, and the potential for serious consequences, including criminal charges in some instances, for perpetrators who may be creating synthetic child sexual abuse material,” Ms Inman Grant said.
eSafety has also issued a new Online Safety Advisory to alert parents and schools to the recent proliferation of open-source AI ‘nudify’ apps that are easily accessible by anyone with a smartphone.
“Creating an intimate image of someone under the age of 18 is illegal. This includes the use of AI tools. Parents and carers can help educate their children that this behaviour can lead to criminal charges.”
Additionally, eSafety is hosting a series of webinars throughout July and August for parents, educators and youth-serving organisations on AI-assisted image-based abuse and navigating the deepfake threat.
AI proliferation
New data reveals that reports to eSafety’s image-based abuse scheme about digitally altered intimate images, including deepfakes, from people under the age of 18 have more than doubled in the past 18 months compared with the total number of reports received in the seven years prior. Four out of five of these reports involved the targeting of females.
While the rapid rise in reports is cause for concern, the reality may be worse, Ms Inman Grant warned.
“We suspect what is being reported to us is not the whole picture,” Ms Inman Grant said.
“Anecdotally, we have heard from school leaders and education sector representatives that deepfake incidents are occurring more frequently, particularly as children are easily able to access and misuse nudify apps in school settings.
“With just one photo, these apps can nudify the image with the power of AI in seconds. Alarmingly, we have seen these apps used to humiliate, bully and sexually extort children in the school yard and beyond. There have also been reports that some of these images have been traded among school children in exchange for money.”
“We have already been engaging with police, the app makers and the platforms that host these high-risk apps to put them on notice that our mandatory standards come into full effect this week and carry up to a $49.5 million fine per breach, and that we will not hesitate to take regulatory action,” she said.