Australia’s eSafety Commissioner has already received a number of complaints about non-consensual distribution of deepfake intimate images, and expects this type of abuse to grow in volume as artificial intelligence (AI) technology becomes more accessible.
“Looking ahead, I’m concerned AI-related harms may morph and combine with those we’re also starting to see in the metaverse, especially harms affecting children,” Commissioner Julie Inman Grant says.
“There is potential, for example, for generative AI to automate child grooming at scale and in a highly personalised way.”
A deepfake is a false but seemingly realistic photo, video or sound file made using AI.
While some uses are relatively harmless, such as the deepfake images of Pope Francis wearing a puffer jacket, the technology can also be used for sinister purposes such as manipulation and abuse.
The Australian Government is currently consulting on how best to address the potential harms and risks from generative AI, including issues such as the potential for deepfake misinformation and abuse, risks to privacy, bias and lack of transparency.
Inman Grant warns about the potential for AI-generated sexual abuse material, including material based on real images of children sourced online.
“We are already aware of paedophiles scraping children’s images from social media and using generative AI to create child sexual abuse imagery according to their predatory predilections,” Inman Grant says.
A variety of generative AI tools – for images, video, text and more – have been released without safety guardrails in place to prevent this kind of abuse.
Inman Grant is calling for the technology industry to prioritise safety from the outset.
Having consulted broadly with Australian and global AI experts, the Commissioner’s office says its next tech trends brief will address the safety implications and mitigations needed around generative AI. This will include safety advice for industry and the public.
Inman Grant encourages Australians experiencing any kind of image-based abuse, including deepfakes, to report it at eSafety.gov.au.
Originally published by Cosmos as Generative AI could automate sexual abuse and child grooming, eSafety Commissioner says
Petra Stock
Petra Stock is a journalist and engineer. She has previously worked in climate change, renewable energy, environmental planning and Aboriginal heritage policy.