
Could It Be You Next? The Law Relating to Dangerous AI Images
Imagine you’re just checking your phone, and out of nowhere, you see a photo of yourself online. But you know it’s not really you—except, to everyone else, it sure looks like it. That’s the new reality for everyday Australians, thanks to “deepfakes” made using AI.
These disturbing images—sometimes R-rated, sometimes something far worse—aren’t just a celebrity problem anymore. They’re showing up in schools, workplaces, family chats, and even in Google search results for businesses.
The question is: could you be next? And, more importantly, what can you really do about it, legally speaking?
How Does AI-Generated Image Abuse Happen?
AI image tools like Grok Imagine and Stable Diffusion let anyone, anywhere, create a super-realistic fake photo with a few clicks. All it takes is one of your social media selfies and a little imagination: “make this person nude at a party”, “put my mate’s face on a silly body”—the computer does the rest, creating a fake that’s almost impossible for anyone to spot.
These images are then shared on group chats, social media, or shady internet forums. Often, the victim is the last to know.
Why Is This So Dangerous?
Fake AI images can ruin reputations, wreck relationships, and cause enormous emotional harm.
Even knowing something is fake, people can still believe it or share it. For kids and adults alike, the embarrassment, shame or threat of extortion (“pay me, or it goes viral”) can be overwhelming—and once an image is out there, it spreads faster than most people can react.
- Privacy breached: Anyone can become a victim—students, teachers, workers, business owners, parents and their children.
- Serious risk: Victims have lost jobs, friends, even their sense of safety at home, school or work.
- Legal problems: In Australia, it’s illegal to make, share, or threaten to share these kinds of fake intimate images—but catching the fake-maker or stopping the damage isn’t easy.
So, to understand how this can become such a problem, imagine these two scenarios:
- Sally, a uni student, finds out her face has been pasted onto an explicit photo, which is then sent around her campus group chat as a “joke.” Her friends know it isn’t real, but others don’t. She’s humiliated and wonders if it will hurt her part-time job.
- Dave owns a small business. A competitor wants revenge and uses an AI site to make fake pictures showing Dave behaving badly. Some new customers see the images online, believe they’re real, and back away. Dave’s reputation—and his income—take a big hit.
What Should You Do If It Happens to You?
Don’t panic, but do act quickly:
1. Take screenshots: Save the pictures, dates, web addresses, and anyone sharing the images.
2. Don’t pay blackmailers: It rarely stops the images from being shared.
3. Get legal advice: Laws are changing, and you have rights.
Can Lawyers Really Help?
Absolutely. Sharon Givoni Consulting can:
- Advise you about your rights under privacy, online safety, or defamation laws.
- Help you collect and preserve proof for potential police or court action.
- Contact platforms directly for urgent removals and eSafety action.
- Support you legally throughout the removal process so it happens faster.
At Sharon Givoni Consulting®, we use plain English (not “legalese”!) and understand the stress and embarrassment involved—there’s no judgment, just clear steps to get your life back on track.
Why Is Acting Fast So Important?
AI fakes can go viral in hours.
The sooner you collect evidence and make complaints, the higher your chance of limiting the damage or even getting images removed completely. Delaying can make images harder to erase and cases harder to win.
How We Help – Real Solutions
- Practical, jargon-free legal advice, fast
- Letters of demand and urgent platform takedown requests
- Privacy and defamation law strategies
- Evidence collection tailored for court, police, or workplace issues
- Clear support about what you should (and shouldn’t) say in public
Further Reading:
- Addressing deepfake image-based abuse – https://www.esafety.gov.au/newsroom/blogs/addressing-deepfake-image-based-abuse
- 6 shocking AI images X’s Grok shouldn’t be able to make
- Grok AI Misused to Create Non-Consensual Deepfake Images
- How Australia’s new privacy tort affects marketers, advertisers and photographers
- What Australia’s New Privacy Tort means
- The Art of Consent: Navigating Australia’s New Privacy Tort
- Using people in your designs? Be careful of the law
Please note the above article is general in nature and does not constitute legal advice.
Please email us at info@iplegal.com.au if you need legal advice about your brand or another legal matter in this area.