
Joanne Sy
Help! Someone Made a Fake Nude of Me—What Next?
People often come to us in shock, urgently searching for answers to questions like, “Someone made a deepfake of me—what can I do?”, “My face is in a fake nude image in Australia!”, or “How do I get explicit AI-generated pictures removed?” If this sounds familiar, know that you’re not alone—these phrases are now some of the most commonly searched by victims of deepfake abuse.
The reactions we hear range from fear, embarrassment and confusion to outrage and a desperate need for quick action. Our clients need clear advice, reassurance, and real solutions—not just legal jargon or generic information. That’s why our blog speaks directly to those lived experiences and answers the exact questions ordinary Australians are asking when they realise their privacy, reputation, and wellbeing have been put at risk by someone’s use of AI.
If you’re here because you’ve been targeted by fake images or AI deepfakes, this article is designed to help you understand your rights and options at the moment you need them most.
Let’s break it down
AI-powered tools like Grok Imagine and Stable Diffusion started off as fun, even artistic, experiments—type in a prompt, let the machine “imagine” and voilà, digital art! But it didn’t take long for these programs to cross over into the controversial. Now, the same tools can conjure up incredible, hyper-realistic images of real people—including celebrities and, worryingly, everyday Aussies—doing things they never did, in places they’ve never been, sometimes in deeply private or sexualised scenarios.
What does “NSFW” mean, exactly? It stands for “not safe for work,” but in this context it means images or content that are sexually explicit or highly graphic—stuff that should never pop up on a work computer, and may be distressing or reputationally damaging, especially if you never gave consent in the first place. These aren’t just silly Photoshop pranks. Thanks to AI, the images look so real they can easily trick even a careful eye.
Why Is This So Dangerous?
Because anyone—whether it’s a troll with a grudge, a bored student, or a blackmailer—can create convincing fakes from a single selfie. The end result? AI has turned the internet into a wild west of privacy and identity abuse, and the targets can be anyone: Taylor Swift, Aussie schoolgirls, or even you.
How AI Deepfakes Are Hurting People
While celebrities have long been targets of so-called “deepfake porn” (think fake sex tapes or explicit snaps), the pain is now hitting closer to home in Australia.
For example, in 2024, dozens of Victorian high school girls discovered their innocent social media snaps had been misused to create and share humiliating explicit fakes among classmates. Adults haven’t been spared either—teachers, influencers, and private citizens have already suffered online abuse and psychological distress from AI-forged intimate images. Cases of coercion, blackmail, and deep embarrassment are on the rise.
The legal challenges?
For starters, proving defamation or a privacy breach is anything but straightforward when the “photos” look near-authentic but are AI-generated. Under recent reforms to defamation law, serious harm must now be proven, and the line between a mean-spirited meme and a reputationally crushing deepfake is getting blurrier.
Australia’s law is scrambling to catch up. In September 2025, New South Wales passed the Crimes Amendment (Intimate Images and Audio Material) Bill, making it a crime to create, share, or threaten to share sexually explicit deepfakes of real people—even if the images, video, or audio are entirely AI-made and obviously fake.
Similar reforms are rolling out nationally as AI image abuse becomes a regular headline.
And it’s not only criminal law: victims are using defamation, the Online Safety Act, and privacy law to demand takedowns, ban abusers from platforms, or seek damages. But it’s a maze—especially if the perpetrator hides behind fake accounts, posts from overseas, or dodges Australian law entirely.
How can we assist you?
We can help, but first, don’t panic. Save any evidence: screenshots, URLs, even unlisted messages.
We can assist with takedown requests and privacy complaints: social media platforms can remove offending content, and the Australian eSafety Commissioner has the power to order removals and investigate online abuse.
At Sharon Givoni Consulting, we offer proactive support and decisive action:
Legal advice on defamation and privacy
Using takedown mechanisms to get images removed swiftly
Guidance on the latest AI, deepfake, and online safety laws
Strategic letters
Negotiating with platforms
Taking legal action
Client education on how to protect your online image
Sharon Givoni Consulting—we tackle AI-powered image abuse before it spirals out of control.
Further Reading:
The Art of Consent: Navigating Australia’s New Privacy Tort – How Australia’s new privacy tort affects marketers, advertisers and photographers
Please note the above article is general in nature and does not constitute legal advice.
Please email us at info@iplegal.com.au if you need legal advice about your brand or another legal matter in this area.