Brains Before Bots: Sharon Givoni Consulting’s AI Policy
AI is everywhere – in search engines, emails, and the apps you use every day. It’s also made its way into the legal world. We’re now seeing clients arrive with contracts written by ChatGPT, “legal letters” from bots, and screenshots of AI saying they “definitely have a case.”
AI can be clever and quick, but it also gets things wrong. Fixing those mistakes often costs more than getting proper advice from the start.
That’s why we created a clear policy on how we handle AI‑generated documents and instructions. Lately, we’ve seen AI‑drafted cease‑and‑desist letters that exaggerate IP rights, U.S.-style contracts used in Australian deals, and privacy policies that look polished but ignore local law.
Reviewing these takes time. It’s like trying to turn a baked cake back into eggs and flour — harder and costlier than starting again properly.
Why AI isn’t enough on its own
Tools like ChatGPT, Gemini, and Copilot are trained on huge amounts of text, not on your specific matter or Australian law.
They don’t know your goals, your risk tolerance, or your relationships. They can make up case law, apply the wrong country’s rules, or produce legal-looking text that’s flat-out wrong.
AI can be helpful for ideas, but it can’t replace a lawyer. It shouldn’t draft letters or tell you what to send to the other side.
What happens when you bring us AI content
Here’s what we do — and why — when you send us something made by AI.
We treat your AI document like any other draft.
If you send us an AI‑generated contract, letter, or policy, we go through it line by line. We fix mistakes, remove unsafe language, and check everything against Australian law and your goals.
Sometimes we find it’s faster and cheaper to rewrite it from scratch. This review is billed like normal legal work — and often, that “free” AI document ends up costing more to fix than to do properly the first time.
Your instructions must come from you, not a chatbot.
We sometimes hear, “Just file what ChatGPT wrote.” But our job is to act in your best interests, not in the best interests of an algorithm. We need to understand what you want and what level of risk is acceptable to you.
You make the big decisions — we give legal guidance. The chatbot can’t represent you in court. We can.
When we have to say “no.”
Sometimes AI text makes threats, exaggerates rights, or includes false information. If it’s legally wrong or misleading, we can’t use it. That’s not us being difficult — that’s our duty as professionals. You don’t want a lawyer who follows a robot off a cliff.
The hidden risk of sharing information with AI
When you copy and paste your business details, contracts, or disputes into a public AI tool, you’re not talking to a private adviser. You’re giving information to a company that might store your words, use them for training, or be based overseas.
That could risk exposing:
- Trade secrets or design ideas
- Personal details or images
- Contract terms, prices, and business strategies
Our advice: use AI for general brainstorming, not for confidential details. Bring the real facts to us — they’ll stay private and protected by law.
Our AI and Client Instructions Policy (in plain terms)
- AI is not your lawyer. Only our qualified lawyers give legal advice.
- If you bring AI‑generated documents, we’ll review or rewrite them — that work is billed like any other legal service.
- Your instructions must come from you, not a chatbot.
- We may refuse to use AI content that’s wrong, misleading, or unsafe.
- Don’t put confidential details into public AI tools — and we won’t either.
- If we use AI, it’s a minor background tool, not a replacement for legal thinking.
AI is powerful and exciting, but when it comes to your rights, brand, and business, you need more than clever. You need careful.
Case study: When AI “invented” Australian court cases
In 2025, an Australian lawyer appeared in the Federal Circuit and Family Court in a migration case, Valu v Minister for Immigration and Multicultural Affairs (No 2) [2025] FedCFamC2G 95.
The lawyer had asked an AI tool to find Australian cases to support their argument. The problem? Some of the cases — and even the quoted lines — didn’t exist.
When the judge couldn’t find those cases in legal databases, the lawyer admitted relying on AI without checking. The court had to remove the false material and issued a warning about the “growing problem” of fake AI case references.
This example shows why our policy matters. AI might produce confident answers, but in law, accuracy and integrity come first. Even one fake citation can damage a case — and your credibility.
Please note the above article is general in nature and does not constitute legal advice.
Please email us at info@iplegal.com.au if you need legal advice about your brand or any other legal matter in this area.

