How Does Australian Law Treat AI‑Enabled Goods and Services?
This article is for Australian businesses that build, sell or rely on AI‑enabled goods and services – from SaaS platforms and apps to devices, vehicles and “smart” consumer products. It looks at real and emerging examples, in Australia and overseas, to show how AI issues often end up as very familiar problems under the Australian Consumer Law (ACL): misleading conduct, safety, unfair practices and poor disclosure.
Who should read this and why?
If you are:
- a tech or SaaS company using AI for recommendations, pricing, triage or automation
- a hardware business selling “smart” or connected devices into homes or workplaces
- a marketer or founder tempted to badge your product as “AI‑powered” or “fully autonomous”
then this article is for you.
This is by no means a summary of every section of the Australian Consumer Law. Instead, it uses real‑world examples to show how AI problems usually come back to simple questions: What did your business say? What did you actually sell? How safe is the product or service? Were you upfront with customers about what the AI can and cannot do?
AI can exaggerate and even lie. The law is unforgiving
Overseas, there has already been a case where an AI chatbot allegedly invented false claims that a real person had committed fraud.
Even though the court treated the AI as just a tool, the person’s reputation was still affected because people believed what the system said.
The lesson for Australian businesses is that if your AI says something damaging or misleading, you can’t simply blame “the algorithm” – you need controls in place to stop it saying things you would never put in writing yourself.
So if a chatbot on your website gives misleading information about your product, prices, availability, refunds or performance, you remain responsible for the overall impression created, particularly where the bot forms part of your sales and support channels.
Legal considerations about AI tools
Before you rely on an AI system to talk to customers, it’s worth asking a few simple questions:
- Do customers clearly know they are dealing with a bot, not a human expert?
- Have you built in safeguards so the AI cannot confidently give wrong information about features, prices or people’s legal rights?
Legally speaking, are you keeping an eye on things?
Courts and regulators tend to look much more favourably on businesses that can show there is a real person keeping an eye on their AI tools, rather than just “setting and forgetting”.
From a legal perspective, human oversight is often what turns an AI system from something opaque and risky into something that looks more like an ordinary, well‑managed process.
If a complaint or investigation arises, being able to say:
“we have a team member who regularly reviews the prompts, checks sample conversations, updates the rules and intervenes when the system goes off‑track”
is far more persuasive than saying “we plugged it in and hoped for the best” (as you can imagine!).
Under the Australian Consumer Law, regulators such as the ACCC are already interested in governance frameworks around AI and algorithms – who is responsible, what checks are in place, and how quickly problems are detected and fixed.
In short, the law “likes” moderators because they show that the business is taking reasonable care. Having a named person or team reviewing and updating your AI prompts, rules and responses – with legal input into the guardrails – can make a big difference if you ever need to justify your use of AI to a regulator, a court or your own customers.
This is where we can add value. We can help by:
- advising on what you need to do to reduce risk;
- drafting legal policies;
- working with your team to set clear “no go” areas;
- training staff to spot odd or risky AI behaviour early and to know what to do when it happens;
- classifying your AI uses into low‑risk and high‑risk, and deciding where human review is required;
- designing simple escalation rules;
- updating your website and app terms and disclaimers;
- checking your marketing claims about AI features; and
- reviewing contracts with AI vendors and cloud providers so you can moderate, log and fix AI outputs.
“Full self‑driving” and AI‑washing
On the automotive side, a class action in California has been allowed to proceed against Tesla on the basis that drivers say they were misled about the capabilities of “Full Self‑Driving” features.
The core allegation is simple: marketing and public statements painted a picture of genuine self‑driving capability, while in reality the vehicles allegedly did not have the sensors or performance needed for high‑level autonomy.
For an Australian audience, this translates directly to the risk of AI‑washing – overstating or falsely claiming AI capabilities. Regulators here have already turned their attention to exaggerated technology and “greenwashing‑style” claims, and commentary suggests AI‑washing is firmly on their radar. It could apply to:
- a “smart” baby monitor that claims to “detect all signs of distress” using AI;
- a “risk‑free” AI investment tool that promises to beat the market; or
- a business that sprinkles “AI‑powered”, “machine learning” and “deep learning” through its materials when the product is actually a basic rules engine.
Under Australian law, what your website, app store listing, packaging and sales scripts actually convey matters more than the buzzwords you use.
The test is not “does it use AI somewhere in the stack?”, but “would a reasonable consumer be misled about what this product can actually do?”.
What should Australian AI businesses do next?
The Federal Government’s Final Report on AI and the ACL found that the existing law is broadly able to deal with AI‑enabled goods and services, but highlighted grey areas and signalled that regulators like the ACCC will be active.
As AI can blur the lines around product safety and responsibility, it’s best not to wait for new legislation before acting.
Remember – AI‑enabled products don’t stay the same – they can change with software updates and often involve several businesses (the brand selling the device, an overseas AI developer and a cloud provider). Think of a smart baby monitor sold by an Australian company but using overseas AI to analyse sound and video.
If an update adds a bug that stops important alerts and a child is hurt, parents will usually blame the brand they bought from, not the hidden suppliers. Under the Australian Consumer Law, the key question is what an ordinary customer was entitled to expect about safety and performance, so the product can still be treated as defective even if no one meant for the problem to occur.
AI is moving quickly – businesses need to as well
AI is moving quickly, but the law it sits under – especially the Australian Consumer Law – is already very real.
Treating your AI tools like any other product or service, with honest marketing, clear terms and sensible checks, is the best way to unlock their benefits without inviting avoidable legal and reputational headaches.
As Elon Musk himself has put it, AI is a “double‑edged sword” – powerful, but risky if not handled carefully.
And AI is already everywhere. Just think how often it touches everyday commerce:
- Online retailers recommending products
- Banks and fintechs detecting fraud or scoring credit
- Supermarkets and airlines using dynamic pricing
- Streaming platforms personalising what you watch or hear
- Cars with “autopilot” or driver‑assist features
- Smart home devices (speakers, lights, security)
- Fitness wearables and health apps with “AI coaching”
- Customer service chatbots and virtual assistants
- Property and rental sites suggesting prices
- AI tools creating ads, emails and social posts
Further Reading – Government and official materials
Final report – Review of AI and the Australian Consumer Law
Australian Government, Treasury – final report on how the ACL deals with AI‑enabled goods and services.
https://treasury.gov.au/publication/p2025-702329
Discussion paper – Review of AI and the Australian Consumer Law
Australian Government, Treasury – original discussion paper framing the issues around AI and consumer protection.
https://treasury.gov.au/sites/default/files/2024-10/c2024-584560-dp.pdf
Voluntary AI Safety Standard
Australian Government, Department of Industry, Science and Resources – voluntary standard outlining AI safety expectations for Australian organisations.
https://www.industry.gov.au/publications/voluntary-ai-safety-standard
The ACCC’s approach to colluding robots
Speech by the ACCC on algorithmic pricing, competition risks and “colluding robots”.
https://www.accc.gov.au/about-us/news/speeches/the-accc%E2%80%99s-approach-to-colluding-robots-address
Fitness for purpose in a different sense: is the Australian Consumer Law prepared for AI?
Overview of the Treasury review and what it means for AI‑enabled goods and services.
https://www.claytonutz.com/insights/2025/november/fitness-for-purpose-in-a-different-sense-is-the-australian-consumer-law-prepared-for-ai
Further Reading – Sharon Givoni articles
AI & Copyright in Australia: What Artists and Businesses Need to Know
https://sharongivoni.com.au/ai-and-copyright-in-australia-what-artists-and-businesses-need-to-know/
Australia Says No to Free AI Use of Copyright Works
https://sharongivoni.com.au/copyright-stays-strong/
The Artist vs. The Algorithm – AI and Copyright Law
https://sharongivoni.com.au/the-artist-vs-the-algorithm-ai-and-copyright-law/
AI and Copyright Decision for Australian Photographers
https://sharongivoni.com.au/ai-copyright-decision-for-australian-photographers/
How Does Copyright Law Work With AI?
https://sharongivoni.com.au/protecting-your-creativity-why-copyright-matters-for-australian-creators/
Please note the above article is general in nature and does not constitute legal advice.
Please email us at info@iplegal.com.au if you need legal advice about your brand or any other legal matter in this area.

