AI, the TGA and social media advertising for therapeutic goods in Australia
AI, the Therapeutic Goods Administration (TGA) and social media now collide in ways that most Australian health brands did not have to think about even two years ago. Business owners who sell vitamins, skincare, medical devices or wellness products are asking new questions such as “Can I use AI to write my Instagram captions?” and “Is it legal to use a chatbot to recommend our products?” The short answer is that you can use AI, but only if humans stay firmly in charge of anything that could be an “advertisement” under Australian therapeutic goods law.
What counts as therapeutic goods advertising in Australia?
Under the Therapeutic Goods Act 1989 and the Therapeutic Goods Advertising Code, any content that promotes a therapeutic good – from prescription medicines and devices through to over‑the‑counter products, vitamins and some “cosmeceuticals” – must be accurate, balanced and not misleading, and must support the safe and proper use of the product. That includes social media posts, influencer content, website copy, testimonials, videos, hashtags and even comments that you allow to remain under your posts. The TGA’s recent guidance on AI makes it clear that using AI‑generated or AI‑assisted content is not unlawful in itself, but legal responsibility for compliance always rests with the business and advertiser, not the tool. If there is a product reference, a link to a shop, a discount code or a call to action (“buy now”, “ask your pharmacist about X”), the content is very likely to be treated as advertising, even if you thought you were just sharing “education”.
When “education” becomes unlawful AI advertising
AI tools are excellent at turning dry information into chatty, “engaging” text, but they are not trained on the Therapeutic Goods Advertising Code. A supplements company might ask an AI system for ten Instagram captions about “boosting immunity with our vitamin C gummies” and receive lines such as “Say goodbye to winter colds forever” or “Clinically proven to keep you well all year”. Those captions move from gentle education into strong therapeutic claims that imply guaranteed outcomes, which are almost certainly non‑compliant. The same issue arises where cosmetics start to promise to “treat eczema” or “fix acne” and therefore stray into therapeutic territory. As soon as an AI post promotes a therapeutic good and makes a health benefit claim, it must meet the Code’s requirements, regardless of who or what wrote the words.
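For teams building an internal review step, a minimal sketch of a pre‑publication screen might look like the following. The flagged phrases below are purely illustrative examples of language that tends to signal prohibited or restricted claims; a real screen would be built with legal input against the Advertising Code, and anything it flags should go to a human reviewer rather than being auto‑fixed.

```python
import re

# Illustrative only: a tiny sample of phrases that often signal
# non-compliant therapeutic claims. Not the Code's actual restricted list.
FLAGGED_PATTERNS = [
    r"\bcures?\b",
    r"\bclinically proven\b",
    r"\bguaranteed?\b",
    r"\bsay goodbye to\b",
    r"\bprevents?\b",
]

def flag_caption(caption: str) -> list[str]:
    """Return any flagged phrases found in an AI-generated caption."""
    return [p for p in FLAGGED_PATTERNS if re.search(p, caption, re.IGNORECASE)]

drafts = [
    "Say goodbye to winter colds forever with our vitamin C gummies!",
    "Our gummies can support general immune health as part of a balanced diet.",
]

for draft in drafts:
    hits = flag_caption(draft)
    status = "HOLD FOR LEGAL REVIEW" if hits else "OK for human review"
    print(f"{status}: {draft!r} (flags: {hits})")
```

A keyword screen like this will never catch everything, which is exactly why it should gate content into human review rather than replace it.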
AI that “learns” your old mistakes
Many marketing teams now “train” AI tools on their own historic content. That can be efficient, but only if the material you feed in has been legally reviewed. If last year’s posts casually claimed that a cosmeceutical cream “treats acne and rosacea” or that a sleep supplement “cures insomnia”, the AI will happily repeat and amplify those phrases. A brand that previously had a handful of borderline posts can suddenly find itself with a year‑long calendar of content promising to “treat”, “cure” and “fix your skin forever”. Regulators and competitors will look at the overall pattern, not just one caption in isolation, so “train on your own content” is only safe when that content is compliant.
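The same screening idea can be applied one step earlier, to the material you feed the AI in the first place. A minimal sketch, again using illustrative patterns and hypothetical post text, might vet a historic corpus before it is used as training or few‑shot material:

```python
import re

# Illustrative patterns only: exclude historic posts containing
# unvetted therapeutic claims so the AI never learns them.
FLAGGED = [r"\btreats?\b", r"\bcures?\b", r"\bfix(es)? your\b"]

historic_posts = [
    "Our cream treats acne and rosacea overnight!",   # non-compliant: excluded
    "A gentle moisturiser for everyday skincare.",    # fine: kept
    "This supplement cures insomnia for good.",       # non-compliant: excluded
]

def is_clean(post: str) -> bool:
    return not any(re.search(p, post, re.IGNORECASE) for p in FLAGGED)

training_corpus = [p for p in historic_posts if is_clean(p)]
print(training_corpus)  # only the compliant post survives
```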
Personalised AI targeting and privacy red flags
AI‑driven ad platforms allow very granular targeting based on behaviour or inferred health status. A pain‑relief device brand might use AI to show “Finally, relief for your chronic back pain” to people who have visited pain‑clinic websites, and “Stop arthritis in its tracks” to older users in certain postcodes. That sort of message can quickly edge into alarmist territory, especially when coupled with data about sensitive conditions. The TGA will care about both what is said and who it is being said to. At the same time, privacy and spam regulators will be interested in how those audiences were built. In practice, this means watching not just the wording of AI‑generated creative, but also the logic behind your targeting.
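One practical way to keep an eye on that targeting logic is to review audience definitions for inferred health status before campaigns launch. The segment names and sensitive‑term list below are hypothetical; the point is simply to surface health‑based audience rules for human and legal assessment:

```python
# Hypothetical ad-set configs and an illustrative sensitive-term list.
SENSITIVE_HEALTH_TERMS = {"chronic pain", "arthritis", "depression", "insomnia"}

ad_sets = [
    {"name": "Back pain retargeting",
     "segments": ["visited pain-clinic sites", "chronic pain interest"]},
    {"name": "General wellness",
     "segments": ["fitness interest", "healthy recipes"]},
]

for ad_set in ad_sets:
    flagged = [s for s in ad_set["segments"]
               if any(term in s.lower() for term in SENSITIVE_HEALTH_TERMS)]
    if flagged:
        print(f"REVIEW {ad_set['name']!r}: targets inferred health status via {flagged}")
```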
“Set and forget” AI pipelines and recycled claims
One of the biggest temptations with AI is to generate a month of posts, schedule them and move on. Some tools even offer automatic “evergreen” recycling of high‑performing content, tweaking the wording each time. For therapeutic goods, that can quietly re‑publish old problems. A vitamins company might start with compliant copy like “supports general health”, but as the AI keeps trying to improve engagement, the language drifts toward “prevents heart disease” or “fixes your anxiety without meds”. Elsewhere, a scheduling tool might reach back to a 2021 “miracle cure” post that had been corrected and put a refreshed version back into the feed. The TGA has stressed that businesses are responsible for both current and historical content. Audit any auto‑repost settings and treat every AI‑generated headline or caption as if a human wrote it from scratch.
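An audit of a recycling queue can be partly automated by comparing each AI‑tweaked variant against the approved original and holding anything where new claim language has crept in. A minimal sketch, with an illustrative phrase list and made‑up queue entries:

```python
import re

# Illustrative only: phrases whose *appearance* in a variant (when absent
# from the approved original) signals claim drift.
FLAGGED = [r"\bprevents?\b", r"\bmiracle\b", r"\bcures?\b", r"\bfix(es)?\b"]

def new_flags(original: str, variant: str) -> list[str]:
    def hits(text: str) -> set[str]:
        return {p for p in FLAGGED if re.search(p, text, re.IGNORECASE)}
    return sorted(hits(variant) - hits(original))

queue = [
    ("Supports general health as part of a healthy diet.",
     "Prevents heart disease as part of a healthy diet."),
    ("Supports general health as part of a healthy diet.",
     "Supports everyday wellbeing as part of a healthy diet."),
]

for original, variant in queue:
    drift = new_flags(original, variant)
    action = f"HOLD (claim drift: {drift})" if drift else "OK to repost"
    print(f"{action}: {variant!r}")
```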
Synthetic influencers, AI “doctors” and endorsements
Some brands are already experimenting with AI‑generated influencers and virtual doctors. A telehealth startup might create “Dr Ava”, an animated GP in a white coat who appears in Reels and stories explaining why a certain prescription‑only treatment is “better than anything else on the market” and answers questions in the comments. Even with fine‑print disclaimers, the overall impression is that a health professional is endorsing a therapeutic good directly to the public. The Code places strict limits on testimonials by health professionals and on how prescription‑only medicines can be promoted to consumers. When you combine a synthetic persona designed to look especially trustworthy with strong therapeutic claims, you can easily end up with a piece of advertising that is more problematic, not less, than if a real doctor had said the same thing.
Comments, chatbots and AI saying the quiet part out loud
The TGA already expects advertisers to moderate social media comments about therapeutic goods and to remove obviously false or exaggerated claims. AI moderation tools can speed this up but are not yet subtle. An algorithm that deletes every comment mentioning “side effects” as “negative sentiment” can leave a feed that looks unrealistically positive. At the other end of the spectrum, a simple profanity filter might leave up comments such as “This product cured my cancer” or “I threw away my asthma meds thanks to this spray” because they are polite and grammatically correct. Similar issues arise with AI chatbots on brand websites. A supplement company might install a bot to “help customers choose products” and find that, when asked about depression or chronic illness, it suggests stopping prescription medication and trying a “natural alternative” instead. That moves beyond advertising into unqualified health advice and is exactly the kind of scenario the TGA wants brands to avoid.
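If you do automate any part of comment moderation, the filter should look for therapeutic claims, not just profanity or sentiment. A minimal sketch, using illustrative patterns and invented comments, shows the shape of the check; flagged comments should go to a human, since a pattern list will miss plenty of phrasings:

```python
import re

# Illustrative only: polite, grammatical claims a profanity filter misses.
CLAIM_PATTERNS = [
    r"\bcured my\b",
    r"\bthrew away my .* meds\b",
    r"\bbetter than (my )?prescription\b",
]

comments = [
    "This product cured my cancer!",
    "I threw away my asthma meds thanks to this spray.",
    "Arrived quickly, lovely packaging.",
]

for comment in comments:
    if any(re.search(p, comment, re.IGNORECASE) for p in CLAIM_PATTERNS):
        print(f"FLAG for removal/human review: {comment!r}")
    else:
        print(f"Leave up: {comment!r}")
```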
Evidence, over‑confident AI and the paper trail
The Advertising Code requires that therapeutic claims be supported by appropriate evidence. AI tools, by design, are very good at generating confident statements without checking whether any trial, study or approval actually exists. A device brand might be delighted to see a draft blog post claiming that “clinical trials show this device reduces wrinkles by 90% in four weeks” when no such data is on file. A simple internal rule can reduce this risk: no study, no claim. Every “clinically proven”, “best on the market”, percentage, or disease‑related claim generated by AI should be checked against actual evidence before publication. It is also worth remembering that regulators will ask who approved the content. Saying “the AI wrote it and the intern clicked publish” will not satisfy the TGA. Keeping an approval log – even a simple one – helps demonstrate you have real human oversight.
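A “no study, no claim” gate and an approval log can be combined in a very simple workflow. In the sketch below, the evidence register, dossier reference and reviewer name are all hypothetical; the point is that every specific claim must match evidence on file and be signed off by a named human before it runs:

```python
from datetime import datetime, timezone

# Hypothetical evidence register mapping approved claims to dossiers.
EVIDENCE_REGISTER = {
    "supports general wellbeing": "internal-dossier-001",
    # No entry for the 90% wrinkle claim, so it must not run.
}

approval_log = []

def approve(claim: str, reviewer: str) -> bool:
    """Record the review decision and approve only evidence-backed claims."""
    evidence = EVIDENCE_REGISTER.get(claim.lower())
    approval_log.append({
        "claim": claim,
        "evidence": evidence,
        "reviewer": reviewer,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "approved": evidence is not None,
    })
    return evidence is not None

print(approve("Supports general wellbeing", "J. Smith"))             # True
print(approve("Reduces wrinkles by 90% in four weeks", "J. Smith"))  # False
print(approval_log[-1])
```

Even a log this basic gives you something concrete to show a regulator who asks who approved a piece of content and on what evidence.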
FAQs about AI, the TGA and social media
Clients often ask whether AI is banned altogether for therapeutic goods marketing in Australia. It is not; the TGA’s own guidance accepts that AI‑assisted advertising is here to stay, provided humans review outputs before they go live. Another common question is whether “educational” posts with product references are treated differently. In practice, once a therapeutic good is identified and a benefit is claimed, the Code will almost certainly apply. Brands also ask if they are responsible for what AI chatbots on search engines say about their products. You cannot control external tools, but you are responsible for your own advertising and you should provide corrective information if you become aware of serious misinformation circulating. Finally, many clients ask how far back they need to go with old posts. The TGA’s position is that both current and historical content can be relevant, so it is sensible to review and, if needed, clean up older high‑visibility posts, especially if you are about to scale up AI‑driven activity.
Further Reading
TGA – Advertising therapeutic goods on social media
TGA – TGA releases updated social media advertising guidance to support improved compliance
TGA – Understanding social media advertising rules
TGA – Advertising hub (overview of advertising requirements)
Can Influencers Still Promote Health Products?
Viral Food Trends vs Australian Food Labelling Laws
TGA’s New Laws on Cosmetic Injectables
Please note the above article is general in nature and does not constitute legal advice.
Please email us at info@iplegal.com.au if you need legal advice about your brand or other legal matters in this area.