AI and mental health services

Monday 11 May 2026

Alison Choy Flannigan
Partner, Hall & Wilcox, Sydney, Australia
alison.choyflannigan@hallandwilcox.com.au

Introduction

The use of artificial intelligence for mental health services (including eating disorders) poses unique legal and ethical challenges.

In mental health, this can include the use of AI in:

  • online services, such as the use of chatbots; and
  • using AI to filter through people’s social media channels (such as Instagram or Facebook), analysing both text and images, to target advertising and direct people to weight loss programs or medical weight loss products such as Ozempic.

The internet is full of images which promote ‘the perfect’ body image. With the increase in the use of GLP-1 medications such as Ozempic (semaglutide) and Mounjaro, and the money that can be made from them, there are moderate to high risks that the promotion of certain products to people who are mentally unwell (including those with eating disorders) can do more harm than good.

The same could be said for the promotion of drugs of addiction, or drugs which can lead to addiction (such as medicinal cannabis), to people who are mentally unwell.

With the prevalence of telehealth, the risk of ‘doctor shopping’ is high.

In Australia, the use of AI for mental health services is regulated by several laws, including the Therapeutic Goods Act 1989 (Cth), the Therapeutic Goods Regulations 1990 (Cth) and the Therapeutic Goods Advertising Code.

What are the opportunities for the use of AI for mental health services?

The opportunities for the use of AI for mental health services include more personalised services and the ability for people to access services more conveniently, including online and without stigma.

What are the risks?

The key legal and ethical considerations clinicians need to understand include duty of care; privacy protection and security; and transparency and consent.

Eating disorders can lead to depression, anxiety and self-harm.

AI can be used, for example, to develop strategies to identify people who are susceptible to eating disorders.

AI chatbots can be used in telehealth, for example in connection with weight loss programs, and there is a consequent risk of misuse of medications such as Ozempic.

Therefore, when using AI models in relation to the diagnosis and treatment of people with eating disorders, it is very important that the products are specifically customised with eating disorders and mental health in mind. Transparency, bias, duty of care and data protection are also important considerations.

What should the public and professionals be cautious about when using AI in eating disorder contexts, and how can we ensure these tools are used safely and ethically?

Telehealth and the use of AI-assisted chatbots can make assistance more accessible, but it is important that they provide correct advice based upon evidence-based medicine and are not biased.

Further, people should be advised when they are talking to a computer rather than a human; this is a matter of transparency.

What are the regulatory and legal issues?

According to TGA guidelines, software will be considered a medical device where its intended medical purpose includes one or more of the following (an illustrative sketch follows this list):

  • diagnosis, prevention, monitoring, prediction, prognosis or treatment of a disease, injury or disability;
  • compensation for an injury or disability;
  • investigation, replacement, or modification of the anatomy or of a physiological process or state; or
  • to control or support conception.
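
By way of illustration, these criteria operate as an ‘any one of the following’ test. The Python sketch below simply encodes that logic as a checklist; it is a hypothetical illustration only (all names and the structure are assumptions, not a TGA tool and not legal advice), and the exclusions and exemptions discussed below may still take a product outside regulation even where a criterion is met.

  # Illustrative sketch only – not a TGA tool and not legal advice.
  # All names here are hypothetical; the logic simply encodes the
  # 'one or more of the following' test described in the TGA guidance.
  from dataclasses import dataclass

  @dataclass
  class IntendedPurpose:
      diagnosis_prevention_monitoring_prediction_prognosis_or_treatment: bool = False
      compensation_for_injury_or_disability: bool = False
      investigation_or_modification_of_anatomy_or_physiology: bool = False
      control_or_support_of_conception: bool = False

  def may_be_medical_device(purpose: IntendedPurpose) -> bool:
      """True where any one intended-purpose criterion is met.

      Exclusions and exemptions (discussed later in this article) may
      still remove a product from regulation even where this returns True.
      """
      return any((
          purpose.diagnosis_prevention_monitoring_prediction_prognosis_or_treatment,
          purpose.compensation_for_injury_or_disability,
          purpose.investigation_or_modification_of_anatomy_or_physiology,
          purpose.control_or_support_of_conception,
      ))

  # Example: a chatbot that suggests treatment for an eating disorder
  # meets the first criterion.
  chatbot = IntendedPurpose(
      diagnosis_prevention_monitoring_prediction_prognosis_or_treatment=True)
  print(may_be_medical_device(chatbot))  # True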

The Therapeutic Goods Act regulates software-based medical devices, including software that functions as a medical device in its own right and software that controls or interacts with a medical device.

On 25 February 2021, the Therapeutic Goods Administration (TGA) implemented reforms to the regulation of software-based medical devices, including new classification rules for software-based medical devices according to their potential to cause harm through the provision of incorrect information.

The changes include:

  • clarifying the boundary of regulated software products (including ‘carve outs’);
  • introducing new classification rules; and
  • providing updates to the essential principles to clearly express the requirements for software-based medical devices.

Certain software-based medical devices have been ‘carved out’ from the scope of the TGA Regulation either by exclusion or exemption:

  • exclusion means that the devices are completely unregulated by the TGA; and
  • exemption means that the TGA retains some oversight for advertising, adverse events and notification.

In either case, however, registration of the device on the Australian Register of Therapeutic Goods (ARTG) is not required.

Certain clinical decision support systems have been exempted.

Excluded products include: consumer health products involved in prevention, management and follow-up that do not provide specific treatment or treatment suggestions; enabling technology for telehealth, healthcare or dispensing; digitisation of paper-based or other published clinical rules or data, including simple calculators and electronic patient records; population-based analytics; and laboratory information management systems.

On 3 July 2024, the TGA published new guidance that explains when software will be excluded from the TGA Regulation and provides further commentary to assist in the interpretation of exclusion criteria.

Excluded software can be categorised into:

  • consumer health products;
  • digital mental health tools;
  • enabling technology for telehealth or supporting healthcare delivery;
  • middleware;
  • digitisation of paper-based data or other published clinical rules;
  • population-based analytics; and
  • laboratory information systems and laboratory information management systems.

When large language models (LLMs) have a medical purpose and are supplied to Australians, they may be subject to medical device regulations for software and need approval by the TGA. It is important to note that regulatory requirements are technology-agnostic for software-based medical devices and apply regardless of whether the product incorporates components like AI, chatbots, cloud, mobile apps or other technologies. In these cases, where a developer adapts, builds on or incorporates an LLM into their product or service offering to a user or patient in Australia, the developer is deemed the manufacturer and has obligations under section 41BD of the Therapeutic Goods Act 1989.1

In addition to general software requirements, for software that uses AI or machine learning (ML), the manufacturer is required to possess evidence that is sufficiently transparent to enable evaluation of safety and performance of the product.

Transparency means that a ‘black box’ approach would not be considered acceptable (ie, the TGA will not accept that no evidence can be provided because the product uses ‘black box’ technology). While this continues to be a rapidly evolving area, the evidence will typically include artefacts covering the following:

  • an overarching statement of the objectives of the AI/ML model;
  • algorithm and model design, including tuning techniques used;
  • data used for training and testing – and generalisability where applicable; the size of data sets must be sufficiently large to be statistically credible;
  • information about the populations that this data is based on, and justification for how this data would be appropriate for the Australian population and sub-populations for whom the AI is intended to be used – independent global draft consensus standards have been developed for datasets used in health AI, which could provide a basis for structuring this information; and
  • risk management to address risks including, but not limited to, overfitting, bias and performance degradation such as data drift (a minimal drift-check sketch follows this list).
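
To make the last risk-management artefact concrete, below is a minimal sketch of one possible data drift check: a two-sample Kolmogorov–Smirnov test comparing a feature’s training-time distribution against its live distribution. The choice of test and the threshold are illustrative assumptions only; the TGA does not prescribe any particular drift-detection technique.

  # Minimal illustrative data drift check – the method and threshold are
  # assumptions for illustration, not a prescribed TGA technique.
  import numpy as np
  from scipy.stats import ks_2samp

  def drift_detected(train_feature: np.ndarray,
                     live_feature: np.ndarray,
                     alpha: float = 0.01) -> bool:
      """Flags drift when a feature's live distribution differs
      significantly from its training-time distribution."""
      statistic, p_value = ks_2samp(train_feature, live_feature)
      return p_value < alpha  # small p-value: distributions likely differ

  # Synthetic example: the live population has shifted upwards.
  rng = np.random.default_rng(0)
  train = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time data
  live = rng.normal(loc=0.5, scale=1.0, size=5_000)   # drifted live data
  print(drift_detected(train, live))  # True – flagged for human review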

For these purposes, the manufacturer is the organisation or individual that develops the software.

Selling a product that should be registered on the ARTG, but is not (where an exception does not apply), can result in penalties under both the Therapeutic Goods Act and the Australian Consumer Law, as the cases below illustrate.

Australian Competition and Consumer Commission v Lorna Jane Pty Limited [2021] FCA 852

On 17 July 2020, the TGA issued three infringement notices to Lorna Jane, totalling AU$39,960 for alleged unlawful advertising in relation to COVID-19.

The ACCC subsequently initiated proceedings in the Federal Court. On 23 July 2021, the Federal Court ordered Lorna Jane to pay AU$5m in penalties for engaging in conduct liable to mislead the public and for making false and misleading representations to consumers about its ‘LJ Shield Anti-virus Activewear’.

Secretary, Department of Health v Peptide Clinics Australia Pty Ltd [2019] FCA 1107

Similarly, in 2018–19, the TGA investigated Peptide Clinics Australia Pty Ltd for breaches of the advertising rules for medicines, including the ban on advertising prescription-only medicines to the public, and the Federal Court of Australia ordered an AU$10m penalty against Peptide Clinics.

What are the proposed regulatory changes in Australia?

In July 2025, the TGA published its report ‘Medical Device Software including Artificial Intelligence (AI)’. The report provides a detailed overview of the process, outcomes and key findings from the TGA consultation ‘Clarifying and strengthening the regulation of Artificial Intelligence (AI)’.

Finding 8 of the report calls for greater regulation of the use of AI in mental health services.

While the majority of the software exclusions remain appropriate, guidance is needed to better support stakeholders in understanding the conditions of exclusion.

The report concludes that the digital mental health tools exclusion is no longer appropriate and that urgent review is needed, in collaboration with the Australian Commission on Safety and Quality in Health Care.

Ongoing monitoring and review are needed for health and wellness applications with claims or functionality that may meet the definition of a medical device.

The Australian Commission on Safety and Quality in Health Care issued an AI Clinical Use Guide in August 2025. As with all healthcare technologies, clinicians must meet their professional and legal obligations, including Australian Health Practitioner Regulation Agency (Ahpra) and National Boards guidance in relation to patient safety and best practice in the application of AI tools.

Commentary

When using AI for healthcare services, providers must be aware that there are certain vulnerable populations who may be provided with those services, including those with mental health issues.

It would be negligent to provide health advice or health information that could cause them harm (including self-harm or harm to others) without taking reasonable steps to avoid that harm.

It is therefore incumbent on health service providers to undertake a risk assessment as to whether their services are appropriate in those circumstances, including: (1) vetting what information is provided to whom; and (2) ensuring that there is appropriate clinical governance (by qualified health professionals using evidence-based medicine) over what advice and treatment is provided.

Further, the Australian Consumer Law prohibits conduct which is misleading or deceptive or likely to mislead or deceive, so information on mental health websites must be accurate.

If the website promotes therapeutic goods, there are restrictions on promoting prescription medicines to consumers.

Unfortunately, there is a prevalence of telehealth and health information services in which health information is provided by unqualified people. More regulation is required in this area.

Notes

1     The TGA has published an AI and medical device webpage at: www.tga.gov.au/how-we-regulate/manufacturing/manufacture-medical-device/manufacture-specific-types-medical-devices/artificial-intelligence-ai-and-medical-device-software.