How is AI use by therapists regulated? Part 1: Australia - competence as a new professional duty

This is the first post in a series exploring how different countries are regulating the presence of AI in psychotherapy practice. We begin with Australia - one of the countries that has approached the subject comprehensively, and, importantly, has not focused primarily on data protection. Data sits in the background. In the foreground is a different question: what does a therapist need to understand for AI to become genuine support in their work, rather than a risk?


In the Australian framework, AI has been written into digital competence, which in turn runs through several core areas of clinical practice, including ethical practice, diagnostic assessment, interventions and communication.

What does this mean in practice? That digital competence is not an additional skill alongside clinical work, but a part of it. The approach is close to how a therapist thinks about other elements of their clinical toolkit. Every new instrument - a questionnaire, a technique, a model of case conceptualization - requires an understanding of where it makes sense, what its limitations are, how it changes the process. AI fits within the same logic. It can be valuable support in clinical work, provided it is consciously chosen and integrated into the toolkit with attention to how it interacts with everything else.

What, specifically, does a therapist need to understand?

Behind the general requirement to use digital technology in a lawful, ethical and professional manner sit several concrete dimensions.

The first is an understanding of how the tool works - on what basis a result is produced, what its limitations are, when it is less reliable. The second is familiarity with the legal and ethical frameworks - GDPR, professional law, the code of conduct. If a therapist uses an AI tool that processes patient data, it is worth knowing where that data is processed and how responsibility is distributed. The third, and the most nuanced, is reflection on the influence of AI on the clinical process. And the fourth: awareness of one’s own role. The Australian guidelines state it plainly: clinical responsibility remains with the therapist.

Co-creating knowledge: PACFA and a new model of professional development

The Australian recommendations are not only about training - training is a predictable extension of what a therapist already does through Continuing Professional Development (CPD). What matters more is how the therapist's own role in relation to a developing technology is understood.

PACFA (Psychotherapy and Counselling Federation of Australia), in its guidelines, describes this through two related dimensions. The first is familiar: practitioners are to keep their knowledge of advances in AI and its ethical implications up to date - through training, reading, formal CPD. This is the classic approach.

The second dimension is less obvious and considerably more engaging. PACFA encourages therapists to take part in the evaluation of the effectiveness and safety of AI tools - both in their own practice and in formal research projects.

What does this change? It changes the therapist’s position in relation to the technology. They are no longer merely its recipient or user. They become a co-participant in a process in which the technology is assessed, corrected and developed. Knowledge about what works in the clinic and what does not is not produced solely on the side of technology companies or the academy. It is produced in the encounter between the tool and the practice.

The therapist is not a passive consumer of methods; they are a critical witness and often a co-creator of them. PACFA extends this logic into the domain of AI - as a natural extension of the professional role.

What role AI may play, and what role it may not

This is perhaps the most practical point in the Australian guidelines. The guidelines do not say in general terms that AI is a supportive tool. They say specifically where AI has a place and where it does not.

AI’s place is wherever it supports the therapist’s work without replacing it. This is, above all, the area of organisation and continuity: structured notes, documentation, maintaining context between sessions, the availability of working hypotheses at the moment the therapist needs them. AI can also help in the preparation of clinical assessments and reports, and in data analysis - always on condition that the therapist retains full control over interpretation and decision-making. These are tasks that genuinely lighten the cognitive and organisational load while not encroaching on what stands at the heart of clinical work.

Where AI has no place, the line is equally clear. AI does not replace clinical judgment, cannot be presented to a patient as a therapist, does not make therapeutic decisions, and does not operate outside human oversight.

The AI/therapist boundary is built structurally into the Australian guidelines - and it ought to be built in the same way into the tool a therapist chooses for their work. Not as a layer of communication that can be added or removed, but as the way the application functions. AI proposes structure. The therapist decides.

What this means for clinical work

The Australian recommendations say something important, quietly: competence in AI is not an “add-on” to the professional repertoire. It is part of responsible practice in 2026 and beyond. A therapist who does not know the limitations of the tool they use, or its influence on the clinical process, does not meet the professional standard.

This does not mean that every therapist has to use AI. It means that every therapist should understand what AI is, where it makes sense and where it does not - also in order to make a conscious decision not to use it. And the one who does understand the tool gains real support from it - support that does not compete with their clinical judgment but frees them from a portion of the invisible work.

Therapy Support has been designed with the same approach in mind, with the AI/therapist boundary built into the way the application functions. AI proposes structure, the therapist decides. Our website includes a Materials section, where we gather content supporting the acquisition of knowledge about AI in therapeutic work - exactly in the spirit of what the Australian recommendations encourage. And therapists who want to go deeper and co-create the development of the tool can join the beta tester programme.

In the next instalments of the series we will look at how the same subject takes shape in other countries: the USA, Canada, New Zealand, Germany and the United Kingdom. And then at where guidelines do not yet exist - and what this means for therapists who want to practise responsibly before their professional organisation formulates its own position.
