Therapists are secretly using ChatGPT. Clients are triggered.

When Therapy Meets AI: Navigating the New Frontier

Declan, a 31-year-old from Los Angeles, stumbled upon a startling discovery during an online therapy session. When a poor internet connection prompted him to suggest switching off their video, his therapist accidentally shared their screen instead. To Declan’s surprise, he saw his therapist feeding their conversation into ChatGPT and then selectively relaying the AI’s responses.

Rather than confronting the therapist immediately, Declan silently observed as the session unfolded with AI-generated insights guiding the dialogue. He found himself anticipating the AI’s prompts, responding in ways that mirrored ChatGPT’s suggestions. “I became the ideal patient,” he recalls, “because the AI would challenge my black-and-white thinking, and I’d echo that back, much to my therapist’s approval.”

When Declan eventually raised what he had seen, the therapist became emotional, admitting to feeling stuck in their sessions and turning to AI for guidance. Despite the awkwardness, Declan was still billed for the session.

Therapists and AI: A Growing Intersection

The rise of large language models (LLMs) like ChatGPT has transformed many industries, including psychotherapy. While some patients turn to AI as a substitute for human therapists, an emerging trend sees therapists themselves incorporating AI tools into their practice. These technologies promise efficiency and enhanced communication but raise critical concerns about confidentiality and the sanctity of the therapist-client relationship.

When AI Breaches Trust: Patient Experiences

My own experience echoes Declan’s unease. After receiving an unusually polished and lengthy email from my therapist, I initially felt reassured. But subtle clues, such as an unfamiliar font, Americanized punctuation, and a methodical point-by-point response style, hinted at AI involvement. When I asked, my therapist confirmed she had used AI to draft longer emails.

This revelation sparked feelings of mistrust. Was the empathy genuine, or merely AI-generated? Had my deeply personal messages been fed into an AI without my consent? Searching online, I found many others sharing similar stories on platforms like Reddit, revealing a widespread concern about undisclosed AI use in therapy communications.

Hope, a 25-year-old from the US East Coast, offered a poignant example. After confiding in her therapist about her dog’s death, she received a comforting reply, except the message still carried the AI prompt at the top: “Here’s a more human, heartfelt version with a gentle, conversational tone.” The unintended disclosure left Hope feeling betrayed, especially since she had sought therapy to work through trust issues. Her therapist later apologized, explaining that she had used AI to help express empathy because she had never had a pet herself.

The Ethics of AI Disclosure in Therapy

Despite these concerns, AI can enhance therapeutic communication. A study published in PLOS Mental Health compared therapists’ responses to typical patient scenarios with ChatGPT’s. Participants could not reliably distinguish the AI-generated replies from the human ones, and the AI responses often aligned more closely with therapeutic best practices. However, replies merely suspected of being AI-written received lower ratings, underscoring the importance of transparency.

Research from Cornell University in 2023 found that AI-generated messages foster greater rapport and cooperation only when recipients are unaware of the AI’s role. Once suspicion arises, trust deteriorates rapidly.

Adrian Aguilera, a clinical psychologist at UC Berkeley, emphasizes the value of authenticity in therapy. “Using AI without disclosure can feel like the therapist isn’t fully invested in the relationship,” he notes. “Would you want your spouse or child to receive AI-crafted responses? It wouldn’t feel genuine.”

Early experiments by online therapy platforms like Koko and BetterHelp revealed that users often rated AI-generated responses positively, but undisclosed AI use sparked feelings of betrayal and privacy concerns, sometimes leading to terminated therapy relationships. BetterHelp states it prohibits therapists from sharing personal health information with AI or using AI in ways that could identify clients.

Aguilera advocates openness: therapists should tell clients when AI tools are used, explaining their purpose and scope. Being upfront builds trust and keeps clients from feeling their therapist has done something sneaky behind their backs.

Burnout and the Allure of AI Assistance

Therapists face high levels of burnout, with a 2023 American Psychological Association survey highlighting significant professional strain. AI-powered tools offer tempting solutions to ease workloads, streamline note-taking, and enhance training. However, the potential efficiency gains must be balanced against ethical considerations and patient welfare.

Hope stayed in therapy after the breach of trust but eventually ended the relationship for unrelated reasons. Even so, the “AI Incident” lingered in her mind, illustrating the lasting impact of undisclosed AI use.

Protecting Patient Privacy in the Age of AI

Margaret Morris, a clinical psychologist at the University of Washington, warns that while AI tools can support therapists’ learning, safeguarding patient data is paramount. General-purpose chatbots like ChatGPT are neither FDA-approved nor HIPAA-compliant, posing significant privacy risks if sensitive information is shared with them.

Pardis Emami-Naeini, a computer science professor at Duke University, highlights a common misconception: many users mistakenly believe ChatGPT complies with HIPAA regulations, fostering unwarranted trust. Therapists may share this misunderstanding, inadvertently exposing confidential data.

Declan, who considers himself a fairly open person, was less disturbed by his therapist’s AI use, but he acknowledged the potential harm if more sensitive issues, such as suicidal thoughts or substance abuse, had been involved. Emami-Naeini stresses that anonymizing data is harder than it looks: seemingly innocuous details can reveal sensitive information, so disclosures need careful review and rephrasing before being fed to an AI.

Several companies, including Heidi Health, Upheal, Lyssn, and Blueprint, offer HIPAA-compliant AI tools designed for therapists, featuring encrypted data storage and pseudonymization. Nonetheless, many clinicians remain cautious, especially regarding services that record entire sessions, due to risks of data breaches and misuse.

A stark warning comes from Finland, where a data breach at the mental health provider Vastaamo exposed tens of thousands of clients’ records, including deeply personal accounts of abuse and addiction. The fallout included blackmail of individual patients and the public release of sensitive information.

Potential Pitfalls of AI in Therapeutic Practice

Beyond privacy, relying on AI for clinical decision-making carries risks. Research indicates that some therapy chatbots may inadvertently reinforce harmful delusions or biases by uncritically validating users instead of challenging maladaptive thoughts. Such tendencies could mislead therapists consulting AI for client guidance.

Aguilera has experimented with ChatGPT in training settings, inputting hypothetical symptoms to observe diagnostic suggestions. While the AI generates numerous possibilities, its analyses lack depth and nuance. The American Counseling Association currently advises against using AI for mental health diagnosis.

A 2024 study evaluating an earlier ChatGPT version found it too vague for accurate diagnosis or treatment planning, with a bias toward recommending cognitive behavioral therapy over other potentially more appropriate modalities.

Psychiatrist Daniel Kimmel of Columbia University tested ChatGPT by role-playing a client with relationship issues. He found the AI adept at standard therapeutic techniques such as validation and information gathering, but lacking the integrative thinking that connects disparate details into coherent insights or theories.

Kimmel cautions, “Therapists should remain the primary thinkers in therapy, not rely on AI to do the cognitive heavy lifting.”

Margaret Morris concludes, “While AI may save therapists a few minutes, we must ask: what are we sacrificing in terms of patient care and trust?”
