'Part of what makes human therapy special is that it's imperfect,' said AI ethicist

Artificial intelligence has rapidly permeated nearly every industry, and the mental health field is certainly no exception. As AI enters this space, some AI ethicists have begun to question whether an AI chatbot can ultimately deliver the kind of care a trained therapist provides.
Rachel Katz, an AI ethicist and PhD candidate at the University of Toronto whose research focuses on AI tools in psychotherapy, has spent several years examining AI-based mental health apps like Woebot and MoodMission. But she believes the implications go far beyond basic assessments, and she is already sceptical of AI’s ability to truly substitute for human therapists.
“I worry that there are arguments being made in favour of using chatbots, which are cheaper and don’t require multi-degree training, but I don’t actually think these apps are providing people with proper psychotherapy,” she explained, noting that some are more focused on self-guided cognitive behavioural therapy (CBT) while others are more conversational.
“From a philosophical perspective, I’m looking at where this technology is headed and what that means for the therapeutic relationship,” she added.
Katz believes that trust and human connection are fundamental to effective therapy, and that AI can’t replicate the nuanced emotional understanding of human therapists. She argues that a therapeutic relationship requires genuine empathy and dynamic interaction.
“The therapeutic relationship, ideally, is founded on trust, and I don’t think it’s actually possible for something like a chatbot to form a trust relationship with humans,” she said.
Jamil Jamal shares that concern, particularly when it comes to the face-to-face connection.
“If you have a chatbot that speaks to somebody, that dehumanizes the experience and takes away the human aspect. What we crave the most is the ability to speak to somebody about [mental health],” asserted Jamal, senior benefits consultant at People Corporation.
“Nothing will ever supersede human-to-human conversation in the medical field,” he added.
Candice Alder, president of the BC Association of Clinical Counsellors and an AI ethicist, also underscored that sentiment. She has been monitoring AI development closely as therapists begin experimenting with AI for clinical notetaking, but she cautions against overreach.
“There’s a natural desire for human beings to want to connect with other human beings, and we’re not going to see that go anywhere anytime soon,” she asserted.
From a benefits and claims perspective, Jamal acknowledged that AI could be useful in triaging or screening plan members for health-related symptoms, much as virtual care providers like TELUS Health, Maple or Dialogue do. But he emphasized these platforms have their limits.
“I’ve spoken to many physicians, and they said, ‘While those applications are great, they can only go so far because if you actually have a physical ailment, I need to be able to see you in person,’” he explained.
“Human interaction has to step in to have the most effective outcome,” he added.
Privacy and data misuse are understandably growing concerns in both camps. Katz pointed to cases where private mental health companies have mishandled sensitive data.
“These are not tools being developed by public health,” she said. “People need to think about the consequences of going with a private company’s product for such sensitive healthcare needs.”
Alder pointed to the suicide of a 14-year-old boy in the US, following his interactions with a gamified chatbot on Character.ai, as an example.
“Chatbots are really terrible with dealing with crisis situations,” she emphasized. “When we’re dealing with suicidal ideation… that’s a big deal. Somebody really needs to become involved.”
Despite the risks, Canadians are already turning to AI tools to answer deeply personal questions.
“People are mostly looking at using something like ChatGPT as a conversational partner or talk therapy replacement, not necessarily as a diagnostic,” said Katz.
“Post-pandemic, we saw a flurry of companies introduce virtual-based platforms… and some of those have AI components built in,” highlighted Jamal. He recalled using a tool that delivered a preliminary PTSD and OCD assessment based on a 20-minute questionnaire, adding, “It was quite accurate, I think, in its delivery.”
Still, AI tools have a place in the mental health space. Katz sees potential in using AI as a “supplementary tool,” particularly in areas like exposure therapy for phobias, where chatbots could support the work done with a human therapist.
“Maybe you see a human therapist, and then you have this exposure bot that helps you construct and work through the hierarchy,” she said, while underscoring that the role of AI should remain limited.
Alder asserted the ultimate goal is to use AI as a complementary technology that supports mental health professionals, not as a substitute for human connection. While AI can provide valuable assistance, the fundamental human need for genuine interpersonal connection remains paramount in mental health care.
“We can’t leave decision making to AI,” she said. “There needs to be somebody who is responsible for the decisions that get made and AI can't be responsible because AI is not a person. The outputs from AI need to be looked at as recommendations or suggestions.”
Katz warned that a future where therapists are replaced could shrink training programs and funding for human-led care. Jamal, for his part, remains cautiously optimistic.
“AI has the potential to replace benefits consultants like me,” he said. “But for a lot of decision makers, nothing beats having a handshake, conversation or relationship with somebody.”
“Part of what makes human therapy special is that it’s imperfect,” said Katz.