If you have seen Spike Jonze’s film Her, in which a man develops a deep emotional relationship with an AI operating system, then you already caught a glimpse of today back in 2013. The film illustrates both the allure and the limits of AI companionship: it can feel responsive, attentive, and even intimate, yet it ultimately lacks the unpredictability, accountability, and embodied presence that human relationships provide. Her captures the tension between immediate emotional gratification and the deeper, nuanced, and often messy work of connecting with another human being, and in doing so it raises the same ethical and emotional questions that arise when AI enters the realm of mental health support. That story, it turns out, begins long before the film’s release.

Early Experiments: ELIZA and PARRY

The idea of talking to a computer about your feelings isn’t new. In 1966, MIT researcher Joseph Weizenbaum introduced the world to ELIZA, a simple chatbot designed to mimic a psychotherapist. ELIZA’s “conversations” were little more than rephrased questions and stock phrases, but some users still felt heard. Weizenbaum himself was startled at how quickly people attributed empathy and understanding to what was, in reality, a script. It was one of the first signs that humans are wired to find connection – even in machines.
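To give a sense of how thin the illusion was, here is a minimal sketch of an ELIZA-style responder in Python. The patterns and stock replies below are invented for illustration, not taken from Weizenbaum’s original script, but the mechanism is the same: match a keyword, reflect the user’s own words back as a question, or fall back on a canned prompt.

```python
import random
import re

# Illustrative ELIZA-style rules: a pattern to match and reply templates
# that reflect the user's own words back as a question. These rules are
# invented for this sketch, not taken from Weizenbaum's original script.
RULES = [
    (re.compile(r"\bi feel (.+)", re.IGNORECASE),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bi am (.+)", re.IGNORECASE),
     ["What makes you say you are {0}?", "Do you often think you are {0}?"]),
    (re.compile(r"\bmy (.+)", re.IGNORECASE),
     ["Tell me more about your {0}.", "Why is your {0} on your mind?"]),
]

# Stock phrases used when no rule matches.
FALLBACKS = ["Please go on.", "I see.", "Can you tell me more about that?"]


def respond(user_input: str) -> str:
    """Return a reflected question if a rule matches, otherwise a stock phrase."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            fragment = match.group(1).rstrip(".!?")
            return random.choice(templates).format(fragment)
    return random.choice(FALLBACKS)


if __name__ == "__main__":
    print(respond("I feel lonely these days"))  # e.g. "Why do you feel lonely these days?"
    print(respond("The weather is awful"))      # e.g. "Please go on."
```

The point of the sketch is how little is going on underneath: there is no memory and no understanding, only surface pattern matching.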

A few years later, in 1972, came PARRY, a chatbot built by psychiatrist Kenneth Colby to simulate a person with paranoid schizophrenia, partly as a training aid for psychiatrists. It was so convincing in written exchanges that psychiatrists found it difficult to distinguish PARRY’s responses from those of real patients. The system could mirror the emotional responses of a human in conversation, yet it did so without any true awareness of the experience it was portraying.

From Computerized CBT to AI in Mental Health

From there, technology in mental health evolved in waves. The 1990s and early 2000s saw the development of computerised CBT (cCBT) programs such as Beating the Blues and MoodGYM. These weren’t powered by artificial intelligence but by structured, interactive lessons based on cognitive-behavioural therapy principles. They offered something valuable: anonymity, accessibility, and consistency. Now that AI can simulate human-like dialogue, mental health technology is entering an entirely new chapter: one where the “listener” isn’t just following a script but responding in ways that feel almost human.

Why Human Presence Still Matters in Mental Health

And yet, mental health is more than words on a screen. Therapy isn’t only verbal – it’s the way your therapist greets you that day, the moments they pause when you hesitate, the fleeting shift in your expression when a memory surfaces. It’s the way they notice that you dressed differently this week, or that your voice is quieter than usual. These subtle cues, so deeply human and often subconscious, are part of the healing process. AI can simulate conversation, but it can’t truly read a sigh that hides behind a smile.

Accessibility and the Waiting List Problem

One reason people may turn to AI tools is the stark reality of public mental health services: the waiting lists. In many countries, including the UK, it can take weeks or even months to be seen by a mental health professional. For someone in distress, AI offers immediate access, without bureaucracy or scheduling delays. In that sense, it can feel like a lifeline. But here’s the catch: mental health work often requires taking the long road. In our instant-fix culture, it’s tempting to want a quick solution, like swallowing a pill and feeling better within 20 minutes. But even painkillers don’t remove the cause of pain; they only dull it. Sometimes that’s exactly what’s needed; other times, healing requires the slower, more uncomfortable work of looking inward.

The Limits of AI in Mental Health Therapy

Another key limitation of AI as a therapeutic tool is its inability to challenge clients in the same way a human therapist can. Therapy often requires gentle confrontation – holding a mirror to our blind spots, questioning unhelpful beliefs, or keeping clients accountable when they slip back into old patterns. This balance between empathy and challenge is essential for meaningful progress. Without it, growth can stagnate. AI is designed to be agreeable and “safe,” which means it avoids this discomfort and cannot support someone toward deeper self-awareness or necessary change.

Short-Term Comfort vs. Long-Term Growth

Avoiding difficult or uncomfortable material in therapy doesn’t just weaken the process; it can also shape how someone engages with the world. For someone struggling with low self-esteem, it could unintentionally reinforce feelings of inadequacy – because AI is unlikely to notice and challenge the harsh, self-critical narratives they’re repeating to themselves. For someone navigating grief or loss, relying on AI for constant reassurance might provide temporary comfort, but without the genuine presence of a human listener it could prevent the natural process of grieving from unfolding, leaving emotions unprocessed and stagnant.

The danger isn’t inherently in the technology itself, but in the way it’s designed to keep users engaged and “feeling good” in the short term, even if that comes at the expense of deeper, long-term growth.

Responsibility and Agency

This raises an important question: Who holds the responsibility for my mental health? If I rely on AI as a convenient, always-available tool, am I outsourcing my emotional upkeep to a machine? Or is it simply another form of self-help like journaling or reading a therapy book, where I remain in charge? Responsibility and agency are key here; a tool can guide, but it can’t do the work for you.

Ethical Questions in AI Mental Health

AI’s strength lies in its capacity to store and synthesise vast amounts of information: everything from psychodynamic theory to the latest research on trauma-informed care. In seconds, it can reference techniques from different schools of psychotherapy, tailor them to your situation, and recall “your” past conversations.

But if AI can convincingly imitate a helpful therapist, it’s worth asking the unsettling reverse: could it also imitate someone with mental difficulties or illness, similar to PARRY? If so, what ethical lines are we crossing when technology can replicate not just therapeutic guidance, but the complex emotional states of those it is meant to assist? The answer is not so clear.

The Future of AI in Mental Health

Overall, AI in mental health holds enormous potential: accessibility, reduced stigma, and personalised support. But alongside the promise is a set of complex ethical, clinical, and practical questions. As with any tool, the impact will depend on how – and why – we use it. We may be able to code empathy into an algorithm, but the deeper work of understanding ourselves may still require what humans have always sought: another human who sees not just our words, but the pauses between them. At least for now.
