Could AI Therapy Actually Work?


Artificial intelligence is taking the world by storm this year. Services like Bard and ChatGPT are the talk of the tech world. Almost every major industry is trying to figure out how to integrate AI-powered technology into what they do. But what about the field of therapy? Could AI therapy actually help people overcome their emotional and psychological issues? Are there any potential risks and dangers that we should be aware of?

New AI therapy services

Believe it or not, several AI therapy services already exist, such as Wysa, Youper, and Woebot. These aren’t just general chatbots like ChatGPT. They are trained in multiple therapy techniques, including supportive therapy, psychoeducation, and Cognitive Behavioral Therapy (CBT).

Creators of these services emphasize that these bots aren’t meant to replace therapists, either. The company that created Woebot actually calls it a “guided self-help ally”. Woebot engages users in daily conversations, offering exercises and techniques that help them manage mental health symptoms.

Chatbots are designed to mimic the experience of talking to a human therapist.

The conversational interfaces of these chatbots are designed to mimic the experience of talking to a human therapist. The interactions are personalized based on people’s responses and progress over time. The services also provide users with regular check-ins to track their progress and identify areas where they may need additional support.

Evidence for AI therapy

One study, published in the Journal of Medical Internet Research, found that Woebot’s form of cognitive-behavioral therapy was effective in reducing symptoms of depression and anxiety in college students. However, it’s important to note that this study did not directly compare the AI chatbot to traditional human therapy. It only compared it to psychological education using materials from the National Institute of Mental Health on depression and anxiety.

In another study, published in the journal Diagnostics, researchers developed a chatbot to help people with eating disorders and body image issues. The bot was trained in CBT, psychoeducation, mindfulness, and Acceptance and Commitment Therapy (ACT). The bot offered users coping skills and helped them confront and replace negative thought patterns about body image.

The computer programs just don’t pull their weight.

Studies that do compare chatbot-delivered therapy to therapy done by humans find that the computer programs just don’t pull their weight. A study from the Journal of Medical Internet Research found that while an AI chatbot designed to provide therapy for Post-Traumatic Stress Disorder was effective in reducing symptoms, human therapists were still more effective.

Emotional processing

That brings up a major difference between these bots and humans: the capacity for facial recognition and emotional processing. While technology that can interpret facial cues is advancing, none of these chatbots currently integrate that ability. The bot from the eating disorder study came equipped with a cartoon avatar. Interestingly, many users preferred that it lacked most body parts (it was just a head and arms). They said it prevented them from comparing their own bodies to that of their “counselor”.

However, that’s the extent of “face-to-face” interaction. Popular chatbots like Wysa and Woebot are text-only. They are not advertised as having any facial recognition capabilities. Nothing yet comes close to the ability of an empathetic human to visually interpret the facial expressions and bodily mannerisms that convey so much unspoken emotion.

Some potential upsides

Despite this shortcoming, the chatbots do have a few things going for them. One of the biggest advantages of these mental health chatbots is accessibility. Many of these services, including Wysa and Woebot, are currently free to use. Many people who suffer from mental health issues may not have access to therapy due to cost or lack of availability. But with AI-powered chatbots, anyone with an internet connection can access these services at any time they need.

Another potential benefit is that chatbots may feel more private to some people. Some people are hesitant to see a therapist face-to-face due to the stigma associated with mental illness. But with chatbots, individuals can receive therapy from the comfort of their own home, without worrying about being judged or stigmatized.

In short, these chatbots lower the barrier to entry for people who need to talk about how they are feeling. Even if you know you want to see a therapist, there are several steps you need to take before you sit down with one. You need to find a therapist who is taking new patients and has time slots open that work with your schedule. You need to make sure that therapist takes your insurance and that you can afford the copay. You’ll need to arrange transportation if you don’t have a car. Even if you manage all of that, many mental health issues make it hard to even leave the house.

Aside from that, these tools can perform other ancillary services for doctors and therapists. They can help onboard new patients, for example by walking them through new-patient questionnaires. Chatbots can collect patient histories and symptoms. They could even help with triage and could give an early warning to providers when someone is in crisis. In the long term, these programs could also help with reminders for taking medication, going to appointments, and attending to other self-care activities like exercise.

Not so fast…

Now, if studies are showing that chatbots are better than nothing for mental health, why not use them? We’re not saying people shouldn’t try to find real therapists. That’s not the comparison we are making. Instead, these chatbots (which, again, are free and readily accessible) can just provide a stepping stone on the way to therapy with a real person.

But let’s not forge ahead just yet. Studies or not, some of these chatbots are not ready for prime time, as users of Tessa found out, so you should be wary. There are still many unexplored and as-yet-unstudied potential risks. One of the biggest concerns with AI chatbots is the quality of the therapy they provide. The studies I’ve mentioned have already found that therapy by AI chatbots doesn’t hold up compared to therapy provided by humans.

No human connection in AI therapy

While chatbots like Wysa and Woebot are designed to be empathetic and supportive, they are still limited by their programming. Chatbots are still essentially “text prediction” programs. That means they use artificial intelligence algorithms to predict the most likely response to a user’s input. They analyze the user’s message and generate a response based on a combination of predefined rules, machine learning, and natural language processing techniques. They use the context of the conversation, the user’s previous messages, and other relevant information to determine the most appropriate response.
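To make that concrete, here is a deliberately simplified sketch (in Python) of how a rules-plus-prediction chatbot might choose a reply. This is purely illustrative; it is not the actual code behind Woebot, Wysa, or any real service, and real products layer trained language models and clinical content on top of anything this simple.

```python
# Toy illustration of a "rules plus prediction" therapy chatbot.
# Not any real product's implementation; keywords and replies are made up.

CRISIS_PHRASES = {"suicide", "kill myself", "hurt myself"}

CANNED_RULES = {
    "anxious": "It sounds like you're feeling anxious. Want to try a short breathing exercise?",
    "sad": "I'm sorry you're feeling down. Can you tell me more about what's been going on?",
}


def reply(user_message: str, history: list[str]) -> str:
    """Pick a response from simple predefined rules, with a generic
    fallback standing in for a learned text-prediction model."""
    text = user_message.lower()

    # Safety rule: escalate to emergency resources on crisis language.
    if any(phrase in text for phrase in CRISIS_PHRASES):
        return ("If you're in crisis, please contact emergency services "
                "or a crisis hotline right away.")

    # Predefined rules keyed on simple keywords.
    for keyword, canned_reply in CANNED_RULES.items():
        if keyword in text:
            return canned_reply

    # Fallback: a real chatbot would ask a language model to predict the
    # most likely therapist-style response, given the conversation history.
    return "Thanks for sharing that. How has it been affecting your day-to-day?"


# Example: reply("I've been really anxious about work", []) returns the anxiety prompt.
```

Even in this cartoon version, you can see the point: the program is matching patterns and predicting plausible replies, not understanding the person on the other end.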

AI cannot empathize.

This system works well. Go talk to ChatGPT or Bard and you’ll see for yourself how real the conversation feels. However, despite all their complexity, these chatbots are not human. They do not have feelings and they do not “understand” emotions. They lack the emotional intelligence that a human therapist possesses. Most importantly, they cannot empathize. They can only analyze what users say to them and try to emulate the response that a therapist might give.

Another problem is the potential for misdiagnosis or incorrect treatment. Chatbots may not be able to detect certain nuances or subtleties in a person’s behavior or emotions. If the companies that run these programs continue to work on them, they will likely get better, but that time has yet to come. The bots can supposedly recognize when someone is in real trouble and tell them to contact emergency services. But that’s not anything special. Search engines like Google already do that if people type in something worrying.

Leave therapy to the pros

The fact is that, when people really need help, they should not be relying on computer programs. Diagnosis and treatment of life-altering and life-threatening mental illnesses need to be left to trained professionals.

Lastly, there is the concern about data privacy. With chatbots collecting and analyzing large amounts of personal information, there is the potential for data breaches or misuse of personal information. Creators and proponents of the chatbots reassure anyone who asks that all the information is encrypted and secure. But of course they would say that. A major issue is that so much of this sensitive medical information is flowing into these companies’ servers. It’s not in a HIPAA-protected medical record or in your therapist’s secure office. It’s just up in the cloud, on a server somewhere, waiting to be hacked or sold to the highest bidder.

All this is a lot to take in. AI is moving rapidly, and much of what we’ve talked about will change as it progresses. It may have issues and there may be drawbacks, but AI is unquestionably here to stay. For now, it’s a new tool in the toolbox, and one that surely won’t completely replace human therapists.
