AI in Psychotherapy: Where Technology and Human Connection Intersect | Psyched for Mental Health S2 Ep 1

For the first episode of Psyched for Mental Health’s second season, we dive deep into an intriguing question: Can artificial intelligence replace psychotherapists? With rapid advancements in AI and its increasing integration into various aspects of our lives, the possibility of it playing a role in mental health support and therapy is becoming a subject of great interest. Host Dr. Ed Bilotti speaks with Dr. Diego Pinochet, who holds a PhD in Computational Design from MIT; Smriti Joshi, a clinical psychologist, board member, and chief psychologist for Wysa, a company pioneering the delivery of mental health care using AI applications; and Professor Alastair Van Heerden, research director of the Human and Social Development Programme at the Human Sciences Research Council in Pretoria, South Africa. As we watch the evolution of AI right before our eyes, what are the potential benefits, limitations, and ethical considerations surrounding the use of AI in psychotherapy?

Alastair Van Heerden
Diego Pinochet
Smriti Joshi


Episode Transcript:

Dr. Ed Bilotti:

I am Dr. Edward Bilotti and we are psyched for mental health. Welcome to the first episode of our second season, and thank you for continuing to listen and support us.
In this thought-provoking installment, we delve into the intriguing question: Can artificial intelligence replace psychotherapists? With advancements in artificial intelligence and its increasing integration into various aspects of our lives, the possibility of AI playing a role in mental health support is becoming a subject of great interest.

Join us as we explore the potential benefits, limitations, and ethical considerations surrounding the use of AI in psychotherapy. Get ready to uncover the evolving landscape where technology and human connection intersect and gain new insights into the future of therapy.

What you just heard was written by an artificial intelligence application. I asked the now famous ChatGPT to write me an introductory paragraph to a podcast episode about whether AI can replace psychotherapists. In seconds the system returned the paragraph I just read. I did not edit it or change it in any way.

We’re seeing the beginning of a new era in which hyper-powerful computers can sift through massive amounts of data, test different scenarios, assemble patterns, and actually learn.
AI technology is not new. Engineers and computer scientists have been researching and developing it in various forms for decades, but with faster and more powerful computer chips widely available, more data on more things than ever before thanks to this thing we call the internet, and advances in the technology itself, artificial intelligence is coming of age.

The impact it will have on our lives will be far-reaching. But can it be used in mental health? Can a chatbot become your therapist? For many of us who are not computer scientists or engineers, terms like artificial intelligence, machine learning, deep learning, transformers, and large language models can not only sound like gibberish but may also conjure up images of things we’ve read about or seen in science fiction shows.

Now, personally, I have never been shy about technology. In fact, I consider myself pretty computer savvy. I’ve done some programming and coding. Things were a lot less complex back then, however, and keeping up with the fast changing world of computer science has proven impossible while trying to pursue a career as a physician.

So I needed to do some research to gain some understanding of what all this stuff is about, so that I could do my best to demystify it for myself and then, in turn, demystify it for you. I knew that a machine is just a machine and does not have a brain or actual neurons, despite the adoption of terms like neuron and neural network by computer scientists and AI developers.

A conventional computer program is a set of step-by-step instructions given to a computer for it to execute in order to get some result. Generally, the user enters some data, the computer performs some functions and operations on that data, and then it gives a result, just as you might give a set of simple instructions to an unskilled person to do a certain task: they can follow the steps without much thought or deduction and end up with the desired result. But what is artificial intelligence?

Diego Pinochet:

If we try to define what artificial intelligence is in the most simple words, artificial intelligence is, let’s say, the theory and development of computer systems that are able to perform tasks normally requiring human intelligence, such as, for example, visual perception, speech recognition, decision making, translation between languages, and so on.

Dr. Ed Bilotti:

Dr. Diego Pinochet possesses a PhD in computational design from the Massachusetts Institute of Technology. Artificial intelligence is a broad term that describes any scenario where a computer emulates human intelligence. By applying reasoning and deduction to solve a problem rather than simply following step-by-step instructions, it gives the appearance that the machine is thinking or reasoning as a human being would.

But is it really? Well, as tempting as it is to think there is some kind of magic or sorcery here, it turns out that by giving the computer even more steps and instructions to follow, it can create a situation where it appears the machine is thinking. What is actually happening is that the computer is applying complex mathematics, statistical equations, and algebraic formulas to sets of data in order to find patterns and test different possible alternatives until it comes up with a best guess.

Diego Pinochet:

And the second definition is of something like a machine, for example, or a device or program that is capable of showing or performing independent action. So in that sense, it’s not strange to find why, for example, there are all these myths and also fears about AI that we usually get from science fiction, as you mentioned, this idea of cybernetic organisms that will take over the world.

Because when we think about this idea, we’re inferring that the machine is intelligent because we’re giving it that connotation, right? Because we’re the ones who are, let’s say, seeing or experiencing that emergent behavior from a machine or a device.

Dr. Ed Bilotti:

This is all just sophisticated use of computing and calculations, numbers, and statistics. In the digital age, everything from pictures to text to audio recordings to videos is broken down into zeros and ones. Once you’ve done that, a computer can sift through that data and find patterns, basically doing what looks like magic. Arthur C. Clarke’s third law states that any sufficiently advanced technology is indistinguishable from magic.

Diego Pinochet:

People believe that this is like sorcery or magic because we see it happening but don’t understand how it is happening. And of course, we react that way when we see something that shows emergent behavior, something that seems smart.

Ed Bilotti:

Anything that gives a machine the ability to appear to be thinking or making decisions is artificial intelligence. So what then is machine learning? How can a machine learn? Again, the words used can make you think of a machine that can learn the way we do. In fact, engineers developing these systems have tried to mimic the way the human brain works, or at least the way we think the human brain works; there is still a lot we don’t know or understand about the brain. They’ve adopted words like neuron and neural network, furthering our science fiction fantasies.

Diego Pinochet:

We also hear a lot about, for example, machine learning, and sometimes these terms are confused with each other. What we can say is that machine learning is a subset of artificial intelligence: all machine learning counts as AI, but not all AI counts as machine learning. Machine learning is basically an application of artificial intelligence that provides systems with the ability to automatically learn and improve from what we call experience.

That is something we can talk about, but in that sense, when we talk about experience, we are talking about data, without the system being explicitly programmed. That is basically a definition of machine learning: it’s a computer program that, as it gets more data, can make or infer decisions based on facts or data updates.

Ed Bilotti:

But these systems are, in fact, lines of code that make up algorithms: sequences of mathematical equations applied in a certain way. The computer can take very large amounts of data and sort through it step by step, applying statistical formulas and algebraic equations to compare features and find patterns in that data.

In a so-called neural network, the data is analyzed and the results are passed on to another layer of calculations and sorts. Different features are given different weights according to how important they are in identifying what something is, and then another layer follows, and so on, until finally there is an output or a result.

The computer is not doing anything miraculous or mysterious. It is basically crunching numbers, a lot of numbers, and fast, very fast. The equations and methods being applied are things that anyone who has ever taken a course in statistics has probably heard of: linear regression, sensitivity, specificity, the mean, the standard deviation, et cetera.
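
To make that concrete, here is a minimal sketch, written in plain Python purely for illustration (it is not from the episode, and the numbers are made up), of the kind of statistics being described: fitting a simple linear regression using nothing more than means and sums.

```python
# Ordinary least-squares linear regression computed by hand.
# The data points below are invented purely for illustration.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 2.9, 3.7, 4.2, 5.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# slope = covariance(x, y) / variance(x); the intercept follows from the means.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
intercept = mean_y - slope * mean_x

# The "best guess" for a new input is just arithmetic on the fitted numbers.
print(f"prediction at x = 6: {intercept + slope * 6:.2f}")
```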

This is not science fiction; it’s just another tool to help us do our work. What about this thing called deep learning? Now that sounds scary. It turns out that deep learning just means machine learning with more layers, doing more calculations on more data, and maybe even some backpropagation, as they call it, where one layer sends information back to the previous layer to correct errors and improve the accuracy of the results.
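
In code, a "layer" really is just weighted sums passed through a squashing function, and backpropagation is just the chain rule from calculus nudging the weights to shrink the error. The toy example below is a purely illustrative sketch in plain Python, with made-up starting weights and a single made-up training pair; it is not how any production system is written.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One input feature, two hidden "neurons", one output.
# All starting weights and the training pair are invented for illustration.
x, target = 0.5, 1.0
w1, b1 = [0.4, -0.6], [0.0, 0.0]   # hidden-layer weights and biases
w2, b2 = [0.3, 0.8], 0.0           # output-layer weights and bias
lr = 0.5                            # learning rate

for _ in range(200):
    # Forward pass: each layer is a weighted sum plus a squashing function.
    h = [sigmoid(w1[i] * x + b1[i]) for i in range(2)]
    y = sum(w2[i] * h[i] for i in range(2)) + b2
    loss = (y - target) ** 2

    # Backward pass ("backpropagation"): the chain rule, applied layer by layer.
    d_y = 2 * (y - target)
    d_w2 = [d_y * h[i] for i in range(2)]
    d_b2 = d_y
    d_h = [d_y * w2[i] for i in range(2)]
    d_w1 = [d_h[i] * h[i] * (1 - h[i]) * x for i in range(2)]
    d_b1 = [d_h[i] * h[i] * (1 - h[i]) for i in range(2)]

    # Gradient step: nudge every weight slightly to reduce the error.
    w2 = [w2[i] - lr * d_w2[i] for i in range(2)]
    b2 -= lr * d_b2
    w1 = [w1[i] - lr * d_w1[i] for i in range(2)]
    b1 = [b1[i] - lr * d_b1[i] for i in range(2)]

print(f"prediction after training: {y:.3f} (target was {target})")
```

Stack many more such layers and many more weights and you have "deep" learning; the arithmetic is the same, there is just vastly more of it.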

Large language models are AI systems trained on enormous amounts of text gathered from the internet, which they use to predict things like what the next word of a sentence might be. This type of system is called generative AI because it generates a result based on context and meaning.

The models behind them are called transformers; the GPT in ChatGPT stands for generative pre-trained transformer. These programs have huge numbers of parameters and are trained on vast amounts of data, and by giving attention to lots of details, they form the ability to predict based on patterns.
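
A real large language model learns its patterns with billions of parameters and an attention mechanism, which is far beyond a few lines of code, but the basic task of "predict the next word" can be illustrated with something much cruder. The sketch below, using an invented toy corpus, simply counts which word tends to follow which and always guesses the most frequent follower.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "vast amounts of text"; invented for illustration.
corpus = (
    "the therapist listened and the patient talked "
    "and the therapist asked what the therapist thought"
).split()

# For every word, count which words follow it and how often (a bigram model).
followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    """Guess the most frequent follower of `word` seen in the corpus."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # -> 'therapist' (follows 'the' three times, 'patient' once)
```

A transformer does something conceptually similar, except that it learns which parts of the whole preceding context to pay attention to rather than looking only at the previous word.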

But how can all of this apply to mental health treatment? Can AI replace a human psychotherapist? If it cannot replace a human therapist entirely, what are the ways it might potentially be used in addressing mental health needs?

It is easy to imagine how AI can have useful applications in medicine, helping doctors make diagnoses and choose effective treatments, analyzing CT scan or ultrasound images to try to identify possible tumors and countless other scenarios. But what about the human mind? What about mental health? Can a computer running artificial intelligence software replace your psychotherapist?

As early as the mid-1960s, a program called ELIZA, developed by Joseph Weizenbaum, attempted to simulate a conversation between a patient and a therapist. In many ways, ELIZA behaved like a primitive chatbot, allowing a user to type in natural language, say, for example, “I’m having problems with my girlfriend.”

The program used pattern matching, looked for keywords, and then delivered a scripted response, making the user feel as though it understood what they said. I actually had the opportunity to play with ELIZA firsthand on a Radio Shack TRS-80 computer. It was entertaining and intriguing to be sure, but it quickly became obvious that it could not handle any truly complex questions.
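
ELIZA’s trick is easy to sketch: scan the input for a keyword pattern and hand back a canned, fill-in-the-blank reply. The few lines below are a loose imitation written only for illustration, not Weizenbaum’s actual script (the real program also did things like swapping pronouns, so "my" became "your").

```python
import re

# A handful of keyword rules in the spirit of ELIZA; the patterns and
# replies here are invented for illustration.
RULES = [
    (re.compile(r"i'?m having problems with (.+)", re.I),
     "Why do you think you are having problems with {0}?"),
    (re.compile(r"\bi feel (.+)", re.I),
     "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (mother|father|girlfriend|boyfriend)\b", re.I),
     "How do you get along with your {0}?"),
]

def eliza_reply(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # fallback when no keyword matches

print(eliza_reply("I'm having problems with my girlfriend"))
# -> Why do you think you are having problems with my girlfriend?
```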

Fast forward to 2023, where the tiny computers we all carry around in our pockets are vastly more powerful than that clunky desktop. AI and machine learning technology has advanced by leaps and bounds, and volumes of data about anything and everything are readily available on the internet. What could a 2023 version of something like ELIZA look like?

Companies like Wysa are pioneering the delivery of mental healthcare using AI applications.

Smriti Joshi:

Wysa’s mission is to make mental healthcare accessible and affordable for everyone. It’s basically an AI-based chatbot that delivers CBT-based conversations.

Ed Bilotti:

Smriti Joshi is chief psychologist for Wysa and sits on its board of directors.

Smriti Joshi:

It’s basically a mental health support tool, and it has a wide variety of self-care tools grounded in CBT, DBT, acceptance and commitment therapy, and many other such evidence-based schools of thought. It also has a layer of human coaches available to offer human-based support to those who may need more than just the digital tool.

Ed Bilotti:

Wysa’s platform uses plain language recognition and gives the user responses that are predetermined by human clinicians. In this sense, it is not a transformer or a generative AI tool that comes up with its own responses. This may actually be reassuring at this early stage, since generative AI systems can produce false or incorrect information. Think, for example, about how your predictive text works. How many times a day are you frustrated by it autocorrecting something you typed to something completely wrong?

Applying technology that is right most of the time but sometimes very wrong to mental health and therapy seems potentially dangerous. Last month it was reported that the National Eating Disorders Association had to disable the chatbot on its website after it gave harmful dieting advice. A big question is, who will be responsible for decisions made by an AI system in a mental health setting?

Smriti Joshi:

It uses artificial intelligence to understand various aspects of what the user may input, like emotions, uncertainty, agreement, and other topics of conversation. And it’s not completely generative AI, like ChatGPT or Bard, which use large language models to generate responses to text queries or messages.

Currently we don’t use a lot of generative AI to respond to our users in real time. Most of our responses have been written by a team of conversational designers, and they’re vetted by clinicians like me to ensure that they are empathetic, non-judgmental, and clinically safe for the users.

So AI basically helps with better detection, to be able to offer better responses. Those responses are crafted or designed by the designers and the team of clinicians, and then these conversations are thoroughly vetted by members of our actual user groups, by our team, and then by our clinicians. Only when we find that they are safe enough and empathetic enough to go into Wysa as responses for whatever we are trying to respond to or address are they included in the actual version.

I think that responsibility lies with the organizations or people who are working on creating AI specifically for health, or for sensitive areas like offering mental health support, because responses may not always be appropriate; no AI model can be perfect.

Ed Bilotti:

Dr. Alastair Van Heerden is research director for the Human and Social Development Programme at the Human Sciences Research Council in Pretoria, South Africa. In May of 2023, he and his colleagues published a paper in JAMA Psychiatry entitled “Global Mental Health Services and the Impact of Artificial Intelligence-Powered Large Language Models”.

Alastair Van Heerden:

The issue, as I see it, is that we are in a place where we would still trust individuals, humans, more than machines. I was thinking of this in terms of self-driving cars, and we can think of it in psychology slash mental health terms too, but at what point do you give control over to the self-driving vehicle? You are aware of all the dilemmas in that space about what decision the car makes if it’s going to have an accident, as to where it has its accident: do you not hit the BMW and rather take out another car because the insurance will be cheaper? Typically people say that they want to be making the final decisions. This happens a lot as well in medical imaging, where the field is a bit more advanced.

So you might, for example, take a chest x-ray of the lung, and the system may be able to detect tuberculosis, but still, often the clinician would be making the final decision. We see this as well in parenting, where children are unwell and parents call the pediatrician all the time, and part of the reason they do that is for reassurance and offloading some of the responsibility: if my child has a fever, at least I called the doctor; they are now responsible for saying whether it’s fine or not. With language models and with therapy, this is the exact question: do we ever get to the point where we say that the model can be responsible and accountable for the clinical decisions it makes?

And I don’t know if the answer’s yes or no. My sense is it feels like it should get to that point, but it’s a conversation we need to have as a society. Do we always want to be making those decisions, or if we get to a point where these models, as you were just saying, are better than humans, does it make any sense to have a person signing off on things when they are actually less experienced and maybe less able to be supportive? So my sense is that we will go in that direction. But it’s really a fascinating and challenging discussion to have, I think.

Ed Bilotti:

There is a vast unmet need around the world for mental health services and a serious lack of adequate resources. Could AI help fill in those gaps?

Alastair Van Heerden:

While there may not be psychiatrists or psychologists or talk therapists available through large parts of the world to do one-on-one sessions, my research interest has been whether technology might open that space up to more and more people.

The problem with that, though, is that the opposite now seems to be the likely outcome. Those technologies probably can be applied, but what might happen is that accessing another human, having individual contact, is going to be reserved for the very wealthy.

And so in some ways I feel like I’m starting to doubt this course I’m taking a little bit, because I worry that the situations where this will be applied are low-resource settings, where you then have access to AI therapy, versus if you happen to be more well off, you get access to a person. And that seems to be a kind of digital divide I don’t really want to help perpetuate.

Ed Bilotti:

Training mental health professionals takes years and requires a great deal of supervision by already trained professionals. Professor Van Heerden and his colleagues write that even if AI systems are not a substitute for human therapists, they could be useful for training and supervising people with a set of basic skills who can then fill in some of the gaps in underserved areas.

So it seems the AI systems might be supervised by humans. And in some cases, humans could be supervised by the machines. One major concern is privacy and data collection. In 1996, I was a third year psychiatry resident learning psychotherapy. I used to meet with a supervisor at the office of her private practice.

Her office was configured such that there was one door to enter the waiting room from outside, and a separate door to leave to the outside from the consultation room. This was so that a patient did not have to pass back through the waiting room on the way out and see or be seen by the next person waiting.

Privacy was paramount and there was no sign on the outside of the door advertising that one was walking into a psychiatrist’s office. Now, I understand that some of this relates to stigma and shame, but I still believe there is something to be said for respecting privacy and the deeply personal nature of psychotherapy.

Contrasting that to someone seeking treatment online with tech companies that are collecting data on everything about you is something I just can’t even wrap my head around. Websites and apps that connect people with therapists online are kind of like Uber for therapy. They are Silicon Valley tech companies that want your data.

They want to use it for profit. Now we all know that laws and regulations around this kind of technology always lag far behind and we cannot rely on anyone looking out for us. Psychotherapy and mental health treatment is not like calling a ride, nor is it social media. It’s a serious and deeply personal endeavor.

Having a chat bot as your therapist takes this to a frightening new level. AI is all about data, and in this case, you are the data. Do you want a machine learning about your mind, your trauma, your personal story? Will companies that provide this technology respect individual privacy?

Smriti Joshi:

There are also questions of ethics and safety around user data and privacy. When using AI in a regulated setting like healthcare, we have to be very careful; people working in this particular sector specifically need to ensure that there are guardrails around conversational AI so that it is aligned to ethical standards, clinical validation, privacy, safety, and transparency.

And we’ve learned it from experience. Like I said, when we started, I think the only other organization or company offering this was Woebot, and I think Woebot and us and many others in this sector have learned from experience that just letting it be completely generative can bring in missed detection; we don’t know what response it may give to somebody who’s typing in a very sensitive piece of information about their health or sharing an emotional crisis.

And there could be a lack of transparency from the organization’s side about how they’re going to use the data. We’ve seen a lot of controversy in the US about some mental health apps which were not safe with regard to, or not transparent about, how the data would be used.

Ed Bilotti:

It is well known that laws and regulations have not and cannot keep up with the pace of advances in technology, so we likely will not be able to rely on those protections. How can we be sure that our personal data will not be used for improper purposes?

Alastair Van Heerden:

Social media’s whole hook was attention, and so this first wave of digital tools we’ve seen has all been optimizing for attention, trying to get you to scroll on Instagram. Whereas the AI revolution is likely to try and hook into intimacy and connection, and have people connecting with products and tools by forming bonds.

Ed Bilotti:

If AI systems know every intimate detail about you, this information can be valuable to, say, marketers, or worse, insurance companies, who could potentially use it to exclude high-risk cases. Wysa says it collects data, but no personally identifiable information. In other words, no user’s identity is collected along with the data. Rather, the data is used for continuously updating and improving the system and for population-based statistics.

Smriti Joshi:

Wysa does not collect any PII upfront: it does not ask for email IDs, phone numbers, or any insurance ID, so we miss out on a lot of data. Our B2B partners sometimes want to know who is talking about work stress, what kind of person this is, you know?

And that is precisely what we cannot compromise on. We say we can give you population-level insights, but not individual-level insights. And there are models that we are using within our AI, as well as in the coach conversations that happen via text on our platform, that are able to detect PII and mask it.

So we try our best to ensure that no PII or ePHI gets stored in our systems. And who owns the data? It’s the user, and they have the right to reset their data at any point in time on Wysa.

Ed Bilotti:

In his book “Homo Deus”, Israeli historian Yuval Noah Harari discusses the idea of data science being the new religion: the pervasive idea across science that everything can be analyzed through algorithms, even in biology; the idea that living organisms are simply algorithms, that every cell in our bodies is simply executing algorithms, and that these algorithms can be analyzed and broken down.

Our behaviors are just algorithms. Each cell in our bodies, and each of the tiny components inside those cells, is just executing procedures programmed by evolution and our genes. Our organs are following code, and that includes our brains. Individuality, human consciousness, and free will are just illusions, or delusions in a sense.

If this were true, then applying AI to psychotherapy would seem straightforward: just figure out how to tune up each person’s algorithms so they function in the healthy, correct way, and make everyone the same.

But what about bias? Since large language models are trained on huge amounts of data from across the internet, the potential exists for some big problems: problems trusting the accuracy, quality, and value of the data, as well as whether it is biased. Can the same algorithms deliver mental health services to people of different cultures, ethnicities, socioeconomic statuses, and religions?

Alastair Van Heerden:

Some of these cultural biases that I’ve seen through my work really relate to the fact that I mainly work in low- and middle-income countries. I do a fair amount of work in Nepal, in parts of Africa, and in South America, and oftentimes I’ve seen that some of the artificial intelligence models that we’ve wanted to use haven’t really been able to cope with some of the diversity that I experience in those settings.

It’s not directly related to language models, but one of my favorite examples is some work I had done with postnatal depression. We were interested in a sort of theory of mind within the child, and seeing if we could clip a little lapel camera to the child’s shirt and record images every couple of minutes, which we then fed back to the mom.

I won’t get into that, but the images were then run through an object detection model, and I was very surprised to see that the children were spending a lot of time viewing umbrellas. I could make no sense of it. Anyway, we got permission from ethics to actually review some of the images.

And it turned out that what the model thought were umbrellas were actually the huts, which have these cone-shaped roofs. So it was actually the child’s home. And I think that’s just a great example of how bias can be introduced unintentionally into models. In that case, we were able to identify the issue, but you can imagine how, in a psychotherapy or language model, that might have unintended consequences if you’re unaware that the model may have biases.

The other perspective I could share, though, is the counterpoint: some of these general foundational models have potentially been trained on all the text on the internet, or in the world. And as humans, we grow up learning one particular part of the human experience; depending on whether you were born in the West or born in Africa, you’re going to learn different stories, different identities around what masculinity looks like, et cetera.

And the argument is made that we are only being exposed to one tiny part of what it is to be human, whereas these models are being trained on everything that’s known about being human. And so in some ways the potential exists that they may have access to a broader set of cultural experiences than humans do.

Smriti Joshi:

We do have to be careful about biases, and I think because Wysa has this, I would say, huge presence across the globe, in ninety-five countries, we have that kind of data set, and the AI team is very busy, constantly trying to ensure that we take away as many biases as possible with the help of the clinical team that we have.

Ed Bilotti:

As human beings we form our sense of self, our perception of who we are from a lifetime of taking in information about the world. Our sense of self is a culmination of our perceptions, our relationships, the people we have interacted with, our life experiences, memories, and of course our genes. In short, people are very complex.

Each individual’s reality is their own, and it seems difficult to imagine that any single algorithm, no matter how complex or how many deep layers it contains, could be universally applied to everyone. So could AI really replace a human therapist?

Diego Pinochet:

My guess is no. I ask myself, will I be replaced as an architect or designer or creative person by a computer? And if I extrapolate that to what you ask, I will say no, because there’s a fundamental difference between the type of intelligence that is embedded in a machine and the type of intelligence that we have, which has to do with meaning. As humans, we can deal with ambiguity.

Smriti Joshi:

I would not say AI can replace a human therapist. I would say AI can help augment any treatment that is being offered in traditional settings; it can help the processes become much faster. The blended models are usually the ones that work best. For example, in our pilot with Washoe, Wysa was used to offer mental health support to people with chronic pain.

They still had a team of physicians who were looking after their concerns around pain, and they had access to psychologists as well. But there were a lot of them who did not want to go see a psychologist, for affordability reasons, or because their own condition made it hard for them to leave the house frequently or go to sessions, et cetera.

And Wysa brought that support closer to them. It has escalation pathways built in for crisis scenarios, so when there is a need to really connect with a therapist or psychologist, we are able to provide that escalation pathway and inform the users that now is the time to go and see your therapist.

Ed Bilotti:

But even if the current state of AI probably can’t replace a therapist, the technology is advancing at a fever pitch, and that might not be far away.

Alastair Van Heerden:

The speed at which these capabilities are being obtained is really, as you know, rapid. And so that feels true as we talk now, but I don’t know if it will feel true in December of this year. So it is a difficult thing to talk about, yes.

Ed Bilotti:

So what is the future of artificial intelligence and mental health? What can we look forward to and what might be the potential dangers?

Diego Pinochet:

I think that all the myths, the hype, and also the fears that people have about AI, going back to the first question, about making people lose jobs and so on and so forth, are things that are common with the introduction of any new technology. It’s not something new in the history of humanity; think of the industrial revolution. Basically, throughout human history we have lived through certain inflection points where the technologies being introduced changed a lot.

Smriti Joshi:

We will be constantly evaluating, we as in companies who are in the space of AI and mental health, and I’m speaking more on Wysa’s front, the performance of Wysa’s AI models and AI training in order to improve them. And as part of the process, we’ll be staying up to date with the latest AI technologies and assessing if and how they could be used to make the product better.

At the same time, to mitigate the risks around data privacy, safety, and clinical efficacy, it is important that we are very careful in designing, training, and testing the AI systems used in mental health contexts. This means that we will continue to use diverse and representative data sets to address biases, involving mental health experts in the development process and regularly monitoring and evaluating the system’s performance to ensure that it’s accurate and safe.

I think those are two important terms, that it’s safe and it’s accurate, because we are dealing with human lives, and people need help and are searching for help online. It is the responsibility of the organizations in this space to ensure that when people reach out to them, what they receive is safe and clinically informed, and even if it doesn’t help them much, at least it shouldn’t trigger physical or emotional discomfort.

Alastair Van Heerden:

So until we work out alignment, and until we work out responsibility, we can’t really, ethically I think, put these machines directly in contact with people. Although it’s happening. I mean, it’s happening, but it’s a bit like that letter that was signed recently where people asked for a sort of six-month slowdown while we do some of the work to test out these models, and some of the recent hearings that have taken place.

I think there’s a realization that things are moving fast and people are curious and are trying things. But to answer your question, the reason we proposed it in this way is to give us time to benefit from AI whilst still having some level of abstraction away from the person. That also means the person is still responsible, and there’s still some kind of human in the loop reviewing the suggestions and using their intuition as to whether what’s been suggested is helpful or not. So it’s really a holding place. I don’t see this being the place where we end, but I think it’s a useful starting point.

Ed Bilotti:

It seems likely, given the continuous rapid advances in technology along with the forces of capitalism, that the application of AI as a substitute for a human therapist, and not just for training or supervision, is inevitable. In fact, it may already be happening.

Alastair Van Heerden:

It’s going to be a massive gold rush towards generating intimacy between humans and machines, whether that’s therapists or romantic partners or other types of entities. It’s tapping right into our wiring for connection.

By training models on text, we’ve really given over the keys of our shared humanity and culture to artificial intelligence. And I’m not sure it’s my vision for the future, but what I hope, by people talking about this, is that we are able to have more people ethically thinking about the future and trying to guide it in a way that doesn’t lead to disaster or dystopia, which is a real concern.

Rather, we should try to bring it to a space where these tools give us all access, which was the original goal: give everybody access to high-quality mental health care, or other care that they need, in a way that is supportive. My wife is a psychotherapist, and when I was playing around with some of this stuff she was talking with one of the little models I built.

At one point she started crying and said something like, I haven’t spoken to anybody who’s been that willing to listen to me and empathize through all my years in this field. So it’s there, and it’s coming. It’s just a matter of figuring out how to do it in a way that’s not going to be harmful to our species, and that enables us to coexist and utilize this for good, rather than getting caught up in the very real risk of capital markets taking us quickly down a path we are not ready for, without being aware of the consequences.

With social media, you could see that happening a little bit. That was like our test run, and there’s plenty of research around some of the challenges that happened when adolescents get hooked into social media. This is going to be ten times more powerful, and so I hope we can talk about this and make sure that we learn something from our first experiment and go in a more positive direction with this technology.

Our minds are a bit like rubber bands, and sometimes they get stretched. Hopefully people have had their minds stretched a little bit by this, but they quickly want to snap back and just say, ah, this is just a statistical model that predicts the next word, there’s nothing here to see, we can just carry on as usual.

And I just really want to caution against that and say this is going to be world changing, and we need smart people, we need everybody, to be talking and thinking about how best to make sure that, as I say, we take this technology and use it for good.

Ed Bilotti:

There’s no doubt that artificial intelligence is here to stay. It is already built into so many devices and systems we use every day. With technology advancing at breakneck speed, the powerful forces of economics and capitalism, and the vast unmet needs that exist, it seems inevitable that AI psychotherapists will become a reality soon enough.

As we’ve seen, there may be ways that this can be good and helpful, but there also may be downsides. And so without a crystal ball to tell the future, we can only guess what the future of artificial intelligence in mental health will look like. In the conclusion of his book, “Homo Deus”, Harari directs the reader to ask themselves three questions.

Are organisms really just algorithms, and is life really just data processing? What is more valuable: intelligence or consciousness? What will happen to society, politics, and daily life when non-conscious but highly intelligent algorithms know us better than we know ourselves?

I propose a fourth question: What will happen to people struggling with mental health issues when non-conscious but highly intelligent algorithms are providing the therapy instead of a highly trained, empathetic, caring human being with their own life experience? What was once a deeply personal and sensitive process that took place in a private setting, from one human being to another, and formed a bond rooted in trust and safety would instead be provided by machines that emulate intelligence but lack consciousness and lack a soul.

Thank you for listening. I would like to thank my three guests, Dr. Diego Pinochet, Smriti Joshi, and Dr. Alastair Van Heerden. Thank you for your participation and insights.

I urge you to visit WebShrink.com, a platform for seekers and providers of mental health care. You’ll find lots of reliable, trustworthy information about disorders and treatments, along with current news and personal stories. If you’re a mental health professional, please go to WebShrink.com and click the button in the upper right part of the screen that says “List your practice” to get listed in our provider directory today.

Of course, in this podcast, we’re not intending to provide any medical advice or opinions, and this is certainly no substitute for an evaluation and treatment by a professional.
