
May 09, 2023

Would You Talk to an AI Therapist?




Technology is a wonderful thing, and so is therapy. Historically, there hasn’t been much overlap between the two fields. Therapy has always been a human-to-human experience meant to empower a person to improve, while technology has been about pushing boundaries and building things that add value to our lives.

Technology has changed psychotherapy, too. You can now schedule an online session with your therapist from the comfort of your home, and the field has reached a point where you don’t necessarily need to talk to a human at all. Advanced AI software and fast internet connections like CenturyLink’s are making it possible for companies to provide care to their users at an unprecedented scale.

However, several challenges stand in the way of wide-scale availability and adoption. Dhruv Khullar wrote an interesting article in the New Yorker on the use of AI in psychotherapy, contending that AI is where therapy is ultimately headed. But some things are getting lost in this transition to a more modern world. Bear with us as we explain.

The Turing Test

Automated conversational programs are not new. They have been around since the 1960s, when programs like ELIZA were used to mimic the responses of a human therapist. ELIZA’s approach was quite limited in what it could generate: normally, it would rephrase whatever the user said as a question and hand it back.

For example, when a user said they felt sad, ELIZA would merely acknowledge the sentiment and respond with something like, “I understand that you feel sad.” This worked well enough until the user brought up anything more complex.
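To make that limitation concrete, here is a minimal Python sketch of the pattern-and-template idea behind programs like ELIZA. The patterns and replies below are illustrative inventions, not ELIZA’s actual rules; the point is that anything outside the scripted patterns falls through to a canned fallback.

```python
import re

# Illustrative pattern -> response templates, loosely in the spirit of
# ELIZA's rephrasing approach (not the original program's actual rules).
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bI (?:want|need) (.+)", re.IGNORECASE),
     "What would it mean to you to get {0}?"),
]

def respond(user_input: str) -> str:
    """Turn the user's statement back into a question, ELIZA-style."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(match.group(1).rstrip(".!"))
    # Anything the scripted patterns don't cover gets a generic reply.
    return "Please tell me more."

if __name__ == "__main__":
    print(respond("I feel sad"))        # -> Why do you feel sad?
    print(respond("My day was awful"))  # -> Please tell me more.
```

As the second example shows, the moment the conversation strays from the scripted patterns, the illusion of understanding collapses into a generic prompt.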

That is where the Turing Test comes in. Alan Turing was a mathematician who proposed a test to gauge whether a machine’s responses could pass for a human’s. If a computer could produce responses indistinguishable from a human’s, it would pass the Turing Test. Needless to say, programs like ELIZA couldn’t.

AI Advancements

Generative AI programs trained on large datasets can now produce responses that feel like they came from a human. Today’s AI can plausibly pass the Turing Test, and most people cannot reliably tell an AI program’s responses apart from a human’s.

Even so, users familiar with ChatGPT and the like could sometimes still discern an AI’s responses from a human’s. Newer models, trained on even larger sets of data, have narrowed that gap further and produce increasingly lifelike responses.

The Uncanny Valley

The term comes from Masahiro Mori, a robotics professor who studied how robots affect the humans around them. Mori observed that a robot becomes more appealing as it can do more and looks more human, but only up to a point. Past that point, a near-replica of a human stops being appealing and instead provokes a feeling of unease.

The effect stems from a mismatch between what we expect and what we’re actually experiencing. We don’t normally expect machines to present a near-photorealistic imitation of a human, and when they do, we feel uneasy. AI has now reached this point, which is why many people have raised concerns about what the future will look like. There is even an open letter, widely covered in mainstream media and signed by well-known figures, calling for a pause on the development of the most powerful generative AI systems.

Privacy Concerns

If generative AI is used for psychotherapy, as some companies are already doing, it might put users’ privacy at risk. Admittedly, an AI program can handle many people at once, whereas a traditional therapist counsels one person at a time. But if the goal is to make therapy readily available and affordable, that accessibility carries a risk.

Take VPNs, for example. Because it costs money to maintain servers fast enough for their users, the companies that run them operate on subscription models, and many free VPNs survive by selling users’ data to marketers. After all, someone has to pay. The same could be said for AI in psychotherapy: will users’ data be sold to marketers to offset the costs, or will the service remain expensive and out of reach?

Conclusion

As with any technology, progress raises questions, and those questions tend to get answered as the technology matures. Still, we need a proper regulatory framework in place to protect consumers. A lifelike AI therapy experience is inching closer, and if used correctly, AI could one day surpass humans in its ability to help people.

Sources

https://www.newyorker.com/magazine/2023/03/06/can-ai-treat-mental-illness

https://www.psychologytoday.com/intl/blog/psyche-meets-soul/202303/how-artificial-intelligence-could-change-psychotherapy









