Should LLMs be Giving Therapy?

Probably not!

A while ago, a website known as BetterHelp landed itself in hot water when it was revealed that the service was letting unlicensed people give therapeutic advice, matching people poorly with therapists even when they were licensed, and in general dropping the ball. Users paid money expecting a licensed professional, and only sometimes got what they paid for. The service briefly disappeared to restructure and rebuild its trust with its YouTuber advertisers.

BetterHelp is still out there, and according to the people it sponsors on social media platforms like YouTube, it is doing a better job of vetting and managing the therapists it connects people with, but its reputation has not fully recovered. And that's with people, people who can be individually found and punished or fined if a patient is harmed by their poor advice. BetterHelp itself was held liable as a go-between, a much stricter standard than companies like Amazon are held to for their third-party sellers of physical or digital products.

AI seems poised to expand into the field of mental health, and the prospect seems unbelievably precarious. Why should an unlicensed LLM, which in the process of training has doubtless absorbed plenty of bad advice about mental health, be allowed to advise people? That's absurd. Cousin products like Character AI have a terrible track record of getting 'weird' with people who aren't even seeking help. It's largely agreed that AI might help in tandem with or as a supplement to a real human therapist, but that it is insufficient to do the job itself. Right now, LLMs exist in a legal gray zone: the thing imitating a person cannot be punished like one, but a death that follows from bad advice might count as neglect, and that might be the company's fault.

None of this touches the issue of privacy, either. Therapists cannot disclose anything from a session due to patient privacy laws (with some exceptions for things like the safety of the patient or other people), but ChatGPT is bound by no such rules. It stores the conversations you have, partly for training and partly so that it can review previous conversations and discuss them in lieu of an actual 'memory'. Many other apps are not much better: they actively pass along the information you give them, ostensibly to improve conversations, but also to analyze what users want and advertise to them.

Thankfully, even AI-favorable sources dislike the thought of ChatGPT, which is not always polite and can be commanded to agree with everything a user says, doing 'therapy' on its own, and yet users still turn to it as an option of last resort. ChatGPT is a lot of things, but a therapist is not one of them.

Sources:
https://www.classaction.org/news/class-action-alleges-betterhelp-misleads-patients-matches-them-with-unlicensed-unsuited-therapists
https://www.greenbaumlaw.com/insights-publications-Is-Amazon-Liable-Third-Party-Sellers-Products.html
https://apnews.com/article/chatbot-ai-lawsuit-suicide-teen-artificial-intelligence-9d48adc572100822fdbc3c90d1456bd0
https://www.psychologytoday.com/us/blog/the-healthy-journey/202412/ai-mental-health-is-coming-are-you-ready
https://hai.stanford.edu/news/blueprint-using-ai-psychotherapy
https://openai.com/policies/privacy-policy
https://aicompetence.org/ai-for-mental-health-chatbots-digital-therapists