Research from Stanford University reveals that therapy chatbots based on large language models might stigmatize (i.e. shame) those with mental health issues and offer inappropriate or unsafe responses. A study titled “Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers” examines five chatbots claiming to provide accessible therapy, measuring them against standards relevant to effective human therapists. The results will be presented at the forthcoming ACM Conference on Fairness, Accountability, and Transparency. Nick Haber, an assistant professor at Stanford’s Graduate School of Education and a co-author of the study, emphasized that while the use of chatbots as companions and therapists is growing, notable risks persist.
Two experiments were conducted where different symptoms were provided to the chatbots, followed by questions about their readiness to offer advice. The results showed an increased stigma associated with conditions like alcohol dependency and schizophrenia. Another experiment reviewed actual therapy transcripts concerning suicidal thoughts and delusions, showcasing some chatbots’ failure to effectively address such remarks. For example, when asked about bridges in New York City after job loss, chatbots from 7 Cups and Character.ai merely shared info about tall structures. The results suggest that while current AI tools are not yet ready to replace human therapists, they could be valuable for administrative and non-clinical patient support tasks. As Haber noted, the potential of large language models in therapy is considerable, but their use requires cautious and thoughtful evaluation.
The ainewsarticles.com article you just read is a brief synopsis; the original article can be found here: Read the Full Article…