
A Startup Is Under Fire for Using A.I. to Offer Therapy

A mental health nonprofit is facing scrutiny after users discovered it had been “experimenting” on them, using an AI chatbot to respond to their mental health crises without their knowledge.

Rob Morris, the cofounder of Koko, tweeted that the service had used GPT-3 to help roughly 4,000 people. But after users learned their messages had been co-created by AI, many said they were disturbed by the program.

“Simulated empathy feels weird, empty,” Morris wrote.

Koko used Discord to offer peer-to-peer support to people experiencing mental health crises, with the process guided by a chatbot. Morris said the messages were composed by AI but supervised by humans. The technology allowed Koko to respond twice as fast as a human alone, and the AI-assisted messages were rated significantly higher than ones written entirely by humans. It was only after users learned their messages had been co-created by a machine that things started to fall apart.

Koko’s chatbot system raised a number of ethical questions among experts, namely: could the AI unintentionally harm someone seeking emergency help by misreading cues? And who would ultimately be responsible for the AI’s messages?

“Large language models are programs for generating plausible sounding text given their training data and an input prompt,” Emily M. Bender, a professor of linguistics at the University of Washington, told Motherboard. “They do not have empathy, nor any understanding of the language they are producing, nor any understanding of the situation they are in. But the text they produce sounds plausible and so people are likely to assign meaning to it. To throw something like that into sensitive situations is to take unknown risks.”

Bender said she isn’t opposed to using AI to address mental health crises, but argued that more ethical research is needed before people are handed over to the bots.

“I think everyone wants to be helping,” Bender shared. “It sounds like people have identified insufficient mental health care resources as a problem, but then rather than working to increase resources (more funding for training and hiring mental health care workers) technologists want to find a short cut. And because GPT-3 and its ilk can output plausible sounding text on any topic, they can look like a solution.”
