
Mental health company used chatbot to experiment with at-risk teens

Koko, a mental health company, used a chatbot to conduct a controversial experiment on adolescents in crisis without obtaining their prior informed consent. The experiment was conducted during August and September of last year and has been described as highly unethical by academics and technology professionals.

The algorithm used by Koko detected young people between 18 and 25 years old who made posts containing keywords such as "depression" on social networks such as Discord, Facebook, or Telegram. These people were referred directly to a chatbot on the Koko site and, after an exchange of messages, were assigned to one of two groups. One group received a crisis session, while the other served as a control group and received only a telephone number.
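To make that pipeline concrete, here is a minimal sketch in Python of how such keyword-based screening and random group assignment could work. The keyword list, function names, and group labels are hypothetical illustrations; the article does not describe Koko's actual implementation.

```python
import random

# Hypothetical keyword list: the article mentions only "depression" as an
# example of the terms the detection algorithm looked for.
RISK_KEYWORDS = {"depression", "hopeless", "self-harm"}

def flags_risk(post: str) -> bool:
    """Return True if a social media post contains any risk keyword."""
    words = set(post.lower().split())
    return bool(words & RISK_KEYWORDS)

def assign_group() -> str:
    """Randomly assign a flagged user to one of the two experimental arms."""
    return random.choice(["crisis_session", "control_hotline_only"])
```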

There is no doubt that using artificial intelligence to care for people with mental health problems is controversial. In this case, the controversy was compounded by the fact that the organization did not inform the subjects that they were taking part in an experiment, nor that they were interacting with an AI bot rather than a professional.


What was the chatbot experiment carried out by Koko?

The experiment was conducted by the Koko organization together with a professor from Stony Brook University. First, an algorithm detected people in possible mental health crisis through their social media posts.

These individuals were referred to the Koko website, where a chatbot asked them questions to identify those at risk. They were then divided into two groups. Those assigned to the control group were given the telephone number of a crisis hotline. The others were asked to complete a questionnaire about what was bothering them, what they thought they could do about it, and whom they could talk to about it.

Finally, they were given a number to call in case of emergency. All interactions with subjects in this second group were conducted by a chatbot, and the aim was to measure the effectiveness of this intervention against the control group, which had received only the hotline number.
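Read as a protocol, the two arms compare roughly as follows. The sketch below, in the same hypothetical Python style as above, only restates the structure the article describes; the question wording is illustrative and the hotline string is a placeholder.

```python
def run_control_arm() -> dict:
    # Control arm: the subject receives only a crisis hotline number.
    return {"hotline": "<crisis hotline number>"}

def run_intervention_arm(ask) -> dict:
    # Intervention arm: a chatbot walks the subject through the questionnaire
    # described in the article, then also provides an emergency number.
    return {
        "problem": ask("What is bothering you?"),
        "coping": ask("What do you think you could do about it?"),
        "support": ask("Who could you talk to about it?"),
        "hotline": "<crisis hotline number>",
    }
```

For instance, calling run_intervention_arm(input) would run the questionnaire interactively on a console, standing in for the chatbot exchange.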


Is it a good idea to use a chatbot for mental health care?

The experiment conducted by the Koko organization created controversy by using artificial intelligence to intervene with people experiencing mental health issues, especially people who are unaware that they are interacting with a bot. At the same time, the episode highlights the shortage of staff available to meet the demand from people seeking online assistance for mental health problems.

In addition, the way the experiment was carried out was also controversial, since the selected subjects were not told they were taking part in an experiment, which raises a clear ethical dilemma. According to Koko founder Rob Morris, the experiment was run this way because requesting informed consent might have led people to opt out.

There is no doubt that the chatbot experiment conducted by Koko leaves the debate about the use of artificial intelligence for mental health wide open. According to Morris, this is not a perfect solution, but organizations must continue to look for new ways to care for people with mental health problems.