Can robots make good therapists?

Sophie McBain in the New Statesman:

In the mid-Sixties the Massachusetts Institute of Technology (MIT) computer scientist Joseph Weizenbaum created the first artificial intelligence chatbot, named Eliza, after Eliza Doolittle. Eliza was programmed to respond to users in the manner of a Rogerian therapist – reflecting their responses back to them or asking general, open-ended questions. “Tell me more,” Eliza might say. Weizenbaum was alarmed by how rapidly users grew attached to Eliza. “Extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people,” he wrote. It disturbed him that humans were so easily manipulated.

From another perspective, the idea that people seem comfortable offloading their troubles not on to a sympathetic human but on to a sympathetic-sounding computer program might present an opportunity. Even before the pandemic, there were not enough mental health professionals to meet demand. In the UK, there are 7.6 psychiatrists per 100,000 people; in some low-income countries, the average is 0.1 per 100,000. “The hope is that chatbots could fill a gap, where there aren’t enough humans,” Adam Miner, an instructor at the department of psychiatry and behavioural sciences at Stanford University, told me. “But as we know from any human conversation, language is complicated.”

Alongside two colleagues from Stanford, Miner was involved in a recent study that invited college students to talk about their emotions via an online chat with either a person or a “chatbot” (in reality, the chatbot was operated by a person rather than by AI).

More here.