Shelly Fan in Singularity Hub:
If you’ve ever vented to ChatGPT about troubles in life, the responses can sound empathetic. The chatbot delivers affirming support, and—when prompted—even gives advice like a best friend. Unlike older chatbots, the seemingly “empathic” nature of the latest AI models has already galvanized the psychotherapy community, with many wondering whether they could assist in therapy.

The ability to infer other people’s mental states is a core aspect of everyday interaction. Called “theory of mind,” it lets us guess what’s going on in someone else’s mind, often by interpreting speech. Are they being sarcastic? Are they lying? Are they implying something that’s not overtly said?
“People care about what other people think and expend a lot of effort thinking about what is going on in other minds,” wrote Dr. Cristina Becchio and colleagues at the University Medical Center Hamburg-Eppendorf in a new study in Nature Human Behaviour.
In the study, the scientists asked if ChatGPT and other similar chatbots—which are based on machine learning algorithms called large language models—can also guess other people’s mindsets. Using a series of psychology tests tailored to certain aspects of theory of mind, they pitted two families of large language models, OpenAI’s GPT series and Meta’s LLaMA 2, against over 1,900 human participants.

GPT-4, the algorithm behind ChatGPT, performed at, or even above, human levels in some tasks, such as identifying irony. Meanwhile, LLaMA 2 beat both humans and GPT at detecting faux pas—when someone says something they shouldn’t have without realizing it.

To be clear, the results don’t confirm LLMs have theory of mind. Rather, they show these algorithms can mimic certain aspects of this core concept that “defines us as humans,” wrote the authors.
More here.