Does it matter if empathic AI has no empathy?

Garriy Shteynberg, Jodi Halpern, Amir Sadovnik, Jon Garthoff, Anat Perry, Jessica Hay, Carlos Montemayor, Michael A. Olson, Tim L. Hulsey & Abrol Fairweather, in Nature Machine Intelligence:

Imagine a machine that provides a simulation of any experience a person might want, but once the machine is activated, the person is unable to tell that the experience isn't real. When Robert Nozick formulated this thought experiment in 1974, it was meant to be obvious that people in otherwise ordinary circumstances would be making a horrible mistake if they hooked themselves up to such a machine permanently. During the intervening decades, however, cultural commitment to that core value, the value of being in contact with reality as it is, has become more tenuous. Meanwhile, the empathic use of AI, in which people seek to be understood, cared for and even loved by a large language model (LLM), is on the rise.

The use of LLMs for information, entertainment and even behavioural encouragement (for example, to go for a walk or make a friend) can be constructive. Applications of LLM chatbots in certain therapeutic domains, from diagnosis to health advice, also seem promising. However, as a team of psychologists, philosophers and computer scientists, we have concerns about LLMs as a source of empathic care.

More here.