David J. Chalmers in the Boston Review:
When I was a graduate student at the start of the 1990s, I spent half my time thinking about artificial intelligence, especially artificial neural networks, and half my time thinking about consciousness. I’ve ended up working more on consciousness over the years, but over the last decade I’ve keenly followed the explosion of work on deep learning in artificial neural networks. Just recently, my interests in neural networks and in consciousness have begun to collide.
When Blake Lemoine, a software engineer at Google, said in June 2022 that he detected sentience and consciousness in LaMDA 2, a language model system grounded in an artificial neural network, his claim was met with widespread disbelief. A Google spokesperson said:
Our team—including ethicists and technologists—has reviewed Blake’s concerns per our AI Principles and has informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).
The question of evidence piqued my curiosity. What is or might be the evidence in favor of consciousness in a large language model, and what might be the evidence against it? That’s what I’ll be talking about here.
More here.