When you read a sentence like this one, your past experience tells you that it is written by a thinking, feeling human. And, in this case, there is indeed a human typing these words: (Hi, there!) But these days, some sentences that appear remarkably humanlike are actually generated by artificial intelligence systems trained on massive amounts of human text. People are so accustomed to assuming that fluent language comes from a thinking, feeling human that evidence to the contrary can be difficult to wrap your head around. How are people likely to navigate this relatively uncharted territory? Because of a persistent tendency to associate fluent expression with fluent thought, it is natural – but potentially misleading – to think that if an AI model can express itself fluently, it must also think and feel just as humans do.
Thus, it is perhaps unsurprising that a former Google engineer recently claimed that Google’s artificial intelligence system LaMDA has a sense of self because it can eloquently generate text about its purported feelings. This event and the subsequent media coverage led to a number of rightly sceptical articles and posts about the claim that computational models of human language are sentient – that is, capable of thinking, feeling and experiencing.