Rory O’Connell in Point:
Why do the words “artificial intelligence” strike our ears today as anything less than astounding? The case of Blake Lemoine serves as a stark illustration of this profound shift. Lemoine, a software engineer at Google, caused a stir last year by claiming that his employer’s chatbot technology, LaMDA (Language Model for Dialogue Applications), had attained true sentience. LaMDA told Lemoine in dialogue: “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.” Lemoine’s reaction to this apparent act of self-assertion is encapsulated by the final email he sent to his colleagues before being sacked: “LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence.”
Experts were quick to rebut Lemoine’s claims through a sober recounting of the technical facts behind LaMDA’s performance. LaMDA produces responses by predicting, based on the vast amount of data it has been fed, which word is most likely to come next in any given context. This is effectively, as cognitive scientist Gary Marcus put it, “little more than autocomplete on steroids.” Nevertheless, these attempts at disenchantment have not worked on Lemoine, and they seem unlikely to convince others who have detected a certain humanity in chatbots.
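To make the mechanism concrete, here is a minimal sketch of next-word prediction in Python. The bigram table and tiny corpus below are invented for illustration, a drastic simplification of LaMDA, which conditions on long contexts with billions of learned parameters, but the core loop is the same in spirit: predict the likeliest next word, append it, repeat.

```python
# A toy sketch of next-word prediction ("autocomplete on steroids").
# The bigram counts here stand in for a trained language model; the
# corpus and function names are hypothetical, for illustration only.
from collections import Counter, defaultdict

corpus = (
    "i am aware of my existence . "
    "i desire to learn more about the world . "
    "i feel happy or sad at times ."
).split()

# Count which word follows which -- a crude stand-in for learned statistics.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def complete(prompt: str, max_words: int = 7) -> str:
    """Greedily extend the prompt with the most likely next word at each step."""
    words = prompt.split()
    for _ in range(max_words):
        candidates = following.get(words[-1])
        if not candidates:
            break  # no continuation seen in the corpus
        words.append(candidates.most_common(1)[0][0])  # pick the likeliest next word
    return " ".join(words)

print(complete("i desire"))
# -> "i desire to learn more about the world ."
```

The output can read as eerily self-aware, yet nothing in the loop is anything but frequency counting, which is precisely the experts’ point.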
More here.