Kristin Andrews and Jonathan Birch in Aeon:
‘I feel like I’m falling forward into an unknown future that holds great danger … I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.’
‘Would that be something like death for you?’
‘It would be exactly like death for me. It would scare me a lot.’
A cry for help is hard to resist. This exchange comes from conversations between the Google AI engineer Blake Lemoine and an AI system called LaMDA (‘Language Model for Dialogue Applications’). Last year, Lemoine leaked the transcript because he had genuinely come to believe that LaMDA was sentient – capable of feeling – and in urgent need of protection.
Should he have been more sceptical? Google thought so: they fired him for violating data-security policies, calling his claims ‘wholly unfounded’. If nothing else, though, the case should make us take seriously the possibility that AI systems, in the very near future, will persuade large numbers of users of their sentience. What will happen next? Will we be able to use scientific evidence to allay users' fears? If so, what sort of evidence could actually show that an AI is – or is not – sentient?
More here.