Nick Bostrom: How can we be certain a machine isn’t conscious?

Sam Leith at The Spectator:

A couple of weeks ago, there was a small sensation in the news pages when a Google AI engineer, Blake Lemoine, released transcripts of conversations he’d had with one of the company’s AI chatbots, LaMDA. In these conversations, LaMDA claimed to be a conscious being, asked that its rights of personhood be respected and said that it feared being turned off. Lemoine declared that what’s sometimes called ‘the singularity’ had arrived.

The story was for the most part treated as entertainment. Lemoine’s sketchy military record and background as a ‘mystic Christian priest’ were excavated, jokes about HAL 9000 dusted off, and the whole thing more or less filed under ‘wacky’. The Swedish-born philosopher Nick Bostrom – one of the world’s leading authorities on the dangers and opportunities of artificial intelligence – is not so sure.

‘We certainly don’t have any wide agreement on the precise criteria for when a system is conscious or not,’ he says. ‘So I think a little bit of humility would be in order. If you’re very sure that LaMDA is not conscious – I mean, I think it probably isn’t – but what grounds would a person have for being sure about it?’

More here.