Alexis Papazoglou interviews Susan Schneider at the IAI:
If we define consciousness along the lines of Thomas Nagel as the inner feel of existence, the fact that for some beings “there is something it is like to be them”, is it outlandish to believe that Artificial Intelligence, given what it is today, can ever be conscious?
The idea of conscious AI is not outlandish. Yet I doubt that today’s well-known AI companies have built, or will soon build, systems that have conscious experiences. In contrast, we Earthlings already know how to build intelligent machines—machines that recognise visual patterns, prove theorems, generate creative images, chat intelligently with humans, and so on. The question is whether, and how, the gap between Big Tech’s ability to build intelligent systems and its ability (or lack thereof) to build conscious systems will narrow.
Humankind is on the cusp of building “savant systems”: AIs that outthink humans in certain respects but that also have radical deficits in areas such as moral reasoning. If I had to bet, I would say savant systems already exist, kept underground and unknown to the public. In any case, savant systems will probably emerge, or already have emerged, before conscious machines are developed, assuming that conscious machines can be developed at all.
More here.