Susan Schneider in KurzweilAI:
Some things in life cannot be offset by a mere net gain in intelligence. The last few years have seen widespread recognition that sophisticated AI is under development. Bill Gates, Stephen Hawking, and others warn of the rise of “superintelligent” machines: AIs that outthink the smartest humans in every domain, including common-sense reasoning and social skills. Superintelligence could destroy us, they caution. In contrast, Ray Kurzweil, a Google director of engineering, depicts a technological utopia that brings about the end of disease, poverty, and resource scarcity.

Whether sophisticated AI turns out to be friend or foe, we must come to grips with the possibility that as we move further into the 21st century, the greatest intelligence on the planet may be silicon-based. It is time to ask: could these vastly smarter beings have conscious experiences? Could it feel a certain way to be them? When we experience the warm hues of a sunrise, or hear the scream of an espresso machine, there is a felt quality to our mental lives. We are conscious.
A superintelligent AI could solve problems that even the brightest humans cannot, but, being made of a different substrate, would it have conscious experience? Could it feel the burning of curiosity, or the pangs of grief? Let us call this “the problem of AI consciousness.” If silicon cannot be the basis for consciousness, then superintelligent machines, machines that may outmode us or even supplant us, may exhibit superior intelligence, but they will lack inner experience.

Further, just as the breathtaking android in Ex Machina convinced Caleb that she was in love with him, so too a clever AI may behave as if it is conscious, even if it is not. In an extreme, horrifying scenario, humans upload their brains, or slowly replace the parts of their brains that underlie consciousness with silicon chips, and in the end only non-human animals remain to experience the world. This would be an unfathomable loss. Even the slightest chance that this could happen should give us reason to think carefully about AI consciousness.
The philosopher David Chalmers has posed “the hard problem of consciousness,” asking: why does all this information processing need to feel a certain way to us, from the inside? The problem of AI consciousness is not just Chalmers’ hard problem applied to the case of AI, though. For the hard problem assumes that we are conscious and asks why; the problem of AI consciousness asks whether a being made of a different substrate, such as silicon, could be conscious at all.
More here.