by Tim Sommers
Computers are not alive. Hopefully, we can agree on that. It’s a place to start.
But if anyone succeeds at creating a program that exhibits true artificial general intelligence, wouldn’t such a program, despite not being alive in the biological sense, deserve some kind of moral consideration? Or, at least, if we loaded the AI into a robot body, especially one capable of experiencing pain-like discomfort, wouldn’t it be wrong to use it as a slave? (Ironically, the word “robot” comes from a 1921 Czech play, R.U.R. (“Rossum’s Universal Robots”) by Karel Čapek, and specifically from “robota,” the Czech word for forced labor or servitude.)
If an AI exhibited certain characteristics, like human-level intelligence, we would consider it, I hope, a person in the moral sense, despite its not being alive. On the other hand, streptococcus and human sperm, while clearly alive, are presumably not persons. If that’s right, then being alive, in the biological sense, is neither necessary nor sufficient for being a person in the moral sense.
If friendly, intelligent aliens showed up to help us out with global warming, they would probably be alive, unless they were robots. And they would, of course, not be human (unless they seeded the Earth long ago with their DNA and they are us). But if they were intelligent, able to communicate, and acting with admirable intentions, surely they would deserve to be treated as “persons” in the moral sense? Similarly, if we succeed at decoding dolphin language, or find that some other nonhuman animal exhibits intelligence on par with our own, shouldn’t we think of them as persons? Nonhuman persons, sure, but persons nonetheless.
Since whether you are human or alive does not settle the question of your personhood, we are going to need some other criteria. But, first, what do we mean by personhood in the moral sense?