The responses to Edge.org's Annual Question for 2015 have been published. Here is my answer:
The Values Of Artificial Intelligence
The rumors of the enslavement or extinction of the human species at the hands of an Artificial Intelligence are highly exaggerated, because they assume that an AI will have a teleological autonomy akin to our own. I don't think anything less than a fully Darwinian process of evolution can give any creature that.
There are basically two ways in which we could produce an AI. The first is to write a comprehensive set of programs that can perform specific tasks that human minds can perform, perhaps even faster and better than we can, without worrying about exactly how humans perform those tasks, and then to bring those modules together into an integrated intelligence. We have already started this project and have succeeded in some areas: computers can already play chess better than humans. One can imagine that, with some clever heuristics and built-in knowledge, it may well be possible to program computers to perform even more creative tasks, such as writing music or poetry that is beautiful (to us).
But here's the problem with this approach: we deploy our capabilities according to values and constraints programmed into us by billions of years of evolution (and some learned during our lifetimes as well). We share some of these values with the earliest life-forms, most importantly the need to survive and reproduce. Without these values we would not be here, nor would we have the emotions, finely tuned to our environment, that allow us not only to survive but to cooperate with others in a purposive manner. The importance of this value-laden emotional side of our minds is made obvious by, among other things, the many examples of individuals who are perfectly “rational” yet unable to function in society because of damage to the emotional centers of their brains. So what values and emotions will an AI have?