From Edge.org:
People talk about the future of intelligent machines, and whether intelligent machines are going to take over and decide what to do for themselves. What one figures out is that, given a goal, how to execute it is something that can meaningfully be automated; the actual inventing of the goal is not something that, in some sense, has a path to automation.
How do we figure out goals for ourselves? How are goals defined? They tend to be defined for a given human by their own personal history, their cultural environment, the history of our civilization. Goals are something uniquely human. It almost doesn't make sense to ask, what's the goal of our machine? We might have given it a goal when we built the machine.
The thing that makes this more poignant for me is that I've spent a lot of time studying basic science about computation, and I've realized something from that. It's a little bit of a longer story, but basically, if we think about intelligence and things that might have goals, things that might have purposes, what kinds of things can have intelligence or purpose? Right now, we know one great example of things with intelligence and purpose and that's us, and our brains, and our own human intelligence. What else is like that?
More here.