by Chris Horner
The question of how to program AI to behave morally has exercised a lot of people, for a long time. Most famous, perhaps, are the three rules for robots that Isaac Asimov introduced in his SF stories: (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law; (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. These have been discussed, amended, extended and criticised at great length ever since Asimov published them in 1942, and with the current interest in ‘intelligent AI’ it seems they will be a subject of debate for some time to come. But I think the difficulties of coming up with effective rules of this kind are more interesting for what they tell us about the problems of any rule- or duty-based morality for humans than for the question of ‘AI morality’.
Duty-based morality – the jargon term is ‘deontological’ – seems to run into problems as soon as we imagine it being applied. Duties can easily clash or lead to unwelcome outcomes: one might think that lying would be justified if it meant protecting an innocent person from a violent one set on harming them, for instance. So which duties should take precedence in the countless future situations in which they might be applied? Answering a question like that involves more than coming up with a sequence of rules, as there seems to be something one needs to add to any would-be moral agent for them to really exercise adequate moral judgment. Considering the problems around this is more than a philosophical parlour game, as it should lead us into more realistic ways of thinking about what it takes to act well in the real world. What we are looking for, I think, is an approach that takes into account the need for genuinely autonomous moral thinking, but also connects the moral agent to the complicated social world in which we live. Read more »