by Chris Horner
The question of how to program AI to behave morally has exercised a lot of people, for a long time. Most famous, perhaps, are the three rules for robots that Isaac Asimov introduced in his SF stories: (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law; (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. These have been discussed, amended, extended and criticised at great length ever since Asimov published them in 1942, and with the current interest in ‘intelligent AI’ it seems they will remain a subject of debate for some time to come. But I think the difficulties of coming up with effective rules of this kind are more interesting for what they tell us about any rule- or duty-based morality for humans than for the question of ‘AI morality’.
Duty-based – the jargon term is ‘deontological’ – moralities seem to run into problems as soon as we imagine them being applied. Duties can easily seem to clash or lead to unwelcome outcomes – one might think that lying would be justified if it meant protecting an innocent person from a violent person set on harming them, for instance. So which duties should take precedence in the infinite number of future situations in which they might be applied? Answering a question like that involves more than coming up with a sequence of rules, as there seems to be something one needs to add to any would-be moral agent for them to really exercise adequate moral judgment. Considering the problems around this is more than a philosophical parlour game, as it should lead us into more realistic ways of thinking about what it takes to act well in the real world. What we are looking for, I think, is an approach that takes into account the need for genuinely autonomous moral thinking, but also connects the moral agent to the complicated social world in which we live.
Take the most prominent deontological theory of (relatively) modern times, for instance: Kant’s ‘Categorical Imperative’. There are a number of formulations of this, the most famous being along these lines:
Act in such a way that the maxim behind your action could be willed by you as a universal law for humanity.
Note that Kant isn’t drawing up a specific set of duties here, but rather a principled approach to moral judgment. It’s a demand that one be consistent: you can’t make special rules for yourself. So if I am thinking of lying to you in order to defraud you, I am applying something like the maxim ‘it is acceptable for me not to tell the truth if I can profit by my deceit’. This maxim cannot be universalised by me, as I won’t want everyone else to apply it as I do: as a liar, I want most people to tell the truth rather than lie. Lying is parasitic on truth-telling; liars depend on there being honest dealing.
Now Kant’s approach has, as you can imagine, been subjected to even more discussion and criticism than Asimov’s rules for robots, and I’m not going to go into all of it now. I do, though, want to focus on one kind of criticism, because I think it illuminates something more interesting than the viability of Kant’s approach alone. It touches on the preconditions and possibility of anything we might call moral judgment.
Kant’s approach has been attacked as a formula for moral robots, applying abstract Law to the messy reality of human life. It specifies what the form of a moral action must be (according to a maxim that one can will as universal law), but not what the content of our willing ought to be. This is connected, arguably, to the emphasis on Reason as an utterly transcendent and ahistorical kernel of morality. Yet genuine moral judgments always presuppose a concrete historical context, a time and a place in history. So if I must not steal because I cannot consistently will the maxim ‘take what belongs to others if you desire it’ to be a universal law for all rational beings like myself, then I have presupposed the existence of private property as a principle of social life. But private property as we understand it is a feature of a period in history, a specific set of social arrangements, a culture that is the effect of centuries of development. Property, contracts, taxes, wealth and poverty, and a capitalist economic system provide the context in which the moral judgment about stealing occurs. Moral judgment cannot be abstracted from the complexity and density of our lives.
But this formal indeterminacy is also a strength of Kant’s account: the subject must take responsibility for ‘converting’ the abstract imperative of the Law into concrete ethical obligations. So there is a kind of existential responsibility one takes on in making a judgment. The categorical imperative should not be understood as an ‘abstract testing device’ for ascertaining whether or not a determinate moral norm is ethical. It has to arise from a judgment made by a real person in a real situation, in which a kind of risk is accepted. But what do we need in order to be an effective moral subject of this kind, whether of the Kantian type or not?
Let us turn to a famous example of a human acting rather like a robot and invoking Kant: Adolf Eichmann. Eichmann had played a leading role in organising the transport for the deportations that the Nazi Holocaust needed to achieve its genocidal aims. After the war he fled to Argentina, but he was subsequently abducted by Israeli agents and brought to trial in Jerusalem in 1961. During the police examination preceding the trial, Eichmann made the surprising claim that he had lived his whole life according to the Kantian idea of duty. He admitted, though, that he had given up following Kant when he became involved in the Final Solution. Hannah Arendt sums up his distortion of the Kantian ethic into the imperative that ‘one should act in such a way that the Führer, if he knew of your action, would approve it’ (1). Arendt famously referred to the ‘banality of evil’ in describing Eichmann’s actions and justifications, and described him as a thoughtless man. But if he was thoughtless, what wasn’t he thinking about? He was certainly intelligent, as far as that went: capable of organising a transport system that took multitudes to their deaths. Perhaps he was like an intelligent robot, acting like a machine, and failing to think about the meaning of his actions.
Any moral philosophy must presuppose a sensitivity to cases and contexts, one that allows us to identify the morally relevant factors in a situation and so informs and enables moral deliberation. But in order to do that you need to be the kind of person who is already habituated to exercising an awareness of a kind it is hard to imagine in Eichmann, or indeed in AI as we have it now. This is the use of imagination and deliberation to recognise when one confronts something one should not do under any circumstances, a taboo, or when there is a moral hazard: a situation in which one is going to have to think carefully about the right thing to do. Without it, all the rules and maxims in the world won’t help us. It isn’t just that we may do the wrong thing (or fail to act at all), but that we won’t even identify a situation as calling for a decision of that kind. Without it we can become machine-like, placing routines and rule-following above sensitivity to the people and things that need our attention. It’s this problem that Arendt came to be most concerned with in her later work: the way imagination and thought in the autonomous individual might enable genuine moral judgment, rather than the obedient abdication of responsibility to a Leader.
But this is only one side of the answer to what is missing in the ‘moral robot’. Something more is needed. This is the notion of the ethical ecology or substance of social life – what Hegel calls Sittlichkeit. Morality may be rational and reflective, but it is also individualistic. Hegel realised that in order to live well among others in a shared life we need to have a fundamental orientation towards the common good, towards the ways we not only ought to act but also feel. Sittlichkeit is the affective life that makes us want to do the right thing by others, and that causes us shame when we fail. It applies to relatively trivial things (saying thank-you, not pushing in first, etc.) but also to the more significant side of life: dealing honestly with money, for instance. One end of it is manners, the other morality. It is the ethical ground in which the moral life is planted, and it is developed through socialisation. Before we come to make the kind of reflective moves that a moral philosophy like Kant’s demands, we need to have this pre-reflective mode of life that makes it possible to react with moral sensitivity in the first place.
Arendt, though, pointed out that hundreds of years of Christian morality and civilisation hadn’t stopped the advent of the Third Reich and of people like Eichmann. So perhaps we should see that moral collapse as a breakdown of Sittlichkeit, or its toxification by the trauma of war and economic collapse. A moment like that requires an individual not just to go along with what everyone else is doing but to think for themselves, in a way Eichmann was disinclined to do. So what we might want to conclude is that we need both poles: a shared ethical life in which our feelings and habits encourage basic trust and decency, and the autonomous moral agent who can detach from the herd mentality when that shared life goes wrong. No easy task. Yet perhaps what is impressive is not how often we get an Eichmann, but how often people do achieve basic decency and moral literacy when it counts. It will be interesting to see whether that kind of achievement will be possible for a machine of the future.
(1) Hannah Arendt, Eichmann in Jerusalem (1963), Penguin Classics, p. 136.