By Grace Boey
Imagine the following scenario. Bob doesn’t have any opinion on whether abortions are okay. Although he could think through the issue for himself, Bob takes another route: he asks his friend Sally what she thinks. Because Bob trusts Sally, he doesn’t hesitate to believe her when she says that abortions are fine. From then on, Bob doesn’t give the question any more thought, and goes about acting as if what Sally says is true.
What, if anything, is weird about Bob? There might not be much of a problem if Bob already has some strong moral views about the permissibility of ending life more generally, and trusts that Sally—who happens to be an expert obstetrician—knows some intricate scientific facts about abortions and foetal development that he isn’t in a position to know or understand. But what if, instead, Bob knows all the scientific information there is to know about abortions, lacks any moral views on the matter, and proceeds to outsource his moral beliefs to Sally? Even more provocatively: what if this scenario is set far in the future, and Bob uses the widely-available and completely reliable ‘Google Morals’ app to look up whether abortions are morally permissible?
There is something off-putting about Bob in the last two scenarios that isn’t present in the first. This has been framed as the ‘puzzle of pure moral deference’ in academic philosophical discussions. The puzzle, in short, concerns the asymmetry in our willingness to defer to others about empirical matters on the one hand, and purely moral matters on the other. Most of us would have no problems with Bob believing what Sally says about the science behind abortions. But the idea of him outsourcing his ethical beliefs to someone else, and the notion of anything like ‘Google Morals’, make us balk.
Contemporary philosophers have offered solutions to two parts of this puzzle. First, what makes us balk at the prospects of practicing pure moral deference to others? And second, even if something is amiss about the practice, is it still alright for us to do it? In other words, should we be hopeful or doubtful about outsourcing our moral beliefs to others?
Some worries about moral deference
One way to dodge the discussion—as philosopher Sarah McGrath notes—is to be a moral skeptic or nihilist, and deny that there are any moral truths at all to defer about. Under this view, the very idea of ‘Google Morals’ doesn’t make sense. And there are easy answers for those who hold the positions that philosophers call moral subjectivism, or moral non-cognitivism. But many of us have opposing intuitions that there are at least some moral truths, which apply to everyone. For instance, many of us think that murder is morally wrong, and believe that this standard applies to everyone.
Few of us with such realist inclinations would deny that something seems odd about farming out our moral beliefs to others. What exactly might the matter be? One response is that we’re all on equal footing when it comes to grasping all types of moral truths, and that no one is more of a moral expert than anyone else. But this doesn’t seem very plausible: it’s pretty clear that some people are more morally sensitive than others. In some cases, this might be due to differences in innate capacity. Think, for example, about how we see psychopaths as being inherently morally compromised.
Differences in moral expertise might also arise from differences in experience, and this doesn’t have anything to do with innate moral capacities. Philosophers like Karen Jones and Julia Driver note that richer experiences with certain types of moral problems can give us greater moral expertise in those areas. McGrath notes, as an example, that someone who has participated in many close friendships is probably in a good position to recognise which disclosures of a friend’s personal information would—or wouldn’t—constitute betrayals. Robert Hopkins also argues that how often we’ve exercised our opportunities to solve some type of moral problem affects our ability to solve similar ones in the future. These lines of thought are plausible when we think about forming reliable moral judgments as a skill—one that can improve with practice.
Still, McGrath and Driver think it's difficult to know exactly who these ‘moral experts’ are, and how to go about identifying them. One line of thought is that it’s difficult—if not impossible—to go about identifying moral experts, without calibrating their track records against our own pre-existing moral views. Another related worry stems from moral disagreement. Given that there’s little consensus on most moral questions, how are we to identify those to whom we should defer? And again, how should we do this without relying on our pre-existing moral views?
We might skirt these worries in various ways. Hopkins, for instance, notes that skepticism of this sort implies an identical skepticism about all of our own existing moral judgments. Unless we are willing to deny that we should place any stock at all in our own moral beliefs, it must be permissible, at least to some extent, to use them when choosing who to defer to. And returning to the matter of experience, if we think that experience can sometimes contribute to moral expertise, then we might legitimately identify someone as a moral expert on some issue if—amongst other things—we know they have lots of the relevant type of experience. And if you and I share moral sensibilities on other issues where we do have similar experiences, and I have a high and justified level of confidence in my own moral beliefs, then the case for my deferring to your moral judgments when I’m uncertain becomes even more compelling.
Does pure moral deference threaten our moral identity?
Setting aside the worries above, there is one last matter that many philosophers take to be the most compelling explanation for the oddity of outsourcing our moral beliefs to others. As moral agents, we’re interested in more than just accumulating as many true moral beliefs as possible, such as ‘abortion is permissible’, or ‘killing animals for sport is wrong’. We also value things such as developing moral understanding, cultivating virtuous characters, having appropriate emotional reactions, and the like. Although moral deference might allow us to acquire bare moral knowledge from others, it doesn’t allow us to exhibit or cultivate these other moral goods, which are central to our moral identity.
Consider the value we place on understanding why we think our moral beliefs are true. Alison Hills notes that pure moral deference can’t get us to such moral understanding. When Bob defers unquestioningly to Sally’s judgment that abortion is morally permissible, he lacks an understanding of why this might be true. Amongst other things, this prevents Bob from being able to articulate, in his own words, the reasons behind this claim. This seems strange enough in itself, and Hills argues that Bob’s situation is a bad one for at least two reasons. For one, Bob’s lack of moral understanding prevents him from acting in a morally worthy way. Bob wouldn’t deserve any moral praise for, say, shutting down someone who harasses women who undergo the procedure.
Moreover, Bob’s lack of moral understanding seems to reflect a lack of good moral character, or virtue. Bob’s belief that ‘abortion is permissible’ isn’t integrated with the rest of his thoughts, motivations, emotions, and decisions. Moral understanding, of course, isn’t all that matters for virtue and character. But philosophers who disagree with Hills on this point, like Robert Howell and Errol Lord, also note that moral deference reflects a lack of virtue and character in other ways, and can prevent the cultivation of these traits.
Beyond virtue, the link between our emotions and our moral beliefs is interesting in its own right. There is something very cold, and very strange, about someone who claims to believe that murder is wrong, without feeling horror or outrage in the face of an actual murder. Yet moral deference allows for this possibility—someone may gain moral beliefs from another, without inheriting the emotional responses associated with those beliefs.
What does this all mean for moral deference? Hills thinks that her worry about moral understanding means we are never permitted to practice moral deference, unless moral understanding is out of our reach. Yet one can disagree—as many philosophers do—by granting that practicing moral deference is permissible even though there’s something sub-optimal about it. Howell argues that nothing about the sub-optimality of deference indicates that we shouldn’t do it often. After all, why should we think that we are, or must be, optimal moral agents all the time? Other philosophers like Lord, and Jones & François Schroeter, think that we can’t possibly be expected to base our every action and belief on moral understanding, or cultivate our moral sensitivities, virtues, and emotions to the fullest and finest extent. If moral knowledge is sometimes available to us via testimony, why not take advantage of this opportunity?
Philosophers like Lord, and Jones & Schroeter, also argue that actions can have moral worth even when an agent lacks moral understanding. And this seems plausible: doesn’t Bob deserve some moral credit for stopping the unjustified harassment of women, on the basis of his concern for believing correctly about morality? Lord argues that agents like Bob are praiseworthy, to the extent that they know how to use the true beliefs they have gained as moral reasons for action. Bob knows how to use the fact that ‘abortion is permissible’ in order to act. This surely counts for something, even if he might not possess everything we desire.
In many cases, deference also seems to be necessary for cultivating virtuous moral characters at all—and by extension, other valuable goods like moral understanding. As Howell notes, children are a prime example, since they can’t be expected to get at many moral truths, and grow as moral agents, in any other way. The importance of deference extends into adulthood as well, if we accept that even mature moral agents are on paths of continuous learning, and that others have relevant moral experience that we don’t. As Jones & Schroeter note, the requirement that we should never practice moral deference is far too idealistic. Most of us aren’t—and will never be—perfect moral agents.
Last, knowing when to practice moral deference may be a moral good in itself. Jones argues that the virtuous person knows when she needs the moral help of others. David Enoch argues that refusing to practice moral deference in action, at least, often reflects a lack of the very moral goods we might be trying to hold on to. For instance, it would be self-defeating to refuse to defer to someone who is more likely to have the right moral beliefs, in the name of cultivating the character trait of compassion, if that refusal leads us to act in ways that are likely to increase human suffering. Although Enoch limits his arguments to deference in action, one might extend his conclusions to pure moral deference about beliefs as well.
The role of pure moral deference for mature moral agents
Some readers may walk away from this discussion feeling optimistic about the prospects of pure moral deference; others will leave with more cautious convictions. But even for those who remain uncomfortable with outsourcing their moral views to others, it’s hard to maintain that pure moral deference should never, ever, play a role in the life of a mature moral agent. It’s difficult to deny that our moral sensitivities can be heightened—or inhibited—by differences in experience, capacity, and other resources like time and energy. Humility requires that we be willing, at least sometimes, to take advantage of how moral expertise has been divvied up.
It’s also difficult to maintain that we should always cling to the value we place on exhibiting things like moral understanding, proper emotions, and virtue. The discussion, of course, is complicated by the fact that these goods have more than just intrinsic value to moral agents. Since we’re not always in a position to defer to others, cultivating these traits for ourselves often instrumentally allows us to acquire more true moral beliefs across time. Despite this, there are still cases where these traits will be out of reach for us in the near future, and where the best way to cultivate them is to begin by deferring to others.
If the skeptic makes these concessions, then the most plausible situations in which we are permitted, or even required, to defer to the moral testimony of others are times when we are uncertain about some type of moral issue we have never encountered before, have little to no relevant experience, are unlikely to cultivate the relevant character traits or understanding or emotional responses in the near future, and have access to the moral judgments of someone else whom we know and trust to be superior to us in these respects.
This very minimal level of humility covers more cases than we might initially think. For example, this could require those who have high levels of social privilege and relatively sheltered life experiences to defer, in some circumstances, to the moral judgments of those who have much more experience being oppressed. This recommendation has its limitations, of course. But it does put pressure on members of privileged groups to put a significant amount of stock in the judgments of marginalised individuals when the latter decry certain things as oppressive.
Other interesting issues might affect our willingness to defer. For instance, how incumbent is it on us to defer (or not to defer) when the moral stakes are high? Those who place supreme importance on the value of moral agency and identity might think that deference is less permissible in high-stakes situations. Yet one’s intuition might swing sharply in the other direction: perhaps we should be even more willing to defer when the moral stakes are high, since (amongst other things) refusing to budge here seems rather perverse. If either of these opposing positions is true, then pure moral deference is subject to an interesting form of what some philosophers call ‘moral encroachment’.
In any case, even if we’re generally skeptical about pure moral deference, it’s clear that at least some case-specific factors can and should increase our willingness to defer, in at least some situations, however narrow these sets of factors and situations may be. And if we place any value on our identities as moral agents, it’s incumbent on us to think hard about when we should be humble enough to trust the moral testimony of others.