Akrasia and the divided will: The crisis of moral choice and the goal of human existence

by John Hartley

Augustine ponders the stolen pear

“To err is human,” observed the poet Alexander Pope. Yet why do we consciously stray from right action against our better judgement? Anyone who has tried to follow a diet or maintain a strict exercise regime will recognize what can sometimes feel like an inner battle. Why do we stray from virtue, choosing paths we know will lead to inevitable suffering? Force of habit? Addiction? Weakness of will?

This crisis of moral choice lies at the heart of Western philosophy, as the Ancients crafted their doctrines to explain why individuals so often fail to realize their good intentions. “For I have the desire to do what is good, but I cannot carry it out,” observed St Paul. “For what I do is not the good I want to do; no, the evil I do not want to do–this I keep on doing.”

Akrasia in ancient thought

Homer’s Iliad paints a poignant portrait of humanity ensnared in a cycle of necessity. This relentless loop can only be broken by wisdom and self-knowledge, encapsulated in the Delphic maxim “know thyself.” Socrates, however, argued that true knowledge of the good naturally precludes evil action (if you really know what is right, you will not do wrong!). He contended that misdeeds arise not from a willful defiance of the good but from flawed moral judgment: a tragic aberration that mistakes evil for good in the heat of the moment.

Plato, 427 – 348 BC

Plato links wisdom and necessity to the duality of good and evil. He envisions self-realization and ordered integration as pathways to the good (while inefficiency and unrealized potential signify malevolence). For Plato, good and evil are not external forces but internal currents: one flowing with love and altruism, the other carrying greed, envy, and malice.

The ascent to goodness, according to Plato, hinges on self-mastery and moral transformation, guiding one’s life towards the ideal form of goodness. Stoicism, of course, has experienced something of a resurgence of late, owing to Gen Z influencers advocating extreme self-discipline and heightened personal responsibility. Read more »



Monday, April 10, 2017

Critique of the Smiley Face

by Emrys Westacott

The ubiquitous yellow smiley is the perfect representation of our culture's default conception of happiness. It signifies a pleasant internal state of mind. Right now, life is fun, it says. I'm enjoying myself. Don't worry–be happy.

This is a subjectivist conception of happiness. It's all about how one feels, and it tends to be applied to relatively short periods of time: minutes, hours, days.

When discussing happiness with my students, I sometimes describe Barney the Couch Potato. Barney inherited enough money not to have to work for a living. He spends the bulk of his days lounging on the sofa playing video games, watching reruns of old TV sitcoms, smoking weed (it's legal where he lives), and drinking a few beers. He gets off his sofa just enough to stay more or less healthy. Friends drop by often enough to keep him from feeling lonely.

Is Barney happy? When I ask my students this question, nine out of ten invariably say yes. "Maybe I wouldn't want to live like that," they say, "but hey, if that's what he wants, and it makes him feel good, then I guess he's happy."

This response supports my suspicion that a subjectivist conception of happiness is dominant these days, at least in the US. What else could happiness be, after all, but lots of pleasure without too much pain? And what is pleasure if not an enjoyable subjective state?

One way of gaining a critical perspective on this view of happiness is to contrast it with the view found in ancient Greek philosophy, particularly in the thought of Plato and Aristotle. Interestingly, their more objectivist notion of happiness, while it has been somewhat displaced, is still with us to some extent, so what they say does not sound utterly alien. Let's consider what it involves.

Read more »

Monday, September 17, 2012

Saintly Simulation

by Evan Selinger

My colleague Thomas Seager and I recently co-wrote “Digital Jiminy Crickets,” an article that proposed a provocative thought experiment. Imagine an app existed that could give you perfect moral advice on demand. Should you use it? Or would outsourcing morality diminish our humanity? Our think piece merely raised the question, leaving the answer up to the reader. However, Noûs, a prestigious philosophy journal, has published an article by Robert J. Howell that advances a strong position on the topic: “Google Morals, Virtue, and the Asymmetry of Deference.” To save you the trouble of getting a Ph.D. to read this fantastic but highly technical piece, I’ll summarize the main points here.

It isn’t easy to be a good person. When facing a genuine moral dilemma, it can be hard to know how to proceed. One friend tells us that the right thing to do is stay, while another tells us to go. Both sides offer compelling reasons—perhaps reasons guided by conflicting but internally consistent moral theories, like utilitarianism and deontology. Overwhelmed by the seeming plausibility of each side, we end up unsure how to solve the riddle of The Clash.

Now, Howell isn’t a cyber-utopian, and he certainly doesn’t claim technology will solve this problem any time soon, if ever. Nor does he say much about how to settle the debates over moral realism; based on this article alone, we don’t know whether he believes all moral dilemmas can be solved according to objective criteria. To determine whether, as a matter of principle, deferring to a morally wise computer would upgrade our humanity, he asks us to imagine an app called Google Morals: “When faced with a moral quandary or deep ethical question we can type a query and the answer comes forthwith. Next time I am weighing the value of a tasty steak against the disvalue of animal suffering, I’ll know what to do. Never again will I be paralyzed by the prospect of pushing that fat man onto the trolley tracks to prevent five innocents from being killed. I’ll just Google it.”

Let’s imagine Google Morals is infallible, always truthful, and 100% hacker-proof. The government can’t mess with it to brainwash you. Friends can’t tamper with it to pull a prank. Rivals can’t adjust it to gain competitive advantage. Advertisers can’t tweak it to lull you into buying their products. Under these conditions, Google Morals is more trustworthy than the best rabbi or priest. Even so, Howell contends, depending on it is a bad idea.

Read more »