by Tim Sommers
When I ask students what they were most interested in, or at least what they remember most, from their “Introduction to Ethics” or “Intro to Philosophy” class, it’s remarkable how many offer the same answer. It seems they all remember Robert Nozick’s “Experience Machine.” Here it is.
The Experience Machine
Suppose you were offered the choice between continuing your life just as it is, or being plugged into a machine that would give you whatever sensations or experiences you prefer, while also making you forget that these experiences come from the machine and not the real world. Would you plug in?
Independent of their philosophical significance, such thought experiments are just fun. So, I thought, sometimes you just want the frosting and not the whole cake, and I designed and taught a course I called “Life’s a Puzzle: Philosophy’s Greatest Paradoxes, Thought Experiments, Counter-Intuitive Arguments, and Counter Examples.”
Here I present a few examples. I am not going to comment much or offer my – or anyone’s – proposed solutions (for the most part). It’s just the carnival ride without the line. (But keep in mind there are a variety of ways all of these can be presented and some of the differences are substantive.)
Let’s start with another from Nozick, since he was a modern master of the genre.
The Department for the Redistribution of Eyes
Imagine that roughly half of all people are born without eyes and roughly half are born with two. Suppose eye transplants are cheap and relatively painless. Would a compulsory, government-run eye redistribution program that forced people with two eyes to give one to someone with none be morally permissible?
Nozick says it would be wrong because we own ourselves. If it is wrong, are there any other plausible explanations – other than self-ownership – for why such an eye redistribution scheme is wrong? Or is there some version of such a scheme that might not be unethical? (Robert Nozick)
The Ship of Theseus
Here’s an ancient one with an early modern twist. If Theseus has a ship that he maintains over time by having rotting planks, rusting nails, and torn sails replaced, is it still the ship of Theseus once half of the original parts have been replaced? What about when 75% of the original parts have been replaced? What about all of them? Hobbes added this twist. What if Theseus’ brother has been assiduously collecting the discarded parts from the beginning and reassembling them into a ship of his own? When the brother has a complete ship composed entirely of the original parts, does he now have the ship of Theseus? (Plutarch plus Hobbes)
Mary’s Room
Suppose Mary has perfectly normal color vision but is confined to a black-and-white room and never glimpses color. Nonetheless, she studies color in all its aspects – from the physical to the neurological and psychological. In fact, she believes that she knows everything there is to know about color. One day she gets out of the room and actually sees red for the first time.
Does she now know something she didn’t know before? If so, does that mean that a complete physical description of the world leaves something out? What does it leave out? (Frank Jackson)
The Original Position
Suppose you don’t know what economic or social class you are in, or your race or gender; you don’t even know whether you are talented or hardworking – or not so much. You don’t know your religion, if you have one, nor do you know your own views on morality or justice. In short, you don’t know who you are, what your position in society is, or what you believe in. You do know that you are somebody, that there is some stuff everyone wants, and that you should grab as large a share of it as possible. But you could turn out to be the luckiest person or the worst-off. If you have to pick the principle used to divide things up, from a purely self-interested point of view, what principle would you pick?
For example, some say that we should/would agree to whatever maximizes the well-being of all (utilitarians). Some argue that everyone should get a “sufficient” amount (sufficientarians/some advocates of UBI) or that there should be an upper limit on wealth and income (limitarians). Rawls, who came up with this thought experiment and called it being in the “original position” behind the “veil of ignorance”, says we should/would choose to make the least well-off person as well-off as possible (the difference principle) – since you could be that person. (And, maybe, because the maximin strategy (maximize your minimum share) is the rational strategy under conditions of uncertainty, according to game theory.) (John Rawls)
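To see how maximin can come apart from maximizing average well-being, here is a minimal sketch in Python; the three toy societies and their payoff numbers are invented purely for illustration and come from nobody’s theory:

```python
# A toy choice behind the veil of ignorance: three possible societies,
# each listed as payoffs to the worst-off, middle, and best-off positions.
# The societies and numbers are invented purely for illustration.
societies = {
    "big-winners": [1, 20, 200],   # highest average, lowest floor
    "mixed":       [10, 50, 90],
    "egalitarian": [40, 45, 50],   # lowest average, highest floor
}

def average(payoffs):
    """Average well-being -- roughly what a utilitarian chooser maximizes."""
    return sum(payoffs) / len(payoffs)

def floor(payoffs):
    """The worst-off position -- what a maximin (Rawlsian) chooser maximizes."""
    return min(payoffs)

print(max(societies, key=lambda s: average(societies[s])))  # big-winners
print(max(societies, key=lambda s: floor(societies[s])))    # egalitarian
```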
Grue
Since every emerald you have seen so far has been green, you might infer that the next emerald you see will also be green. But what if it’s “grue” instead? Something is grue if it is green up until some particular future time and blue after that. Until that time, every emerald you observe is just as grue as it is green. Even if inductive inference works, how do we know which terms we should use when making inductions? What’s wrong with “grue”? There are things like leaves, flowers, and even minerals that change color over time. (Nelson Goodman)
(By the way, the old riddle of induction, to oversimplify, is that we are justified in thinking that if we drop something it will fall because every time we have dropped something so far, it has fallen. But there’s no deductive argument proving the future will be like the past in that way. And an inductive argument for that conclusion would be circular or question-begging. The new riddle emphasizes that, in fact, we only expect the future to be like the past in some ways. How do we know which ways? (Bonus. What do you make of Quine’s claim that, “Creatures inveterately wrong in their inductions have a pathetic but praiseworthy tendency to die before reproducing their kind”?))
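You can even write the rival predicates down and watch the evidence fail to separate them. A minimal sketch; the cutoff date and the observation records below are invented for illustration:

```python
from datetime import date

# Hypothetical cutoff: "grue" things are green if observed before T, blue after.
T = date(2100, 1, 1)

def is_green(color_seen):
    return color_seen == "green"

def is_grue(color_seen, when_observed):
    # Green before the cutoff, blue from the cutoff on.
    expected = "green" if when_observed < T else "blue"
    return color_seen == expected

# Invented observation records: (color seen, date observed), all before T.
observations = [("green", date(1900, 6, 1)),
                ("green", date(2024, 3, 15)),
                ("blue", date(2020, 1, 1))]

# For every observation made before the cutoff, the two predicates agree.
for color, when in observations:
    assert is_green(color) == is_grue(color, when)
```

Every record dated before the cutoff fits “green” and “grue” equally well; nothing in the data alone picks out which predicate to project.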
Teletransportation
From “The Fly” to “Star Trek” to “The Boys” and the Marvel Universe (Nightcrawler, Dr. Strange, etc.), teleportation has been a staple of popular entertainment. It usually works like this. You are broken down into your constituent parts and then either those bits of you are transported somewhere else and put back together at the new location, or just the information travels and you are reassembled out of atoms already at your destination. The trouble is that being “broken down into your constituent parts” is usually called death. It seems that you are not so much “teletransported”, as Derek Parfit called it, as killed and then copied elsewhere. So, would you get into a teleporter? Is what happens in the teleporter the same as ordinary death? Why or why not?
Parfit says that while a teleporter does kill you and copy you, “going on” in this way is just as good as “going on” in the ordinary way.
I would go further. If possible, I think you should use teleporters that don’t destroy the original so as to make multiple copies of yourself since it could be better for you if there were more of you. (Derek Parfit)
Slice and Patch
You need a surgery today that only Dr. Slice and Dr. Patch can perform. And they can only perform it together. No use getting sliced without getting patched. And vice versa. Unfortunately, Slice has a golf game scheduled. Luckily for him, so does Patch. Patch also knows that Slice has a game scheduled. Of course, it’s neither’s fault if the other doesn’t show up. And if the other doesn’t show up, being there alone is useless. Since both know the other will not be there, and that they can’t help alone, neither shows up. If you die, whose fault is it?
If you simply say, well, it’s the fault of both, even though considered independently it’s neither’s fault, then you have to be ready to defend some kind of collective responsibility. Can we have an obligation to do something that depends on others acting? Even when we have good reason to believe that they won’t? (David Estlund)
Philosophical Zombies
It seems to some that it is logically possible that there could be a being just like us that behaves just as we do, but has no conscious experience or inner life at all. The lights are on, but nobody’s home. You might deny that this is physically possible. But some philosophers argue that it only needs to be logically or conceptually possible, and not physically possible, to show that we are more than merely physical beings. So, is it logically or conceptually possible? If so – or not – what does that show?
(Also, consider LaMDA, the Google AI that convinced Blake Lemoine that it is sentient. It could probably also pass the Turing test with most people, but almost everyone seems to agree that it obviously has no self-awareness, no inner life. Is it a philosophical zombie? Or evidence that one could exist?)
(It’s a little unclear who gets the most credit (blame?) on this, but Saul Kripke and David Chalmers are front-runners.)
Swampman
Suppose Alex Holland is walking through a swamp. He stops to lean on a tree. He and the tree are both struck by lightning simultaneously. He disintegrates, but, improbably, an exact replica of him is created by the lightning out of elements from the tree. Alex’s replacement, call him Swampman, has no idea what just happened.
Many people think that part of what gives your words meaning is that they are connected in the right way to the external world. For example, the way I use the word “water” is caused by my experiences with water and wetness. But then, since Swampman has no previous interaction with anything, his words – by hypothesis otherwise identical to what Alex’s words would have been – actually have no meaning. Can that be right? (I bet you didn’t think that was where I was going with that.)
Compare. Suppose you notice an ant meandering and leaving a discernible path in the sand. Suppose further that the path looks exactly like a drawing of Winston Churchill. Is it possible that it is a drawing of Churchill? Why not?
(Swampman belongs to Donald Davidson and Alan Moore; Hilary Putnam owns the ant.)
Can you be morally responsible even if you could not have done otherwise?
Many people believe that if the world is deterministic there can be no free will and no moral responsibility, since, no matter what we do, things will happen the way they were always and already causally determined to happen. One way to characterize the incompatibility of free will and determinism is this: how can we be morally responsible when we could not have done otherwise?
Well, suppose Kathy intends to kill her roommate Joe. She secretly installs a device in their mutual friend Geoff’s brain so that when she presses a button, his arms shoot forward automatically. She then lures them both to the top of a cliff and waits for Joe to step in front of Geoff so that she can press the button, forcing Geoff to push Joe off the cliff. But right before she presses the button, Geoff pushes Joe off the cliff anyway.
Geoff seems morally responsible in this case. Yet Geoff could not have done otherwise. Had he failed to push Joe off the cliff, Kathy would have made him do it. It would seem one can be morally responsible even if one could not have done otherwise. So, if determinism is incompatible with free will, it’s not because we can only be morally responsible when we could have done otherwise. (Harry Frankfurt)
Newcomb’s Problem
You’re going to be on a game show where you are given a choice involving two boxes. One box is transparent and has $1,000 in it. You can’t see into the other box, but it contains either a million dollars or nothing.
The show features a mysterious perfect predictor. If the predictor predicts you will take both boxes, then the opaque box contains nothing. If the predictor predicts you will only take the opaque box, then it has a million dollars in it. The thing is, the predictor is perfect. It’s never been wrong before. What should you do?
Obviously, you should just take the opaque box, which will, therefore, have a million dollars in it. But, no, wait: when it comes time to make your choice, the money is already in the box – or it isn’t. There’s nothing that can change whether it is or isn’t at that point. If you take both, you at least get $1,000 and maybe $1,001,000. Take both. Obviously. (Newcomb, of course. William Newcomb. I mean, it would be weird if someone not named “Newcomb” came up with Newcomb’s problem, right?)
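The two “obviously”s can both be written out as calculations. Here is a minimal sketch using the dollar amounts above; the payoff function and its names are mine, not part of the original problem:

```python
# Payoffs in Newcomb's problem, using the amounts from the text.
SMALL, BIG = 1_000, 1_000_000

def payoff(choice, prediction):
    """What you walk away with, given your choice and the predictor's call."""
    opaque = BIG if prediction == "one-box" else 0
    transparent = SMALL if choice == "two-box" else 0
    return opaque + transparent

# Reasoning 1: the predictor is perfect, so its call matches your choice.
print(payoff("one-box", "one-box"))   # 1000000
print(payoff("two-box", "two-box"))   # 1000

# Reasoning 2 (dominance): hold the prediction fixed. Either way,
# two-boxing pays exactly $1,000 more than one-boxing.
for prediction in ("one-box", "two-box"):
    assert payoff("two-box", prediction) == payoff("one-box", prediction) + SMALL
```

Both calculations check out; the puzzle is which one frames the choice correctly.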
The Violinist
Suppose you wake up one morning to discover that a rogue group of music lovers has, without your consent, physically attached you to a world-renowned violinist. You blame yourself (at least a little). You had heard that music lovers were searching the area for someone physiologically compatible to hook the violinist up to, yet you drank heavily and passed out with all your doors and windows open. Now, the music lovers tell you that you are the only compatible candidate they could find, and that if you just stay attached to the violinist for nine months, the violinist’s life will be saved. But if you unhook yourself sooner, the violinist will die almost immediately. Do you have a moral obligation to remain so attached? Keep in mind that the violinist is clearly a person in the moral sense – and even an exceptional person – given their world-class violin playing.
So, even if a fetus is also a person in the moral sense, if you are not morally obligated to stay hooked up to the violinist for nine months, even where unhooking causes his death, then why are you obligated to carry a fetus for nine months, even if not doing so results in its death? (Judith Jarvis Thomson)
I would love to hear from you in the comments about puzzles I should not have left out, alternative versions of the ones included, or solutions to any and all. Maybe no trolley problems, though? Right? Please.