The Von Neumann Mind: Constructing Meaning

by Jochen Szangolies

Figure 1: The homunculus fallacy: attempting to explain understanding in terms of representation begs the question of how that representation is itself understood, leading to infinite regress.

Turn your head to the left, and make a conscious inventory of what you’re seeing. In my case, I see a radiator upon which a tin can painted with an image of Santa Claus is perched; above that, a window, whose white frame delimits a slate gray sky and the very topmost portion of the roof of the neighboring building, brownish tiles punctuated by gray smokestacks and sheet-metal covered dormers lined by rain gutters.

Now turn your head to the right: the printer sitting on the smaller projection of my ‘L’-shaped, black desk; behind it, a brass floor lamp with an off-white lampshade; a black rocking chair; and then, black and white bookshelves in need of tidying up.

If you followed along so far, the above did two things: first, it made you execute certain movements; second, it gave you an impression of the room where I’m writing this. You probably find nothing extraordinary in this—yet, it raises a profound question: how can words, mere marks on paper (or ordered dots of light on a screen), have the power to make you do things (like turning your head), or transport ideas (like how the sky outside my window looks as I’m writing this)?

The Projected Mind: What Is It Like To Be Hubert?

by Jochen Szangolies

Figure 1: Hubert, the author’s cheerful plush ladybug flatmate.

Meet Hubert. For going on ten years now, Hubert has shared a living space with my wife and me. He’s a generally cheerful fellow, optimistic to a fault, occasionally prone to a little mischief; in fact, my wife, upon seeing the picture, remarked that he looked inordinately well-behaved. He’s fond of chocolate and watching TV, which may be the reason why his chief dwelling place is our couch, where most of the TV-watching and chocolate-eating transpires. He also likes to dance, is curious, but sometimes gets overwhelmed by his own enthusiasm.

Of course, you might want to object: Hubert is none of these things. He doesn’t genuinely like anything, he doesn’t have any desire for chocolate, he can’t dance, much less enjoy doing so. Hubert, indeed, is afflicted by a grave handicap: he isn’t real. He can only like what I claim he likes; he only dances if I (or my wife) animate him; he can’t really eat chocolate, or watch TV. But Hubert is an intrepid, indomitable spirit: he won’t let such a minor setback as his own non-existence stop him from having a good time.

And indeed, the matter, once considered, is not necessarily that simple. Hubert’s beliefs and desires are not my beliefs and desires: I don’t always like the same shows, and I’m not much for dancing (although I confess we’re well-aligned in our fondness of chocolate). The question is, then, whom these beliefs and desires belong to. Are they pretend-beliefs, beliefs falsely attributed? Are they beliefs without a believer? Or, for a more radical option, does the existence of these beliefs imply the existence of some entity holding them?

The Dolphin and the Wasp: Rules, Reflections, and Representations

by Jochen Szangolies

Fig. 1: William Blake’s Urizen as the architect of the world.

In the beginning there was nothing, which exploded.

At least, that’s how the current state of knowledge is summarized by the great Terry Pratchett in Lords and Ladies. As far as cosmogony goes, it certainly has the virtue of succinctness. It also poses—by virtue of summarily ignoring—what William James called the ‘darkest question’ in all philosophy: the question of being, of how it is that there should be anything at all—rather than nothing.

Different cultures, at different times, have found different ways to deal with this question. Broadly speaking, there are mythological, philosophical, and scientific attempts at dealing with the puzzling fact that the world, against all odds, just is, right there. Gods have been invoked, wresting the world from sheer nothingness by force of will; necessary beings, whose nonexistence would be a contradiction, have been posited; the quantum vacuum, uncertainly fluctuating around a mean value of nothing, has been appealed to.

A repeat motif, echoing throughout mythologies separated by centuries and continents, is that of the split: that whatever progenitor of the cosmos there might have been—chaos, the void, some primordial entity—was, in whatever way, split apart to give birth to the world. In the Enūma Eliš, the Babylonian creation myth, Tiamat and Apsu existed ‘co-mingled together’, in an unnamed state, and Marduk eventually divides Tiamat’s body, creating heaven and earth. In the Daoist tradition, the Dao first exists, featureless yet complete, before giving birth to unity, then duality, and ultimately, ‘the myriad creatures’. And of course, according to Christian belief, the world starts out void and without form (tohu wa-bohu), before God divides light from darkness.

In such myths, the creation of the world is a process of differentiation—an initial formless unity is rendered into distinct parts. This can be thought of in informational terms: information, famously, is ‘any difference that makes a difference’—thus, if creation is an act of differentiation, it is an act of bringing information into being.

In the first entry to this series, I described human thought as governed by two distinct processes: the fast, automatic, frequent, emotional, stereotypic, unconscious, neural network-like System 1, exemplified in the polymorphous octopus, and the slow, effortful, infrequent, logical, calculating, conscious, step-by-step System 2, as portrayed by the hard-shelled lobster with its grasping claws.

Conceiving of human thought in this way is, at first blush, an affront: it suggests that our highly prized reason is, in the last consequence, not the sole sovereign of the mental realm, but that it shares its dominion with an altogether darker figure, an obscure éminence grise who, we might suspect, rules from behind the scenes. But it holds great explanatory power, and in the present installment, we will see how it may shed light on James’ darkest question, by dividing nothing into something—and something else.

But first, we need a better understanding of System 2, its origin, and its characteristics.

Are We Asking the Right Questions About Artificial Moral Agency?

by Fabio Tollon

Human beings are agents. I take it that this claim is uncontroversial. Agents are that class of entities capable of performing actions. A rock is not an agent; a dog might be. We are agents in the sense that we can perform actions, not out of necessity, but for reasons. These actions are to be distinguished from mere doings: animals, or perhaps even plants, may behave in this or that way by doing things, but strictly speaking, we do not say that they act.

It is often argued that action should be cashed out in intentional terms. Our beliefs, what we desire, and our ability to reason about these are all seemingly essential properties that we might cite when attempting to figure out what makes our kind of agency (and the actions that follow from it) distinct from the rest of the natural world. For a state to be intentional in this sense it should be about or directed towards something other than itself. For an agent to be a moral agent it must be able to do wrong, and perhaps be morally responsible for its actions (I will not elaborate on the exact relationship between being a moral agent and moral responsibility, but there is considerable nuance in how exactly these concepts relate to each other).

In the debate surrounding the potential of Artificial Moral Agency (AMA), this “Standard View” presented above is often a point of contention. The ubiquity of artificial systems in our lives can often lead us to believe that these systems are merely passive instruments. However, this is not necessarily the case. It is becoming increasingly clear that intuitively “passive” systems, such as recommender algorithms (or even email filter bots), are very receptive to inputs (often by design). Specifically, such systems respond to certain inputs (user search history, etc.) in order to produce an output (a recommendation, etc.). The question that emerges is whether such kinds of “outputs” might be conceived of as “actions”. Moreover, what if such outputs have moral consequences? Might these artificial systems be considered moral agents? This is not to claim that recommender systems such as YouTube’s are in fact (moral) agents, but rather to think through whether this might be possible (now or in the future).