The Dolphin and the Wasp: Rules, Reflections, and Representations

by Jochen Szangolies

Fig. 1: William Blake’s Urizen as the architect of the world.

In the beginning there was nothing, which exploded.

At least, that’s how the current state of knowledge is summarized by the great Terry Pratchett in Lords and Ladies. As far as cosmogony goes, it certainly has the virtue of succinctness. It also poses—by virtue of summarily ignoring—what William James called the ‘darkest question’ in all philosophy: the question of being, of how it is that there should be anything at all—rather than nothing.

Different cultures, at different times, have found different ways to deal with this question. Broadly speaking, there are mythological, philosophical, and scientific attempts at dealing with the puzzling fact that the world, against all odds, just is, right there. Gods have been invoked, wresting the world from sheer nothingness by force of will; necessary beings, whose nonexistence would be a contradiction, have been posited; the quantum vacuum, uncertainly fluctuating around a mean value of nothing, has been appealed to.

A repeat motif, echoing throughout mythologies separated by centuries and continents, is that of the split: that whatever progenitor of the cosmos there might have been—chaos, the void, some primordial entity—was, in whatever way, split apart to give birth to the world. In the Enūma Eliš, the Babylonian creation myth, Tiamat and Apsu exist ‘co-mingled together’ in an unnamed state, until Marduk eventually divides Tiamat’s body, creating heaven and earth. In the Daoist tradition, the Dao first exists, featureless yet complete, before giving birth to unity, then duality, and ultimately, ‘the myriad creatures’. And of course, according to Christian belief, the world starts out void and without form (tohu wa-bohu), before God divides light from darkness.

In such myths, the creation of the world is a process of differentiation—an initial formless unity is rendered into distinct parts. This can be thought of in informational terms: information, famously, is ‘any difference that makes a difference’—thus, if creation is an act of differentiation, it is an act of bringing information into being.

In the first entry to this series, I described human thought as governed by two distinct processes: the fast, automatic, frequent, emotional, stereotypic, unconscious, neural network-like System 1, exemplified in the polymorphous octopus, and the slow, effortful, infrequent, logical, calculating, conscious, step-by-step System 2, as portrayed by the hard-shelled lobster with its grasping claws.

Conceiving of human thought in this way is, at first blush, an affront: it suggests that our highly prized reason is, in the final analysis, not the sole sovereign of the mental realm, but that it shares its dominion with an altogether darker figure, an obscure éminence grise who, we might suspect, rules from behind the scenes. But it holds great explanatory power, and in the present installment, we will see how it may shed light on James’ darkest question, by dividing nothing into something—and something else.

But first, we need a better understanding of System 2, its origin, and its characteristics. Read more »

Are We Asking the Right Questions About Artificial Moral Agency?

by Fabio Tollon

Human beings are agents. I take it that this claim is uncontroversial. Agents are that class of entities capable of performing actions. A rock is not an agent; a dog might be. We are agents in the sense that we can perform actions, not out of necessity, but for reasons. These actions are to be distinguished from mere doings: animals, or perhaps even plants, may behave in this or that way by doing things, but strictly speaking, we do not say that they act.

It is often argued that action should be cashed out in intentional terms. Our beliefs, our desires, and our ability to reason about these are all seemingly essential properties that we might cite when attempting to figure out what makes our kind of agency (and the actions that follow from it) distinct from the rest of the natural world. For a state to be intentional in this sense, it should be about or directed towards something other than itself. For an agent to be a moral agent, it must be able to do wrong, and perhaps be morally responsible for its actions (I will not elaborate on the exact relationship between being a moral agent and moral responsibility, but there is considerable nuance in how these concepts relate to each other).

In the debate surrounding the potential of Artificial Moral Agency (AMA), the “Standard View” presented above is often a point of contention. The ubiquity of artificial systems in our lives can often lead us to believe that these systems are merely passive instruments. However, this is not necessarily the case. It is becoming increasingly clear that intuitively “passive” systems, such as recommender algorithms (or even email filter bots), are very receptive to inputs (often by design). Specifically, such systems respond to certain inputs (user search history, etc.) in order to produce an output (a recommendation, etc.). The question that emerges is whether such “outputs” might be conceived of as “actions”. Moreover, what if such outputs have moral consequences? Might these artificial systems be considered moral agents? This is not to claim that recommender systems such as YouTube’s are in fact (moral) agents, but rather to think through whether this might be possible (now or in the future). Read more »