The Lobster and the Octopus: Thinking, Rigid and Fluid

by Jochen Szangolies

Fig. 1: The lobster exhibiting its signature move, grasping and cracking the shell of a mussel. Still taken from this video.

Consider the lobster. Rigidly separated from the environment by its shell, the lobster’s world is cleanly divided into ‘self’ and ‘other’, ‘subject’ and ‘object’. One may suspect that it can’t help but conceive of itself as separated from the world, looking at it through its bulbous eyes, probing it with antennae. The outside world impinges on its carapace, like waves breaking against the shore, leaving it to experience only the echo within.

Its signature move is grasping. With its pincers, it is perfectly equipped to take hold of the objects of the world, engage with them, manipulate them, take them apart. Hence, the world must appear to it as a series of discrete, well-separated individual elements—among which is that special object, its body, housing the nuclear ‘I’ within. The lobster embodies the primal scientific impulse of cracking open the world to see what it is made of—an impulse that has found its greatest expression in modern-day particle colliders. Consequently, its thought (we may imagine) must be supremely analytical—analysis in the original sense being nothing but the resolution of complex entities into simple constituents.

The lobster, then, is the epitome of the Cartesian, detached, rational self: an island of subjectivity among the waves, engaging with the outside by means of grasping, manipulating, taking apart—analyzing, and perhaps synthesizing the analyzed into new concepts, new creations. It is forever separated from the things themselves, only subject to their effects as they intrude upon its unyielding boundary.

Then consider the octopus. The octopus, in engaging with the world, becomes the world—not merely due to the lack of a rigid boundary, but because of its unique capacity for mirroring it, and thus, taking it in. This mirroring should not be misunderstood as a mere means of predator-evasion, or camouflage: it is deeply enmeshed with its ability to cogitate. A while back, a video of an octopus (potentially) dreaming made the rounds: in its sleep, the octopus changes color, and even shape, in a way reminiscent of analogous changes during hunt and feeding.

Fig. 2: An octopus changing color (and texture) in its sleep—dreaming of hunting for crabs along the ocean floor? Stills taken from this video.

The octopus’ signature move is not that of grasping, but rather, of entwinement—of becoming enmeshed with that which attracts its attention. Indeed, it is often hard to delineate a proper boundary between an octopus’ body and its surroundings; moreover, in touching you, the octopus also tastes you, its suckers being equipped with taste buds. And as its arms contain largely independent neuronal assemblages, it also thinks you. The octopus is not a centralized reasoner, a homunculus ghost-in-the-machine piloting the vehicle of its body through the outside world; its cognition is distributed throughout the body, and its body and its surroundings blend into one.

The interaction of the octopus with the world is then nondual, in the sense that it cannot conceive of itself (if it conceives of itself at all) as wholly separate from the external world. Everywhere, the world moves through it; there is no clear separation between the subject and the object. It becomes the thought it thinks, the element of the environment it cognizes. Its world does not cleanly separate into concepts, into discrete objects of individual character, but is apprehended as a whole Gestalt, with itself an inseparable part thereof.

William James and Two Modes of Thought

The above has, in a perhaps somewhat fanciful way, introduced two distinct modes of connecting with the world. The lobster-mode of meeting the world sees it as a collection of distinct concepts, a container of things to take apart and (re-)combine, and sees the self as a Cartesian subject separated from the external, physical world; while the octopus-mode of being in the world sees it as a continuous, fluid amalgamation that does not cleanly separate into distinct objects, or even self and non-self.

The distinction between two modes of engagement with the world has a long, if somewhat muddled, philosophical pedigree. In the Western tradition, it is enshrined in the distinction between nous and logismos, intellectus and ratio, Verstand and Vernunft: the latter being the faculty of ‘mere understanding’, and the former referring to a more all-encompassing mode of apprehension. Often, the nondual mode is held to be superior to dualistic reasoning. In The Mystic Will, a commentary on the thought of 16th century mystic Jakob Boehme, Howard Brinton puts it as follows:

In Vernunft subject and object are separated. Accordingly Vernunft is doubtful knowledge. In Verstand the subjective-objective distinction has been transcended, therefore Boehme held Verstand is sure knowledge, for knower and known are one.

There is a comparison here to notions from Buddhist thought, particularly as it relates to the Mahāyāna tradition. This comparison is explicitly made in David Loy’s seminal study Nonduality in Buddhism and Beyond, where he quotes D. T. Suzuki on the notions of prajñā and vijñāna:

Prajñā goes beyond vijñāna. We make use of vijñāna in our world of the senses and intellect, which is characterized by dualism in the sense that there is the one who sees and there is the other that is seen—the two standing in opposition. In prajñā this differentiation does not take place: what is seen and the one who sees are identical; the seer is the seen and the seen is the seer.

The human self-image is typically that of a lobster: a nucleic self acting in a world of external objects, grasping and manipulating, analyzing and synthesizing. Vilém Flusser circumscribes this self-conception marvelously in Vampyroteuthis Infernalis, his richly imaginative study of the cognitive life of the ‘vampire squid from hell’:

We trace our fingers along the dissected rations of phenomena in order to comprehend and define their contours. With a theoretical gaze, we then disassociate these defined contours from the dissected phenomenon, at which point we are holding an empty husk. We call this empty husk a “concept,” and we use it to collect other rations of phenomena that have not yet been fully defined. We use concepts as models. In doing so, we create a mêlée between dissected appearances and empty concepts—between phenomena and models. The unfortunate outcome of this conflict is that we can no longer discern any phenomena for which we have not already established a model. Since we can no longer apprehend model-less phenomena, we therefore brandish the scalpel of reason simply to tailor phenomena to our models. […]

The Vampyroteuthis, on the other hand, has no knife, no need for human reason. […] Its reason is therefore preconceptual.

But certain states of mind can serve to break up this dualist conception: in flow experiences, for example, we can lose awareness of ourselves as things in the world, and become entirely subsumed in what we’re doing—we ‘become the action’, so to speak, the duality between actor and acted-upon eroding. In human understanding, lobsterian and octopoid thought, Vernunft and Verstand, vijñāna and prajñā occur together.

There is an intriguing analogy, here, to the dual process theory of human reason, pioneered by William James, the 19th century ‘Father of American psychology’. As popularized in Daniel Kahneman’s best-selling book Thinking, Fast and Slow, there are two ‘systems’ of human thought: System 1, which is described as ‘fast, automatic, frequent, emotional, stereotypic, unconscious’, and System 2, considered to be ‘slow, effortful, infrequent, logical, calculating, conscious’.

System 2 is thus the discursive, deliberate, premises-to-conclusion mode of conscious reasoning. System 2’s thought always has an object—is, in the philosophical parlance, ‘intentional’: directed at or concerned with something beyond itself. Thus, System 2 introduces the lobster-like distinction between self and other, the vijñāna mode of reasoning about objects in the world as conceptually cleanly separated.

System 1 is mostly associated with intuitive, spontaneous modes of thought. The immediate recognition of faces, split-second judgments of situations, the vague feeling of dread in a situation that only later reflection recognizes as actually having been dangerous—these are not thoughts of the self about the world, but the world coming to mind under a certain aspect—or with a certain flavor, as in the octopoid ‘tasting’ of the world. System 1 does not take the world as its object, to be examined, poked at, disassembled and reformed at leisure, but produces spontaneous reactions, changing as the world does, without deliberation.

Both systems are often thought to work alongside each other, within their respective domains, playing to their strengths. But newer research has started challenging that conception. In their book The Enigma of Reason, Hugo Mercier and Dan Sperber argue that the only way to make sense of human irrationality is not to stipulate that our reasoning engine, System 2, is subject to the familiar list of cognitive biases and fallacies for unknown reasons, but rather to recognize that it is not so much a reasoning as a rationalizing engine, tasked with making System 1’s judgments and reactions (socially) defensible.

Ultimately, the basic building blocks System 2 freely recombines into complex concepts have their roots in System 1’s intuitive characterizations, and can’t themselves be analyzed any further—they’re the axioms from which our (that is, System 2’s) inferences flow. This, too, has been prefigured in Buddhist thought. Witness again Suzuki:

If we think that there is a thing denoted as prajñā and another denoted as vijñāna and that they are forever separated and not to be brought to the state of unification, we shall be completely on the wrong track. Vijñāna cannot work without having prajñā behind it; parts are parts of the whole; parts never exist by themselves, for if they did they would not be parts—they would even cease to exist.

We thus arrive at a picture of human reason where what we typically mean by ‘reason’—the lobsterian, deliberate, step-by-step coming to conclusions from premises, mulling about concepts, planning actions performed on objects in the external world—is ultimately a secondary gloss, derived from the more fundamental, nondual, immediate and direct octopoid interaction of System 1 with the world, which yields the ‘input’ for System 2 to process. We are lobsters riding octopuses (or octopodes, if you want to feel fancy; but please, never octopi!).

This should not be taken to imply an overly disparaging view of System 2 (nor lobsters). It may provide important corrective impulses. Take the following riddle:

Emily’s father has three daughters. Two of them are named April and May. What is the third daughter’s name?

This riddle has an answer that is immediate, intuitive, effortless, and wrong: the name of the third daughter is not, of course, June, but Emily. Yet, typically, the wrong answer leaps immediately to mind, and it is only reflection—System 2-style drawing of inferences from explicit data—that yields the correct solution.

Furthermore, the connection between System 1 and System 2 isn’t entirely one-way: System 1, due to its holistic nature, considers System 2 just as much a part of the world as anything else; hence, System 2-behaviors backreact on System 1’s judgments. Indeed, as we will see, we can think of System 2 as providing ‘training data’ for System 1—thus, shaping its judgments and intuitions—for better or worse.

AI Needs an Editor

In September, the Guardian ran an article with the somewhat sensationalist headline “A robot wrote this entire article. Are you scared yet, human?”. It purported to be an op-ed written entirely by OpenAI’s deep learning language model, GPT-3. Except, that’s not quite what it was: rather, GPT-3 had been given the task, based on a specific prompt, to write a 500-word article; it produced eight different versions, which were then edited into an almost-cohesive whole.

This raises the question: when should a creative work produced with machine assistance actually be credited to the AI involved in the process? One might imagine an extreme case, where I produce large volumes of text essentially randomly, then cut and splice various portions that just happen to make sense together—surely, the resulting text would not sensibly be considered the creation of the program I used. On the other hand, pieces written by human writers are also typically edited, yet still credited to the original author.

In a follow-up article, the Guardian opened up about the lobsterian process of editing, of cutting and re-joining, supplying also an example output of a single run of GPT-3. There, the lack of cohesiveness—the nonexistence of any real narrative thread, any sense of consistency between the elements of the narrative, and the concepts appealed to—is manifest: at one point, the AI claims to be from another planet, and at the end, some random spam sneaks in.

Fig. 3: What a ‘conceptless’ world might look like: the scenery as a whole is immediately familiar, but it fails to decompose into clearly recognizable objects. Originally posted to Twitter with the telling injunction to ‘name one thing in this photo’.

What GPT-3 lacks is the corrective facility System 2 lends to System 1’s free-wheeling association. There is no consistency check to disabuse it of the notion that Emily’s father’s three daughters are named April, May, and June: it sees the pattern, and completes it in a sensible way; but it does not understand how the concepts the words map to interrelate. It lacks, in other words, a model of the world.

This is part of the utility System 2 brings to the human thought process: it adds, so to speak, concepts to percepts, and allows us to manipulate and recombine them. Without it, the world would appear to us (if it appeared at all) as a singular, undifferentiated Gestalt, much as in the AI-generated picture in Figure 3.

That human-like artificial intelligence won’t likely be achieved by the System 1-style neural network approach on its own has long been appreciated in the AI community. As AI researcher Ben Goertzel put it back in 2012:

Bridging the gap between symbolic and subsymbolic representations is a—perhaps the—key obstacle along the path from the present state of AI achievement to human-level artificial general intelligence.

It is not clear, however, that successful navigation of the world necessitates the addition of a System 2. Indeed, in some ways, it may even be a handicap: modeling necessarily falsifies, and introduces the danger of mistaking the map for the territory. A reliance on System 2’s conclusions thus leaves us open to being radically mistaken about the nature of the world—that our mental model is constructed in some particular way does not imply that the world as such must follow the same principles. Just because we can use clockwork to model the solar system, doesn’t mean the planets run on wheels and gears.

Another hypothesis has the conclusions and explanations produced by System 2 fulfill an essentially social, communicative role: a neural network’s judgments are hard to explain, or justify. It pronounces an image to contain a cat because it has the Gestalt of a cat; but System 2 can chime in, and go, yeah, it’s got fur, whiskers, four paws, a tail, and a calico pattern—that’s what makes it a cat.

This is the insight behind DARPA’s push for third-wave, explainable AI: rather than just providing inscrutable judgments, to be trustworthy, AI needs to be able to justify its conclusions. Hence, the function of System 2 may be to add a layer of accountability to System 1’s associations. In doing so, it does not necessarily need to be truthful—but merely, convincing to other System 2s. As such, it is an engine of social cohesion, rather than a means of finding the ultimate truth about the world—which may, in line with the conclusions of Mercier and Sperber, explain why it so often falls short of the latter.

I believe this picture of the human mind, as a model-based System 2 making sense of the neural network-like System 1’s classifications—the lobster riding the octopus—has great explanatory potential. In future columns, I want to give an overview of how it sheds light on certain aspects of art, philosophy, religion, and science. For now, I will close with two famous verses from the Dàodéjīng:

The nameless is the beginning of heaven and earth.

The named is the mother of ten thousand things.

We may, with some poetic license, see here the nonconceptual, subsymbolic (‘nameless’), nondual System 1 as the original ground of the world, with System 2’s conceptualizations (‘names’) applied afterwards: the world as we think it arises once we populate it with concepts, breaking up the original unity into discrete substances of individual character.
