How informative is the concept of biological information?

by Yohan J. John

We are routinely told that we live in a brave new Information Age. Every aspect of human life — commerce, entertainment, education, and perhaps even the shape of consciousness itself — seems to be undergoing an information-driven revolution. The tools for storing and sharing information are becoming faster, more ubiquitous, and less visible. Meanwhile, we are increasingly employing information as an explanation of phenomena outside the world of culture and technology — as the central metaphor with which to talk about the nature of life and mind. Molecular biology, for instance, tells us how genetic information is transferred from one generation to the next, and from one cell to the next. And neuroscience is trying to tell us how information from the external world and the body percolates through the brain, influencing behavior and giving rise to conscious experience.

But do we really know what information is in the first place? And is it really a helpful way to think about biological phenomena? I'd like to argue that explanations of natural phenomena that involve information make inappropriate use of our latent, unexamined intuitions about interpersonal communication, blurring the line between what we understand and what we don't quite have a grip on yet.

People who use information technologies presumably have a working definition of information. We often see it as synonymous with data: whatever can be stored on a hard drive, or downloaded from the internet. This covers text, images, sound, and video — anything that can be represented in bits and bytes. Vision and hearing are the senses we seem to rely on most often for communication, so it's easy to forget that there are still experiences that we cannot really communicate yet, like textures, odors or tastes. (Smellevision still seems a long way off.)

The data-centric conception of information is a little over half a century old, and sits alongside an older sense of the word. 'Information' comes from the verb 'inform', which is from the Old French informer, meaning 'instruct' or 'teach'. This word in turn derives from the Latin informare, which means 'to shape, form'. The concept of form is closely linked to this sense of information. When something is informative, it creates a specific form or structure in the mind of the receiver — one that is presumably useful.

But there is a tension between seeing information as a unit of communication, and seeing it as something that allows a sender to create a desired result in the mind of a receiver. And this tension goes back to the origins of information theory. Claude Shannon introduced the modern technical notion of information in 1948, in a paper called 'A Mathematical Theory of Communication'. He framed his theory in terms of a transmitter, a channel, and a receiver. The mathematical results he derived showed how any signal could be coded as a series of discrete symbols and transmitted between sender and receiver with as few errors as we like, even when the channel is noisy, provided the rate of transmission stays below the channel's capacity. But for the purposes of the theory, the meaning or content of the information was irrelevant. The theory explained how to efficiently send symbols between point A and point B, but had nothing to say about what was actually done with these symbols. All that mattered was that the sender and receiver agree on a system of encoding and decoding. Information theory, and all the technologies that emerged in its wake, allows us to communicate more and communicate faster, but it doesn't really tell us everything we would like to know about communication.
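For readers who want the formal core of the theory, here is a minimal sketch in standard textbook notation (my summary, not a quotation from Shannon's paper). The information produced by a source is measured by its entropy, and the noisy-channel coding theorem says that communication can be made as reliable as we like so long as the transmission rate stays below the channel's capacity:

\[
H(X) = -\sum_{x} p(x)\,\log_2 p(x) \ \text{bits per symbol}, \qquad C = \max_{p(x)} I(X;Y), \qquad \text{reliable for any rate } R < C.
\]

Meaning appears nowhere in these quantities: entropy cares only about the probabilities of symbols, not about what the symbols are for.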

Humans are fundamentally social animals, and communication is the foundation for our social bonds. When we appeal to information and communication to explain natural phenomena, we are essentially appealing to the average person's working knowledge of communication. We say that genes are 'expressed' through a process of 'transcription' and 'translation'. We say that neurons 'send information' to each other. We may seem to be using technical language, but in using such terms we are also anthropomorphizing the things that we are studying. Human behavior then becomes an analogical yardstick with which to measure other phenomena.

Metaphors and analogies are essential to understanding — they allow us to create bridges between the known and the unknown. Let's look at just one example from the history of science: the wave nature of electromagnetic radiation. The scientific study of wave motion began in the 1700s with research on vibrating strings, such as those of musical instruments. In 1746 Jean le Rond d'Alembert discovered the wave equation. In 1864, James Clerk Maxwell was able to obtain a wave equation by combining his famous electromagnetic equations, thereby providing the first strong theoretical argument for the wave nature of light. A mathematical analogy between two seemingly unrelated phenomena — light propagation and mechanical vibrations in a medium — paved the way for a major unification in science.
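To see the analogy in symbols (a standard textbook rendering, not a quotation from either author), compare the equation for a vibrating string with the one Maxwell obtained for the electric field in empty space:

\[
\frac{\partial^2 y}{\partial t^2} = v^2\,\frac{\partial^2 y}{\partial x^2}
\qquad \text{and} \qquad
\frac{\partial^2 \mathbf{E}}{\partial t^2} = \frac{1}{\mu_0 \varepsilon_0}\,\nabla^2 \mathbf{E}.
\]

The two equations have the same mathematical form, and the propagation speed \(1/\sqrt{\mu_0 \varepsilon_0}\) that falls out of Maxwell's version matches the measured speed of light.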

Now the key question is whether the metaphor of sending and receiving information is particularly helpful when trying to understand biological phenomena. The sheer familiarity of communication may induce us to overestimate our understanding of it. Just as we seem to know more about the surface of the moon than the core of our own planet, it is often the case that the things that are the closest to us are the hardest to see clearly. This is especially true of cultural, social and mental phenomena. Communication between people may be as intuitively straightforward as engaging in a conversation, but that is only because we often neglect what is arguably the most important aspect of communication: choosing what to communicate in the first place, and deciding how to act upon what has been received. Psychologists and neuroscientists still have only rudimentary insights into these complex processes.

Information theory was able to revolutionize the mechanical, external aspects of communication, in part by ignoring meaning and content. Explanations of genes or neurons that refer to information processing are therefore leaving out the most important part of the story: how information that has been successfully communicated becomes manifest in the structure and in the behavior of the recipient cell or organism. In other words, if information is to become truly useful in biology, the data-based conception must be reconciled with the older form-based conception: we must understand the sense, if any, in which information shapes the organism. We must figure out how the abstract and symbolic becomes concrete and material.

The history of genetics, which we reviewed in an earlier installment of this series, reflects the fact that the biologists of the late 19th and early 20th centuries were well aware of the distinction between the two senses of information — as an abstract measure of communication and as a determinant of form — even though they didn't necessarily frame their questions using such terms. Transmission genetics asked how discrete Mendelian units of heredity passed from one generation to the next. Developmental genetics asked how these inherited factors became outwardly apparent as physical traits. Progress in understanding transmission was very rapid in the first half of the 20th century, culminating in the discovery of the double helix structure of DNA. This structure suggested a mechanism for the transmission of genetic material from parent to offspring, and from mother cell to daughter cells.

The discovery of the structure of DNA occurred during the first flowering of information theory and computer science in the 1950s, and this may explain why the burgeoning field of molecular biology became suffused with the language of symbols and codes — transcription, editing, translation, expression, and so on. The concept of a symbolic code may have provided researchers with a heuristic — a rough guide — for thinking about the connection between DNA, RNA and proteins. The genetic code — the system through which triplets of nucleotide 'letters' map onto amino acids — is the most successful and least controversial informational metaphor in biology. But things quickly get murky beyond this level. The sheer complexity of genetics in multicellular organisms made extending the coding metaphor beyond the amino acid stage increasingly difficult and tenuous. This didn't dissuade researchers from describing DNA as the 'book of life' — a set of instructions that 'specify' the organism.
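To make concrete why the coding metaphor sticks so readily at this one level, here is a toy sketch of my own in Python, using a handful of real codon assignments rather than the full table; it is an illustration, not a model of how a cell actually works. At this level, the mapping from nucleotide triplets to amino acids really does behave like a lookup table:

```python
# Toy illustration: the standard genetic code behaves like a lookup table.
# Only a few real codon assignments are included; '*' marks a stop codon.
CODON_TABLE = {
    "AUG": "Met",  # also the usual start codon
    "UUU": "Phe", "UUC": "Phe",
    "GGC": "Gly", "UGG": "Trp",
    "UAA": "*", "UAG": "*", "UGA": "*",
}

def translate(mrna: str) -> list[str]:
    """Read an mRNA string three letters at a time until a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "?")  # '?' = codon not in our toy table
        if amino_acid == "*":
            break
        peptide.append(amino_acid)
    return peptide

print(translate("AUGUUUGGCUGGUAA"))  # ['Met', 'Phe', 'Gly', 'Trp']
```

Nothing beyond this stage is captured so cleanly, which is precisely why extending the metaphor becomes tenuous.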

In describing the genetic information in DNA as a set of instructions, researchers attempted to bridge the gap between information as that which is communicated and information as that which creates form. A powerful metaphor from computer science may have encouraged them to make this move. Alan Turing's model of computation seemed to be a natural complement to Shannon's information theory. A universal Turing machine can take in strings of symbols and transform them into virtually any other string. This seemed to be an ideal way to think about how the DNA molecule influenced the form of the organism. The DNA molecule, according to this picture, 'computes the organism', just as a Turing machine computes a mathematical function by executing an algorithm. But as we saw in last month's installment, mapping this abstract notion onto the complex biophysical processes of the DNA molecule requires a great deal of conceptual contortion, and we aren't really compensated for the effort with any genuine insights. The steps linking the DNA molecule with proteins — to say nothing of the steps linking it with higher level traits and behaviors — are so multifaceted and context-sensitive that the notion of computation remains only a vague placeholder for a future theory.
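For readers unfamiliar with the abstraction being invoked here, the following is a minimal Turing machine sketch in Python (a toy of my own construction, not anything specified by Turing or by the genetics literature). A finite table of rules reads and writes symbols on a tape; the claim under scrutiny is that development is usefully described as something like this:

```python
# Minimal Turing machine: the table maps (state, symbol) -> (write, move, next state).
# This toy machine inverts a binary string, then halts when it reaches the blank '_'.
RULES = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_",  0, "halt"),
}

def run(tape: list[str], state: str = "flip", head: int = 0) -> str:
    while state != "halt":
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape)

print(run(list("0110_")))  # -> "1001_"
```

The sketch also shows what the metaphor demands: a clean separation between rule table, tape of symbols, and reading head, a separation the paragraph above suggests the cell's biochemistry does not obviously provide.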

A problem with living in the Age of Information is that we forget that there are other ways of thinking about the physical world. We do not need to describe phenomena using surreptitious anthropomorphisms like information, communication and coding. Even in the arena where information arises naturally, human communication, we only really understand one component of the process: the transmission side. The processes that occur on either side of the transmission — the setting up of a coding convention, the choosing of an appropriate message, and the transformation of the received message into action — remain largely opaque to us. While we wait for a more complete picture of the formative components of communication, we may benefit from revisiting a simpler and more concrete conception of physical process: mechanical causality.

Instead of attempting to provide a definition of causality, let's just pick a couple of paradigmatic examples. Consider a chain of dominoes. The falling of one domino triggers the fall of the next, and so on. Consider two interlocking gears. The motion of one gear influences the motion of the other. We say that the first domino or gear causes the motion of the second. Such phenomena are intuitively easy to understand, and our understanding doesn't rely on analogies with human communication. We could say that the first domino/gear is sending a signal to the second to move, but this doesn't really add anything to our understanding. Humans could easily rig a system of gears or dominoes to send and receive messages, but the mechanisms on their own aren't communicating. Whatever else information turns out to be, it is at the very least something humans can impute to physical states by prior agreement.

The intuitively simple nature of gear motion may explain the appeal of the Enlightenment-era notion of a clockwork universe. This mechanical metaphor of causality is sometimes ridiculed as over-simplistic, and it admittedly does not encompass submicroscopic quantum weirdness, but for everything at larger scales, it does a reasonable job of capturing a central idea about causality: that it involves processes that are local in some sense. The motion of a classical object can be explained in terms of other phenomena that are either nearby or internal to the object.

You can always translate causal language into information-based language. The motion of dominoes and gears involves the flow of energy, which can always be redescribed as the communication of some signal. But this comes at the cost of having to assert that the whole universe is engaged in signaling, and that information is in some sense a fundamental entity. This idea — which shows up in physics as the 'it from bit' proposal — has a kind of metaphysical appeal, but as a practical concept in biology it doesn't do much work. A conceptual tool that doesn't distinguish between the motion of gears and the exchange of ideas is a rather blunt instrument, and is not going to help us answer many of the questions about organisms that we are interested in.

At some point in the causal web linking molecules to minds, the concept of information becomes a useful way to describe phenomena of interest. But it seems as if that point is very distant from the zone in which genes live. Even the genetic code — the most acceptable use of the information metaphor in molecular biology — might best be thought of using causal ideas. To see why, we need to explore what we mean when we say that information has been encoded with a set of symbols.

In the English alphabet, each letter symbolizes a particular sound or set of sounds. When you are reading aloud, a causal chain in your brain allows you to 'decode' the letters into sounds. But there is no necessary causal connection between the form of the letters and the final sounds you produce. You can change the typeface without changing the end result. Similarly, you can learn a completely new writing system that encodes the same sounds. A human coding system is arbitrary. Each symbol just needs to be something that can be distinguished from a background, and from the other symbols in the system. A smoke signal system can mean various things, depending on the convention adopted. Smoke is caused by fire, but smoke does not necessarily symbolize fire in the communication system you happen to be using. On a more elaborate scale, a string of zeros and ones can mean letters, or numbers, or musical sounds, depending on the decoding mechanism. Symbols are therefore inextricably linked with a coding convention, and conventions are ultimately human inventions. Fire and smoke, by contrast, are chained together by cause rather than convention.
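The point about convention can be shown directly with a small Python illustration (my own example, not drawn from the essay's sources): the very same pair of bytes means different things under different decoding schemes, and nothing in the bytes themselves settles which scheme is the 'right' one.

```python
# The same two bytes, read under three different conventions.
data = bytes([72, 105])

print(data.decode("ascii"))                   # 'Hi'      -- as text
print(int.from_bytes(data, byteorder="big"))  # 18537     -- as one unsigned integer
print(list(data))                             # [72, 105] -- as two small numbers
```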

Some biologists argue that the genetic code really is a convention-like code, because it is arbitrary, just like a human symbolic system. But how arbitrary is it? A given 3-letter string of nucleotides may seem to have an ad hoc relationship with the amino acid it codes for, but this may simply be an impression that comes from not understanding the low level biochemistry in sufficient detail. We may find there is a causal chain that links DNA, RNA and amino acids, rather than some sort of evolutionary happenstance that 'could have been' otherwise. This causal chain may be probabilistic, but it is hard to imagine how natural selection could achieve a truly ad hoc mapping between biomolecules.

If we relax the requirement that a coding scheme be arbitrary, then we can save the notion of biological information. But having done this, information becomes very hard to distinguish from physical causality in general. Many popular accounts of genetics are very explicit about drawing a line between genetic information — viewed as the most important factor in cellular dynamics — and other sources of causal influence, such as the environment, epigenetic processes, and the cellular machinery inherited from the mother's egg cell. Most biologists acknowledge the crucial causal roles of these non-genetic factors in determining the form and behavior of an organism. But by conferring the label of information-bearer only on the DNA molecule, many thinkers create a kind of implicit causal hierarchy, in which the letters of the DNA molecule are the prime movers in cell biology, and the other factors, including the morphology of DNA, are secondary modulators. Some thinkers may even wish to quantify the relative causal importance of DNA versus other factors. One might say that DNA does 90% of the work, for instance. But this misrepresents the nature of complex causal interactions.

Consider a tripod. One might ask about the relative contribution of each of the three legs toward the stability of the tripod. Each leg contributes equally to the tripod's ability to stand: remove any one of them, and the tripod will topple. But it would be meaningless to assert that each leg contributes one third of the stability of the tripod: that might create the false impression that the tripod would be 'two thirds' standing without one of its legs. The tripod is either standing or it isn't. This example is somewhat trivial, but the basic point is easily extended to more complex systems: clocks, cars, computers, and even biological systems. All relevant causal factors act in concert to give rise to the phenomenon. Biological systems are distinct from human-made devices in their ability to create functional redundancies — if one system fails another one may be able to partially compensate — but this is as true of the DNA molecule as it is of epigenetic and environmental factors. More than one gene may provide the same causal influence on a given process.

Genetic information is not an arbitrary code, and its causal role in biology does not seem to be qualitatively distinct from that of non-genetic factors. But biologists who cherish the idea of a special role for genetic information have one more trick up their sleeves: the so-called 'teleosemantic' approach. Teleology is the idea that a phenomenon can be explained in terms of the purpose it serves. This clearly flies in the face of traditional conceptions of causality: the explanation of how a gear influences another gear has little to do with what the gears are intended for. Teleosemantic theories are therefore attempts to incorporate intentionality into scientific discourse.

According to teleosemantic thinking, genetic information is distinct from other sources of information (or causality) by virtue of having been selected. The evolutionary biologist John Maynard Smith, for example, used an explicitly anthropomorphic analogy to justify the claim that genes uniquely symbolize or represent their effects, or are 'supposed to' do certain things. Maynard Smith compares natural selection to the behavior of a computer programmer using a genetic algorithm. The programmer creates a code, and varies some of its parameters. The parameters that work best for the task survive into the next 'generation' of the code. Over time, the computer code becomes 'selected' for the task, and is described as being 'about' the effects the code produces. Similarly, genes that have been naturally selected are 'about' the effects they produce in the organism. In this way 'mindless' evolution is described as giving rise to entities that have intentional or representational meaning. [1]
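Maynard Smith's analogy is easier to assess with a concrete sketch of a genetic algorithm (a deliberately trivial Python illustration of my own, not his actual example). The 'genomes' are bit strings, the 'environment' is a fitness function, and repeated selection plus mutation yields code that is 'selected for' the task:

```python
import random

random.seed(0)

GENOME_LENGTH = 20  # toy task: maximise the number of 1s in the bit string

def fitness(genome):
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

# Start with a random population of bit-string 'genomes'.
population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)] for _ in range(30)]

for generation in range(50):
    # Selection: keep the fitter half, then refill the population with mutated copies.
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

print(fitness(population[0]), "ones out of", GENOME_LENGTH)
```

In this picture the surviving bit patterns are 'about' high fitness only retrospectively, by virtue of their selection history, and that is the same move the teleosemantic account asks us to accept for genes.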

Several philosophers and evolutionary theorists have pointed out problems with the teleosemantic approach, and with adaptationism, which is closely related. Adaptationism is the idea that it is unproblematic to describe a particular trait in an organism as an adaptation tuned by evolution for a particular function. Thus a giraffe's neck is an adaptation for reaching the highest leaves of trees. Adaptationism is most controversial when it is employed in explanations of human behavior. Evolutionary 'just so stories' about our ancestors provide plausible but hard-to-falsify accounts of specific traits. The problem with naive adaptationism is that it assumes an overly simple relationship between gene, organism, and environment. In this picture, the environment poses discrete problems to each organism, and the genes contribute discrete, independent traits that allow the organism to solve these problems. But natural environments cannot be neatly divided into separate problems. Further, genes are not independent entities floating in the genome: they are connected to each other in complex causal networks, and serve a variety of functions in an organism. As one physicist-turned-neuroscientist put it, the only way each gene could map to a distinct trait would be if the genome were an ideal gas.

An example will help us see why thinking about genes as being 'about' specific functions or historical adaptations can be problematic. Imagine an insect species in a particular jungle. It is preyed upon by certain predators, but they are all color-blind. One trait that might conceivably be acted upon by natural selection is heat regulation. The color of the insect influences its heat regulation, and so it settles upon a particular color, say bright green. Since the predators can't distinguish bright green from other colors in the jungle, heat regulation is an independent trait. Now imagine that some geological shift leads to the arrival of a new species of predator: one that can easily see bright green. Now there is a new selective pressure on the insect, so two traits that were previously unrelated — heat regulation and camouflage — are now correlated due to the environment. If there is a gene that causally influences the insect's color, is it 'about' heat regulation, or 'about' camouflage? An adaptationist perspective seems to force us to view traits in terms of specific environmental problems. But any gene or higher level trait can be causally associated with multiple overlapping functions or processes. Every organism needs to be a jack-of-all-trades, to some extent. [2]

But suppose we accept that adaptationist and teleosemantic notions of information are valid. Would this then mean that genes are the unique bearers of teleosemantic information? Perhaps not. The philosopher Paul Griffiths has pointed out that natural selection does not simply act on genes. An organism inherits cellular machinery from the mother's egg cell. Thus it makes sense to think of the entire developmental system — genome, epigenome and cellular machinery — as a potential target for natural selection. The cell membrane, for instance, is 'supposed to' perform certain functions, but it is inherited directly from the egg cell, and, unlike a protein, its structure is not spelled out in the DNA sequence. Even teleosemantic theories cannot make a convincing distinction between genetic and non-genetic sources of information.

Given the problems associated with the metaphors of information, coding and computation, it may be simpler to dispense with these inextricably anthropomorphic concepts and see whether the humble clockwork universe picture suffices. Thinking in causal terms refocuses attention on the complex physical and chemical processes that underlie biological phenomena. Concepts like 'a gene for autism' or 'a mirror neuron system for empathy' become quite hard to defend when you are expected to provide a causal link between low-level cellular processes and high-level behavioral traits.

It is tempting to speculate about why information and coding based metaphors are so popular, particularly with many public intellectuals. Humans have always conceived of themselves using analogies from the technologies of the day. Ancient water technologies informed Greek notions about the soul and the Roman physician Galen's theory of the four humours. Clockwork metaphors were popular during the Enlightenment, and by the time of the industrial revolution, the body and mind were likened to steam-powered factories. In the 20th century computer metaphors proliferated, and now, at the dawn of the 21st century, information-bearing networks are the central signifier. [3]

The ideal application of metaphor is in connecting a poorly understood topic with one that is much better understood. The information perspective is therefore a somewhat unsatisfactory metaphor, because we don't fully understand it even in its 'home base' — the arena of interpersonal communication. So it seems premature to use it as a lens with which to look at genetics or neuroscience.

I suspect that information is not simply the latest fad explanation, however. While it seems to be a new and sophisticated attitude towards the physical world, it actually bears a striking resemblance to very ancient modes of thought. When we reviewed the history of genetics and heredity, we saw that despite early awareness of the roughly equal contributions of mother and father to the form of the offspring, ancient and medieval thinkers settled on a rather lopsided preformationism. Aristotle, building on an idea attributed to Pythagoras, held that a male ordering principle, 'eidos', contributed the essential characteristics of the offspring, while the female contributed the formless raw material, 'catamenia'. 'Eidos' connotes something which is seen by the intellect. The Greek words 'eidos' and 'idea' both stem from an Indo-European root meaning 'to see'. It is not too much of a stretch to think of 'eidos' as a precursor to the modern notion of information: an abstract concept that allows mind to give form to passive matter.

Since both the mother and the father contribute DNA to the offspring, the notion of genetic information is ostensibly less patriarchal than preformationism. Still, the gene-centric view retains a kind of masculine yang flavor: only the DNA carries information, in decisively discrete units. The yin — continuous and somehow indistinct — is provided by the environment, the epigenome, and the cellular machinery. Despite being as causally important as the DNA molecule, these factors come to be seen by the 'genetic preformationists' as little more than formless raw material, or as incidental modulation.

Extending our speculation even further, the privileging of digital genetic information over other forms of causality may also reflect certain age-old prejudices modern society still harbors. Information in the abstract floats aristocratically above mere matter, and seems to be unconcerned with the nuts and bolts of low-level causality. How a gene achieves its function is 'mere detail'. The noble and creative information-bearing gene in its nuclear palace can seem as far removed from the causal labor of the cell as Steve Jobs seems from the workers at the Foxconn factory.

The popularity of information metaphors may be a reflection of the pride of place society often gives to its thinkers, designers, CEOs, and sundry 'creatives'. When these sorts of people speak, things really do seem to get done. Their ideas magically transform into reality. Perhaps it makes sense in such an intellectual climate that the question of how ideas become manifest in the physical world gets swept under the carpet. Very few people want to know how the sausage (or the iPhone) is really made.

Perhaps the average consumer's distaste for laborious detail is understandable when confronted with the cruel complexities of the world economy, but in science this sort of selective attention allows us to get away with telling ourselves comforting but ultimately pointless half-truths. Those of us who are genuinely interested in the transformative power of scientific understanding should resist explanations such as 'the aggression gene encodes aggressiveness' or 'the mirror neuron represents empathy'. We still have no idea how a behavior could possibly be encoded by a gene, or how the firing of a group of neurons could create the subjective feeling of empathy. For the time being, the concept of biological information is largely a way to paper over our ignorance with the help of other concepts that are no less mysterious, but are perhaps more flattering to our intuitions and conceits.

______

Notes and References

This essay represents my synthesis of ideas from several thinkers, but particular credit goes to the philosophers of science Paul E. Griffiths, Sahotra Sarkar, and John S. Wilkins.

[1] Paul Griffiths — Genetic information: A metaphor in search of a theory

[2] This example is a variation on one found in The Dialectical Biologist.

[3] John Daugman — Brain Metaphor and Brain Theory
