“riverrun, past Eve and Adam’s, from swerve of shore to bend of bay, brings us by a commodius vicus of recirculation back to Howth Castle and Environs” – so began James Joyce’s (infamous) Finnegans Wake. That line is but the completion of the book’s last sentence, “A lone a last a loved a long the”. You can, of course, stitch the two halves together in order simply by reading the closing fragment first and the opening line after it.
Joyce was a notorious jokester. One of the jokes embedded in that first and final sentence is a pun on the name of a scholar who straddled the seventeenth and eighteenth centuries, Giambattista Vico. “Vicus” puns on the Latin for village, street, or quarter of a city and Giambattista’s last name. Just why Joyce did that has prompted endless learned commentary, none of which is within the compass of this essay, though, be forewarned, we’ll return to the Wake at the end.
Neither, for that matter, is Vico, not exactly. He had an epistemological principle: “Verum esse ipsum factum,” often abbreviated as verum factum. It meant, “What is true is precisely what is made.” In this he opposed Descartes, who believed that truth is verified through observation. Descartes, with his cogito ergo sum and his mind/matter dualism, is at the headwaters of the main tradition in Western thinking, while Vico went underground but never disappeared, as Finnegans Wake bears witness.
Perhaps this century will see our Viconian legacy eclipse the Cartesian in the study of the mind. With that in mind, let’s consider Grace Lindsay’s excellent Models of the Mind.
What kind of book, you might ask, is it? It could be a highly technical mid-career summary and synthesis, which would certainly be welcome. But no, it’s not that. It could be a textbook for an advanced undergraduate or graduate-level course in neuroscience. It’s not that either. There are few footnotes, but each chapter has a reasonable bibliography at the back of the book. No, Models of the Mind is intended for the sophisticated and educated reader who is interested in how physics, engineering, and mathematics have shaped our understanding of the brain, to reprise the book’s subtitle. And that’s just the right audience if we want to pull off a paradigm change, from Cartesian to Viconian, in how we understand the mind.
It is intended for you, gentle reader.
The mind, so the saying goes, is what the brain does. But without (mathematical) models that saying collapses into a string of words without intellectual substance. What’s a model?
My cousin, Erik Ronnberg, Jr., makes ship models, a craft he learned from his father. Most are models of sailing ships, some are large, some are small, but no one would mistake such a model for a real ship. They are too small and few, if any, would sail, even in a small protected pond. But they must look like a real ship. Thus in researching a project Erik will seek out the plans for the ship he is modeling and make his own plans accordingly.
One of his clients was a marine artist. Erik would build the model and then the artist would pose it – in a shallow box filled with sand, I believe – at an angle appropriate for his painting. The model had to resemble a real ship so closely that a marine artist could use it the way another artist would pose the subject of a portrait, or, for that matter, work from an artist’s model – a real person being posed for study purposes or as a subject in, for example, a historical tableau.
The ship model is like the real thing only in physical appearance – shape primarily, but then color. But what color: when new and freshly painted, or when weathered from years at sea? It is much smaller than a real ship and does not have an interior construction like a real ship. Most models have hulls carved from a single piece of wood, though more elaborate ones have planking over ribs attached to a keel, but even those models do not have interior decks, and so forth.
The mind, alas, is not a physical thing like a ship, though the brain is. And brain processes are not amenable to direct observation in the way we can observe the movements of the planets, or little animalcules wriggling in water. Figuring out what the brain does is like trying to understand how an automobile engine works by listening to engine noises. Yes, there are correlations between those noises and the operation of engine mechanisms, but there are many ways of producing such noises. Which ones are actually used in the engine? Fortunately we can open up the hood and take a look. Heck, we could consult the engineers who designed the engines.
Alas, the engineer or engineers who designed the brain are either beyond our reach or are mere fictions. While we can “open the hood”, as it were – neuroscientists have devoted a lot of time and effort to figuring out various ways of doing that – the processes that most interest us cannot be directly observed. We must infer them indirectly, by building models.
How do you model what the brain does? Here’s what Lindsay says in her opening chapter:
Mathematical models are a way to describe a theory about how a biological system works precisely enough to communicate it to others. If this theory is a good one, the model can also be used to predict the outcomes of future experiments and to synthesise results from the past. And by running these equations on a computer, models provide a ‘virtual laboratory’, a way to quickly and easily plug in different values to see how different scenarios may turn out and even perform ‘experiments’ not yet feasible in the physical world. By working through scenarios and hypotheses digitally this way, models help scientists determine what parts of a system are important to its function and, importantly, which are not.
We have three things: 1) experiments and observation, 2) computer simulations, and 3) mathematics.
We use mathematics both to analyze the data and to specify the design of a simulation. Mathematics thus closes the circuit of inquiry so that the process is one of Viconian making, and not merely Cartesian observing. Though Models of the Mind is very much about mathematics, you don’t need much math to understand it. If you’re hungry for equations, they’re in an appendix, but the main text is relatively free of math, though it does have helpful diagrams and illustrations.
Lindsay also traces the history of the various models and techniques she discusses, often back into the 19th and even 18th centuries. Just what, you’re asking, does she discuss? Why don’t I list the title headings? Boring, I know, but it gives you an idea of the book’s range. And, bonus points, it’s a very Joycean thing to do. Melville would approve as well.
Chapters: 1) Spherical Cows, 2) How Neurons Get Their Spike, 3) Learning to Compute, 4) Making and Maintaining Memories, 5) Excitation and Inhibition, 6) Stages of Sight, 7) Cracking the Neural Code, 8) Movement in Low Dimensions, 9) From Structure to Function, 10) Making Rational Decisions, 11) How Rewards Guide Actions, and 12) Grand Unified Theories of the Brain.
Concerning those grand unified theories, after discussing the tango physicists have danced with their GUTs (Grand Unified Theories) Lindsay discusses Karl Friston’s free energy principle, Jeff Hawkins’ Thousand Brains Theory (which sounds a bit like AI maven Marvin Minsky’s society of mind idea), and the integrated information theory (IIT) of Giulio Tononi. She’s skeptical, noting:
Nervous systems evolved over eons to suit the needs of a series of specific animals in specific locations facing specific challenges. When studying such a product of natural selection, scientists aren’t entitled to simplicity. Biology took whatever route it needed to create functioning organisms, without regard to how understandable any part of them would be. It should be no surprise, then, to find that the brain is a mere hodgepodge of different components and mechanisms. That’s all it needs to be to function. In total, there is no guarantee – and maybe not even any compelling reasons to expect – that the brain can be described by simple laws.
Now let’s examine a few specific cases.
In 1943 Warren McCulloch, one of the Grand Old Men of neuroscience, and Walter Pitts, a young protégé, published a paper with the sobering title, “A logical calculus of the ideas immanent in nervous activity” (we’re in Chapter 3: Learning to Compute). Working from what little was known about the structure and connectivity of neurons, McCulloch and Pitts figured out how they could perform basic logical operations. Lindsay notes:
The radical story that McCulloch and Pitts told with their model – that neurons were performing a logical calculus – was the first attempt to use the principles of computation to turn the mind–body problem into a mind–body connection. Networks of neurons were now imbued with all the power of a formal logical system. […]
With this step in their research, McCulloch and Pitts advanced the study of human thought and, at the same time, kicked it off its throne. The ‘mind’ lost its status as mysterious and ethereal once it was brought down to solid ground – that is, once its grand abilities were reduced to the firing of neurons. To adapt a quote from Lettvin, the brain could now be thought of as ‘a machine, meaty and miraculous, but still a machine’.
She goes on to note that neuroscientists ignored the paper, perhaps because of its technical nature, and perhaps because it didn’t obviously lead to experiments. But the idea was taken up by the pioneers of artificial intelligence.
We now know that the conception McCulloch and Pitts – and, for that matter, pretty much everyone else at the time – had of neurons was way too simple. If we are to think of each neuron as a computing device, then it’s a complex electro-chemical computer rather than a simple logic circuit. But the damage had been done. The good ship Descartes had been hit below the waterline. That the brain is a thinking machine was now both a thinkable and a sensible idea.
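The core of the McCulloch-Pitts idea can be sketched in a few lines of code. This is a toy illustration in modern terms, not their original formalism: a unit sums its binary inputs and fires when the sum reaches a threshold, and the choice of threshold alone turns the same unit into different logic gates.

```python
def mp_neuron(inputs, threshold):
    """A McCulloch-Pitts unit: fire (1) iff enough binary inputs are active."""
    return 1 if sum(inputs) >= threshold else 0

# With two inputs, a threshold of 2 behaves like logical AND,
# and a threshold of 1 behaves like logical OR.
AND = lambda a, b: mp_neuron([a, b], threshold=2)
OR = lambda a, b: mp_neuron([a, b], threshold=1)
```

The punch line is how little machinery is needed: the same fixed unit, wired with different thresholds, realizes different Boolean operators, which is what let networks of such units carry “all the power of a formal logical system.”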
A decade and a half after McCulloch and Pitts broke the mind-matter barrier, Frank Rosenblatt figured out how a machine could learn. He called his device a Perceptron. It was a relatively simple network of artificial neurons implemented in a mass of “switches, plugboards, and gas tubes.” Objects were presented to it on “a 20×20 grid of light sensors”, which were connected to an array of a thousand association units, which were in turn connected to response units. The machine would guess what the object was based on the connections among those association units. If its guess was wrong, the connections from the association units were modified. If its guess was correct, nothing happened. The process continued until the Perceptron stopped making errors.
This procedure for learning was, in many ways, the most remarkable part of the Perceptron. It was the conceptual key that could open all doors. Rather than needing to tell a computer exactly how to solve a problem, you need only show it some examples of that problem solved. This had the potential to revolutionise computing and Rosenblatt was not shy in saying so. He told the New York Times that Perceptrons would ‘be able to recognise people and call out their names’ and ‘to hear speech in one language and instantly translate it to speech or writing in another language’.
Alas, Rosenblatt was ahead of himself, way ahead. It would be years, decades in fact, and many ideas later, not to mention vastly more powerful machines, before computers could do either of those things reasonably well, as they now can, at least for some purposes.
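Rosenblatt’s error-correction procedure can likewise be sketched in miniature. This is a simplified software illustration, not his hardware, and the function names and parameters are my own: when the guess is wrong, nudge each connection in proportion to the error; when it is right, leave everything alone.

```python
def train_perceptron(samples, epochs=20, lr=1.0):
    """Train a single-layer perceptron.

    samples: list of (input_vector, target) pairs, with targets 0 or 1.
    Returns the learned weights and bias.
    """
    n = len(samples[0][0])
    w = [0.0] * n  # one connection weight per input
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            guess = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            error = target - guess  # zero when the guess is correct
            if error:  # wrong guess: adjust the connections
                w = [wi + lr * error * xi for wi, xi in zip(w, x)]
                b += lr * error
    return w, b

# Example: learning logical OR from examples alone, never from a rule.
samples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_perceptron(samples)
```

Notice that the code is never told what OR means; it is only shown solved examples, which is exactly the conceptual shift Lindsay highlights.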
Moreover, as Lindsay notes:
The power of learning, however, came with a price. Letting the system decide its own connectivity effectively divorced these connections from the concept of Boolean operators. The network could learn the connectivity that McCulloch and Pitts had identified as required for ‘and’, ‘or’, etc. But there was no requirement that it does, nor any need to understand the system in this light. […] Compared with the crisp and clear logic of the McCulloch-Pitts networks, the Perceptron was an uninterpretable mess. But it worked. Interpretability was sacrificed for ability.
And that remains true today. The most powerful AI engines can do remarkable things, but just how they do them, that is something of a mystery. In that respect they are dismayingly like the operations of the human brain. While it has long been a cliché that computers only do what they’re programmed to do, that is not true of these learning engines. We program them to learn, which they do. After that, though, they seem to have an agency, however minimal, of their own.
Let’s consider one more example. One of Pitts’ colleagues in McCulloch’s laboratory was a man named Jerome Lettvin (we’re in Chapter 6: Stages of Sight). He investigated the response properties of cells in a frog’s retina.
In fact, in a 1959 paper ‘What the frog’s eye tells the frog’s brain’ he and his co-authors describe four different types of ganglion cells that each responded to a different simple pattern. Some responded to swift large movements, others to when light turned to dark and still others to curved objects that jittered about. These different categories of responses proved that the ganglion cells were specifically built to detect different elementary patterns. Not only did these findings align with Selfridge’s notions of low-level feature detectors, but they also supported the idea that these features are specific to the type of objects the system needs to detect.
That paper quickly became a classic in the neuroscience of vision. A bit over twenty years later a British neuroscientist, Nicholas Humphrey, published a paper entitled ‘What the Frog’s Eye Tells the Monkey’s Brain’. He had been doing experiments in which he destroyed the visual cortex – a ‘higher’ visual center – of monkeys’ brains and discovered that the monkeys retained some visual ability. When present in humans, this phenomenon came to be called blindsight.
How could that be? The mammalian cerebral cortex is a structure that’s lacking in amphibians and reptiles. But the mammalian visual system is not entirely located in the cortex. It has subcortical structures that correspond with structures in the frog’s brain. Humphrey hypothesized that those structures remain active in these monkeys thus affording them some measure of visual ability.
Amphibians are evolutionarily older than mammals. The evolutionary process doesn’t necessarily discard structures when new ones develop. The old ones remain, perhaps serving a different function. Thus, we return to a passage we’ve already quoted, one where Lindsay notes, “the brain is a … hodgepodge of different components and mechanisms.”
That recapitulation, that recursus, in turn suggests another, one that will take us back to where we began, to the opening sentence of James Joyce’s Finnegans Wake. A couple of years ago Adam Roberts, a British jokester, punster, and writer of speculative fiction, decided that Finnegans Wake just had to be translated into Latin. It makes a kind of weird sense, doesn’t it? We take a book that no one reads, though they may claim to do so, and translate it into a language that is no longer spoken, Latin. The result: Pervigilium Finneganis: “Finnegans Wake” translated into Latin.
As Roberts explains in his helpful introduction, though he has some knowledge of Latin, he doesn’t have enough to undertake such a project. What does he do? He turns to Google Translate. Google Translate is the fruit, one of many, of Rosenblatt’s Perceptron. Roberts had to clean things up here and there before feeding Joyce’s text to the machine 5000 characters at a time, and he did some post-editing as well. It was tedious and time-consuming, but not so tedious and time-consuming as doing a full-fledged human translation would have been.
So, let us end this essay where we began, only in Latin: “flumenflue, transitum Eva et Adae, declinationem ab litore ad flectere lauri, commodius ab nobis facit Houuthi vicus de castro et recirculus ad circumstant.”
Thank you, Adam. And thank you, Grace Lindsay.
Nicholas Humphrey, ‘What the Frog’s Eye Tells the Monkey’s Brain’, Brain, Behavior and Evolution 3(1): 324–337, February 1970. DOI: 10.1159/000125480.
Adam Roberts, Pervigilium Finneganis: “Finnegans Wake” translated into Latin, Ancaster Books, Kindle Edition.
Note: You may wish to listen to my 3QD colleague Ashutosh Jogalekar interview Lindsay.