The Next 100 Years in the Human Sciences, a Reply to Frank Wilczek’s Remarks about Physics

by Bill Benzon

Frank Wilczek, theoretical physicist and Nobel Laureate at MIT, has recently published his speculation on what physics will yield over the next 100 years [1]. It’s an interesting and provocative read, if a bit obscure to me (I never studied physics beyond a mediocre high school program). And, of course, I had little choice but to wonder:

What about the human sciences in the next 100 years?

My initial reaction to that one (with a nod to Buster Keaton): Damfino!

But then I actually began to think about it and things got interesting, in part because some of Wilczek’s speculations about physics have implications for the human sciences.

I begin with a failed prognostication of my own from four decades ago. Then I move on to Wilczek’s central theme, unification, and conclude with some observations about memory and quantum computing.

Computing, the Prospero Project, and Cultural Singularity

Back in 1976 David Hays and I published a review of the then current computational linguistics literature for Computers and the Humanities [2]. At the time Hays was a senior scholar in the Linguistics Department with a distinguished career going back to his early days at the RAND Corporation, where he led their work in machine translation. I was a graduate student in English literature and a member of Hays’s research group.

Once we’d finished with the research roundups standard in such papers we indulged in a fantasy we called Prospero (p. 271): “a system with a semantics so rich that it can read all of Shakespeare and help in investigating the processes and structures that comprise poetic knowledge. We desire, in short, to reconstruct Shakespeare the poet in a computer.” We then went on to specify, in a schematic way, what would go into Prospero and what one might do with Prospero as a research tool.

We did not offer a delivery date for this marvel, specifying only a “remote future” (p. 273). That, I’m sure, was Hays’s doing; he was too experienced in such matters to speculate on due dates and told me so on more than one occasion. I’m quite sure that, in my own mind, I figured that Prospero might be ready for use in 20 years, certainly within my lifetime. Twenty years from 1976 would have been 1996, but nothing like Prospero existed at that time, nor was it on the visible horizon. Now, almost two decades after that, we still have no Prospero-like computational systems nor any likely prospects for building one.

I’m not going to put Prospero down as something likely to happen within the next 100 years, that is, by 2115. Nor, for that matter, am I going to assert that a Prospero class system will not happen by 2115. I just don’t know.

I will offer, however, that the existence of a Prospero-class system is likely to present ethical problems of the sort attending research on human and, increasingly, animal subjects. The general idea would be to set Prospero some task – read Hamlet, write an essay on Much Ado About Nothing, etc. – and then open her up and examine what she did in the process of performing that task. Would Prospero have to give us permission to do so? You can play with such possibilities all you will, in the manner of inventing books to be found in Borges’ “Library of Babel”, but that’s a side issue.

The issue: what has happened such that I’m now unwilling to speculate about Prospero’s likelihood? Well, I’m no longer in my late 20s and I’ve learned a thing or two since then. More significantly, WE, the intellectual community, have learned a lot since then, and things have gotten more complex. The more we know about this new intellectual world, the stranger and larger it gets.

That last feature – that increasing knowledge also increases our sense of the unknowns running around out there (the unknown unknowns in an infamous recent formulation) – suggests we are passing through a cultural singularity. As far as I know, the term “singularity” was first used in this sense by Stanislaw Ulam in 1958 in a summary of the intellectual achievements of John von Neumann [3]. He’d had many conversations with Johnny (as von Neumann was known to his associates):

One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.

These days, when used in reference to human affairs the term “singularity” most often designates an event in which, through some means or another, computers equal and then quickly surpass us in “intelligence” – whatever that is. Once that happens, so the speculation goes, all bets are off and we have no way of predicting what will happen as the computers will be way ahead of us and begin working their devious ways with us.

I prefer Ulam’s somewhat more modest usage in which a historical singularity is a regime which so changes human affairs that they cannot continue on their old course [4]. And I submit that the advent of the programmable digital computer has already sent us headlong into a cultural singularity. It is as though one undertakes a long and perilous ocean voyage in search of the Indies. Explorations undertaken after reaching landfall reveal that these Indies are not at all what one expected. Perhaps we’ve landed somewhere else, somewhere deeply unknown? All bets are off.

That’s where computing has landed us, though we try valiantly to pretend we’re but living out the mid-20th Century hopes and predictions of Vannevar Bush (“As We May Think”), Walt Disney (the many Tomorrowland episodes of his TV show), and many others. We take comfort in the steady increase in computing power captured in the observation known as Moore’s Law, but we don’t in any deep sense know where computing technology is headed. They promised us a jet pack but delivered the Internet.

It is in this spirit that I offer a response to Wilczek that a commenter, dheera, offered in a discussion at Hacker News:

My speculation for the next 100 years, that Wilczek does not propose, is that a notation revolution will happen.

The standard way of representing mathematics today is tedious to write and extremely non-intuitive in many ways. Unfortunately alternative representation systems of physical processes (e.g. block diagrams, Feynman diagrams, etc.) don't yet provide a way for the user to operate on higher-level objects while staying on an abstracted level. One has to tear open all the black boxes, rewrite them as integral/sigma/matrix/bra/ket soup before they can be operated upon.

Most of the time when I read a physics paper, even in my own field of research, I spend about 95% of my time and brain power parsing and 5% of the time understanding. This should be reversed.

Perhaps this seems like a peripheral matter – a mere matter of notation. But I’m not so sure.

From a certain point of view the Arabic notation for arithmetic is merely an alternative to the Roman notation. But the Roman notation survives today only as a historical curiosity; it is no longer used as a practical tool for calculation. In that respect it was completely eclipsed by the Arabic notation centuries ago. The Arabic notation allows one to represent any numerical quantity using a small set of symbols. That is not true of the Roman notation, which becomes intractable for large values. The Arabic notation is thus conducive to the formulation of explicit and replicable computational procedures – algorithms – while the Roman notation is not, requiring the invention of ad hoc procedures for new problems.
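The point about algorithms can be made concrete with a toy sketch (my own illustration, not anything from the original discussion). The grade-school addition procedure works uniformly, column by column, on decimal digit strings of any length; it is exactly the kind of explicit, replicable procedure that positional notation makes possible and that Roman numerals, lacking place value, do not support.

```python
# Toy illustration: positional (Arabic) notation supports one uniform,
# digit-by-digit addition algorithm that works for numbers of any size.
def add_positional(a: str, b: str) -> str:
    """Add two non-negative integers given as decimal digit strings."""
    # Pad both strings to equal length so the columns line up.
    a, b = a.zfill(len(b)), b.zfill(len(a))
    result, carry = [], 0
    # Walk both numbers right to left, one digit column at a time.
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry
        result.append(str(total % 10))   # digit for this column
        carry = total // 10              # carry into the next column
    if carry:
        result.append(str(carry))
    return "".join(reversed(result))

print(add_positional("478", "196"))  # → 674
```

The same loop handles three digits or three thousand; there is no analogous uniform procedure for CDLXXVIII + CXCVI, which is the force of the “ad hoc procedures” point above.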

Seen in this light it is obvious that the scientific revolution in the West would have been impossible without the prior adoption of the Arabic notation. The very fruitful metaphor of the clockwork universe would have likewise been impossible. Obviously there was more going on than the arithmetic notation, but the adoption of a more perspicuous notation was important [5]. Will a notation revolution be important in the future of physics? If so, will it support the changes Wilczek envisions, or will it transform the imagination in such a way that physicists find themselves looking at a new agenda?

Unification, Consilience, and Evolution as a Theme

Wilczek sees seven unifications in the next 100 years. But he begins by posing unification as a question, noting that at a public lecture he was asked (p. 2): “Why do physicists care so much about unification? Isn’t it sufficient just to understand things?” I’m certainly glad the question was posed.

Before going on to detail his hypothesized unifications, Wilczek acknowledges (p. 2):

It’s a profound question, actually. It is not at all self-evident, and may not be true, that going after unification is an efficient way to advance knowledge, let alone to address social needs or to promote human welfare.

He then proceeds to point out that unification has been important in the past and that, in any event, unification is pleasing (“esthetic” is the word he uses).

For as long as I can recall there have been calls for and attempts at ‘unification’ in the human sciences. That was a prime motivation behind the (in)famous structuralism conference that took place at Johns Hopkins back in 1966 (when I was an undergraduate there) and it’s behind calls for ‘consilience’ that have been circulating here and there over the last two decades at the instigation of E. O. Wilson, the evolutionary biologist. But these calls seem different in kind from the unifications Wilczek cites in physics’ past.

Thus he mentions the “unification of celestial and terrestrial physics (Galileo, Newton)” and the “Unification of electricity, magnetism, and optics (Maxwell)” (p. 2) to cite two of his examples. Those are fairly specific, if large, bodies of knowledge, with specific conceptual objects and mathematical forms. Talk of unification among the social sciences or between the humanities and the sciences seems a bit more modest.

While I am trained as a humanist, you’ll find that over the years I’ve also called on work in cognitive and neuroscience, behavioral biology, anthropology, sociology, and even bits of computer science and physics here and there. But I can’t say that unification was ever a goal of mine. I’ve just been trying to figure out how things work. In doing so it is helpful if there is a measure of conceptual commensurability across different disciplines. Is commensurability the same as unification?

Regardless of just what these unification calls may imply, Wilczek believes that physics has something for us. The seventh of his projected unifications is that of mind and matter (p. 19):

Although many details remain to be elucidated, it seems fair to say that metabolism and reproduction, two of the most characteristic features of life, are now broadly understood at the molecular level, as physical processes. Francis Crick’s “Astonishing Hypothesis” is that it will be possible to bring understanding of basic psychology, including biological cognitive processing, memory, motivation, and emotion to a comparable level. […] And if physics evolves to describe matter in terms of information, as we discussed earlier, a circle of ideas will have closed. Mind will have become more matter-like, and matter will have become more mind-like.

This seems right to me.

Just how that works out, of course, remains to be seen. It would be nice, for example, if this unification could come to the aid of linguistics. Why linguistics? Language pervades human life for one thing. For another, the Chomskian revolution in linguistics was central to the cognitive revolution of the 1960s and 1970s.

But, alas, linguistics is itself far from being internally unified much less unified with anything else – though psycholinguistics has proven fruitful in the past. By way of comparison all biologists hold evolution in common though the details are much in dispute. But there is no comparable theoretical framework that unites linguists. As Peter Hagoort, director of the Max Planck Institute for Psycholinguistics, recently told the 47th annual meeting of the European Linguistics Society [6]:

The field of linguistics as a whole has become internally oriented, partly due to the wars between different linguistic schools. With exceptions, linguists have turned their backs to the developments in cognitive (neuro)science, and alienated themselves from what is going on in adjacent fields of research. The huge walls around the different linguistic schools have prevented the creation of a common body of knowledge that the outside world can recognize as the shared space of problems and insights of the field of linguistics as a whole.

What will it take to create that “common body of knowledge”? Will that come in the wake of Wilczek’s postulated unification of mind and matter or will it precede it?

Finally, I note that evolution is a major theme in the calls for unification of the human sciences. There is, of course, biological evolution, but the past two or three decades have seen a somewhat diffuse interest in cultural evolution as well. Moreover humanists have, in the last decade or two, ramped up their use of statistical methods for the analysis of large bodies of texts that have recently become available. While they have, for the most part, steered clear of evolutionary formulations, that abstemiousness will disappear as it becomes clear that the population thinking of evolutionary biology is well-suited to thinking about populations of texts being read by populations of people. While it remains to be seen whether or not we will one day be able to say, to paraphrase Theodosius Dobzhansky, “Nothing in the human sciences makes sense except in the light of evolution”, I think the prospects are good [7].

Quantum Computing and States of Mind

Once he’s gone through the unifications, Wilczek moves on to making things. That is, from proposing new and more capacious modes of understanding, Wilczek goes on to propose new things that we’ll be able to make. Under this rubric he discusses quantum computers (p. 23):

Quantum computers supporting thousands of qubits will become real and useful.

Artificial intelligence, in general, offers strange new possibilities for the life of mind. An entity capable of accurately recording its state could purposefully enter loops, to re-live especially enjoyable episodes, for example. Quantum artificial [intelligence] opens up possibilities for qualitatively new forms of consciousness. A quantum mind could experience a superposition of “mutually contradictory” states, or allow different parts of its wave function to explore vastly different scenarios in parallel. Being based on reversible computation, such a mind could revisit the past at will, and could be equipped to superpose past and present.

I must confess that I don’t quite understand what he’s talking about. Oh, sure, I don’t understand quantum mechanics, much less qubits.

But that’s not what I’m talking about. Doesn’t the human brain more or less already re-live the past? The nervous system is a vast collection of neurons and synapses. As one lives a life that system evolves through a trajectory of states that is coupled to the external world through perception and action. When one recollects something, aren’t we (just) reactivating a past trajectory of states?

Oh yes! I know that I’m now tap-dancing and hand-waving to beat the band. But hear me out.

Perhaps we don’t reactivate the complete neural trajectory, perhaps only a compressed simulacrum. But we ‘run’ that simulacrum on the same neurons that registered the original trajectory. Moreover, when we create rituals and works of verbal, visual, and auditory art, we are deliberately fashioning “re-livable enjoyable episodes”.
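To make the hand-waving slightly more concrete, here is a toy sketch in code; it is entirely my own illustration, the names and dynamics are invented, and nothing in it is a model of real neurons. The idea is simply this: a system with one fixed update rule records a trajectory of states, keeps only a sparse “compressed simulacrum”, and later replays that simulacrum through the very same rule.

```python
import random

def update(state: float, stimulus: float) -> float:
    # One fixed dynamical rule, standing in for the unchanging neural substrate.
    return 0.9 * state + 0.1 * stimulus

def live(stimuli):
    """Original experience: the full trajectory of states."""
    state, trajectory = 0.0, []
    for s in stimuli:
        state = update(state, s)
        trajectory.append(state)
    return trajectory

def relive(waypoints, steps_between):
    """Recollection: replay from sparse waypoints through the SAME rule."""
    replay = []
    for w in waypoints:
        state = w
        for _ in range(steps_between):
            state = update(state, w)  # the waypoint stands in for lost stimuli
            replay.append(state)
    return replay

stimuli = [random.random() for _ in range(100)]
full = live(stimuli)          # the lived trajectory
memory = full[::10]           # compressed simulacrum: every 10th state
echo = relive(memory, 10)     # re-run on the same "neurons" (same rule)
print(len(full), len(memory), len(echo))  # 100 10 100
```

The replay is the same length as the original but reconstructed from a tenth of the information, run through the identical update rule; that is the sense in which a compressed simulacrum might be ‘run’ on the same substrate that registered the original.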

There are neuroscientists who think of the nervous system as a complex dynamical system having a vast number of states. Berkeley’s Walter Freeman is one. I mention him because I made use of his work in writing my book on music, Beethoven’s Anvil. In that book I argued that rhythm is the fundamental musical phenomenon, and rhythm is about repeating states. It’s one thing to talk of a simple isochronous beat. That’s not much, but it’s a start. What of a Beethoven symphony? Is it not a way of creating a sonic object to which we can entrain our nervous system, time and again, so that we re-experience the same “enjoyable episode”?

The late Leonard Bernstein talked of how, when he became absorbed in a composition, while conducting it for example, it was as though he ‘became’ the composer [8]. We need not imagine that, when conducting Mahler or Beethoven or Mozart, Bernstein traveled back in time. We need only imagine that a certain ensemble of neural states is so deeply entwined with musical sound that the ensemble can be re-created, re-lived, in any brain that immerses itself in it [9].

My point is simply that, in a very general way, Wilczek is talking about the quantum computer of the future in terms rather like those in which one can now talk about the human brain and its experience (recall his 7th unification, that of mind and matter). When Wilczek talks of an “entity capable of accurately recording its state” I’d assume the entity he has in mind is the quantum computer of the future (QCF). It’s this QCF that experiences enjoyment and re-lives experiences, no? Is the QCF alive, that it experiences enjoyment? Is that the speculation on offer?

What about Prospero? Would a quantum computer have the power needed to realize a useful simulation of Shakespeare? Perhaps, though it’s clear that simple computing capacity is not enough for the job. In the old days researchers would painstakingly hand-code knowledge into computational form; that’s what Hays and I had in mind when we envisioned Prospero. These days statistical learning techniques have come to dominate artificial intelligence and computational linguistics.

Just how would a quantum computer learn to read Shakespeare, or watch performances? My intellectual experience since the Prospero days in the mid-1970s tells me that minds are self-constructed from “the inside”. How would a quantum computer learn to be humanlike? Would we have to supply it with a (simulated) body that it could take out into the world? Would it have to somehow grow from infancy into adulthood?

I don’t know the answers to those questions. On the other hand, there’s no reason a quantum computer couldn’t learn the Internet from the inside. That, after all, would be its native territory, wouldn’t it?


1. Frank Wilczek. Physics in 100 Years. MIT-CTP-4654, URL =

2. William Benzon and David G. Hays. Computational Linguistics and the Humanist. Computers and the Humanities 10: 265-274, 1976. URL =

3. Stanislaw Ulam. Tribute to John von Neumann, 1903-1957. Bulletin of the American Mathematical Society, Vol. 64, No. 3, May 1958, pp. 1-49, URL =

4. I have already discussed this sense of singularity in a post on 3 Quarks Daily: Redefining the Coming Singularity – It’s not what you think, URL =

5. David Hays and I discuss this in a paper where we set forth a number of such far-reaching singularities in cultural evolution: William Benzon and David G. Hays. The Evolution of Cognition. Journal of Social and Biological Structures 13(4): 297-320, 1990, URL =

6. Neurobiology of Language – Peter Hagoort on the future of linguistics, URL =

7. See, for example: Alex Mesoudi, Cultural Evolution: How Darwinian Theory Can Explain Human Culture & Synthesize the Social Sciences, Chicago: University of Chicago Press, 2011.

Lewens, Tim, “Cultural Evolution”, The Stanford Encyclopedia of Philosophy (Spring 2013 Edition), Edward N. Zalta (ed.), URL =

Cultural evolution is a major interest of mine.

Here’s a collection of publications and working papers, URL =

8. Helen Epstein. Music Talks: Conversations with Musicians. McGraw-Hill Book Company, 1987, p. 52.

9. I discuss these ideas in more detail in Beethoven’s Anvil, Basic Books, 2001, pp. 47-68, 192-193, 206-210, 219-221, and in

The Magic of the Bell: How Networks of Social Actors Create Cultural Beings, Working Paper, 2015, URL =

On the possibility of superposed mental states, you might look at my Ayahuasca Variations, Human Nature Review 3 (2003) 239-251, URL =