by Nick Werle
The most striking aspect of Isaac Asimov’s Foundation is the pacing of its narrative. The story, which tracks the fall of the Galactic Empire into what threatens to be a 30,000-year dark age, never follows its characters for more than a few chapters. The narrative unfolds at a historical pace, a timescale beyond the range of normal human experience. While several short sections might follow one another with only hours in between, gaps of 50 or 100 years are common. The result is a narrative in which characters are never more than bit players; the book’s real focus is on the historical forces responsible for the rise and fall of planets. The thread holding this tale together is the utopian science of psychohistory, which combines psychology, sociology, and statistics to calculate the probability that society as a whole will follow some given path in the future. The novel’s action follows the responses to a psychohistorical prediction of the Empire’s fall made by Hari Seldon, the inventor of the science, who argued by means of equations that the dark ages could be reduced to only a single millennium with the right series of choices. In comparing the science of psychohistory and the actual events that accompany the Galactic Empire’s fall, Asimov’s time-dilated narrative weaves together disparate theories of history and science articulated around the problem of predicting the future, the historical primacy of crises, and the irreducible difference between studying an individual and analyzing a society as a whole. In Asimov’s imagined science, however, we can trace the real logic of macroeconomics and begin to understand why Keynes could never produce such dramatic predictions.
The goal of Asimov’s psychohistory is always the prediction of future events, but these prognostications are different from the usual fictional presentiments in that they cannot determine exactly what will happen. It is a probabilistic science. Of course, this leaves open the possibility of psychohistorians trying to guide society toward the best possible future, as long as the population at large doesn’t know the predictions’ details. But more importantly, psychohistory’s probabilistic nature limits the scale on which its predictions are useful:
“Psychohistory dealt not with man, but with man-masses. It was the science of mobs; mobs in their billions. It could forecast reactions to stimuli with something of the accuracy that a lesser science could bring to the forecast of a rebound of a billiard ball. The reaction of one man could be forecast by no known mathematics; the reaction of a billion is something else again.”
Like all statistical tools, it relies on the law of large numbers as a way to get past the intrinsic randomness one faces in anticipating human behavior. Practically, this means that psychohistory can only predict events for a population at large; it has nothing to say about any particular individuals.
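The law of large numbers at work here can be sketched in a few lines of Python (a toy illustration, not anything from Asimov or Keynes): each simulated "citizen" makes an unpredictable binary choice, yet the population-wide fraction converges tightly as the crowd grows.

```python
import random

def population_fraction(n, p=0.5, seed=0):
    """Simulate n independent binary choices, each made with probability p,
    and return the fraction of the population choosing 'yes'."""
    rng = random.Random(seed)
    return sum(rng.random() < p for _ in range(n)) / n

# Any single choice is a coin flip, but the aggregate tightens as n grows:
for n in (10, 1_000, 1_000_000):
    print(n, population_fraction(n, p=0.3))
```

No mathematics predicts the tenth citizen's choice, but the million-citizen fraction lands within a fraction of a percent of 0.3 almost surely, which is exactly the asymmetry Seldon's "science of mobs" exploits.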
Despite Foundation’s seemingly fantastic premise – the ability to know what will happen tens of thousands of years into the future – Asimov’s focus on this fundamental limit to psychohistory’s predictive power keeps the story firmly in the realm of science fiction. Psychohistory is, in a sense, an idealized form of macroeconomics, insofar as economists aim to predict and plan for the best possible future for society. However, the theoretical connection between these two “sciences” is more profound. John Maynard Keynes’ essential insight, which forms the epistemological core of macroeconomics, is the discontinuity between mathematical descriptions of the large and the small, the society and the individual. Indeed, all statistical sciences – genetics, epidemiology, and quantum mechanics, for example – face this same intrinsic limitation. While we have worked out many statistical laws identifying the genes responsible for congenital disease, the behaviors that raise the risk of spreading an infectious disease, and how electrons flow through superconductors, none of these sciences can say with any certainty whether a genotype will manifest during a person’s life, whether a patient will contract a disease, or where an electron will be found at a given time.
This shared epistemological limit is not a coincidence. Keynes’ 1936 General Theory of Employment, Interest, and Money sought to explain why the Depression-era economy seemed incapable of providing jobs to the millions of unemployed workers, who were starving and clearly willing to work for any wage. Unlike his predecessors, Keynes approached this problem by analyzing the collective behavior of the laboring masses rather than the imagined bargaining strategies of a single employer hiring a single worker. My research into his mathematical work, published more than a decade before the General Theory, suggests that Keynes modeled macroeconomics, his new statistical theory of the economy-at-large, on thermodynamics, the modern explanation for the behavior of bulk matter. Since Keynesian macroeconomics starts on the aggregate level, rather than from a theory of a rational actor’s decision-making like Ricardian classical economics or neoclassical rational expectations theory, Keynes faced the challenge of linking the dynamics of the national economy to the psychology of the individuals that compose it. This is precisely the theoretical problem Keynes solved with the statistical methods of thermodynamics and statistical mechanics.
Thermodynamics describes how a quantity of gas changes as a whole, or how a chemical reaction proceeds through intermediate equilibria, by measuring temperature, pressure, and volume. It achieves this, however, without assuming anything at all about the small-scale structure of matter. Indeed, thermodynamics was developed before belief in the reality of atoms was widespread among physicists, and Einstein’s subsequent proof of their existence did nothing to change the science. Thus, the equations of thermodynamics give no information about the motions of individual gas molecules. An early attempt to bridge this gap, Bernoulli’s kinetic theory, assumed that a given combination of temperature, volume, and pressure determined that all the gas molecules were bouncing around the container at a uniform speed. Later, however, James Clerk Maxwell and Ludwig Boltzmann showed that this was impossible. Instead, they argued that a gas in a certain thermodynamic state is composed of an ensemble of molecules all moving in fundamentally random directions and at random speeds. In other words, even if one knows the exact thermodynamic state of a gas, it is impossible to determine the motion of an individual molecule, and vice versa. Statistical mechanics is the link between these two levels; it characterizes the probabilistic distributions of molecular velocities that correspond to gases in thermodynamic equilibria. The two fields are joined by one of science’s slipperiest terms, entropy, which measures the microscopic uncertainty intrinsic to any known macroscopic situation.
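This Maxwell-Boltzmann link between the macroscopic state and the molecular ensemble can be sketched numerically (a toy illustration under textbook assumptions, not part of the essay's argument): sampling each velocity component from a Gaussian with variance k_B·T/m gives individual speeds that vary wildly, while the ensemble's mean kinetic energy reproduces the thermodynamic value (3/2)·k_B·T.

```python
import math
import random

K_B = 1.380649e-23  # Boltzmann constant, J/K

def sample_speeds(n, temp, mass, seed=0):
    """Sample n molecular speeds from the Maxwell-Boltzmann distribution:
    each of the three velocity components is Gaussian with variance k_B*T/m."""
    rng = random.Random(seed)
    sigma = math.sqrt(K_B * temp / mass)
    return [math.sqrt(sum(rng.gauss(0.0, sigma) ** 2 for _ in range(3)))
            for _ in range(n)]

# Nitrogen at room temperature: any one molecule's speed is unpredictable,
# but the ensemble's mean kinetic energy matches the thermodynamic (3/2) k_B T.
m_n2 = 4.65e-26  # mass of an N2 molecule, kg
speeds = sample_speeds(100_000, temp=300.0, mass=m_n2)
mean_ke = sum(0.5 * m_n2 * v ** 2 for v in speeds) / len(speeds)
print(mean_ke, 1.5 * K_B * 300.0)
```

Knowing the temperature pins down only the distribution, never the molecule, which is the "vice versa" impossibility the paragraph describes.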
In trying to work out the individual’s relationship to the macroeconomic aggregate, Keynes faced a similar mereological problem. Macroeconomics uses aggregate measurements – e.g. unemployment, GDP, and inflation – to describe the health of the economy overall and predict its future dynamics. Yet it is unclear how individual experience links up with such large-scale concepts. Whereas classical theory assumes that an economy of 300 million “makes decisions” identically to the total of 300 million individual choices, macroeconomics is premised on the idea that the aggregate economy acts as an “organic whole,” akin to a thermodynamic gas and irreducible to the arithmetic sum of rational individuals. Complex feedback loops, which might make one person’s rising costs become another’s disposable income and yet a third’s sales revenue, multiply the effects of investment, consumption, and staffing decisions through the economy. Furthermore, people don’t think and act identically; the same background macroeconomic “facts” incite various people to form drastically different, even contradictory, expectations about the future. A direct connection between aggregates and individuals is even further frustrated by the importance of the income distribution: An economy can experience growth – as measured by a rising real GDP – even as the majority of the actual people living in it lose purchasing power. Indeed, this has been the reality for the past several decades, as income inequality in America has exploded. Finally, the prevalence of bank runs, stock market crashes, and asset bubbles attests to the destructive power of rational irrationality, cases in which many individuals’ personally rational decisions (to withdraw their deposits from a shaky bank) sum to produce a collectively insane result (a bank failure).
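That last point, rational irrationality, can be made concrete with a threshold-cascade sketch in the spirit of Granovetter's riot model (my toy model, not Keynes's): each depositor withdraws once the fraction already withdrawing exceeds their personal panic threshold, and a handful of nervous depositors can tip an otherwise calm bank into total failure.

```python
def bank_run(thresholds):
    """Each depositor withdraws once the fraction already withdrawing
    reaches their personal panic threshold. Iterate to a fixed point and
    return the final fraction of depositors withdrawing."""
    n = len(thresholds)
    withdrawing = 0
    while True:
        new = sum(1 for t in thresholds if withdrawing / n >= t)
        if new == withdrawing:
            return withdrawing / n
        withdrawing = new

# Five zero-threshold depositors panic unprompted and drag everyone along:
nervous = [0.0] * 5 + [i / 100 for i in range(1, 96)]
print(bank_run(nervous))  # → 1.0 (total bank failure)

# The identical population minus the unprompted panickers stays put:
calm = [i / 100 for i in range(1, 101)]
print(bank_run(calm))     # → 0.0 (no run)
```

Every individual withdrawal in the cascade is rational given what that depositor sees; the collectively insane outcome lives in the interaction, not in any one decision, which is why no theory of a single rational actor can capture it.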
These blockages, which prevent a simple micro-macro relationship in economics, posed a mathematical as well as a philosophical problem for Keynes. He was able to solve both aspects of this difficulty by importing the probabilistic approach that had joined the macro-scale science of thermodynamics to the particulate-level explanation of statistical mechanics. This solution is crystallized in two rate-determining macroeconomic quantities – the Marginal Propensity to Consume and the inducement to invest – that connect the aggregate dynamics of an economy to the psychologies and behaviors of individuals living within it. (For the details of this argument, please check out my recent article in the Journal of Philosophical Economics.) Following statistical mechanics, Keynesian macroeconomics forswears assumptions of strict rationality, even though the resulting uniformity would make the necessary mathematics more tractable.
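The aggregate leverage of the Marginal Propensity to Consume can be seen in the standard Keynesian multiplier, 1/(1 − MPC): each round of spending hands a fraction MPC of its receipts on to the next recipient, and the geometric series of rounds sums to far more than the initial injection. A minimal sketch (function and variable names are mine, purely illustrative):

```python
def multiplier_rounds(mpc, injection, rounds=200):
    """Trace an initial spending injection through successive rounds of
    consumption: each recipient spends the fraction `mpc` of what they receive.
    Returns the total spending generated across all rounds."""
    total, spend = 0.0, injection
    for _ in range(rounds):
        total += spend
        spend *= mpc
    return total

mpc = 0.8
print(multiplier_rounds(mpc, 100.0))  # approaches the closed form below
print(100.0 / (1 - mpc))              # Keynesian multiplier: 100 / (1 - 0.8) = 500
```

The point of the sketch is the statistical character of the parameter: MPC is a property of the population's aggregate behavior, like a temperature, not a quantity read off any single person's psychology.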
Although these different theoretical relationships between economic parts and wholes might seem esoteric and hopelessly abstract, they have real, material stakes when incorporated into experts’ policy analyses. Consider the case of today’s stubbornly high unemployment. If the neoclassical theorists are right, and the macroeconomy is nothing more complex than millions of individual decisions, then the main problems are likely inflated wages and labor market friction. Their prescriptions: wage cuts, union givebacks, and labor market “reforms” that make it easier for employers to fire and hire workers. However, if the Keynesian analysis is correct and the macroeconomy is more than a simple sum of its parts, the problem is more structural. In this view, the economy is not operating at a level sufficient to utilize all of its unemployed factors of production, and so the government ought to stimulate aggregate demand with expansionary fiscal and monetary policy. This is the debate going on today in capitals across the world.
Asimov’s psychohistory is useful because it dramatizes the problems statistical sciences have in simultaneously predicting the future on both aggregate and particulate levels. His vision in Foundation of a utopian social science was deeper than a mere desire for large-scale fortune telling; he followed Keynes in extending the logic of modern physics:
“Because even Seldon’s advanced psychology was limited, it could not handle too many independent variables. He couldn’t work with individuals over any length of time, any more than you could apply the kinetic theory of gases to single molecules. He worked with mobs, populations of whole planets, and only blind mobs who do not possess foreknowledge of the results of their own actions.”
Indeed, it’s probably this final restriction that prevents macroeconomics from achieving the predictive power of psychohistory. Insofar as real people always strive to know the equations and understand the forecast, anticipatory recursion will always prevent the best predictions from coming perfectly true.