As the New Orleans Saints lined up to kick off the second half of Super Bowl XLIV, CBS Sports color commentator and former Super Bowl MVP Phil Simms was explaining why the Saints should have deferred getting the ball after winning the pregame coin toss. Simms suggested that the Saints, 4½-point underdogs to the Indianapolis Colts, would be in a better position were they not giving the ball to future Hall of Fame quarterback Peyton Manning, who already enjoyed a four-point lead and had had 30 minutes to study the Saints’ defensive strategy. Simms had barely finished this thought when Saints punter Thomas Morstead surprised everyone – the 153.4 million television viewers, the 74,059 fans in attendance, and most importantly the Indianapolis Colts – with an onside kick. The ball went 15 yards, bounced off the facemask of an unprepared Colt, and was recovered by the Saints, who took possession of the ball and marched 58 yards down the field to score a touchdown and gain their first lead of the game, 13-10. The Saints would go on to win the championship in an upset, 31-17.
Although Saints quarterback Drew Brees played an outstanding game and the defense held a dangerous Indianapolis team to only 17 points, Head Coach Sean Payton received the bulk of the credit for the win, in large part because of his daring call to open the second half. Onside kicks are considered risky plays and usually appear only when a team is desperate, near the end of a game. In fact, the Saints’ play, code-named “Ambush,” was the first onside kick attempted before the fourth quarter in Super Bowl history. And this is precisely why it worked: the Colts were completely surprised by Payton’s aggressive play call. Football is awash in historical statistics, and these probabilities guide coaches’ risk assessments and game planning. On a strict frequency reading, Indianapolis Head Coach Jim Caldwell had zero reason to prepare his team for an onside kick, since the probability of the Saints’ ambush was zero (0 onside kicks ÷ 43 prior Super Bowl second halves). But if the ambush’s probability was zero, then how did it happen? The answer is that our common notion of probability – as a ratio of the frequency of a given event to the total number of events – is poorly suited to the psychology of decision making in advance of a one-time-only situation. And this problem is not confined to football. Indeed, the same misunderstanding of probability plagues mainstream economics, which is stuck in a mathematical rut best suited to modeling dice rolls.
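Spelled out, the frequency-ratio calculation behind that zero looks like this:

\[
P(\text{ambush}) \;=\; \frac{\text{onside kicks attempted before the fourth quarter}}{\text{prior Super Bowl second halves}} \;=\; \frac{0}{43} \;=\; 0.
\]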
Probability is a predictive tool; it helps decision makers confront the uncertainty of future events, armed with more than their guts. Both economists and football coaches use probabilistic reasoning to predict how others will act in certain situations. The former might predict that, faced with a promising investment opportunity and a low interest rate, entrepreneurs tend to invest, while the latter might anticipate time-consuming running plays from teams winning by a touchdown with four minutes left in a game. Both the economist and the coach would look up historical statistics, which they hope would provide insight into their subjects’ decision-making tendencies. And over the long run, these statistics would likely be quite good at predicting what people do most of the time. It would be foolish not to act in anticipation of these tendencies.
Indeed, many statisticians are employed to do exactly this. In the lucrative, gambling-powered world of football analysis, for example, a company named AccuScore runs computational simulations early each week to predict the outcomes of NFL games and the performances of individual players. Although their exact computational methods are proprietary secrets, they have roughly described the strategy behind their Monte Carlo simulation engine. Through fine-grained analysis of troves of historical statistics, AccuScore’s computers create mathematical equations to represent the upcoming game’s players and coaches. How often does a team pass the ball on third down with four yards to go at its own thirty-yard line, with no team up by more than three points, in the first quarter, at an indoor stadium? When New York Jets running back LaDainian Tomlinson rushes up the middle, how often does he get past the middle linebacker and gain more than eight yards? The probabilistic answers to these questions – and many others – become the parameters of the players’ and coaches’ equations, which AccuScore pits against each other on a numerical field. The computers then simulate the game, one play at a time, guided by a random number generator and the participants’ tendencies. Then they repeat the simulation 10,000 times and average the results.
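Although AccuScore’s actual engine is proprietary, the general shape of such a simulation is easy to sketch. The Python below is a drastically simplified, hypothetical stand-in: the tendency numbers are invented, and a single random draw per drive replaces AccuScore’s play-by-play modeling.

```python
import random

# Hypothetical tendencies distilled from historical statistics; a real model
# would condition on down, distance, field position, score, weather, and more.
TEAM_TENDENCIES = {
    "Colts":  {"points_per_drive": 2.4, "drive_stdev": 1.6},
    "Saints": {"points_per_drive": 2.3, "drive_stdev": 1.6},
}

DRIVES_PER_TEAM = 11  # a typical NFL game gives each side about 11 possessions


def simulate_game(home, away):
    """Play one imaginary game, drive by drive, from each team's tendencies."""
    scores = {home: 0.0, away: 0.0}
    for _ in range(DRIVES_PER_TEAM):
        for team in (home, away):
            t = TEAM_TENDENCIES[team]
            # One random draw around the team's historical scoring tendency
            # stands in for a full play-by-play simulation of the drive.
            scores[team] += max(0.0, random.gauss(t["points_per_drive"],
                                                  t["drive_stdev"]))
    return scores[home], scores[away]


def win_probability(home, away, trials=10_000):
    """Repeat the simulation many times and average, AccuScore-style."""
    wins = 0
    for _ in range(trials):
        home_score, away_score = simulate_game(home, away)
        if home_score > away_score:
            wins += 1
    return wins / trials


print(f"Simulated Colts win probability: {win_probability('Colts', 'Saints'):.1%}")
```

The averaging step is the whole point of the method: any single simulated game is noise, but 10,000 of them converge on the teams’ underlying tendencies.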
According to AccuScore’s website, their predictions have an overall gambling accuracy of about 54%. This probabilistic strategy makes sense for its purpose – predicting the outcomes of games by analyzing the frequency with which subjects make certain decisions – but it does not at all resemble the thought process by which a coach or his opponent calls a play in the middle of a close game. In contrast to AccuScore’s simulations, the real football game is played only once. Had they played Super Bowl XLIV 10,000 times, the Colts’ normal kickoff-return formation would surely have been the right bet at the start of each of those 10,000 second halves. But they only kicked it once, and the act of kicking it destroyed the possibility of it ever happening again. (For the moment, let’s ignore the chance that someone on the Saints committed a penalty, necessitating a redo.) Sean Payton’s aggressive call worked not because it gave the Saints the highest probability of success, but because the one time Morstead kicked it onside, he caught the Colts by surprise.
Economics must also grapple with the difference between these two interpretations of probability. When economists declare that markets are populated with rational agents, they must mathematically define that rationality, just as AccuScore defines players and coaches with tendency equations. The dominant strategy for defining economic agents’ rationality comes from Oskar Morgenstern and John von Neumann’s groundbreaking 1944 book, Theory of Games and Economic Behavior. In it, they propose assigning each market actor a utility function, which weights the payoffs of various possible actions by their probabilities of coming to pass. In constructing utility functions, neoclassical economists must assume that they have considered all of the relevant possibilities, which is another way of saying that the probabilities of all possible events included in the utility function add up to one.[1] They then define the agent’s rational choice as the one that maximizes the expected value of her utility function. This method is the foundational concept of game theory and is used to predict how decision makers will act. Modeling a market then proceeds in roughly the same way that AccuScore models NFL games.
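In symbols (a standard textbook statement of the rule, not the book’s original notation): if an action \(a\) can lead to outcomes \(x_1, \dots, x_n\) with probabilities \(p_1, \dots, p_n\), the agent’s expected utility is

\[
\mathbb{E}[U(a)] \;=\; \sum_{i=1}^{n} p_i \, u(x_i), \qquad \text{with } \sum_{i=1}^{n} p_i = 1,
\]

and the rational choice is the action \(a^* = \arg\max_a \mathbb{E}[U(a)]\).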
However, generations of critics have argued that rational choice theory is psychologically unrealistic as a description of actual human decision making. One might defend it as an ideal definition of rationality, but it is nearly impossible to conceive of someone actually performing this sort of calculation on the fly in an even remotely complex situation. In general, it is unrealistic to assume that people consider every single possible outcome of a decision, which is exactly what the requirement that the probabilities sum to 100% demands. The constraint also has a strange consequence: if someone thinks of a new possible outcome, she must shave probability off the outcomes she has already considered to make room for it. Why should thinking of a new possibility make the old ones any less likely than they were before? But more fundamentally, rational choice theory relies on the frequency-ratio definition of probability, which we have seen is incoherent when applied to the circumstances of one-time-only decisions. The most important decisions we face (and thus, model) are unique. In these cases, when making a choice destroys the very possibility of anyone ever making that same choice again, the notion of probability as a historical frequency ratio is nonsensical.
There have been several attempts to construct a theory of probability that accurately describes the psychological process of making one of these self-destructive choices. One strand of thought, coming from the Keynesian economist G.L.S. Shackle, is particularly well suited to describing the psychology of making decisions in the face of uncertainty. In Shackle’s theory, the likelihood of an event is no longer calculated as its share of an exhaustive set of possible outcomes, as the standard frequency-ratio theory requires. Instead, he gauges the likelihood of each outcome on its own terms, by asking a simple question: based on what I know now, how surprised would I be if Y happened? Because the likelihood of each outcome is determined independently, their measures need not sum to one. That means thinking of a new possibility does not make any other less likely to happen. It also means that one can hold two or more mutually exclusive outcomes to be equally unsurprising, based on the information at hand. Indeed, most of the time there will be a range of possible outcomes that are all judged equally unsurprising. (Shackle illustrated this with the graph at right.) Thus, Shackle’s decision-making comes down to a comparison of the best possible unsurprising outcome to the worst possible unsurprising outcome. This process seems much closer to the psychology of forming expectations and making choices than computing a probability-weighted average of all possible outcomes in your head.
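The mechanics can be made concrete with a toy sketch. The payoffs and surprise scores below are invented, and the sharp zero-surprise cutoff is a simplification of Shackle’s continuous potential-surprise curve; “focus gain” and “focus loss” are Shackle’s own terms for the two outcomes the decision maker ultimately weighs.

```python
# A toy rendering of Shackle's rule for a single investment decision.
# Surprise runs from 0 (would not surprise me at all) to 1 (unthinkable);
# unlike probabilities, the scores are assigned independently and need not
# sum to one, so adding a new outcome leaves the others untouched.
outcomes = [
    # (payoff in dollars, potential surprise)
    (-500_000, 0.0),    # venture fails outright: entirely plausible
    (-100_000, 0.0),    # modest loss: entirely plausible
    (200_000, 0.0),     # solid return: entirely plausible
    (900_000, 0.5),     # spectacular return: somewhat surprising
    (5_000_000, 0.9),   # windfall: would be astonishing
]

# Keep only the outcomes the decision maker would find unsurprising.
unsurprising = [payoff for payoff, surprise in outcomes if surprise == 0.0]

# Shackle's comparison: the best unsurprising outcome (the "focus gain")
# against the worst (the "focus loss") -- no weighted averaging anywhere.
focus_gain = max(unsurprising)
focus_loss = min(unsurprising)

print(f"Focus gain: {focus_gain:,}  |  Focus loss: {focus_loss:,}")
```

Note what never happens here: no probabilities are multiplied through, and discovering a sixth possible outcome would simply get its own surprise score without disturbing the first five.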
Shackle developed his potential surprise framework as a way to model individuals’ expectations when considering a capital investment. A firm facing a particular investment decision may never have those same choices again. If it spends, it could lose and potentially go bankrupt. If it saves, it might not get such an attractive offer in the future, or it may be outcompeted by others. In forming expectations about a potential investment, firms naturally compare the most optimistic reasonable scenario to the most pessimistic. But Shackle’s potential surprise theory can just as easily describe the psychology of a football coach calling plays. A coach aims to control the surprises on the field, employing strategies to anticipate his opponents’ moves and surprise them as much as possible. Indeed, former fighter pilot and current NFL statistics guru Brian Burke calculated that surprise is the biggest factor determining the success of onside kicks. Overall, onside kicks are successful (i.e. the kicking team recovers the ball) 26% of the time. Most teams only try them when they’re desperate, and when a team is trailing at the end of a game no one is surprised by an onside kick. But in other situations, when the opponents aren’t expecting them, teams recover about 60% of attempted onside kicks.
Neither the decision to call a football play nor the decision to make a capital investment is dominated by the calculation of probability-weighted historical statistics. Of course, considering what has worked and failed in the past is still smart practice – Shackle himself writes that it would be foolish to disregard probabilities calculated this way – but rational choice theory fails to depict, with any psychological subtlety, the thought process of a decision maker facing a one-time-only choice. To remember this, one need only pay attention to the fine print and sped-up announcement at the end of the mutual fund advertisements at halftime: “Past performance does not guarantee future results.”
[1] If this is confusing, consider the probabilities associated with rolling a regular, six-sided die. The probability of rolling each number is one sixth, so the sum of the probabilities of all six numbers is one. This means that if you were to roll the die, it is 100% certain that it would display one of the six numbers.
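In symbols: \(\sum_{i=1}^{6} P(\text{roll} = i) = 6 \times \tfrac{1}{6} = 1\).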