The hundreds of economics books released by the popular press since the beginning of the Great Recession have grappled post hoc with the failure of the guild to predict and prevent the downturn. As with their theories, their conclusions diverge. A minority of skeptics believe that economists should admit defeat and concede that they will never be able to predict, prevent, or control recessions. Optimists and apologists would like us to believe that economists are doing well and are always improving, and that, if they just try a little harder, they will finally make the elusive Promethean breakthrough that will wrest control of the business cycle from the gods and grant it to humanity (or at least to economists, who are very nearly human).
The title and first few chapters of Richard Bookstaber’s The End of Theory seem to make a good case for the pessimistic view, and these are the book’s strongest sections. Bookstaber describes oversimplified mathematical models that often ignore some of the basic human factors that influence the economy, like the visceral feeling of panic in the face of plummeting prices. They also tend to ignore historical context, like the way the trauma of growing up during the Great Depression influenced how corporate and government financial risk was managed during the rest of the 20th century. Some economic models make facile inductions from individual decision-making to whole markets, ignoring the possibility of “emergent” phenomena. And anyway, our social world is so complex that any simplified model must miss something important. All of this is true, though none of it is new.
Bookstaber argues that these failures of modern economics herald “the end of theory”—hence the title of his book. Notwithstanding the book’s extraordinary repetitiveness—all of its essential ideas could have easily been expressed in a long-form essay—I could not find or infer a consistent definition of “theory,” whose ostensible end is its subject. What would it mean for theory to end? After all, the idea that theory has ended is itself a theory.
Bookstaber often seems to use “theory” to mean “neoclassical economics as I understand it,” for example when writing, “this book is my manifesto for financial crises, a declaration that the neoclassical economic theory has failed. . . .” At other times, however, he seems to be arguing primarily against mathematical reductionism, claiming at one point that since context matters in human experiences, “our probability theory and statistics can be thrown out the window.” Some of his criticisms of “theory” read like attacks on the very possibility of economics as a scientific field:
What we are doing is more akin to writing a story than building a theory. I do not believe there can be a general theoretical approach for understanding crises . . . when there is a high degree of complexity, you have to figure it out as you go along.
This is jarring. If we are only writing stories and improvising, why would we need economists or economics at all? Wouldn’t it suffice for the Federal Reserve to be run by historians and talented extemporizers?
At one point in the book, Bookstaber describes the way he believes the flash crash of 2010 should be understood. Without citing hard evidence, he makes a string of assertions linking liquidity events to crashes:
In a normal market, there is . . . a similar level of liquidity supply on both sides. . . . Liquidity demand starts to enter the market . . . the flood of liquidity demand reduces market-making capacity. . . . The drop in price increases the amount of liquidity demand but has yet to elicit more liquidity supply. . . .
And so on. Maybe these evidence-free statements are true, but what are they if not theories about how markets work? Even in a book about the “end of theory,” theory cannot be avoided.
To fixate on Bookstaber’s linguistic equivocations would be pedantic, but his inability to clarify what he means by “theory” is but one manifestation of the broad incoherence of both the book and the economics profession generally.
The second half of The End of Theory includes a brief for agent-based modeling, which Bookstaber describes as “an alternative to neoclassical economics that shows great promise in predicting crises, averting them, and helping us recover from them.” After centuries of the economics establishment’s failure to predict crises (much less prevent or control them), it takes great boldness, optimism, or ignorance—or a blend of all three—to suggest that a new method could be the first to succeed.
What is agent-based modeling? My favorite example is “Boids,” an influential computer program created by graphics expert Craig Reynolds more than 30 years ago. The program draws lots of little moving dots, called “boids,” on a screen. A boid is meant to behave like a bird, and a group of dozens of boids is supposed to exhibit flocking behavior just like a real flock. If you think of how flocks behave, with their spontaneous formation, elegant movements, and appearance of almost being a single organism, creating a computer program like “Boids” seems daunting or impossible, especially on the computers that existed in the 1980s. You might think you would have to store hundreds or thousands of vectors capturing the movement of each boid, so that each fits in smoothly with the movement of the flock as a whole.
It was not so difficult as that. Reynolds hypothesized that nothing—neither the code governing individual boids, nor any part of the program itself, nor even the computer running it—needed to govern the high-level behavior of the flock. Instead, each boid was programmed to follow three simple rules: (1) separation (don’t collide with other boids), (2) alignment (point yourself in the same general direction as the boids near you), and (3) cohesion (drift toward the center of mass of the boids near you).
Each individual boid thinks not at all of the flock as a whole, but merely follows those three rules based on its immediate surroundings. And yet when many boids follow the rules, beautiful and complex-looking flocking behavior emerges. You can see a hypnotic modern implementation of it here, which also allows you to create a new boid moving in a random direction to see how it affects the whole system. This example uses only 249 lines of code (available at the link) and yet I feel I could stare at it for hours and remain entertained.
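The three rules are simple enough to sketch directly in code. What follows is a minimal pure-Python illustration of the technique, not Reynolds’s original program; the neighborhood radius, rule weights, and flock size are arbitrary values chosen for this example.

```python
# A minimal sketch of Reynolds's three boid rules: separation,
# alignment, and cohesion. Each boid looks only at its neighbors;
# nothing in the program governs the flock as a whole.
import math
import random

class Boid:
    def __init__(self, x, y, vx, vy):
        self.x, self.y = x, y
        self.vx, self.vy = vx, vy

def step(boids, radius=50.0, sep_w=0.05, align_w=0.05, coh_w=0.01):
    """Advance every boid one tick using only local information."""
    updates = []
    for b in boids:
        neighbors = [o for o in boids if o is not b
                     and math.hypot(o.x - b.x, o.y - b.y) < radius]
        dvx = dvy = 0.0
        if neighbors:
            n = len(neighbors)
            # 1. Separation: steer away from nearby boids.
            for o in neighbors:
                dvx += (b.x - o.x) * sep_w / n
                dvy += (b.y - o.y) * sep_w / n
            # 2. Alignment: match the neighbors' average velocity.
            dvx += (sum(o.vx for o in neighbors) / n - b.vx) * align_w
            dvy += (sum(o.vy for o in neighbors) / n - b.vy) * align_w
            # 3. Cohesion: drift toward the neighbors' center of mass.
            dvx += (sum(o.x for o in neighbors) / n - b.x) * coh_w
            dvy += (sum(o.y for o in neighbors) / n - b.y) * coh_w
        updates.append((b.vx + dvx, b.vy + dvy))
    # Apply all updates at once so every boid reacts to the same snapshot.
    for b, (vx, vy) in zip(boids, updates):
        b.vx, b.vy = vx, vy
        b.x += vx
        b.y += vy

flock = [Boid(random.uniform(0, 200), random.uniform(0, 200),
              random.uniform(-1, 1), random.uniform(-1, 1))
         for _ in range(30)]
for _ in range(100):
    step(flock)
```

Note what is absent: no variable anywhere holds the flock’s overall heading or shape. The flocking is entirely emergent from the three local rules.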
Of course, bird behavior is not the only thing that can be modeled in this way, with autonomous agents following simple rules and generating complex emergent behavior. We could program simulated investors and banks as the agents, and simulate the economy. Simulated bankers would follow some plausible rules for interacting with each other and other economic actors—regarding when to issue a loan, how much to invest and where, and so on. We could run the program in fast-forward mode to see what the economy might look like in five or fifty years, including what types of crises we might expect and when we might expect them.
If you are like me, and enjoy economic prediction and artificial intelligence and flocks of birds, then you may find plenty to be excited about in Bookstaber’s suggestion that we use agent-based modeling to predict crises. But agent-based modeling has severe limitations that prevent it from being a viable alternative to neoclassical economics.
The first problem with agent-based modeling is that the models can lead to literally any conclusion. If you reload the simple “Boids” simulation several times, you could get birds flocking to the left, or the right, or up, or down, or some diagonal direction. Wait a few minutes, and it will look a little different. This simulation, like every other serious agent-based model, has some amount of randomness built in by design, which leads to a huge range of possible outcomes. An agent-based simulation of the economy will sometimes show that a speculative bubble pops early because one hedge fund manager makes a disastrous bet, and sometimes that it continues unpopped for years because everyone behaves angelically. The randomness underlying these models cannot be removed, because social behavior remains inscrutable to us. We cannot predict with certainty whether an individual will (for example) anxiously short sell a teetering market, and therefore we must model such decisions as the products of random chance.
Another problem with agent-based modeling is that the simulated outcomes are completely at the mercy of the input parameters. If we alter the “Boids” code just a bit so that the boids like to get a little closer to each other, we could get a radically different flocking pattern. If we tell our financial simulation that investors have a 2 percent rather than a 1 percent chance of panicking and overselling when the market shows signs of falling, we will get a very different idea of what crises are in store. Of course we do not know exactly what these percentages are, and if we have indeed reached “the end of theory” and cannot even theorize about their values, then the situation is even more hopeless.
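Both problems can be made concrete with a hypothetical toy market of the kind described above. Every rule and number here—the panic probability, the size of a panicked sale, the price-impact factor—is invented for illustration and is not drawn from Bookstaber’s models. Rerunning the simulation with a different random seed produces a different history; nudging the panic probability from 1 percent to 2 percent produces yet another.

```python
# A toy agent-based market: each step, every agent either makes a
# small random noise trade or, if the price fell last step, dumps
# shares in a panic with probability panic_prob. All parameters are
# invented for illustration.
import random

def price_path(panic_prob, seed, n_agents=100, steps=250):
    """Return the simulated price series of the toy market."""
    rng = random.Random(seed)
    prices = [100.0]
    for _ in range(steps):
        falling = len(prices) > 1 and prices[-1] < prices[-2]
        demand = 0
        for _ in range(n_agents):
            if falling and rng.random() < panic_prob:
                demand -= 5                    # panicked agent dumps shares
            else:
                demand += rng.choice([-1, 1])  # ordinary noise trade
        # Net demand moves the price; it cannot fall below 1.0.
        prices.append(max(1.0, prices[-1] + 0.01 * demand))
    return prices

calm  = price_path(panic_prob=0.01, seed=1)  # one possible history
rerun = price_path(panic_prob=0.01, seed=2)  # same rules, new seed
jumpy = price_path(panic_prob=0.02, seed=1)  # same seed, jumpier agents
```

Nothing privileges any one of these paths over the others. The model’s “prediction” is whichever history the seed and the guessed-at parameters happen to generate.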
These limitations of agent-based modeling make it useless for the purpose of precisely predicting financial crises. We could run an agent-based simulation exactly as Bookstaber describes and have it tell us that there will be a huge crash tomorrow. We could reboot the simulation and run it exactly the same way and it would tell us that things will continue humming along perfectly for another hundred years. Then we could retool a few of the huge number of unknown input parameters, and get entirely different results again.
If agent-based modeling has value, it is in the visualization and salience it provides. By setting up toy soldiers in a diorama, a general does not learn exactly how a battle will go or how to win it, but he might notice some useful hill that had escaped his attention before. If we ran agent-based simulations of the economy, we might similarly notice some financial sector or industry that could be a fault line in future crises. But these should be noticeable to an observer even without agent-based modeling. More importantly, merely noticing salient features of crisis simulations does not grant us predictive power or control over those crises.
It is not clear whether Bookstaber is unaware of the weaknesses of agent-based modeling or whether he understands them completely but didn’t want them to get in the way of his book sales. He vacillates between hyping agent-based models as an “alternative to neoclassical economics” and a “new paradigm” and making statements like “as the crisis unfolds . . . there is no solution or answer.” From one page to the next, he seems incapable of deciding whether economists are wizards who can predict and control economic movements, or spectators who can only observe and tell stories about what happened in the past. It seems to me that when economists are collecting their paychecks and Nobel prizes, they favor the former definition, and when excusing their failures, they favor the latter. This inconsistency is not unique to Bookstaber, but is common in the economics field generally.
Consider prominent economist Robert Lucas’s response to criticisms leveled against the profession after the embarrassments attendant to the Great Recession. Writing in The Economist, he defended Frederic Mishkin, formerly a governor of the Federal Reserve, who just before the crisis presented some “reassuring” simulations predicting a rosy future for the macroeconomy. Lucas wrote defensively of these simulations:
The simulations were not presented as assurance that no crisis would occur, but as a forecast of what could be expected conditional on a crisis not occurring. . . . [The] forecast was a reasonable estimate of what would have followed if the housing decline had continued to be the only or the main factor involved in the economic downturn. . . . After Lehman collapsed and the potential for crisis had become a reality, the situation was completely altered.
This is appalling. In other words, “our predictions were great except for the huge, crucial things we didn’t predict that made all of our predictions invalid.” Or “we can predict everything perfectly as long as nothing unexpected happens.” Imagine, by contrast, a weatherman who forecast dry, sunny weather all week. After four days of thunderstorms prove his predictions completely incorrect, he defends himself à la Lucas with “[The] forecast was a reasonable estimate of what would have followed if the sunny weather had continued to be the only or the main factor involved in the weather.”
If the weatherman behaved this way, with a callous indifference to the impotence of the predictions from which he made his living, we would fire him immediately. Why then should we continue to subsidize professional economists who can predict everything except the future? If their predictions are useless and we all know they are useless, we should rethink the place of economists in our society—we probably don’t need to employ so many at all levels of government or give them the credulity they have historically been afforded. No doubt Lucas and Bookstaber do not desire that outcome. But Lucas, by admitting that all economists’ forecasts are useless, unwittingly provides a good argument for it. The End of Theory, in between its flawed claims that agent-based modeling can save the field, likewise denigrates our economic forecasting abilities, and so implicitly agrees.
Maybe theory will continue (whatever that means), despite Bookstaber’s claims. But something needs to end. There is a radical mismatch between economists’ pretensions to a deep understanding of the macroeconomy and their repeated failures to predict or control it. Economists must either abandon these pretensions and embrace a more modest role, or end their record of failures and begin delivering on their explicit or implicit promises. Meanwhile, we the public should probably refrain from placing any trust in the economics profession at all.