The ideas of economists and political philosophers, both when they are right and when they are wrong,
are more powerful than is commonly understood.
—John Maynard Keynes
As the dust begins to settle from the current global economic crisis, one of the issues we need to confront is the role of academic economists in promoting ideas that in retrospect were both wrong and dangerous. Economists pride themselves on being both the most theoretically sophisticated and the most rigorously empirical of the social scientists. Yet in the case of financial-sector liberalization, economists provided intellectual backing for policies for which evidence of beneficial effects was lacking and for which, in many cases at least, their own theories suggested reasons for caution. In this way professional economists contributed to a massive global recession that, from peak to trough, wiped out $40 trillion in savings and that will, by various estimates, push U.S. public debt from 42 percent of GDP to somewhere between 60 and 80 percent.
As Keynes noted long ago, the views of academic economists are far more influential than those of virtually any other group of professors. Policymakers see a direct application of their discipline to issues of immediate concern to them. At the same time, non-economists are reluctant to question economists’ judgment because of the highly technical nature of their theories and methods. Given economists’ clout, there are relatively few intellectual checks and balances on the ideas that spill out of the discipline. Presidents, Congressmen and government officials can rarely follow the game-theoretic models that win Nobel prizes in economics, nor can they evaluate complex data analysis. When the consensus in the profession asserts that something is true—for example, that opening up a country’s capital account will spur growth and development—few non-economists feel qualified to gainsay it. But the truth is that the mathematization of contemporary academic economics lends spurious precision to a field that is pervaded by questionable premises, over-simplified models and ideological bias.
What We Thought We Knew
The Reagan-Thatcher revolution of the 1980s legitimated a shift away from state-centric economic policies and toward ones favorable to free markets. It was grounded in several historical experiences that suggested market economies were far more efficient and fast-growing than planned ones. The first was the stagflation experienced by the developed world after the oil shocks of the 1970s, which was exacerbated by accumulating levels of regulation and state ownership. The second was the rise of East Asia, which demonstrated that backward countries could join the developed world by respecting private property rights and embracing integration into the open global economy. And the third was the collapse of the communist world at the end of the 1980s, which underlined the bankruptcy of socialist central planning. When critics attacked economists and public officials for being over-reliant on market mechanisms, many defended their actions with precisely this history. A senior official at the International Monetary Fund thus deflected criticism of the Fund’s handling of the Asian financial crisis:
The staff of the IMF . . . have over time become more confident about the ability to use markets to serve the public interest. What caused this shift? Quite simply, the evidence. Through the 1980s, central planning represented an important alternative to markets as a way of organizing economies. The collapse of the Soviet Union and the fall of the Berlin Wall suggested to many that markets, whatever their faults, were a more durable way of organizing a country’s economy.1
The United States and Britain also pioneered several important policy changes in this period. Labor markets were liberalized as trade unions progressively lost their power to set wages under the pressure of international competition; global trade increased through several rounds of talks under the GATT and its successor, the World Trade Organization; the rights of property owners were strengthened and taxes lowered; state-owned industries like British Steel were privatized; and levels of regulation, beginning with the airline industry, fell. The policy shifts in large part reflected a much broader shift in the fundamental structure of the global economy, which was becoming increasingly integrated. These changes set the stage for a remarkable thirty-year period of growth in the global economy.
Among the targets of liberalization was the financial sector. Financial liberalization started under a Democratic Administration in 1980 with the Depository Institutions Deregulation and Monetary Control Act. That act, whose phase-in was completed by 1986, removed the ceilings on the interest rates that banks could pay on deposits. Congress later passed legislation permitting U.S. banks to operate across state borders for the first time. London’s famed Big Bang of October 1986 abolished fixed commission charges on stock transactions and replaced voice trading with electronic, screen-based trading.
Each one of these components of “free market economics” has been subjected to critique, but of this list the most problematic by far was financial-sector liberalization. A substantial body of theory supports the welfare-maximizing benefits of free trade, as does the disastrous experience with protectionism during the Great Depression. Similarly, the advantages of liberalized labor markets became evident in the dropping rates of unemployment in the Anglo-Saxon economies during the 1990s and 2000s. But, as Jagdish Bhagwati argued in his well-known critique of capital-account liberalization, there is a “difference between trade in widgets and [trade in] dollars.”2 The financial sector behaves differently from other economic sectors. While capital markets can indeed allocate capital efficiently, they are inherently much more dangerous than product or labor markets when they malfunction. If the management of an industrial corporation like General Motors makes fatal mistakes, it hurts GM shareholders, employees and customers, and it has an array of secondary effects in the regions in which it operates. If an interconnected financial institution like Lehman Brothers or Citigroup fails, on the other hand, the failure imposes huge systemic costs on the economy as a whole—what economists label negative externalities. When Lehman Brothers failed, liquidity dried up and credit spreads jumped to unprecedented levels around the world, prompting a global recession.
There was good reason to suspect that the financial sector is inherently more volatile than other parts of the economy. In the real economy, prices are sticky and investment decisions have longer-term horizons. Financial assets, by contrast, are more fungible and liquid. These qualities make the financial sector more susceptible to herd behavior and emotion—what Keynes termed “animal spirits” back in 1936.3 Directional movements in the financial markets can become self-reinforcing. George Soros has labeled this idea “reflexivity”, by which he means a process wherein price movements affect the fundamentals that in turn influence prices. For example, when the assets on a bank’s balance sheet rise in price, its balance sheet grows, enabling the bank to lend more. Borrowers use this money to buy similar assets, which drives up the assets on that and other banks’ balance sheets, allowing them to lend more as well. This is a self-perpetuating cycle. Sooner or later sentiment reverses: people come to believe the asset is overvalued and start to sell, which sets the same pattern running in reverse. This sort of self-reinforcing behavior drives markets toward disequilibrium rather than equilibrium.
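To see how such a loop behaves, consider a deliberately minimal sketch in Python. It is an invented illustration of the mechanism just described, not a calibrated model; the feedback coefficient and the period at which sentiment flips are arbitrary assumptions:

```python
# A toy model of the reflexive loop described above: each price move
# enlarges (or shrinks) balance sheets, funding more buying (or selling)
# of the same assets, so each move begets a larger one. The feedback
# factor and flip point are invented purely for illustration.

def simulate_reflexivity(periods=20, flip_at=12, feedback=1.3):
    price = 100.0
    move = 1.0                         # last period's price change
    for t in range(1, periods + 1):
        if t == flip_at:               # sentiment reverses: holders sell
            move = -abs(move)
        move *= feedback               # balance-sheet channel amplifies
        price = max(price + move, 0.0)
        print(f"t={t:2d}  price={price:8.2f}")

simulate_reflexivity()
```

Run it and the price climbs at an accelerating rate until sentiment flips, then collapses even faster, ending well below where it began: a boom-bust path toward disequilibrium rather than a smooth return to equilibrium.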
Furthermore, academic economists had plenty of theoretical tools lying around that, had they used them in concert, might have allowed them to anticipate problems in the financial sector. For example, agency theory suggests that the incentives of agents (such as traders looking to their year-end bonuses) might be misaligned with those of their principals (the shareholders of the financial institutions for which they worked) because their own compensation horizons were far shorter than the horizons of the risks they were taking on. Information economics, developed by economists like Joseph Stiglitz, George Akerlof and Michael Spence, questioned the assumption underlying the standard neo-classical models that market participants had perfect information. Information economics warned against the perverse behavior that might arise in a world of “originate-to-distribute” mortgages and complex credit derivatives, where information asymmetries are commonplace. And finally, behavioral economics has questioned the efficient-market hypothesis, which asserts that prices in financial markets reflect all publicly available information. Economists like Robert Shiller and Richard Thaler have found that price volatility in financial markets is much greater than would be expected if price movements reflected only the revelation of new information. Few could argue, for instance, that new information explains how, between March 2000 and early 2001, the NASDAQ fell from roughly 5,000 to 2,000, ultimately wiping out some $5 trillion of wealth. Bubbles and manias, reflecting herd psychology rather than economic rationality, are as old as markets themselves.
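The excess-volatility finding lends itself to a similarly stylized illustration (a toy, not Shiller’s actual variance-bounds test): suppose fundamentals move only on news, while the market price overreacts to each piece of news and adds herd noise on top. The overreaction factor and noise scale below are invented for the example:

```python
import random

# Stylized excess volatility: fundamentals move only on "news," while the
# market price overreacts to that news and adds sentiment noise on top.
# The overreaction factor (2.0) and noise scale (1.5) are invented inputs.
random.seed(0)
news_moves, price_moves = [], []
for _ in range(10_000):
    news = random.gauss(0, 1)            # new information this period
    herd = random.gauss(0, 1.5)          # sentiment unrelated to any news
    news_moves.append(news)
    price_moves.append(2.0 * news + herd)

def stdev(xs):
    mean = sum(xs) / len(xs)
    return (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5

print(f"volatility justified by news: {stdev(news_moves):.2f}")
print(f"observed price volatility:    {stdev(price_moves):.2f}")  # ~2.5x
```

In this toy world, observed price volatility is roughly two and a half times what news alone would justify, which is the qualitative pattern Shiller documented in real markets.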
The theoretical tools were all available, then, to make the case that the actual prices of financial assets do not always reflect their real value, and that financial markets are not necessarily efficient ways of allocating scarce capital to those who can make best use of it. But virtually no one seemed to make it.
Warning Signs Ignored
The NASDAQ crash aside, what should have triggered even louder warning signals was the actual experience of capital-account liberalization as it played out during the 1990s. Capital-account liberalization is the easing of restrictions on cross-border capital flows, and as controls were gradually lifted in the years following the fall of the Berlin Wall, a quickening series of financial crises followed around the world, beginning with the sterling crisis of 1992.
In 1989 the Organization for Economic Cooperation and Development (OECD) amended its Code of Liberalization of Capital Movements, obliging member states to liberalize all capital movements. When Mexico and South Korea wanted to join this exclusive club, they liberalized their capital accounts, and both countries suffered a financial crisis as a result—Mexico in the so-called Tequila crisis of 1994 and South Korea in the Asian financial crisis of 1997. While many factors contributed to these crises, the liberalization of their capital accounts played a large role in both.
Despite these warning signs, mainstream economists insisted on the benefits of capital-account liberalization. Stanley Fischer, then the First Deputy Managing Director of the IMF, laid out a theoretical case for capital-account liberalization in a speech in Hong Kong even as the Asian financial crisis was unfolding in 1997.4 He argued that the free movement of capital facilitates the efficient allocation of global savings, since developing countries would gain access to more capital for investment. The presence of foreign financial firms employing more sophisticated lending techniques, he added, would increase competition in the local banking sector, which would raise lending standards and steer capital to more productive investments.
The problem was that no comprehensive empirical studies on the effects of capital-account liberalization on macroeconomic growth existed to support these assertions. When empirical evidence did begin to materialize in the late 1990s and early 2000s, it failed to demonstrate any robust correlation between liberalization and macroeconomic growth.5 Empirical studies showed that liberalization led to decreased financial transaction costs and reduced lending rates. These incremental benefits had to be weighed against the risk of periodic financial crises, however. It was not clear that the foreign investors piling into emerging markets in the 1990s were any more sophisticated than local ones with more intimate knowledge of their clients. The foreigners, in fact, proved subject to the same kind of herd mentality revealed in the course of other bubble markets, piling into Bangkok real estate when it made no sense, and pulling out again at the first sign of trouble.
The Asian financial crisis should have led economists to question the agenda of financial-sector liberalization. This was precisely what most policymakers in Asia did. Having been burned by bum advice on opening up their economies to foreign capital flows in the early 1990s, most wisely closed themselves off to hot foreign money and reversed the direction of capital flows by accumulating dollar reserves to protect themselves against volatility. China, in particular, felt vindicated in its unwillingness to liberalize its financial sector in the manner demanded by Americans, and maintained a fixed and increasingly undervalued exchange rate at a level that led, in the decade following the crisis, to its accumulating nearly $2 trillion in reserves.6
Mainstream American economists and policymakers reacted very differently to these same events. The new watchword after 1997 became “sequencing”, that is, the notion that capital-account liberalization should go forward, but only after a sound financial regulatory system was put in place. Thailand’s big mistake, in this view, was not in pursuing capital-account liberalization, but in having done so without good bank regulation. But there was no questioning of the ultimate goal of a globalized world in which capital could flow freely, a world that opened up new markets for Goldman Sachs, Lehman Brothers, Citigroup and the other top-feeders in the financial food chain. And no one defined clearly what constituted adequate regulation.
Whatever the theoretical merits of the new, post-Asian crisis consensus, it was clear that American economists at some level did not take it seriously, because they did not apply it to their own situation. Financial markets continued to evolve in such a way that the size of the financial sector grew at a much faster rate than the real economy. According to OECD data, from 1990 to 2006, the financial sector as a percentage of GDP increased from 23 to 31 percent in the United States, and from 22 to almost 33 percent in Britain. These sectors grew in part by introducing new forms of finance precisely to escape existing forms of regulation. Thus the hedge fund industry, which was not subject to the restrictions of normal stock brokerages, grew from $40 billion to nearly $2 trillion between 1994 and 2008. The credit default swap market increased from $180 billion to $39 trillion between 1997 and 2008.
Had American economists taken their own advice to Asia seriously, they should have been concerned by the emergence of this huge, totally unregulated shadow finance sector and sought to build regulatory institutions to protect the United States and the rest of the world from it. Instead, precisely the opposite happened. The Gramm-Leach-Bliley Act, passed in 1999, undid the Depression-era Glass-Steagall Act’s provisions prohibiting a company from acting as both an investment bank and a commercial bank. Perhaps more importantly, the new act explicitly exempted security-based swap agreements from Securities and Exchange Commission regulation. And the SEC itself allowed the largest investment banks to triple their leverage ratios, from roughly 10:1 to 30:1.
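The arithmetic behind that last change deserves a moment’s attention: leverage determines how small a decline in asset values is enough to wipe out a firm’s entire equity. A minimal sketch of the calculation:

```python
# At leverage L (assets = L x equity), a fall of 1/L in asset values
# erases all equity. The ratios below are the ones cited in the text.

def wipeout_threshold(leverage: float) -> float:
    """Fractional decline in asset values that exhausts equity."""
    return 1.0 / leverage

for leverage in (10, 30):
    print(f"{leverage}:1 leverage -> insolvent after a "
          f"{wipeout_threshold(leverage):.1%} fall in asset values")
# 10:1 -> 10.0%; 30:1 -> 3.3%
```

At 10:1, a bank can absorb a 10 percent fall in the value of its assets before its equity is exhausted; at 30:1, a fall of barely 3 percent suffices.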
Some sober voices did question the assumptions behind this deregulatory zeal. In August 2005, Raghuram Rajan, then-Chief Economist at the IMF, addressed an annual gathering of central bankers and high-level economists in Jackson Hole, Wyoming on the following topic: “Has Financial Development Made the World Riskier?” His answer was “yes”, but the audience, which included the likes of Alan Greenspan and Lawrence Summers, showed no sign of taking Rajan’s warnings to heart. Only after the crisis hit did Greenspan admit, in October 2008, that “I made a mistake in presuming that the self-interest of organizations, specifically banks and others, were such as [sic] they were best capable of protecting their own shareholders and their equity in the firms.”
Ideas and Interests
Americans pride themselves on being pragmatists, free of the big ideological schemes that have long plagued Europeans. But over the past generation, Asians have proven to be far more pragmatic than Americans—trying things to see if policies worked (including capital-account liberalization) and dropping them if they didn’t. China is a case in point: Its evolution toward capitalism since 1978 has unfolded not on the basis of any master theory, but rather through constant experimentation and adaptation. As Deng Xiaoping said, “It doesn’t matter whether a cat is black or white, as long as it catches mice.” Asia’s financial sector is one of the least liberalized among the world’s regions, but that has not prevented it from achieving historically unprecedented rates of growth over the past three decades.
Americans, on the other hand, have proven to be remarkably rigid in their economic thinking and—there is no other word for it—ideological. This has been true not just of political stalwarts in the Republican Party who are wedded to one or another form of market fundamentalism, but of academic economists who provided the intellectual underpinnings for the move toward greater market liberalization. The shift toward liberalized economies that began in the late 1970s was, at its starting point, a perfectly pragmatic reaction to nearly fifty years of growing state intervention in economic management. But the larger conceptual framework that justified liberalization—the idea that markets would be self-regulating, and that this would be true in finance as in other sectors of the economy—morphed into a dogma so strong that it held its own in the face of facts that indicated otherwise. Many economists in positions of power saw finance as the key to unleashing efficiency gains. Former Federal Reserve Governor Frederic Mishkin stated, “The only way for poor countries to get rich is to provide the incentives for capital to be supplied to its most productive uses.” Economists’ zealous preoccupation with the potential benefits of finance allowed them to overlook the sector’s inherent danger.7
Some observers, most notably former IMF Chief Economist Simon Johnson, blame the Wall Street lobby for turning the head of the economics profession.8 In the 1997–98 election cycle that preceded the passage of the Gramm-Leach-Bliley Act, the finance, insurance and real estate sectors spent more than $200 million lobbying directly on behalf of this Act and gave another $150 million in campaign contributions. Wall Street’s impact on intellectuals and academics was likely subtler than simply buying their good will, however. Many economists and business school finance professors went to work for investment banks and hedge funds, helping them to devise the complex models that, in retrospect, have proven so inadequate in predicting risk. They thereby acquired a personal stake in the success of the financial sector, not balanced by any incentives to think that the sector as a whole was destroying value rather than creating it.
Indeed, any Wall Street-connected economist pulling in large consulting fees would have a strong personal incentive to believe in some version of liberalized finance. By 2007, the financial sector accounted for nearly 42 percent of all U.S. corporate profits, up from 19 percent in 1986. Common sense might have suggested that all of the repackaging of financial instruments cooked up to earn fees couldn’t possibly be worth two-thirds as much as all of the other goods and services produced by society, and that the outsized bonuses individual investment bankers earned could not possibly represent their real marginal social product. But if one believed that an asset price really is the best indicator of all costs and risks, one could rest easy at night knowing that those huge fees were somehow deserved. Lawrence Summers, in his defense of financial liberalization, said as much to his colleagues at the American Economic Association:
As is now widely understood, the abstract argument for a competitive financial system parallels the argument for competitive markets in general. . . . Intermediation activity will be profitable when it is efficient; that is, when the gains generated outweigh the costs of the activity. Thus, for example, specialists who provide liquidity to a market will earn profits that reflect the benefit they are bringing buyers and sellers, just as those who transport goods between high- and low-price regions can earn revenues that reflect the benefits they are providing.9
One could perhaps justifiably conflate the idea of productivity gains based on new technologies with a financial-services sector that was facilitating the efficient diffusion of those technologies through the market. But now, looking in retrospect at the data separated out by sector, it seems clear that the productivity gains we thought we scored during the past dozen years were highly inflated.
Finally, belief in the efficiency of financial markets was bolstered by perceptions of the national interest. When economists and other policymakers tried to identify the comparative advantages of the United States in the world economy, they saw knowledge-based finance as an area in which the United States had a comparative advantage. The U.S. economy having ceded most manufacturing to East Asia and Germany, the U.S. Treasury tried to advance the country’s interests by creating a national and international environment conducive to the interests of Wall Street. One wouldn’t push such an agenda if one believed that an overblown financial sector was dangerous to global growth, let alone that of the United States.
Economists in the Mirror
The global economic crisis offers a moment for economists to take a hard look at their own profession. There are at least three areas in which current approaches to both research and education need rethinking.
The first concerns the interplay of theory and method. The areas where economic theory is the most empirically verifiable lie in microeconomics—that is, the economics that takes place at the level of the firm. But when microeconomic theories get scaled up to the level of complex national economies, not to mention the global economy, theory often limits our understanding of how the real world works. There are all sorts of non-linearities and unanticipated feedback effects that kick in at a macroeconomic level that microeconomic theories can’t anticipate. Thus, the concept of an efficient market equilibrium may work well in limited markets, where all buyers and sellers are sophisticated and have good information about one another. But when scaled up, these markets may attract unsophisticated participants who simply follow the signals of market leaders or other market actors such as credit-rating agencies, thereby exaggerating swings on both the up and down sides. In the case of impossible-to-value packages of mortgage securities, larger scales and aggregation may make markets less efficient allocators of resources.
Moreover, much has been written about the faulty mathematical models used on Wall Street to evaluate risk. But beyond that, no one modeled the way that models themselves might be used to provide a false sense of security to the financial sector. What social science can justifiably call itself sophisticated if it ignores the recursive impact of its own theories?
Second, academic economists get hired and promoted based on their ability to construct and manipulate highly abstract theoretical models. Few gain academic prestige by doing interdisciplinary policy work. But the fact of the matter is that applying theories to the real world requires mastering a good deal of knowledge about politics, history and local context. There is a real danger when, in the absence of empirical knowledge, one simply assumes that one’s abstract model must explain reality. While economists are great at critiquing things like the financial regulatory system, they are usually at a loss to prescribe politically feasible solutions. This is simply not what their profession trains and pays them to do.
Third, the kind of political-economic savvy that comes from historical knowledge would be of particular help. It is very hard to get tenure as an economic historian in a top-tier American economics department today. Indeed, a good many economic historians labor away in history departments, where economists are even less likely to care about their work or talk with them. We are lucky that the current Chairman of the Federal Reserve, Ben Bernanke, made his career studying the Great Depression. But a lot rests on his shoulders, since there are not many other knowledgeable voices who remember their Keynes and know the ins and outs of Depression-era policymaking.
Unjustified and empirically unsupported economic ideas laid the policy groundwork for the worst economic crisis in 75 years. Any academic discipline that developed and communicated ideas with such devastating effects has some soul-searching to do. The question must be asked: If so many economists understood the dangers of financial-sector liberalization, how did they let it happen? The senior officials in the George W. Bush Administration who planned and launched the Iraq war, as well as their external supporters, have been widely blamed for hubris: for their certainty about Baghdad’s supposed weapons of mass destruction and their excessive optimism about how cheap and easy it would be to occupy and democratize Iraq. In the court of public opinion, at least, they have been blamed for their mistakes and held accountable. Over the past decade, many American economists have been guilty of a similar hubris. But it is not clear whether the economists who abetted the development of “financial weapons of mass destruction” (in Warren Buffett’s phrase) will ever face similar accountability.
1IMF Director of External Relations Thomas Dawson in a June 13, 2002 speech to the MIT Club of Washington, “Stiglitz, the IMF, and Globalization.”
2Bhagwati, “The Capital Myth: The Difference between Trade in Widgets and Dollars”, Foreign Affairs (May/June 1998).
3Keynes, The General Theory of Employment, Interest and Money (Macmillan, 1936), pp. 161–2.
4Fischer, “Capital Account Liberalization and the Role of the IMF”, speech at the IMF Annual Meetings, September 19, 1997.
5Two comprehensive surveys of the effects of capital-account liberalization on economic growth are: Barry Eichengreen, “Capital Account Liberalization: What Do Cross-Country Studies Tell Us?” World Bank Economic Review (September 2001); and M. Ayhan Kose, Eswar Prasad, Kenneth Rogoff and Shang-Jin Wei, “Financial Globalization: A Reappraisal”, IMF Staff Papers (April 2009).
6By saying that Asians were wiser, we do not necessarily mean to suggest that the policy of accumulating large reserves was in the end a good idea, since it facilitated the U.S. housing bubble and subsequent meltdown. The point is rather that they accurately concluded that capital-account liberalization was dangerous in a way that Americans by and large did not.
7Mishkin, The Next Great Globalization (Princeton University Press, 2006), p. 12.
8Johnson, “The Quiet Coup”, The Atlantic (May 2009).
9Summers, “International Financial Crises: Causes, Prevention, and Cures”, American Economic Review (May 2000).