As we sift through the rubble of the American political economy for clues to the causes of its collapse, forensic hypotheses have proliferated widely. Some have denied any systemic crisis at all, seeing a few contingent decisions as responsible for all our present burdens.1 Others see deep cultural maladies playing out—the moral erosion of thrift, the cultural contradictions of capitalism, the broad sclerosis of government of which financial malfunction is only part of a larger whole, and other, equally expansive explanations. In between these extremes are various theories claiming to grasp at least a piece of the puzzle: export-oriented countries flooding our financial system with more money than we could wisely invest; the shenanigans of American business schools and academics with their mathematical models; the offshore shadow economy that helped inflate the availability of cheap credit; the super-computerization of trading processes; the suborning of professional economics by the Wall Street lobby. Finally, there is the satisfying, if perhaps unsophisticated, populist explanation: the simple greed of bankers and financiers, abetted by the inexhaustible venality of Congress.
Many of these explanations make at least some sense, but none of them really touches directly on the power of a pervasive but wrong idea. Often called “market fundamentalism”, this idea posits that the laws of economics are eternal and unchanging like the laws of physics, and that the ways of government (especially socially ambitious government) invariably run violently counter to these laws. We have a specific way to describe this view: the Chicago School of economic thought.
The Rise of the Chicago School
The basic theory of the Chicago School as it has existed over the past forty years is that markets regulate themselves effectively. This theory is based on a key premise: that economic actors are value-maximizers who for practical purposes have all the information they need in prices to make rational judgments. The theory asserts that all commercial markets—product markets, service markets, financial markets and the like—operate most efficiently through voluntary contracts between buyers and sellers under competitive conditions, and that beyond defining property rights and enforcing contract law, government regulation will make American markets less efficient, and most of us poorer as a result.
To many Americans, the Chicago School view is too self-evident to require any analysis of its origins or veracity. It was not always so. Before the stock market crash of October 1929, American economists subscribed to varieties of neo-classical thinking, a term coined by Thorstein Veblen to describe the late 19th-century mathematization of standard economic thinking by William Stanley Jevons, Léon Walras and others. Such thinking of course gave supply and demand, competition and prices pride of place, and it harmonized essentially with Adam Smith’s concept of the invisible hand, one of many examples of the dominant metaphor of Anglo-American social and political thought: the dynamic equilibrium. Economic thought at that time, however, was not a mechanistic or formulaic body of understanding. It swam in the ocean of commerce, but concerned itself only with the tides and the currents, not with the ocean itself. It recognized the role of government in setting contractual and property parameters for economic life; it could encompass populist and progressive concerns about monopoly (but never came to any solution about oligopoly); and it lived with accumulated evidence that the workings of supply and demand could be affected by various externalities—not least imperfect financial systems that both overbuilt and underbuilt railroads throughout the 19th century with the help of overly enthusiastic London investors.
What differentiated the Chicago School at the height of its influence was its post-neo-classical insistence that the market works automatically, without government oversight, as well as its insistence that theory trumps experience in proving it so. Where did the Chicago School’s ideas come from? How did they acquire such dominance in American economic policy thinking? And how did that thinking contribute to the current mess?
To understand the origins of the Chicago School we must locate it in both social and intellectual history. The University of Chicago economics department, a heterogeneous group for many years, housed some stalwart neo-classicists up to and through the Great Depression. When a new synthesis formed in the cauldron of the Depression—generally described as the fusion of John Maynard Keynes’s general theory into the neo-classical paradigm—some senior faculty at the university had their doubts about its viability. Frank Knight, Jacob Viner, Henry Simons and others criticized Keynes on both theoretical and practical grounds, and two of their best students, George Stigler and Milton Friedman, came of age at precisely this moment of synthesis and critique.
Stigler and Friedman, lifelong friends from their days as graduate students in the 1930s, later taught together at the university, each garnering a national reputation and each winning the Nobel Prize in economics. Stigler received his for his work in the 1950s and 1960s setting out the framework of Chicago economic analysis described in his classic textbook, The Theory of Price. Friedman won his Nobel for his work on monetary policy. It is from these men and their accomplishments that the idea of a Chicago School of economic thinking, a general orientation to the subject rather than a specific, all-encompassing theory, was born. But Chicago School thought in the 1940s and early 1950s was not the same as Chicago School thought as it had evolved by the early 1970s. The differences are important, and the journey perhaps more so.
It is fair to say that Stigler’s price theory is the intellectual vortex around which all the rest of the ideas associated with the Chicago School revolve. But in its earlier iterations, another idea played a major role in shaping the School’s policy thinking. The Chicago School holds that markets reward efficiency since buyers will purchase comparable goods and services at the lowest prices. But to incentivize efficiency among producers there has to be competition, which monopolistic practices distort. Largely under the influence of Henry Simons, the Chicago School developed a dour view of big business and a positive view of government’s role in maintaining the preconditions for effective competition.
Simons insisted on draconian government regulation to limit the size of corporations and the application of antitrust laws to ensure that small firms would not conspire to eliminate competition among them. He believed that the free play of competition in an economy made up of numerous small firms would keep each firm in check. Such firms could provide both economic and social protections to the community. He also believed, however, that the economic and political landscape of the 20th century had demonstrated a dangerous tendency toward monopoly and political plutocracy. In 1952, following his teacher’s lead, Stigler wrote an influential essay in Fortune entitled “The Case Against Big Business”, and he testified before Congress that large corporations should be broken up.
Inspired by Simons, both Stigler and Friedman also advocated a guaranteed income for the poor in place of welfare. They also favored anti-bigness tax policies that would have companies distribute all current earnings to shareholders, thereby discouraging the accumulation of retained earnings that could be used to purchase other corporations. So while the emerging Chicago School rejected the intrusive schemes of the New Deal that required specific economic outcomes, its members were hardly opponents of active government intervention in the economy. It’s just that the purpose of that intervention, in their view, was to restore and maintain an effectively competitive playing field, not to supplant it with government planning. Thus, while Chicago School thinking has been associated with conservative politics for the past four decades, its origins were entirely compatible with liberal views—or at least with the anti-plutocratic liberalism of William Allen White and Theodore Roosevelt, if not also that of Woodrow Wilson—throughout the late 1930s, the 1940s and well into the 1950s.
Up to the middle of its career, the Chicago School also reflected the traditional conception of its subject matter: not economics but political economy. It recognized the link between competition in the marketplace and political liberty, which such competition both depended on and nourished. In due course, even this latter view fell by the wayside; it is particularly ironic that it did. Throughout the 1960s, the banner of the Chicago School mainly belonged to Milton Friedman. His influential 1962 book Capitalism and Freedom set out a series of policy recommendations on the proper roles of government and commercial markets. Its central point was that commercial markets are not only more efficient than markets subject to government regulation, but that they also provide, in most instances, more protection for individual liberty than do government laws and programs. This message had special appeal at the apex of the Cold War, which was understood from the American side as a battle against theories of central planning, command economies and authoritarian politics ranged against Western security and freedom. So what changed?
What changed were the inner culture of American social science and the structure of the American economy. Changes in the thinking of the Chicago School amounted to a compound reaction to both.
Neo-classical analysis was microeconomics at base. It was about the behavior of firms and individuals. It was not systemically oriented, and it lacked a grand theory of the sort social scientists were then falling in love with. Keynes, however, did have a general theory. And so did Marxists of assorted stripes. Realizing that they couldn’t beat something with nothing, opponents of statism (or merely skeptics of government aspirations to direct social change) saw that they needed a theory of their own. Somewhere on the road to affirming the instincts of their mentors, Stigler and Friedman detoured away from microeconomics and empirical research and followed the siren call of grand theory. They sought to distill the pure essence of neo-classical ideas in much the same way that physicists were then trying to theorize about the universe. That is how Stigler came to argue that, while he knew real economic behavior was more complex than price theory hypothesized, it still worked better than any comparable theory at explaining observable reality in broad strokes. If reality got in the way of theory, then reality be damned.
It is also how Friedman and Aaron Director, followed by Stigler and others of the new Chicago persuasion, broke with Simons in deciding that government regulation was unnecessary (and generally harmful) for effective competition. Eugene Fama’s “efficient market” hypothesis—the demonstration that stock markets as a whole arrive at more correct answers than do even financial experts—helped smooth the road to the conclusion that government experts could not improve on market-based results.2 Similarly, the post-Simons Chicago School view held that markets work automatically, and never make a mistake that stands uncorrected for long. Therefore, if businesses get big, then they’ve done so because markets have determined that they should; if big firms fail, then it’s because markets have so ordained. Feedback loops, then a relatively new notion in social science, ensured that information would drive markets to efficiency; such feedback loops were precisely what economic management by government directive lacked. In terms of pedigree, the postulate of automatic market corrections was half Social Darwinist and half Newtonian: the survival of the fittest modulated by action/reaction sequences.
This orientation to the subject also helps to explain how Friedman’s view of the automatically self-regulating market, which based itself on an almost bucolic concept of competition between numerous small and rival firms, could arise at the very moment that American economic reality was moving rapidly away from such an environment. It was moving toward the Galbraithian Iron Triangle, an economy based on big firms locked in partnership with big labor and managed by active, if not always literally big, government. But Friedman had little interest in this structural change; he preferred to rely on theory that blamed too much government intervention in personal and business decisions, and an excessively flexible monetary policy, for endangering the economy. Like Simons before him, Friedman wanted to restore an economic Eden, one with perfect knowledge, no transaction costs, and a concept of voluntary contracts that bordered on fantasy. He identified the snake in the 1960s as New Deal liberalism, which he believed drained the independence and freedom of Americans, rather than the Iron Triangle itself. Instead of seeing the workings of technology and innovation behind the new shape of the American economy, he saw the evil ghost of FDR.
In Capitalism and Freedom Friedman indicts government attempts to solve social problems in ways that distort markets. Obviously, much had changed between 1952, when Stigler defended vigorous antitrust enforcement, and 1962, when Friedman in effect defended big business against government interference. Much changed over the next decade, too. The Chicago School’s concern about the size of American businesses and its affirmation of the primacy of politics had both disappeared by the late 1960s. Here, the agent of change was less Friedman than Stigler himself.
These dual abandonments can be illustrated by juxtaposing two long-forgotten national reports. The first, empanelled by President Lyndon Johnson in 1968, was led by Phil Neal, Dean of the University of Chicago Law School. The second, a transition report prepared less than a year later for President Richard Nixon, was chaired by none other than Stigler himself.
The centerpiece recommendations of the Neal Report were two new antitrust laws: one to break up large firms in concentrated oligopolistic industries, and another to forbid large firms from buying the leading firm in an unrelated industry, the aim being to prevent market dominance and preserve a diffusion of economic resources. The Neal Report justified its recommendations in ways that echoed the Sherman Antitrust Act of 1890, the Clayton and Federal Trade Commission Acts of 1914, the Robinson-Patman Act of 1936 and the Celler-Kefauver Act of 1950. Dean Neal also took comfort in formulating his recommendations from what he perceived to be a broad consensus in the economics profession that industry concentration was higher than required by economies of scale. He even cited Stigler’s “The Case Against Big Business” and quoted his The Theory of Price in making his argument.
Stigler’s 1969 report strikingly considers antitrust solely from an economic perspective. No suggestion remains that government regulation of business can contribute to maintaining democracy or freedom. The task force argued that no new antitrust laws were needed, citing data showing little evidence of increasing concentration of economic power in America and no sign that major manufacturing industries were becoming more concentrated. Unlike the Neal Report, the Stigler Report rejected the idea of stopping conglomerate mergers “on the basis of nebulous fears about size and economic power.”
The Stigler Report ratified the new Chicago School dogma: “The market is always right.” How did the Chicago School get from Simons’s break-up-the-big-firms stance and Stigler’s 1952 affirmation of it to the 1969 Stigler Report, which in due course evoked from its chairman an admission that he had been wrong about big business years before? The answer is not entirely obvious, but one thing is clear: Americans generally had grown accustomed to big business in the post-World War II era. There were no comparably widespread fears about Westinghouse and General Electric and Colgate-Palmolive in the 1960s and 1970s as there had been about the Union Pacific, Standard Oil, U.S. Steel and the House of Morgan 75 years before. The “man in the gray flannel suit”, the “organization man” and the union worker in the steel, auto and rubber industries had all grown to depend on big business and would have been distressed to see the profitable arrangements of the Iron Triangle disrupted.
A second part of an answer seems clear, as well. As long as the American economy was growing and vibrant under the shadow of the Iron Triangle, no one could touch the neo-Keynesian orthodoxy that had formed in the 1950s, and that was elaborated and operationalized in policy by America’s best and brightest economists at the start of the Kennedy Administration. But when the economic masters of the universe ran into trouble—when low inflation with nearly full employment turned out to be much harder to maintain than had been promised—doubts about the postwar orthodoxy emerged. One can debate whether it was the theory or the practice of Keynesianism that was wrong, but for practical purposes it doesn’t matter. What matters is that the loss of confidence in it set the stage for the most ironic moment in 20th-century American economic history: an anxious and confused Republican President declaring in 1971, “I am now a Keynesian in economics”, at precisely the moment when the fallibility of the Keynesian paradigm should have been clear to all.
It is fair to say, therefore, that Chicago School perspectives arose victorious out of the rubble of stagflation and the nadir of the post-Vietnam war consensus on economic policy.3 When Ronald Reagan and Paul Volcker set out to wring inflation from the American economy in 1981–82, they did so in the light of Milton Friedman’s criticisms of American monetary policy. As they did, they muted any echo of Nixon’s Keynes quip from the halls of the Republican Party. Indeed, Reagan above all others enthroned the Chicago School’s anti-regulatory message as the new American orthodoxy when he said, “Government is not the solution to our problem; government is the problem.”
The Myth of Good Times
The triumph of the Chicago School via the Reagan-Thatcher revolution, it is widely believed, ushered in an era of dramatic and real economic growth. We have heard repeatedly in recent months that current problems are the result of excesses, of basically sound ideas having been abused, or, in more philosophical assessments, of an era simply having run its course as all eras do. The prevalence of this belief goes far toward explaining the image of the Chicago School, both before and since the recent meltdown.
Perhaps this is true. If one examines the data of the 1982–2007 quarter century, one could perhaps make a case for a kind of golden age. But one could make another case, as well: that self-regulation never did work, and especially that the dismantling of financial regulations beginning in 1980 did not create an era of stunning economic success. Quite aside from the well-documented hollowing out of middle-class incomes, the growing gap between rich and poor, the bribing via the tax code of American manufacturing to export itself, and the utter failure to invest adequately in infrastructure, consider the strictly financial history of this 25-year period from a different perspective.
The Depository Institutions Deregulation and Monetary Control Act, a major banking “reform” passed in 1980 near the end of the Carter Administration, allowed banks for the first time to merge and to operate across state boundaries. It also provided for the gradual lifting of caps on interest rates (completed by 1986). The latter disadvantaged the savings and loan industry, which until then had been allowed to offer depositors higher interest rates on savings than banks could. Legislation in 1982 designed to offset the damage done in 1980 inadvertently made things worse, culminating in the disintegration of the industry and a $500 billion Federal bailout in 1989.
There was of course precedent for this; in fact, Federal bailouts came almost regularly during the 1970s and 1980s: Penn Central (1970), Lockheed (1971), Franklin National Bank (1974), New York City (1975), Chrysler (1980) and Continental Illinois (1984) all preceded the S&L bailout. Each bailout was controversial, but all were undertaken to avoid a domino effect that would have endangered the entire economy. The difference between the actions of the 1970s and 1980s and more recent bank bailouts is that, since 1999, additional Federal legislation authorized the growth of banks to sizes at which the failure of any one of them almost automatically jeopardized the financial security of the country. The point is that the Chicago School theory that failed companies could easily be replaced by more able firms without causing undue collateral damage to the economy had been proven wrong long before the current crisis.
Moreover, the American economy was not so stable during its supposed golden quarter century as many now believe. On October 19, 1987, for example, the Dow Jones average lost 23 percent of its value, the largest one-day percentage decline in its history. And if the American domestic economy appeared unsteady, the rapid adoption of the American free-market formula after the end of the Cold War made many foreign economies even more unstable, not least those of Mexico, East Asia and Russia. The 1998 Russian economic crisis then doubled back on the U.S. economy, bringing down the widely admired American firm Long Term Capital Management, which had been formed in 1994 by a group that included Nobel Prize-winning American economists. It managed a portfolio of investments exceeding $134 billion when it crashed in 1998.
Notwithstanding the highly erratic behavior of currency markets, international stock markets and national economies, the rush of enthusiasm for deregulated economies and liberalized capital flows continued unabated. Thanks to the performance, still not entirely accounted for, of the American stock markets from the mid-1980s to 1998, the dazzling growth in wealth on Wall Street created a dizzying climate of optimism. That is perhaps why we failed to see the Enron, Adelphia and WorldCom scandals of 2001–2002 as the warnings they were, and why some years earlier we ignored how the troubled financial scene of the early 1980s afforded companies what appeared to be a profitable opportunity to do an end run around the Glass-Steagall Act of 1933.
Although firms were not allowed to integrate underwriting and sale of securities with bank deposits, the Fed did allow some financial companies to buy up failing savings institutions. A wave of financial mergers ensued.4 For example, Sears had a vision of transforming itself into the nation’s “largest consumer-oriented financial service entity.” In 1981 it owned the Allstate insurance companies, a $3 billion California savings bank, a mortgage company, a mortgage insurance company and the nation’s then-largest credit card operation. It acquired Dean Witter Reynolds, then the fifth-largest brokerage firm, and Coldwell Banker, the nation’s largest real estate broker. American Express attempted a similar transformation by buying two brokerage firms to become Shearson Lehman/American Express and then added Investors Diversified Services. Prudential Insurance bought the Bache brokerage. Bank of America bought Charles Schwab, the nation’s largest discount broker. Merrill Lynch formed the nation’s second-largest realty firm after Coldwell Banker. These and other mergers of financial companies occurred within the space of half a dozen years, albeit with little evidence that the resulting corporations could be successfully operated.
Alas, almost none of these mergers panned out, and most of the acquisitions were subsequently divested. But, contrary to what Chicago School economics would have predicted, few suffered from the judgment of the market. The CEOs of banks and other companies that were acquired were not punished for poor management; indeed, most were richly rewarded with golden parachutes for having allowed their companies to be taken over. The merger lawyers and investment bankers made money from large transactions whether the mergers succeeded or not. Stockholders of acquired firms were paid a premium for their shares even though the stockholders of acquiring firms typically did not see the price of the combined firm rise. And the lawyers, commercial banks and investment bankers made money again when unworkable conglomerates were broken up and sold off. Chicago School doctrine, which was by then grand theory, could not account for the micro-behavior of specialized firms in specialized sectors of the economy.
Despite the dismal experience with financial and other mergers in the 1980s, banks, investment bankers and other financial institutions continued to press for authority to grow and to make ever more speculative investments. Chicago School theories were enlisted to send forth the message that financial markets should be freed further from the bonds of regulation. We would all be rich if only the remaining misguided New Deal rules were repealed. Social Security could be privatized and retirees would be richer by investing in the stock market. School districts could park their tax revenues in the stock market and reap rewards until they had to use their funds, allowing the districts to lower their tax rates. And so on. In more recent times this sort of thinking grew into a global delusion, as Icelandic and Scottish banks joined a frenzy of financial investments that have bankrupted small and large organizations alike around the world.
The critical moment came in 1999–2000, during the Clinton Administration, a time when accumulated evidence should have induced caution. The Gramm-Leach-Bliley Act passed in 1999 and the Commodity Futures Modernization Act (CFMA) passed in 2000 were most responsible for eliminating Federal and state regulation of important banking and securities transactions. Both acts were sponsored by then-Senator Phil Gramm, a Texas Republican, an ardent believer in Chicago School economics and a former economics professor at Texas A&M. But they were supported by Fed Chairman Alan Greenspan and by all the major Clinton Administration economic policy principals as well. As political catastrophes go, this one was wholly ecumenical. Let’s look closer into what these laws actually did.
Gramm-Leach-Bliley repealed remnants of the Glass-Steagall Act, which forbade investment banks from owning federally insured depository institutions and forbade such institutions from investing the funds of their depositors. The Act had been passed originally to separate these financial functions after extensive Congressional hearings showing that banks had routinely abused their position of trust by investing depositors’ money to shore up the value of their own stock, the stock of companies to which the banks had loaned money, or companies in which they held stock. To restore confidence in the banking system, Glass-Steagall created a guarantee for bank deposits (the FDIC), but eliminated the potential conflict of interest by forbidding depository institutions from engaging in investment banking functions such as the underwriting and sale of securities. That way, banks could not gamble with and lose other people’s money—only their own. After 1999 they again could, and, of course, they did.
The CFMA effectively eliminated the jurisdiction of any Federal or state agency to regulate the sale and trading of securities derivatives.5 Indeed, it specifically removed the sale and trading of securities derivatives from the jurisdiction of the Commodity Futures Trading Commission, the Securities and Exchange Commission, and state gambling and criminal laws. It also eliminated the disclosure obligations and the requirement to pay any money at the time of contracting for the derivatives, obligations that had derived from the original 1933–34 legislation regulating all securities. The CFMA was carefully drafted to exempt such transactions from state gambling laws because of the concern that state law enforcement officials might decide that derivative transactions were simply bets. The legislative assumption was that people who entered into contracts for securities derivatives would be sophisticated investors fully able to protect themselves when designing those contracts.
The result of these two laws was a huge growth in the financial services industry and in the average size of commercial operations. The dollar size of the U.S. financial sector grew from 4 percent of GDP in 1998 to more than 8 percent in 2006. Giant financial institutions grew so large that they no longer understood the businesses operated by their own divisions. Employment boomed at places like Citigroup, Bank of America, Wachovia, JPMorgan Chase, Goldman Sachs, Wells Fargo, Lehman Brothers, Merrill Lynch, Bear Stearns, AIG and Prudential. Despite automated teller machines replacing tellers and computer trading programs replacing financial analysts, employment in the financial sector grew from five million in 1980 to a peak of 8.3 million in 2006. In that year JPMorgan Chase employed 174,000 people; Goldman Sachs, which has no retail banking offices, employed around 23,000. These financial firms were not only too big to fail; they were too big to succeed.
The Derivatives Well
We may not yet have experienced the full brunt of our liability in this regard. The nickel has yet to hit the bottom of the well from the 1999–2000 legislation, and that well concerns the derivatives market most specifically. The unregulated securities derivatives market created by the CFMA in 2000 has been estimated to have a total nominal value between $200 trillion and $600 trillion. To grasp the magnitude of these numbers, consider that the annual GDP of all the world’s economies combined is estimated at around $60 trillion; the notional value of this one unregulated market is thus somewhere between three and ten times the size of the entire world economy.
No one knows precisely the nominal value of securities derivatives because the trades are not listed on any exchange and go unreported. We got a peek at the derivatives market only because the subprime crisis revealed disarray in the market for credit default swaps (CDSs). There are an estimated $67 trillion of these CDSs, some of which were supposed to protect subprime investors if their mortgage-backed securities turned out to be worthless. As the AIG bailout illustrated, when the time arrived to make good on the swaps contracts, no funds were available for payment.
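The basic mechanics of a credit default swap help explain why those notional figures matter. What follows is a stylized sketch with illustrative numbers, not the terms of any actual contract: the protection buyer pays the seller an annual premium on a notional amount $N$, and if the referenced security defaults, the seller owes the buyer the notional less whatever is recovered:

$$\text{annual premium} = s \times N, \qquad \text{payout on default} = (1 - R) \times N,$$

where $s$ is the agreed swap spread and $R$ the recovery rate. If, say, $N$ is $10 million and $R$ is 40 percent, a single default obliges the seller to produce $6 million in cash. Scale that arithmetic up to $67 trillion in outstanding notional value, with sellers like AIG holding no meaningful reserves against a wave of correlated defaults, and the shape of the 2008 panic becomes easy to see.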
We were warned before the CFMA was passed in 2000. In May 1997, Brooksley Born, Chairperson of the Commodity Futures Trading Commission, opposed proposals to deregulate the trading of futures, options and derivatives. In an interview posted on DerivativesStrategy.com she said:
I do think they need protection against fraud and manipulation in these markets. Currently the CFTC conducts market surveillance and oversight over all the futures exchanges. . . . The exchanges are required to keep detailed records. . . . None of those things would be available to professional markets under the proposed legislation. The federal government would not have an oversight role at all.
Born’s warnings were ignored, even though she had previously been a senior partner in charge of securities matters at the prestigious Washington law firm of Arnold & Porter. After the Fed decided it had to arrange a multibillion dollar bailout of Long Term Capital Management (LTCM) in the summer of 1998, Born testified before the House Banking and Financial Services Committee that the collapse of LTCM demonstrated the danger that an unregulated derivatives market posed to the American economy:
While the CFTC and the U.S. futures exchanges had full and accurate information about LTCM’s on-exchange futures position, no federal regulator received reports from LTCM on its OTC [non-exchange] derivatives position. Indeed, no reporting requirements are imposed on most OTC market participants. This lack of basic information about the positions held by OTC derivative market users and the nature and extent of their exposure potentially allows OTC derivatives market participants to take positions that may threaten our regulated markets or, indeed, our economy without any federal agency knowing about it. . . .
While traders on futures exchanges must post margin and while their positions are marked to market daily, no such requirements exist in the OTC derivatives market. LTCM reportedly managed to borrow so much that it was able to hold derivatives positions with a notional value of as much as 1,300 times its capital. . . . This unlimited borrowing in the OTC derivatives market—like the unlimited borrowing on securities that contributed to the Great Depression—may pose grave dangers to our economy.
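Born’s leverage figure deserves a moment of arithmetic, illustrative rather than exact. If a firm holds capital $C$ against derivatives positions with a notional value of $1{,}300\,C$, then the loss sufficient to wipe out its entire capital, expressed as a share of notional value, is

$$\frac{C}{1{,}300\,C} \approx 0.077\ \text{percent}.$$

In other words, losses amounting to less than a tenth of one percent of the notional value of LTCM’s positions would have exceeded the firm’s entire capital, which is why a single shock like the Russian default could bring it down.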
Despite these warnings, Clinton Administration officials did nothing. Worse, according to one source, Treasury Secretary Robert Rubin, Deputy Secretary Lawrence Summers and Fed Chairman Alan Greenspan tried to persuade Born to keep quiet lest she cause a panic.6 The upshot is that when the CFMA legislation was introduced, passed and signed in December 2000, there was no discussion or debate in Congress, and virtually nothing was said about the matter in the press.
Why were Born and others ignored? Several reasons come to mind. After the British deregulated derivative contracts in 1986, American financial institutions worried that they would lose a valuable market to the British. And perhaps Greenspan, Rubin and Summers were confident that, having avoided disaster after the 1987 stock market crash, the failure of LTCM, the Southeast Asian currency crisis and the Russian ruble crisis, they understood and could manage the financial markets. But third, and probably most important, was their faith in the Chicago School notion that markets normally self-regulate and are most efficient without government regulation. Indeed, Greenspan admitted that he had put too much faith in the self-correcting power of free markets and had failed to anticipate the self-destructive power of wanton mortgage lending. As he testified in Congress on October 23, 2008: “Those of us who have looked to the self-interest of lending institutions to protect shareholders’ equity, myself included, are in a state of shocked disbelief.” What is really shocking, however, is that after so much evidence had accumulated concerning the frailties of Chicago School thinking, any intelligent person could still believe in it.
Over the past thirty years a series of Administrations and Congresses have dismantled a system of public and private rules, developed over a century and a half of intermittent economic crises, that were intended to safeguard the American economy. These rules made transactions more transparent and our economy less vulnerable to the mistakes of supersized corporations. The faith that led to the abandonment of these rules has a name: Chicago School economics. That faith was the primary source of support for the creation of the least regulated parts of our financial sector.
The Chicago School’s basic assumptions about human nature and the intersection of economic and political life were not doctrinaire at the start, and they may have been in their day a useful warning against then-fashionable tendencies toward planned economies, the hubris of social engineering and assumptions of ultimate convergence between socialist and capitalist economies. But ironically, as economics asserted itself as an independent science, divorced from its partner, political philosophy, it became more politically entangled than ever. Modern politicians searched the Chicago School for the expertise necessary to manage a changing modern economy and found an empty closet. Once it abandoned its political concerns with economic power, Chicago theory, with its axioms of profit maximization, perfect information and self-correcting markets, had no advice to offer on limiting the downside risks of economic and financial disaster. The fruitful blending of social and economic concerns pioneered by Simons may not be suitable to a modern economy, but his concern about the dangers of centralizing economic power remains an issue that the Chicago School ignores. Doctrine supplanted healthy intellectual doubt, theoretical purity trumped common sense and historical memory, acolytes took over from masters, and a different kind of irrational exuberance was the result. We’re all now paying the price.
1See Alan Blinder, “Six Errors on the Path to the Financial Crisis”, New York Times, January 25, 2009.
2So did related earlier work associated with the University of Chicago, including that of Harry Markowitz (a University of Chicago Ph.D. who did not teach at the university) and Merton Miller (a Johns Hopkins Ph.D. who taught finance at the University of Chicago from 1961 until the mid-1990s), among others. See Justin Fox, The Myth of the Rational Market (HarperCollins, 2009).
3The drama is covered by Robert J. Samuelson in The Great Inflation and Its Aftermath (Random House, 2008).
4See my Megamergers: Corporate America’s Billion-Dollar Takeovers (Ballinger, 1985).
5Securities are stocks or bonds and other rights to assets owned by a business. Unlike securities, derivative contracts confer no rights of ownership in a business or even rights against the owner of a business. They are contracts with third parties about the future value of a business or asset. Such contracts might be viewed as similar to insurance contracts where a third party agrees to pay the buyer of the contract if the value of an asset falls below its original value, even though neither party to the contract owns the asset. Such contracts might also be viewed as bets between bystanders about whether the value of the assets will rise or fall.
6Katrina vanden Heuvel, “The Woman Greenspan, Rubin & Summers Silenced”, The Nation Online, October 9, 2008. See also the warnings of Lynn Stout, now the Paul Hastings Professor of Corporate and Securities Law at UCLA, in “Insurance or Gambling: Derivatives in a World of Risk and Uncertainty”, Brookings Review (Winter 1996).