It has been two years since the demise of Lehman Brothers and the ensuing rescue of incumbents in the U.S. financial sector. Many Americans are still furious that their government helped the rich and politically connected few while leaving the rest hung out to dry. The government bailed out Wall Street financiers who live in the top tenth of the top hundredth of the income distribution. Meanwhile, almost one quarter of families with mortgages remain stuck with negative equity in their homes.

Of course, the bailout was necessary: After Lehman collapsed the world economy quickly buckled, proving that the government would not have done ordinary Americans any favors if it had left other financial institutions to go down as well. But there can be no doubt that welfare for the well-to-do is corrosive to American capitalism and democracy. In an interview on 60 Minutes last December, President Obama protested that he “did not run for office to [bail] out a bunch of fat cat bankers on Wall Street.” Yet bail them out is what he did. The Treasury likes to say that it has made money on many of its rescues. It points out that its direct injections of equity into financial institutions, conducted under the $700 billion Troubled Asset Relief Program (TARP), will end up costing taxpayers remarkably little—according to Treasury Secretary Tim Geithner, less than $50 billion, although this estimate is contingent on the Treasury’s ability to sell its remaining stakes in AIG, Citigroup and General Motors at valuations that many private sector analysts consider to be wishful.
Geithner has called TARP “one of the most effective emergency programs in financial history”, while Steve Rattner, the Wall Streeter who oversaw the Obama Administration’s rescue of the auto sector, affirms that, “without exaggeration, this legislation [establishing TARP] did more to keep America’s financial system—and therefore its economy—functioning than any passed since the 1930s.” The message is that, rather than being denounced as politically corrosive, the bailouts should be celebrated as a bargain—a fantastically cheap victory against the threat of a second Great Depression. Geithner and Rattner may be partially correct. The government is likely to resell its stake in Citigroup, for example, for more than it paid in the depths of the crisis. But the omens are less good on its other big gambles. AIG has raised nearly $37 billion from selling foreign operations, but that will not be sufficient to repay the more than $180 billion that it received from the American taxpayer. Meanwhile, despite the triumphalism that accompanied the recent IPO of General Motors, the government only sold half its stake. To break even, the remainder would have to be sold at an average price more than 60 percent higher than its November market valuation. Even if the government could emerge with a profit from these deals, the “bargain bailout” narrative would be a false one. For one thing, as Rattner acknowledges, the TARP bailouts should have ended up costing even less than they did. In the heat of the crisis, the government provided capital when nobody else was willing to do so. It therefore had a right—indeed, it had a duty to taxpayers—to impose commensurately tough terms. It did no such thing. Instead, the government often provided its rescue funds at highly concessional rates. Three weeks before Goldman Sachs received $10 billion of TARP money, for example, it raised $5 billion from Warren Buffett on far more expensive terms. 
Furthermore, the government has been willing to renegotiate its deals with bailed-out firms at taxpayer expense. So far, it has altered the terms of the AIG bailout three times. All in all, by providing subsidized capital during the crisis, the government was enriching shareholders and protecting incumbent managers in the financial sector. In this sense there is a grain of fairness in the verdict reported by opinion pollsters: 58 percent of Americans deem TARP an “unneeded bailout.” Moreover, direct capital injections under the TARP program were only the smallest part of the broader government bailout of the American financial sector. The rescue also came via accommodative monetary policy. Low interest rates drove Wall Street’s cost of funds to zero, allowing banks to lock in easy profits. They could borrow from the Federal Reserve at zero percent and buy a Treasury bond yielding 3 percent: Simply by borrowing short-term money from one branch of the government and lending it back as long-term funds to another branch, they could generate profit without incurring any risk. Meanwhile, even as it pushed short-term interest rates to zero, the Fed purchased trillions of dollars of mortgage securities and Treasury bonds. The stated purpose was to stimulate the broad economy by driving down long-term interest rates. But this policy created a further windfall for anyone who owned mortgages and Treasuries—namely, the banks. Any full accounting of the costs of the bailout must reckon with this part of the policy. After all, zero interest rates come at the expense of ordinary savers. The government’s rescue of Wall Street also included the extensive use of asset guarantees. 
These guarantees were extended to more than $5 trillion of debt issued by government-sponsored mortgage insurers Fannie Mae and Freddie Mac, more than $306 billion of Citigroup assets and the nearly $3.5 trillion in the money market funds that provide short-term financing to the country’s major corporations and financial institutions. These guarantees amounted to free, unbudgeted assistance to Wall Street and big business: Holders of impaired bonds were spared losses, and issuers of impaired bonds were saved from being shut out of the market, a calamity that might have forced many of them under. Taxpayers have already paid more than $140 billion to cover losses on the debt of Fannie and Freddie, and Congress recently voted to remove the $400 billion cap on taxpayer liability for future losses. The government’s guarantees were perhaps most controversial in the AIG case. AIG Financial Products had written insurance contracts against the default of more than $100 billion worth of bonds and sold these credit default swaps to large investment banks around the world. Had AIG gone into bankruptcy proceedings, the value of these contracts would have been written down to perhaps 65 percent or so of face value, imposing losses on anyone who owned them. But the government declined to impose any such losses on AIG’s counterparties after it took over the firm, effectively transferring more than $30 billion from taxpayers to the banks in a kind of back-door bailout. The deliberations leading up to this decision have been treated like a state secret. As of late December, a Freedom of Information Act request to the New York Fed by the Wall Street Journal had yielded no results. In sum, the U.S. financial sector received subsidized TARP funds, subsidized financing from rock-bottom interest rates and free credit guarantees from the government. 
On top of all this, financiers pocketed a huge bonanza from the general rally in financial assets that followed the authorities’ rescue of the system. After the low point of March 2009, stocks gained about 60 percent, industrial commodities more than doubled, and “junk” bonds gained about 55 percent. In these circumstances, it was nearly impossible for the financial sector not to make money. And yet, as if to add insult to injury, the bankers, having pocketed the taxpayer bailout, then rewarded themselves handsomely with huge bonuses. Small wonder that large numbers of Americans regard financiers’ pay packets as undeserved. The mixed effects of the bailout can be detected in the non-financial economy as well. The corporate credit market has bifurcated into those big businesses able to take advantage of the bailout’s effects on the capital markets and small businesses that rely on financing from their local banks. Big businesses were among the principal beneficiaries of the blanket guarantee of money market funds, since these are their main source of short-term financing. They benefited from the Fed’s determination to pump money into the markets—a policy known as quantitative easing—since the resulting low interest rates allow them to borrow longer-term funds more cheaply than ever before. Microsoft, for example, recently issued a three-year bond at a record low interest rate of only 0.875 percent while Johnson & Johnson locked in a ten-year rate of only 3.14 percent. Investment-grade companies as a whole have borrowing costs that are more than one percentage point lower than they were before the crisis. Small businesses, however, which employ nearly three-quarters of all Americans, are still starved of financing despite the bank rescues and the fall in market interest rates. The Federal Reserve’s Senior Loan Officer Opinion Survey reported that, as of October, credit conditions for small businesses were beginning to ease for the first time since the crisis began. 
However, a survey of small businesses conducted by the New York Fed that same month shows that credit is still extremely difficult to come by: Nearly half of all loan applications were rejected. Additionally, according to a survey published in November by the National Federation of Independent Business, small business owners who are regular borrowers were experiencing tightening credit conditions and were expecting the situation to get worse. From the perspective of small business owners, there has been no recovery. Ordinary households are out in the cold, too. They face years of higher taxes as the government struggles to pay off the big jump in the national debt. One-quarter of all homeowners are trapped in homes with negative equity. Many are un- or underemployed. Unable to sell their homes, they cannot readily move to pursue new work opportunities. Those who still have jobs cannot refinance their mortgages at the low rates theoretically available because of much tighter credit standards. Meanwhile, more fortunate Americans are free to change jobs and fully able to take advantage of cheap financing. Again, the pain of the weak economy is unevenly shared, and those on the losing end often cannot help but wonder if the game is somehow fixed against them. That suspicion is heightened by the fact that the bailouts appear distressingly arbitrary. In the spring of 2008, Bear Stearns was saved from bankruptcy by the Fed, which injected its own money into the firm to induce J.P. Morgan to rescue it. Compare that with the fall of 2008, when Lehman Brothers found itself in a similar predicament and was left to go bankrupt.
Or, to cite another contrast: In the aftermath of the Lehman bust, creditors of the Detroit automakers were forced to endure a haircut on their debt even as creditors of Fannie Mae and Freddie Mac, the largest of whom were Middle Eastern sovereign wealth funds and the People’s Bank of China, were protected by the government.* If the answer is that some bailouts are essential because certain institutions are “too big to fail”, then why has the government promoted consolidation of the sector into even fewer and bigger institutions? The regrettable fact is that whenever the government acts to save the system while also trying to teach financiers a few lessons, it will make decisions in the heat of the moment that prove arbitrary in hindsight. This is bound to fuel conspiracy theories that particular institutions were treated better because they pull bigger strings in Washington. It can only reinforce public frustration with the bailouts and deepen suspicions of plutocracy at work, even when those suspicions are not justified by reality. This thumbnail history of the bailout and its economic and political consequences suggests that, even if the acute phase of the economic crisis has abated, the crisis of governance has not. As American corporations earn their largest profits ever, the un- and underemployed constitute nearly a fifth of the labor force. Long-term unemployment is at a record high. The spread between the interest rates on mortgage-backed securities and Treasury bonds is lower than it was before the crisis, yet the value of the houses that had served as the principal form of savings for much of the middle class has eroded. Had no public money been spent, had the Federal Reserve not taken unprecedented action, had the national debt not increased so dramatically, this might have been an acceptable, albeit painful outcome.
Yet the fact is that these are the disappointing results of a round of government activism in the economy the likes of which haven’t been seen since the 1930s. Of course, Americans have long tolerated disparities of income and wealth that would be intolerable in other rich nations. Even in the face of a widening income gap since the late 1970s, this tolerance has held up, partly because perception of the gap was obscured by a consumption binge financed by cheap credit. Moreover, even though the widening gap has made it steadily harder for those born into the bottom quintile to vault into the top quintile, most Americans still feel that they live in the land of opportunity. They do not begrudge the outsized success of the whiz kids in Silicon Valley, California, and Redmond, Washington, because they see it as a product of innovation, daring and raw intelligence. Nevertheless, the Wall Street bailouts of 2008–09 have strained this generosity of spirit. They have nourished the suspicion that many of the wealthy—especially those connected with the financial sector—are super-rich not on account of their acumen or enterprise but because of purchased political favors. It’s not hard to take large risks with other people’s money if you know your losses will be backstopped at taxpayer expense.

Damned If You Do, or Don’t
The crisis of governance in America now consists of finding ways to assuage a growing sense of basic injustice in the system, and to demonstrate that while some level of plutocratic “capture” of political behavior may exist, it is not to blame for the larger and broader problems with which we have been struggling.

One obvious option would be for the government to end the expectation that it will deliver more rescues in the future. It could erase the implicit promise of bailouts with an explicit declaration that financial institutions can fail, even if their failure threatens other institutions. This course of action has some theoretical appeal: There is no better way to ensure that banks behave responsibly than to force them to fend for themselves. In the era of “free banking” before the 1913 establishment of the Federal Reserve, depositors had a healthy fear for the security of their savings while banks had a healthy fear of being victims of a run. The result was that bankers managed their enterprises far more cautiously than they do now. In 1880, the average American bank held enough gold in reserve to repay about one-quarter of its debt at a moment’s notice. For a point of comparison, the new Basel III accord mandates that banks hold equity capital worth only 7 percent of their risk-weighted assets. Unfortunately, the matter is not so simple. The Federal Reserve System was established for a reason, and the conditions of its creation provide a warning to those who would now deprive bankers of their security blanket. One may feel nostalgic about the plush capital cushions held by banks in the pre-Fed era, but capital cushions exist to stabilize banks, and cushions or no, 19th-century banks were not actually very stable at all. To the contrary, they often failed, and with broadly painful consequences. The United States experienced banking panics in 1819, 1837, 1857, 1873, 1884 and 1893.
The crisis of 1873 closed the New York Stock Exchange for ten straight days, while the fallout from the crisis led to a depression in the United States that lasted nearly six years, the longest period of contraction on record. The crisis of 1893 forced President Cleveland to appeal personally to J.P. Morgan for a loan of $65 million in gold bullion. Not only did the 19th-century American financial regime fail to deliver stability; it would have failed even more spectacularly had it been allowed to survive into the modern era. The reason is that the effects of financial crises on the rest of the economy grow as the financial sector itself grows. And, much recent commentary notwithstanding, a dynamic economy requires a robust financial sector to support innovation and growth. As technology advances and firms specialize in ever more abstruse areas of production, the challenge of allocating scarce capital grows harder: Financial institutions are likely to hire more people to do the job. As consumers grow richer and communications grow cheaper, the best companies will realize economies of scale. Big companies need big loans, and it takes big financial institutions to provide them. Like it or not, the growth of the financial sector is inevitable as innovation expands the range of services it can provide. Currency hedges, for example, can usefully insulate manufacturers from foreign-exchange risk and give them the confidence to invest in new factories. Commodity futures allow producers and consumers to shield themselves against price swings. But to have currency or commodity hedges you need financial institutions that trade derivatives. For all these reasons, the U.S. financial sector is likely to grow even larger as the economy grows more sophisticated. 
Yes, Wall Street’s exuberant growth over the past quarter of a century accelerated thanks to the easy profits associated with the government’s implicit backstop, but we would be kidding ourselves if we supposed that absent this distortion the financial sector would shrink back to the share of GDP it occupied in the days of free banking. (Those who still doubt this point should note that other high-end service industries also tend to expand as economies grow more complex. Think of corporate law firms or management consultancies.) The best illustration of why the era of free banking had to end (and why we should not restore it) resides in the episode that did the most to end it: the Panic of 1907. Because it was the last financial crisis to occur in the United States in the absence of a government safety net, the Panic of 1907 provides a window on what a world without such safety nets might look like.1 Its first lesson is that dangerous credit cycles can emerge even in the absence of moral hazard. The quarter century before the panic in some ways resembled the quarter century before the crisis of 2007–09: Strong growth and technology-driven productivity gains gradually reduced the perceived risk of lending capital in the United States, with the result that the conservative reserve ratios of the early 1880s were eroded from about one-quarter of liabilities to about one-sixth. Strong growth plus easier credit drove up asset prices, which in turn drove up collateral values, which in turn encouraged even easier credit. People felt wealthier; consumer spending rose; the result was more business investment and more credit. Then an earthquake struck San Francisco in April 1906. In addition to the damage caused directly by the quake, ruptured underground gas mains fed a fire that consumed about half the city. The estimated damages amounted to about 1.5 percent of the entire national income of the United States at the time. 
Like AIG in the credit crisis, the insurance companies that had sold protection for an apocalyptic disaster now became the natural vectors for financial contagion—with the difference that at the dawn of the last century, the Feds were not about to step in and cut the contagion off. To meet the flood of earthquake claims, insurance companies shipped gold from London and New York to San Francisco, draining the financial system of liquidity. Easy money turned so tight that by the end of March 1907, American stocks had declined about 20 percent from their peak. The situation deteriorated further in the summer when the Bank of England, which was worried about domestic gold shortages after the earthquake triggered a spike in shipments to the United States, banned all refinancing of American debt. One consequence of this decision was that about a tenth of America’s gold reserves were sent back to the United Kingdom between May and August. Coming on top of the diversion of gold to San Francisco, this draining of liquidity from U.S. debt markets left borrowers scrambling. In June, New York City failed to find sufficient subscribers to a bond issue and nearly defaulted. In October, bank runs threatened several large trust companies, the forerunners of today’s investment banks. One prominent trust, the Knickerbocker, nearly went broke after participating in a failed leveraged buy-out of a copper firm. Another, Moore & Schley, had used its holdings in the Tennessee Coal, Iron and Railroad Company (TC&I) as collateral for loans. As it faced a run by depositors it became a distressed seller of TC&I stock, nearly bankrupting one of America’s largest industrial firms. By late 1907, American equities were worth half as much as they were on the eve of the earthquake. Unemployment jumped from less than 3 percent to more than 8 percent. The crisis in the financial economy was spilling over into the non-financial economy.
In the absence of a TARP-like program, credit guarantees, emergency central bank liquidity and the whole paraphernalia of 21st-century government intervention, no mechanism existed to halt the downward cycle. Each collapse threatened to trigger another, the contagion potentially carrying on indefinitely. And so, in one of the legendary episodes of Wall Street history, a private citizen stepped forward to fill the policy vacuum. J.P. Morgan, the era’s preeminent banker, literally locked the biggest financial players of his day in his library until they agreed to participate in a multipurpose bailout fund. A hybrid between the TARP and the emergency Federal Reserve credit lines, this fund prevented runs on major securities houses and trust companies while also financing the acquisition of TC&I by its rival, U.S. Steel.2 The government supported this bailout by improvising the sort of intervention that the Federal Reserve would undertake later. The Treasury extended a credit line of $25 million to J.P. Morgan’s consortium and simultaneously issued $40 million in new gold bonds. Though ostensibly meant to finance the Panama Canal, these bonds could be used by the banks as safe collateral for the issuance of new banknotes. Even in the era of the gold standard, there was no alternative to printing money. The bailout worked; both the real and financial economies recovered quickly. But the shock was sufficiently profound to impel Congress to investigate the causes of the panic so as to devise ways to prevent a recurrence. President William Howard Taft seized on the popular idea of using anti-trust law to break up the financial sector into smaller institutions, and in that effort he launched a series of suits from the Attorney General’s office. Meanwhile, Congressmen inspired by William Jennings Bryan called for a central bank that would be, not a lender of last resort to Wall Street financiers, but a printing press that would help poor farmers.
The leaders of the big banks lacked enthusiasm for either course, and in the end they prevailed. The titans of Wall Street and senior representatives from Treasury met in secret to devise an alternative approach, at whose center was a government-backed yet industry-operated National Reserve Bank. In 1913, Wall Street’s preferred plan was realized with the creation of the Federal Reserve System. The main lesson to be learned from all this is that even at the dawn of the 20th century, when the financial sector was far smaller and the economy less complex and interdependent than today, the United States was unable to bear the pain of a world without safety nets. The notion of returning to the laissez-faire idyll of the 19th century is therefore nothing short of fanciful. And yet its appeal is understandable because, in 1907 just as now, the consequences of financial safety nets are so disturbing. Just as they are today, taxpayers then were caught in a vise. On the one hand, the absence of an official safety net had created a panic so profound that the government felt compelled to intervene, even if on an unofficial, ad hoc basis under the aegis of a private citizen. On the other hand, the response to the panic was to create precisely the sort of safety net that ultimately made the banks larger, more prone to risk-taking and more important to the welfare of the real economy than ever before. Over the course of the ensuing century, financial crises led inexorably to an even bigger safety net and even more risk-taking, producing a vicious cycle of increasing profits for bankers and increasing liabilities for everyone else. We found ourselves damned if we didn’t create a safety net, and damned if we did. The traditional approach to breaking this cycle, especially in the wake of the Great Depression, has been regulation.
If the financial system required safety nets, and if the safety nets encouraged greater risk-taking, then the solution was for regulators to impose limits to risk-taking. Reserve capital requirements for banks have been one expression of this logic. Because of the government backstop, banks were taking on excessive leverage; the 25 percent capital ratios of the late 19th century were a quaint memory. So government decreed what the safe level of leverage was and imposed that limit on banks. Having incentivized risk-taking through one intervention (the creation of the Federal Reserve), the government constrained it with a second intervention (the minimum reserve capital ratio). Yet regulation has proved to be an inadequate tool for breaking the vicious cycle. For one thing, regulations tend to sow the seeds of their own undoing. The better they are at maintaining the stability of the system, the less necessary they appear. The longer markets stay calm and financiers go without suffering the indignity of a sharp loss, the more it becomes tempting to suppose that new financial technologies have enabled investors to tame risk. In these circumstances, calls for reforming seemingly outmoded regulations become common. Surely, it is claimed, limitations on leverage are obsolete in light of new, sophisticated risk modeling? (This argument made possible the Basel II capital accord, which deferred too much to the bankers’ own risk models.) Surely there is no need to supervise new markets in complex securities? (This view caused government to be too slow in driving systemically dangerous over-the-counter transactions onto exchanges.) The problem is compounded when genuinely bad regulations, such as ceilings on interest rates that banks can offer depositors, live among good ones, such as minimum reserve requirements. 
At some point, the rules are loosened, either by conscious choice or as the enforcers soften supervisory standards bit by bit, perhaps without realizing that they are doing so. The second problem with regulation is the assumption that regulators are capable of understanding risks at financial institutions better than the risk managers at those same institutions. A financier is paid very well to understand the dangers in his portfolio, has the potential to lose much of that income if he is wrong, and likely has superior information regarding the underlying quality of the assets. A regulator is not likely to be well-compensated compared to someone equally qualified in the private sector, has less at stake if he misses something, and lacks specific information about the assets he has been assigned to assess. No one in government, after all, was fired for failing to foresee the financial crisis. For all these reasons it is naive to expect preventive miracles from regulators. Moreover, it is not at all surprising that those regulators who did see the crisis coming failed to clamp down on excessive risk-taking. The Monday morning quarterbacks now declare that the disaster was obvious, and that only a blind faith in markets (or plutocratic corruption) prevented government from acting decisively. But it’s just not so. Suppose that, in 2005, a regulator had been concerned about the extent to which the health of the financial system depended on the high performance of mortgage loans. Would that regulator have been willing to assert that AAA-rated mortgage securities were worth far less than their face value, especially when the bankers who held them, the ratings agencies and the government-sponsored mortgage guarantors all seemed confident? Had the regulator originated the securities? Done due diligence on the borrowers? What about the presumably sophisticated investors from all over the world who had purchased trillions of dollars of these securities? 
What could the regulator possibly know that everyone else, including people with far greater financial incentives, supposedly missed? And why were the bureaucrats so hostile to the ideal of homeownership? In the wake of the financial crisis, regulators now declare that henceforth they will engage in “macro-prudential surveillance”, using their power over the financial sector to force more conservative behavior when a bubble seems to be building. But bubbles are only obvious in retrospect. Prospectively, all judgments in markets are uncertain. Nobody can have perfect information. It is too much to expect that regulators will face down financial mania when the best and brightest in the private sector are convinced that all is well. Many argue that a regulatory agency can nonetheless add value by aggregating the information from major institutions that would otherwise lie about in fragments. In principle, this exercise might make it possible to determine risks to the financial system that spring not from weaknesses at one company but rather from the “crowding” of multiple institutions into particular positions. Because individual private institutions are not in a position to collect information from their rivals, this sort of “systemic” analysis might indeed provide regulators with an informational edge over the private sector—and hence with a basis for second-guessing their behavior in the run-up to a crash. That is the rationale for the new systemic regulator established by the Dodd-Frank financial reform. Its theoretical promise notwithstanding, the systemic regulator’s job may turn out to be a mission impossible. Many major market players are foreign and so may not cooperate with U.S. demands for information. Even if the foreign players do cooperate, it is doubtful that a regulator can collect portfolio positions in real time and in detail. A failure on either front will render it impossible to determine the true risks to the financial system. 
Moreover, even if the systemic regulator clears these hurdles, it will have a hard time turning information into action. If it determined, for example, that too many leveraged traders had piled into Brazilian stocks, it might fear that a shock to the market would trigger destabilizing fire sales as everyone rushed for the exit. To prevent a crash, it might want to order U.S. institutions to reduce their Brazilian bets. But the U.S. banks might retort that, if they sold positions, Europeans or Asians would pile in instead. If so, the risk of a Brazilian blow-up would persist and so would the risk to U.S. banks: If the Europeans and Asians took losses in Brazil, they would dump positions elsewhere, driving down other assets held by U.S. institutions. From the point of view of U.S. banks, therefore, pulling out of Brazil would bring uncertain risk-reduction benefits in the future in exchange for forgone profits in the short term. So if the systemic regulator wants banks to pull out of Brazil, it must expect resistance. It must be prepared to press its point even in the absence of certainty about Brazil’s prospects, and in the knowledge that, if it is wrong, it will be penalizing its own country’s financial sector and benefiting foreign rivals. Time will tell whether the Dodd-Frank systemic regulator will have the fortitude required by its mission. And when one also considers that the existence of a systemic regulator could lead market participants to believe that risks are better monitored and controlled than they actually are, the Dodd-Frank innovation may turn out to be a source of harm, not benefit.

A Different Hedge
If regulatory initiatives such as bank capital standards and systemic regulation are frail, we are back to the vexing question: What happens when we can’t live with government bailouts of the “too big to fail”, and can’t live without them? One partial answer harkens back to one of the paths not taken after the Panic of 1907: Rather than stand by as the financial sector is consolidated into ever larger firms, we should spread the risk of the system across more, smaller institutions that are better at handling it.

There are Taft-like voices in the post-2007 debate, echoing those heard after 1907, that support versions of the small-enough-to-fail doctrine. Some have suggested that institutions which are too big to fail must be deemed too big to exist: They should be broken up in a new round of trust-busting. Although the Dodd-Frank financial reform mainly shrank from this idea, the so-called Volcker rule, which was included in the reform, is intended to force banks to spin out some of their proprietary trading. The UK government and the London School of Economics, in anticipation of the November G-20 meeting in Seoul on financial stability, published a report in August that argued for far tougher measures.3 Andrew Haldane of the Bank of England and Simon Johnson, formerly of the IMF, have both taken outspoken positions against the current concentration and size of the banks. Charles Goodhart, formerly of the Bank of England and a professor at the London School of Economics, has proposed taxing bank leverage, while Beatrice Weder di Mauro, a prominent economist at the University of Mainz who serves on Germany’s version of the U.S. Council of Economic Advisers, advocates a “systemic risk” tax on banks to discourage consolidation. These proposals are eminently reasonable. The size of a financial institution is like the pollution emitted from a car: It imposes costs on society, so society should tax it.
Unfortunately, for reasons of political correctness, governments are not embracing another possible solution to the too-big-to-fail problem: encouraging hedge funds. To most voters, the very idea sounds outlandish. Aren’t hedge funds the most dangerous part of the financial system, the bit that is most in need of taming? No, they’re not. Contrary to myth, hedge funds make fewer egregious misjudgments than most other financial institutions. And when they do make mistakes, they require no taxpayer bailouts. Rather than focus energy on the difficult task of regulating the part of the financial sector that has proved itself dysfunctional, policymakers should consider the complementary approach of enabling the part of the sector that navigated the crisis successfully.

Why are hedge funds superior? The short answer is that hedge funds have better incentive structures than other financial vehicles.4 Because they take a large performance fee, they have a motive to do their own research rather than follow the crowd. This leads them to be more contrarian and more likely to avoid buying into bubbles than their rivals. Likewise, because hedge fund managers often keep their own savings in their funds, they have a powerful reason to avoid crazy risks. Unlike proprietary traders within an investment bank, who take risk in the knowledge that a large failure will be the shareholders’ problem, hedge fund managers take risk in the knowledge that failure will be their problem, too.

Yet far from embracing hedge funds, most governments are inclined to restrain them. In the United States, the Dodd-Frank reform requires even small hedge funds to register with the Securities and Exchange Commission, a burden that will discourage smaller funds—precisely those that are most attractive from the standpoint of avoiding systemically dangerous consolidation.
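The incentive contrast drawn above can be reduced to toy arithmetic. Assume, purely for illustration, a bank trader paid a 10 percent bonus on profits with no clawback, against a manager who charges a 20 percent performance fee and keeps 5 percent of the fund’s capital as his own savings; none of these figures comes from the article:

```python
# Toy payoff arithmetic for the incentive contrast between a bank's
# proprietary trader and a hedge fund manager. The bonus rate, performance
# fee, and co-investment share are hypothetical round numbers.

def prop_trader_payout(pnl_mn, bonus_share=0.10):
    """A trader inside a bank: a share of profits, but losses fall on
    shareholders, so the personal payout floors at zero."""
    return max(0.0, pnl_mn) * bonus_share

def fund_manager_payout(pnl_mn, perf_fee=0.20, co_invest_share=0.05):
    """A hedge fund manager: a performance fee on gains, plus the gain
    or loss on the manager's own capital kept in the fund."""
    return max(0.0, pnl_mn) * perf_fee + pnl_mn * co_invest_share

# Compare personal payouts (in $mn) on a winning and a losing year.
for pnl in (100.0, -100.0):
    print(pnl, prop_trader_payout(pnl), fund_manager_payout(pnl))
```

On a $100 million winning year both are paid handsomely; on a $100 million losing year the trader’s payout floors at zero while the manager loses $5 million of his own money. That one-sided bet for the trader, and two-sided bet for the manager, is the asymmetry the paragraph describes.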
In Europe, France and Germany have opposed hedge funds with the same vigor with which they defended their notoriously unhealthy and over-leveraged banks at the Basel negotiations. Regulators across the developed world have bridled particularly at hedge fund secrecy. Classic hedge fund practices such as algorithmic trading and short-selling have been depicted as hostile to the interests of investors, despite the fact that they increase market liquidity and help dampen excessive swings in asset prices. Even when they admit that they have no evidence to suppose that hedge funds are destabilizing, policymakers still insist on regulating them out of a sense of caution. The problem with this mindset is that it is not cost-free. Already, the Securities and Exchange Commission is set to expand its staff by about one third in order to implement the Dodd-Frank reform. Adding an unproductive mission of hedge fund regulation to the SEC’s overflowing plate only increases the chances that the implementation of reform will fall short of expectations.

The truth is that hedge funds represent an opportunity to maintain the quality of our capital markets while reducing the risk they pose to taxpayers and society. They offer a chance to return to the path not taken after 1907: Rather than responding to a crisis with safety nets from the government plus consolidation on Wall Street, we should prefer safety nets combined with fragmentation. It is almost Orwellian that policymakers everywhere should be lamenting the power of too-big-to-fail financial institutions and yet refusing to celebrate a ready-made alternative. Hedge funds cannot replace all of a bank’s functions. They are not going to issue credit cards or serve retail customers. But when it comes to complex asset management, they can take over from safety-net-distorted rivals, and we would all be better off if they did.
Of course, there is an irony in starting an essay with the problem of rising inequality and ending it with a prescription to embrace those famously rich hedge fund moguls. But the solution to hedge fund riches is not to rein in hedge fund trading. Unlike banks, which extract hidden subsidies from government, hedge fund profits are a sign of health. They are a measure of the funds’ success in allocating capital and absorbing risk efficiently. The goal of public policy must be, first, to welcome that efficiency, which has benefits for the wider economy, and second, to tax a larger share of the profits in order to address legitimate concerns about inequality. Embracing small-enough-to-fail hedge funds as a partial alternative to too-big-to-fail banks is not a panacea. We will still have crises and there will still be bailouts. But the tradition of robust, unsubsidized risk management at hedge funds is surely a window into a better financial system than the one we have now. To a surprising and unrecognized degree, the future of finance lies in the history of hedge funds.
*The version of this article that appeared in the print edition of The American Interest included this sentence: “Why did the Fed extend emergency liquidity to GMAC, the finance company of General Motors, yet refuse to do the same for GE’s consumer finance arm?” However, on December 1, after that issue went to press, it was revealed that GE did, in fact, borrow $16 billion from the Federal Reserve in late 2008.
1The history that follows is based on Robert F. Bruner and Sean D. Carr’s The Panic of 1907: Lessons Learned from the Market’s Perfect Storm (John Wiley & Sons, 2007).
2This last act bears striking similarities to the purchase of Bear Stearns by JPMorgan Chase with a $29 billion loan from the Fed in March 2008.
3Adair Turner et al., The Future of Finance: The LSE Report (August 2010).
4Readers who seek the long answer can consult Mallaby, More Money Than God (Penguin Press, 2010).