The answer to the question “Is America a plutocracy?” might seem either trivial or obvious depending on how one defines the term. Plutocracy, says the dictionary, simply means “rule by the rich.” If the query is taken literally to mean that the non-rich—the vast majority of American citizens—have no influence in American democracy, or that the country is self-consciously ruled by some hidden collusive elite, the answer is obviously “no.” On the other hand, if the question is taken to mean, “Do the wealthy have disproportionate political influence in the United States?” then the answer is obviously “yes”, and that answer would qualify as one of the most unsurprising imaginable. Wealthy people have had disproportionate influence in most polities at most times in history.
Of course, one can argue endlessly over who qualifies as being rich, whether the rich constitute a social class capable of collective action, how open or closed that class is, what constitutes real political power in today’s America, and so on. But if the question remains as simple as those articulated above, the basic answer will not change or be of much interest.
This is not, however, what this issue of The American Interest means by plutocracy. We mean not just rule by the rich, but rule by and for the rich. We mean, in other words, a state of affairs in which the rich influence government in such a way as to protect and expand their own wealth and influence, often at the expense of others. As the introductory essay to this issue shows, this influence may be exercised in four basic ways: lobbying to shift regulatory costs and other burdens away from corporations and onto the public at large; lobbying to affect the tax code so that the wealthy pay less; lobbying to allow the fullest possible use of corporate money in political campaigns; and, above all, lobbying to enable lobbying to go on with the fewest restrictions. Of these, the second has perhaps the deepest historical legacy.
Scandalous as it may sound to the ears of Republicans schooled in Reaganomics, one critical measure of the health of a modern democracy is its ability to legitimately extract taxes from its own elites. The most dysfunctional societies in the developing world are those whose elites succeed either in legally exempting themselves from taxation, or in taking advantage of lax enforcement to evade them, thereby shifting the burden of public expenditure onto the rest of society.
We therefore raise a different and more interesting set of questions regarding the relationship between money and power in contemporary America. All these questions come together, however, in a paramount puzzle: Why has a significant increase in income inequality in recent decades failed to generate political pressure from the left for redistributional redress, as similar trends did in earlier times? Instead, insofar as there is any populism bubbling from below in America today it comes from the Right, and its target is not just the “undeserving rich”—Wall Street “flip-it” shysters and their ilk—but, even more so, government policies intended to protect Americans from their predations. How do we explain this?
Let us start by describing the contemporary landscape in which this question arises. It is well established that income inequality has increased substantially in the United States over the past three decades, and that gains from the prolonged period of economic growth that ended in 2007–08 have gone disproportionately to the upper end of the richest layer of society. A study by Thomas Piketty and Emmanuel Saez shows that between 1978 and 2007, the share of U.S. income accruing to the top 1 percent of American families jumped from 9 to 23.5 percent of the total. These data point clearly to the stagnation of working class incomes in the United States: Real incomes for male workers peaked sometime back in the 1970s and have not recovered since.1
The growing disparity in outcomes has coincided with a period of conservative hegemony in American politics. Conservative ideas clearly had to do with the rise in inequality: The liberal (in the original 19th-century meaning of the term) economic model favored by Ronald Reagan was intended to open the doors to greater competition and entrepreneurship, which necessarily meant that gains from growth would go disproportionately to those best prepared to create wealth. Periods of rapid growth nearly always increase concentrations of capital and hence income inequality, but, as pro-market advocates have repeatedly told us, growth also nearly always trickles down over time to all or nearly all class cohorts.
As the years went by and those outsized gains at the top of the income distribution pyramid failed to trickle down in any substantial way, one would have expected growing demand for a left-leaning politics that sought, if not to equalize outcomes, then at least to bound their inequality. That did not happen. The Democratic Party, which one would have expected to be the principal focus of such political advocacy, floundered. It managed to regain majorities in the House and Senate, and it did hold the presidency between 1993 and 2001 (and, of course, regained it in 2009), but its electoral successes have not turned on economic fairness issues. To an unexpected degree, Democrats drank the Kool-Aid of market fundamentalism during the 1990s and in so doing reflected larger intellectual trends. (Indeed, when Al Gore’s 2000 campaign deigned to invoke class inequality issues as one of its themes, it arguably backfired.)
The financial crisis of 2008–09 has only deepened the mystery. The crisis laid bare some unpleasant facts about American capitalism. The banking industry lobbied heavily in the 1990s to further free itself from regulation, a trend that began in earnest with the Depository Institutions Deregulation and Monetary Control Act of 1980. This resulted in, among other things, the 1999 Gramm-Leach-Bliley Act, which enabled the emergence of large “universal” banks and a non-transparent market in derivatives. Before the bust in the U.S. housing market, the rapidly expanding financial sector took home some 40 percent of all corporate profits, and yet it was responsible for an implosion that not only nearly wiped out the banks themselves but imposed huge costs on innocent bystanders both in the United States and abroad. It also cost U.S. taxpayers an enormous sum in bailouts.
What was truly troubling, however, was that the collapse undermined the fundamental moral justification for material inequality in a politically egalitarian society. Basic to the legitimacy of market capitalism is the efficient market hypothesis—that is, the notion that in a truly competitive market everyone earns something close to his or her “social” rate of return. This means, in other words, that if your investment banker earns 100,000 times as much as your plumber, it’s because he or she is contributing roughly 100,000 times as much to society’s total pool of wealth.
The crisis made it glaringly obvious that the efficient market hypothesis was wrong: Oversized returns were flowing to innovative financial entrepreneurs who, in their avidity to create new and more complex financial instruments and products, were destroying rather than creating value for society as a whole. The crisis also shed bright light on the fact that corporate America was doing very well for its officers and shareholders (many of whom were not American citizens), but much less well for those Americans awaiting the trickle as jobs were outsourced and automated by the millions. Perhaps corporate America’s social rate of return approximated the expectations of the efficient market hypothesis, but only if “social” no longer referred to American society alone.
The crisis, exploding as it did in the midst of the 2008 presidential election, clearly helped Barack Obama at the expense of his Republican rival John McCain, in part because the public associated Wall Street with Republicans, and also, of course, because the debacle broke forth during the tenure of a Republican President. The new Administration went into office believing, however, that its victory signaled a fundamental realignment of American politics along the lines of the 1932 election that swept Franklin Roosevelt into power. Administration principals thought they had a mandate to move the country sharply to the Left—hence the fiscal stimulus bill, a bailout of the auto companies that left the government owning a large share of them, a major healthcare reform initiative, and an attempt to design a new regulatory framework for the banks.
But as it turned out, Obama was not riding a tide of left-wing populism. While the Democratic majorities in Congress succeeded in moving this ambitious legislative agenda forward, the results fell far short of expectations. The stimulus package did not produce stunning economic successes. The healthcare bill did not include a public option, and failed to address the real sources of cost inflation. Above all, the Dodd-Frank financial regulation reform bill did not change the perverse incentives that led to the crisis in the first place. Indeed, while Wall Street brought considerable opprobrium on itself, it was arguably the sector of the U.S. economy that suffered the least in the long run. Bank earnings were restored after a couple of quarters. And though the banks now face tougher regulation, Congress failed to do anything about the fact that investment banks are still too large and too interconnected to fail, and will surely be bailed out again when they get in trouble. Indeed, the U.S. financial sector is now concentrated in fewer hands than it was before the crisis.
One of the reasons for all these political shortfalls is that populist momentum swung sharply to the Right, as evidenced by the rise of the Tea Party movement and the Republican capture of the House of Representatives in the 2010 midterm elections. This swing did not happen all by itself. Wall Street spent a huge amount of money lobbying to make sure that the inevitable financial regulation was as weak as possible. This led former IMF chief economist Simon Johnson to argue that the United States was dominated by an oligarchy not too different from the ones he encountered in Russia and a variety of developing countries. Huge pharmaceutical companies and their lobbyists loomed over the healthcare legislation, as well, to the point that the White House, sensing their clout, deferred before the fact to most of their preferences.
But money alone does not create political trends in the United States. Within a year of Barack Obama’s inauguration, the most energized and angry people on the American political scene were not the homeowners with subprime mortgages who faced foreclosure as a result of the crisis, but rather those who faulted the government for taking steps to protect those homeowners, and to prevent the crisis from deepening. It was a strange phenomenon that saw many of those most deeply injured by the crisis become, in effect, objective allies of those who caused it.
This, then, is the contemporary context in which we raise the question of plutocracy in America: Why, given the economic history of the past thirty years, have we not seen the emergence of a powerful left-wing political movement seeking fairer distribution of growth? Why was Obama pilloried during the 2008 campaign for even using the word “redistribution”, when all modern democracies (including the United States) already engage in a substantial degree of redistribution? Why has anti-elite populism taken a right-wing form, one that sees vast conspiracies not among private-sector actors like bankers and hedge-fund operators, but among government officials who were arguably trying to do no more than protect the public against real collusions if not outright conspiracies? Why have there been so few demands for a rethinking of the basic American social contract, when the present one has been revealed to be so flawed? How can it be that large numbers of congressional Democrats and arguably the most socially liberal President in American history are now seriously considering extending, and even making permanent, the Bush tax cuts of 2001 and 2003? Is this not prima facie evidence of plutocracy?
There are several possible answers to these questions. The most frequent response from the Left is “yes”—corporate America can protect its interests through lobbyists and campaign contributions, and money does lock in its advantages and defeat all efforts at campaign finance reform. The American plutocracy, they add, has also profited from a pliant Supreme Court, which in its Citizens United decision of January 2010 ratified the view that corporations are tantamount to individuals with constitutionally protected rights not only to be a party to business contracts but also to political speech.
There is no question that money buys political influence in ways large and small in contemporary America, and that lobbying has become a form of legitimized corruption in many cases. But there are a number of problems with seeing this as the sole explanation for the absence of a cohesive political Left. Corporate America is not the only source of campaign donations; labor unions, Hollywood moguls and many liberal Wall Street financiers donate generously to their favored causes. Corporate America, moreover, is not a monolithic actor, but represents a huge variety of often-conflicting interests. Money often follows grassroots political trends rather than creating them.
A second explanation has to do with American exceptionalism. Many observers through the years have noted that Americans are much less bothered than Europeans by unequal economic outcomes, being far more concerned about equality of opportunity. The classic explanation for this has to do with the fact that America was (for recent immigrants, at least) a land of new settlement with few inherited status privileges, imbued with a Lockean liberal belief in individual opportunity. Americans tend to think that individuals are responsible for their own life outcomes; they often distinguish between the “deserving” and “undeserving” poor, the latter of whom are poor as a result of their being, in Locke’s phrase, “quarrelsome and contentious.” Americans care less about equality of outcomes than the possibility of social mobility, even if such mobility takes generations to achieve.
This Lockean emphasis on individual responsibility manifests itself in several distinctive ways. Large numbers of Americans, for example, favor abolishing the inheritance tax (commonly denounced by the Right as the “death tax”), even though only a very small minority of them can ever hope to leave the world with sufficient assets to be subject to it. It also explains why Congress, with the support of President Clinton, abolished the New Deal program Aid to Families with Dependent Children as part of a broad welfare reform, under the rubric of legislation tellingly labeled the “Personal Responsibility and Work Opportunity Reconciliation Act of 1996.”
This aspect of American political culture is insufficient, however, to explain why there has been so little left-wing populism in the early 21st century. For despite their Lockean beliefs, Americans of past generations have supported substantial redistribution, not just during the New Deal and Great Society eras, but when the nation first imposed a highly progressive national income tax around the time of the First World War.
Moreover, while there is no evidence that America’s rate of intergenerational social mobility has declined over time, that rate is not nearly as high as many Americans believe, and indeed it is not as high as the rate in some other developed countries. The distinction between equality of outcome and equality of opportunity is in any event not as clear as it might first appear: Better-off people employ all sorts of strategies for passing their status on to their children, from the sorts of neighborhoods they can afford to live in to legacy admissions to elite universities. So we need further explanations for why there has not been more of a backlash from those left behind.
A third possible reason for the absence of redistributionist populism is much more time-specific: Americans have learned to distrust big government in a way they had not in the period from 1933 to 1969. Like taxpayers in Latin America, but unlike Swedes, Danes and Germans, Americans don’t want to pay taxes because they are convinced that the government will waste whatever it takes in. Whether this is a fair assessment of our state capacity is a different matter; the efficiency of the U.S. government varies tremendously depending on level, geography, function and the like. It has both excellent agencies (the Marine Corps) and terrible ones (the former Immigration and Naturalization Service). Since the Reagan years, however, many Americans have come to believe that the experience of the New Deal and Great Society demonstrated the inability of “big government” to spend money wisely, or indeed to spend it at all without producing harmful unintended consequences. They are therefore unwilling to countenance expansion of the state into areas like health care even if they do not object to such spending in principle.
A fourth explanation is offered by Raghuram Rajan in his recent book Fault Lines: How Hidden Fractures Still Threaten the World Economy (2010). Rajan argues that the working and middle classes whose incomes either stagnated or fell during the past generation were in effect bought off by cheap credit: The flood of capital coming in from Asia and other surplus countries, creatively packaged by the banks and quasi-public institutions like Fannie Mae and Freddie Mac, allowed people to borrow against the future and enjoy standards of living that were in the end unsustainable. In his view, the day of reckoning has finally arrived: Cheap credit masked inequality at least in the sense that it enabled many people to increase their consumption, even if they could not keep up with richer cohorts who were increasing their consumption even faster. Now that easy credit has dried up, people grow angry when confronted with the stark reality that their bankers have done far, far better than they.
A final explanation lies in the realm of ideas, and comes closest to a Marxist plutocracy-conspiracy theory. Simon Johnson’s view that Wall Street constitutes an oligarchy manipulating the political system in a manner uncomfortably similar to the Russian oligarchs or other developing country elites does not ring true because it does not take account of ideas. At some level, corrupt developing-country elites know they are getting away with murder (sometimes for real); they rarely try to justify their self-enrichment to themselves in moral terms. American elites, however, tend to believe they are helping society as a whole even as they help themselves. Thus the centrality of the efficient market hypothesis: Financiers proudly see themselves as “value creators”, not as highbrow pickpockets of widows and orphans.
Standing behind this moral view is the entire edifice of modern neo-classical economics, which has played a hugely important role in legitimating the contemporary, finance-heavy version of market capitalism. The intellectual turn taken by the economics profession from Marxist or Keynesian models to strict monetarism and the Chicago School occurred right about the time that Reagan and Thatcher emerged on the political scene, and served to provide a seemingly scientific justification for market liberalization. As Seth Colby and I have argued, much of this framework was empirically justified with regard to trade and investment, but little empirical ground existed for believing that capital market liberalization would have beneficial effects.2 (Indeed, as Carmen Reinhart and Kenneth Rogoff’s book This Time Is Different demonstrates, data from the past two hundred years shows that hasty liberalization of the financial sector is highly dangerous.) Well-founded microeconomic theories concerning the efficiency of single markets were blown up and applied to all sectors of the macroeconomy, despite the fact that at an aggregate level many get rich by taking advantage of market failures, asymmetric information or political influence. The mathematization of modern economics, too, gave it the aura of a true science, the only one of the social sciences whose practitioners believe they stand in the same league as physicists. Ben Bernanke’s “great moderation” of the 2000s was in effect just the latest iteration of the mantra “this time is different.”
So here is the evidence for an American plutocracy of a narrow and discrete but hardly harmless sort. Wall Street seduced the economics profession not through overt corruption, but by aligning the incentives of economists with its own. It was very easy for academic economists to move from universities to central banks to hedge funds—a tightly knit world in which everyone shared the same views about the self-regulating and beneficial effects of open capital markets. The alliance was enormously profitable for everyone: The academics got big consulting fees, and Wall Street got legitimacy. And it has kept the system going despite the enormous policy failures it has generated, not least the recent crisis.
Another set of ideas was of even more direct help to the wealthy: Reaganomics. Supply-side economics provided a principled justification for the rich paying lower taxes on the grounds that entrepreneurial incentives unleashed by lower marginal tax rates would not merely trickle but pour down both via public finance and through the creation of employment. This argument was likely true at the near 90 percent marginal rates that prevailed after World War II, but those rates were reduced in several waves beginning in the 1960s. Clinton’s tax increases of the early 1990s brought rates up only slightly, and didn’t have the growth-killing effects widely predicted by Republicans; just the opposite, they preceded one of the great economic expansions of recent memory. The benefits of the Bush-era cuts flowed overwhelmingly to the wealthy, and yet were promoted on the grounds that lower rates would redound to everyone’s benefit. This is still a gospel that many people continue to believe, including, oddly enough, all too many of those left behind.
These different explanations are not, of course, mutually exclusive. In the end, they do not make a simple, straightforward case for or against the existence of American plutocracy. They do, however, point to the fact that money, power and class continue to play out in American politics in highly complex and puzzling ways.
2. Fukuyama & Colby, “What Were They Thinking? The Role of Economists in the Financial Debacle”, The American Interest (September/October 2009).