Although the poor have always been with us and, as the Bible suggests, will continue to be so (Deut. 15:11), it was not until the late 19th century that Charles Booth made the first scientific effort to determine their numbers. From his seminal 1880s survey of the working classes in London up to the mid-20th century, poverty typically meant a level of subsistence that barely afforded sufficient food, lodging and clothing. Booth’s initial estimate put the poverty line at about a thousand shillings per year.1 Around the same time in the United States, Robert Hunter reckoned the figure to be $460 per year for the average family of five.2
One can quibble with the definitions and math behind these early estimates, but the living conditions of the poor at that time were such that one knew poverty when one saw it. Poverty was visually documented, for example, in Jacob Riis’s images of squalid tenements and in the Great Depression newsreels of bread lines and Hoovervilles. Through the 1930s, Progressive-minded reformers urged government to take responsibility for eradicating poverty, an agenda that gained political impetus as poverty spread during the Depression. With the New Deal, Federal bureaucracies assumed major responsibility for welfare programs under the Social Security Act of 1935, which laid the political foundation of the American welfare state. By the 1940s, political concerns for the poor ebbed before the exigencies of war. Poverty as a public issue remained in the shadows until the 1960s, when Michael Harrington’s evocative account of life in The Other America (1962) drew public attention to the plight of the poor.
Harrington’s “other America” was a bleak place where at least half of the elderly could not afford decent housing, proper nutrition and medical care, and where low-income farm families suffered from “hunger in the midst of abundance.” According to his calculations, about 25 percent of the American people were poor. So powerful was Harrington’s impact that in his wake a host of publications sought to raise national awareness about poverty in the United States.
Against this tide, however, stood another highly regarded “progressive” thinker. In his best-selling 1958 book, The Affluent Society, Harvard Professor John Kenneth Galbraith argued that in a society where the median family income was $3,960, poverty “can no longer be presented as a universal or massive affliction. It is more nearly an afterthought.” Galbraith described what remained of American poverty as falling into two broad categories: insular poverty, which stemmed from living in economically depressed regions like Appalachia, and case poverty, rooted in personal handicaps such as “mental deficiency, bad health, inability to adapt to the discipline of modern economic life, excessive procreation, alcohol, and insufficient education.”3 But in neither instance, he argued, could poverty be remedied by government transfers of income to lighten the hardships and increase the consumption of the poor.
Not that Galbraith was opposed in principle to government intervention to reduce poverty. From a typically progressive viewpoint—Galbraith was a co-founder of Americans for Democratic Action in 1947 with Eleanor Roosevelt and Hubert Humphrey—he envisioned government as the vehicle to advance the good society. But his particular view of what made for a good society favored curbing the almost mindless materialism spawned by a new kind of consumer demand induced by advertising, a phenomenon he labeled the “dependence effect.”
Galbraith’s book appeared shortly after Vance Packard’s popular 1957 exposé, The Hidden Persuaders, which unveiled the advertising industry’s psychological efforts to manipulate consumer appetites. There is no reason to think that Packard’s book formed Galbraith’s view; at most it deepened a long-held conviction. (If he were writing today, however, Galbraith would no doubt be interested in Martin Lindstrom’s Brandwashed (2011), which shows how advertisers use modern technology to fine-tune their messaging based on consumers’ previous purchasing patterns.)
Not everyone agreed. In a well-known 1961 essay, “The Non Sequitur of the ‘Dependence Effect’”, Friedrich Hayek dismissed Galbraith’s argument as exactly that: consumers have no innate or spontaneous desire for virtually any of the amenities of modern civilization, yet the fact that a want is learned rather than inborn hardly makes satisfying it less important. Advertising is simply how producers present these amenities in the most favorable light. As Steve Jobs observed, “A lot of times people don’t know what they want until you show it to them.” Hayek might have added that today’s consumers can use websites like Yelp and TripAdvisor to learn more about the quality of products and services from the experiences of others.
Given a choice, then, between public and private spending to reduce poverty, Hayek preferred policies to grow the economy and eliminate barriers to entry. In contrast, Galbraith favored trimming the sort of private consumption that satisfies manipulated desires for all sorts of trivial stuff but that adds little to or even harms the quality of life. Thus, although he favored increased taxation, he wanted to use the added revenue not for cash transfers to the poor but rather for greater investment in public services designed to enhance community life and build human capital through, for example, educational programs for poor children. In short, Galbraith stood in the center, between those who wanted to give government money directly to the poor and those who wanted government to do nothing directly about poverty, but preferred instead to ameliorate it by “raising all boats” in a more prosperous economy.
Galbraith’s appeal to boost public spending and his inclination to deal with poverty through increased investment in services rather than cash transfers foreshadowed a major shift in the nature of the modern American welfare state. It also highlighted the key difference between his view and Harrington’s: Galbraith stood firmly in the liberal capitalist tradition, Harrington in a democratic socialist one. In a way, Galbraith’s view as expressed in The Affluent Society amounted to a rear-guard action in the fight over how to deal with poverty. In the late 1950s, Galbraith’s view had no challengers from the left. But by the time the blue-ribbon National Commission on Technology, Automation, and Economic Progress reported in favor of a guaranteed national income in 1966, Galbraith found himself well behind the curve. His view that poverty had become and should remain an “afterthought” in Federal policy obviously fell out of step politically with President Johnson’s “war on poverty.”
During the two decades following publication of The Affluent Society, public spending on social welfare programs in 21 of the richest Western democracies nearly doubled as a proportion of GNP, climbing from an average of 12.3 percent in 1960 to 23.3 percent in 1980.4 That spending included the direct cash transfers to the poor that Galbraith had said would not work. But more important by far, as the general level of material well-being continued to rise, the median U.S. household income of $49,455 in 2010 afforded 25 percent more purchasing power than the median household income in Galbraith’s affluent society of the late 1950s.5 Purchasing power per capita climbed even higher, since average household size declined from 3.6 to 2.6 people between 1958 and 2010. Added to that is the sizeable increase in employee benefits, which are not reported as income. Between 1960 and 2003, employer spending on these benefits, mainly for healthcare and retirement, more than doubled from 8 to 18.8 percent of total employee compensation. Although those in the upper income brackets gained the most, these benefits were widely spread among employees.
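A rough back-of-the-envelope illustration, using only the figures just cited and assuming that household income is shared evenly among household members, conveys the scale of the per capita gain:
\[
1.25 \times \frac{3.6}{2.6} \approx 1.73
\]
That is, on these simplifying assumptions, purchasing power per person in the median household was roughly 73 percent higher in 2010 than in the late 1950s.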
At least as far as the “working poor” are concerned, therefore, it seems that some combination of Hayek’s view about “lifting all boats” and Galbraith’s social spending is responsible for the vast reduction of poverty between the late 1950s and the dawn of the 21st century. Between increasing affluence and the growth of Federal spending, both direct and indirect, on welfare since around 1964, the tangible signs of material deprivation faded.
By the 1990s, American welfare policy no longer emphasized alleviating poverty through cash transfers. Social welfare remained a worthy agenda buttressed by the moral force of Biblical edicts, but as the 21st century dawned, the campaign against poverty in the United States had almost vanished from the public square. Bill Clinton’s promise to “end welfare as we have come to know it” was not a pledge to eradicate poverty but to alter the behavior of welfare recipients. In 2006, John Edwards staked a presidential campaign on addressing the needs of the poor, but it failed to gain political traction. Poverty did not figure in the November 2008 elections, though it was then on the rise. When the Census Bureau reported that the national poverty rate had climbed from 13.2 percent in 2008 to 14.3 percent in 2009 (about what it had been 15 years earlier), media coverage was brief and perfunctory. In 2010, a poverty count of 15.1 percent made headlines that quickly faded amid the worst economic downturn since the Great Depression. And with no sign of genuine economic recovery in sight, President Obama’s 2011 State of the Union address marked the second time since 1948 that such an address by a Democratic President excluded any mention of poverty or the plight of the poor.
At first glance, this is very strange. Between the early 1960s and the early 1990s, we anguished over poverty even as poverty levels, on the whole, fell dramatically. Since the mid-1990s, we have discussed it hardly at all even as poverty rates, as officially measured, have been rising. Concerns about poverty in the United States have ebbed for basically two reasons: practical considerations that resonate with Galbraith’s notion about the complex nature of poverty in modern times, and widespread reservations about what official U.S. government poverty rates really measure. Let us take these in reverse order.
What Poverty Is
The Census Bureau’s official poverty line rests on a formula devised by labor economist Mollie Orshansky in 1963, adjusted over time for inflation.6 In 2011, this line was drawn at a pre-tax cash income of $22,350 for a family of four. However, across the political spectrum the overwhelming majority of policy analysts doubt that this measure accurately reflects the number of people who are poor and the essential condition of poverty as it is understood throughout much of the world. Not to put too fine a point on it, Acting Deputy Secretary of Commerce Rebecca Blank described the official poverty measure as “nonsensical numbers.”
In light of these misgivings, the Census Bureau has considered a range of alternative poverty measures since the 1980s. Efforts to revise the established index intensified after a 1995 report by the National Academy of Sciences addressed the many technical issues involved in developing a more rigorous measure. Drawing heavily on that report, in autumn 2011 the Census Bureau unveiled the Supplemental Poverty Measure (SPM). The new metric was tagged “supplemental” to stipulate that it would not replace the official count in determining eligibility for government programs. Offering a more sophisticated formulation than the established poverty index, the supplemental measure calculates income using a wider range of family resources; makes adjustments for taxes, certain costs of living, family composition, housing status and geographic areas; and fixes the poverty line according to the amount spent on food, clothing, shelter and utilities by families at the 33rd percentile of the spending distribution (a level that roughly two thirds of American families exceed), plus a small allowance for other needs.
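In schematic terms, and setting aside many technical details, the supplemental threshold can be sketched roughly as follows; the symbols are introduced here purely for illustration, and the multiplier of about 1.2 is an approximation of the “small allowance for other needs” rather than the Bureau’s exact procedure:
\[
\text{SPM threshold} \;\approx\; 1.2 \times E_{33} \times A,
\]
where $E_{33}$ denotes spending on food, clothing, shelter and utilities at the 33rd percentile of the distribution and $A$ stands for the adjustments for family composition, housing status and geographic area. A family counts as poor under the SPM when its resources, broadly measured and net of taxes and certain expenses, fall below this threshold.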
An aura of scientific authority surrounds the Supplemental Poverty Measure thanks to the imprimatur of the National Academy of Sciences and the exemplary academic credentials of the panel that contributed to the report. Nevertheless, those engaged in the process of measurement well know that the further the definition of poverty moves beyond subsistence, the more subjective and arbitrary the results. This is not the place to detail the frailties of the SPM. Suffice it to say that, as Rebecca Blank put it, “those who engage in poverty measurement can often be quite influenced by their sense of where they want to wind up.” Of course, Galbraith’s claim that poverty was no longer an issue of pressing concern in the affluent society of the late 1950s was also a subjective judgment, but he presented it as such—unlike the SPM, which carries the weight of scientific endorsement.
As it happens, the differences between the standard measure and the SPM are small in numbers but significant in distribution. The 2011 official index placed 15.2 percent of the American population below the poverty line, compared to 16 percent in the SPM. But the percentage of elderly poor increased significantly under the SPM, while the percentage of children living in poverty declined. Moreover, the SPM showed an increase in the percentages of poor whites, Asians and Hispanics, but a decline in the percentage of blacks in poverty.
These numbers settled nothing; instead they widened the debate and darkened the political shroud that hangs eternally over it. The AARP loved the new bottom line; children’s advocates did not. And then, just a few days after publication of the SPM rates, a Pew Research Center analysis of government data showed that older adults have made spectacular gains in wealth relative to younger adults over the past quarter century, with the ratio of older households’ median net worth to that of younger households rising from 10:1 in 1984 to 47:1 by 2009. So the elderly may earn less, but they have more and spend more. How does that distinction play into how we define poverty, and hence what we do about it? And in contrast to the relative increase in poverty among whites, the median wealth of white households actually rose from 7.5 times that of black households in 2005 to twenty times that of black households in 2009.
Thus, far from settling the debate about how to gauge the degree of poverty in America, the SPM highlights a range of thorny issues in defining and computing the rates. The SPM calculations also underscore the inherently arbitrary (some might say political) nature of the measurement process. But even beyond normative concerns about how values and subjectivity affect where the poverty line is drawn, and technical judgments about what resources count as income, there are at least three persuasive reasons for skepticism about what these measures represent.
First, there is a huge gap in the data between what the poor earn and what they spend. The excess of spending over reported income has grown dramatically since the early 1970s, from 139 percent of income to about 212 percent today. Actual consumption of goods and services may be even higher than out-of-pocket spending suggests, since these figures exclude public benefits to low-income households available through eighty income-tested programs such as school breakfast and lunch programs, nutrition programs for the elderly, housing vouchers, legal services, home energy assistance and day care.
There is no consensus explanation for this gap. Some of it no doubt reflects increasing debt. There is also a marked tendency for poorer people to underreport income. It is well documented that a significant proportion of welfare recipients regularly worked for pay that was not reported.7 Obviously, part of the reason is that transfer payment qualifications drive economic activity below the licit line of sight; if a poor person or family reports “too much” income, the benefits disappear. Indeed, the increased understanding of how welfare benefits can affect economic behavior in undesirable ways is plainly conveyed by the work-oriented reforms recently imposed in most of the advanced industrial welfare states. Among Europeans this understanding is expressed in the idea that welfare provisions produce “poverty traps” or “enforced dependency”—phrases prudently crafted to avoid blaming the victims.
Even if expenditures could be accurately assessed, they are not a precise gauge of economic well-being. Not only is it difficult to know how much families actually spend over time; what this spending represents is also not self-evident. Do low levels of spending signify poverty, practicing a frugal lifestyle, or saving for future consumption? The data don’t tell us. In the short run, a household’s expenditures on certain durable goods, such as an automobile, are not closely related to its normal consumption. And in the long run, some goods, such as owner-occupied housing, are consumed without registering as expenditures at all.
Second, the official measure of poverty is insensitive to the temporal dimension of being poor. Many households endure brief spells during which their incomes fall beneath the poverty line, just as many people experience periods of unemployment. Chronic poverty, though, is relatively rare. Thus, for example, between 1996 and 1999 the household income of 34 percent of the population dipped below the poverty threshold for two months or more, while only 2 percent of the population remained below the poverty line for the entire period. Similarly, from 2004 to 2007 the income of 31.6 percent of the population fell below the poverty line for two or more months, but just 2.2 percent of the population remained under the poverty threshold for the full four years. In 2009, 7.3 percent of the population was under the poverty line for the entire year, even as unemployment hovered around 9 percent.
This is an extraordinarily complex phenomenon to capture in data. At any given point in time, low-income households include a high proportion of families experiencing a temporary if sharp reduction in income. These families typically seek to maintain their standard of living by borrowing or spending down assets and smoothing out consumption to match their wealth and expected earnings over time. But poverty data-collection mechanisms are not sensitive to the duration of a family’s or an individual’s status below the poverty line. Are you living in poverty if your income falls below the line for a week, a month or a year? When people think of poverty as a social problem, they generally assume a long-term or chronic condition. But how many middle-class professionals reading this today can look back to some period in their lives when they would have qualified as being poor based on their income, if only through their graduate school days, in the first year after graduation, or during a spell of unemployment?
Third, the range of material possessions enjoyed by people living below the poverty line provides a final reason for questioning what the official measure claims to represent. As with the household expenditure data, these amenities reflect higher than expected levels of consumption. Thus, for example, at the height of the recession in 2009, 40 percent of the families officially designated as poor owned their own homes, which were mainly single-family units and had a median value of $100,000. Most of these homes had three or more bedrooms, a porch or patio and a garage. The median size was 1,470 square feet. Although smaller than the homes of Americans with incomes above the poverty line, they were equal in size to the average new home in Denmark and larger than the average newly built home in France, Spain and the United Kingdom.
Moreover, 92 percent of poor households had microwaves, 76 percent air conditioning, 50 percent computers, 64 percent a clothes washer, 99 percent a refrigerator (23 percent an additional freezer), 98 percent color televisions (70 percent more than one television) and 77 percent owned a car, truck or van (22 percent owned two or more vehicles). This describes a level of material well-being that corresponds with neither public perceptions of poverty nor Biblical dictates to aid the needy.
Drawn from scientific surveys conducted by Federal agencies, these facts raise credible doubts about the extent of material deprivation among those counted as living under the official poverty line in the United States. The majority of people currently deemed poor have access to the basic necessities of life, not only to what is needed for subsistence—food, clothing, shelter, entertainment, education, healthcare and transportation—but also to those commodities that symbolize the proverbial linen shirt without which Adam Smith’s day laborer would have been ashamed to appear in public. There are, of course, differences of opinion about the exact bundle of goods and services embodied by this standard in 21st-century America, and it is indeed the case that today’s luxuries have a way of becoming tomorrow’s necessities. It is also true that production costs of conveniences that may seem lavish today tend to decline over time.
When I suggested to one of my colleagues that the real scale of poverty in the United States is considerably below the rate typically depicted, he stared in disbelief, blurting out, “What are you talking about? Use your eyes, for God’s sake! Are you blind?” Of course, take a short walk from our campus, down Telegraph Avenue in Berkeley, or stroll around downtown San Francisco, and you can see what he meant. Throughout the country on a given night there are approximately 650,000 homeless people wandering the streets in ragged attire, many of whom do not know where their next meal will come from. Most studies show the vast majority of homeless people are not only impoverished, but also suffer from at least one disabling condition such as alcohol addiction, drug abuse or mental illness. In short, what we are seeing are examples of Galbraith’s “case” poverty.
Just how much case poverty there is in the United States remains open to question. Various estimates put the current rate of chronic poverty at 2–7 percent. At the middle range, if 5 percent of the population is chronically poor, we’re talking about roughly 15.5 million people. Though just a fraction of the official poverty rate, this relatively small percentage nevertheless signifies a huge number of people living in distress. As with the homeless, a large proportion of this group suffers from mental illness, addictions and other disabilities.
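That headcount is simple arithmetic, assuming a 2011 U.S. population of roughly 311 million:
\[
0.05 \times 311 \text{ million} \approx 15.5 \text{ million people}.
\]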
Thus, when consumption patterns, material possessions and the persistence of low income are drawn together into the frame, what appears is a picture of poverty in America that is indeed large enough so that one would have to be blind not to see it, but small enough so that it bears little resemblance to the massive affliction portrayed by official poverty measures. When we look more closely, we see a genuine poverty problem restricted largely to people with physical and psychological conditions that make it difficult to be productive. From this perspective, not only does the size of the problem become more manageable, but the solutions take on a very different hue from those mainly seeking to provide additional cash for low-income people. A closer look also reveals that low-income people (as well as many above their level) who struggle daily to make ends meet are in truth engaged in a battle to match resources with modern appetites for material consumption. They are not in a battle to put a roof over their heads, clothes on their back and food on the table.
In sum, more than a half century later, it seems Galbraith was right all along. Had it been possible to staunch the mad materialism Galbraith disdained at the time, the appetite for material consumption at the heart of the supposed poverty problem today would not be so voracious. That was never in the cards, however. Aside from moral exhortation, Galbraith knew no way to achieve it.
It is therefore noteworthy that in the revised edition of The Affluent Society (1998) Galbraith deleted his “afterthought” comment, one of several changes he made to the text on the subject of poverty. Indeed, he appears to have moved toward a post-Harrington/pre-Clinton view, emphasizing the structural causes of poverty and the transfer of income as a solution to it. Thus the 1958 edition: “The most certain thing about modern poverty is that it is not efficiently remedied by a general and tolerably well-distributed advance in income.” In the 1998 edition a similar view is expressed (“The most certain thing about poverty is that it is not remedied by a general advance in income”) only to be contradicted a few pages later: “The notion that income is a remedy for indigency has a certain forthright appeal. . . . The provision of such a basic source of income must henceforth be the first and strategic step in the attack on poverty.” In the 1958 edition, he argued that the first and key strategic step in an attack on poverty was to ensure that children in poor families attend first-rate schools. Although continuing to describe the causes of poverty in terms of personal deficiencies as well as structural factors, by 1998 Galbraith concluded that “most modern poverty is insular, involving forces that restrain or prevent participation in economic life.” In general, the 1998 version of The Affluent Society placed greater emphasis on the elimination of poverty—a curious shift in light of the fact that according to the official measure the national rate of poverty was 73 percent higher in 1958 than it was in 1998, and unemployment was at its lowest level in thirty years.
It is tempting to assert that Galbraith was right the first time. He was, in the sense that, contrary to Harrington’s assertion, poverty did not represent a structural failure of capitalism. Indeed, as Galbraith saw it (here in common with Hayek), by the mid-20th century the increased output of industrial society had effectively eliminated poverty for all who worked.
Yet while current data support Galbraith’s earlier observations about the state of poverty in America, his explanation for how we would get here has turned out to be only partly correct. The Affluent Society went to press at the dawn of what some describe as the golden era of welfare state expansion. According to the official index, the poverty rate in 1960 was more than 22 percent, or almost 50 percent higher than the official rate in 2011. The battle against want of basic necessities was powerfully reinforced by social expenditures on a broad package of benefits (social security, unemployment, public assistance, supplemental security income for the blind, aged and disabled, daycare and much more) that has increased vastly since the 1950s. Although Social Security, unemployment and Medicare benefits are distributed across the board to people with varying incomes, a large part of the package is designed expressly to benefit low-income people. Allowing for inflation, Federal and state spending on income-tested programs climbed by 557 percent between 1968 and 2004; over that period the U.S. population grew by about 46 percent. In 2009, thanks to economic stimulus from the American Recovery and Reinvestment Act, the Federal government alone spent $708 billion on eighty income-tested programs, a 23 percent increase over the amount spent in 2008. These need-based benefits were limited to people with low incomes, but not all of those incomes were below the poverty line. For those identified as living below the poverty line in 2009, these Federal expenditures amounted to nearly $16,000 per person—or $64,000 a year in benefits (not cash) for a typical family of four.
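A back-of-the-envelope reconstruction of that figure, assuming the roughly 44 million people officially counted as poor in 2009 and, for simplicity, attributing all of the $708 billion to them (an overstatement, since some of these benefits went to low-income people above the line), runs as follows:
\[
\frac{\$708 \text{ billion}}{\approx 44 \text{ million persons}} \approx \$16{,}000 \text{ per person}, \qquad 4 \times \$16{,}000 \approx \$64{,}000 \text{ for a family of four}.
\]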
The impact of the 2008 recession was clearly cushioned by this array of social programs. Fifty years ago, Galbraith proposed a cyclically graduated unemployment scheme under which benefits would rise in periods of high unemployment and decrease as jobs became available. In recent years Congress has essentially complied: during the 2008–09 recession and its aftermath, it temporarily raised unemployment payments and extended the time limit on benefits to 99 weeks. Once firmly in place, the social safety net expanded to cover necessities such as food and shelter. According to the HUD annual count, even the rate of homelessness declined continually between 2008 and 2011. Although this decline started before the implementation of the Homelessness Prevention and Rapid Re-Housing Program in 2010, the $1.5 billion funding of this measure surely reinforced the downward trend. Finally, monthly funding for food stamps more than doubled from $2.76 billion in 2007 to $5.78 billion in 2011, at which point the average household benefit amounted to $3,400 per year.
What Poverty Isn’t
If actual poverty in the United States is way down, for reasons Galbraith predicted, and if his countercyclical policy prescriptions for hard times are now both accepted and effective, then we should declare general victory, focus on the special circumstances of case poverty, and that would be that. Except that it isn’t. Galbraith the sensible, pragmatic liberal almost certainly could not have anticipated what has happened to the definition of poverty over the past half century at the hands of a combination of almost preternatural power: the relentless utopianism of the post-Harrington progressive spirit joined to the unaccountable juggernaut of public bureaucracy.
By any historically literate measure, the rise of modern welfare states, accompanied by the decline of material deprivation in the United States and many other advanced industrial democracies, is an extraordinary achievement of the 20th century. We have witnessed the realization of the progressive agenda to alleviate the tangible adversities of poverty. But winning the battle against want has elicited little celebration, which proves the shrewd wisdom of Deuteronomy after all. The progressive spirit is temperamentally disposed to relentless discontent. So in response to the unprecedented level of material well-being among low-income people, those seeking to advance the human condition have moved the goalposts.
This spirit now insists that conventional measures of poverty cannot capture the real adversities of modern life or the basic failures of capitalist society. Hence the slow but steadily encroaching introduction of a new vocabulary to alert the public and revitalize the case for still more government action. Now absolute measures of poverty are being replaced by relative ones: poverty becomes a matter of inequality of outcomes and of what Europeans call “social exclusion”, an expansive concept of disadvantage purportedly defined empirically by at least 17 indicators. Among these indicators is an index of material deprivation, one suggested version of which includes factors like living in noisy areas and areas suffering from pollution and grime caused by traffic or industry—upscale parts of Manhattan Island would probably qualify.
Such measures reflect the aspirations of an environmentally pristine peaceful middle-class life for all—a highly desirable objective, no doubt, but one very far removed from authentic material deprivation as it was evident in the 19th and 20th centuries, as it is evident today in many parts of the developing world, and as it is understood by the public in advanced industrial nations. But the measures themselves, as always, rely on soft data exposed to political manipulations. Take inequality, for example, a matter that has become more salient politically in the United States in recent years.
There is little doubt that inequality has increased in the United States (as in most other wealthy countries) since the 1960s, reaching its highest point in 2007, at which time it was widely reported that the top 1 percent took in 23.5 percent of all earned income. By 2009 the recession’s impact had cut their share to 17.6 percent of the total earned income, about what it had been in the late 1990s. (A sharp reversal in GDP is among the quickest ways to advance equality—a reminder that the social desirability of increasing equality is sensitive not only to how the pie is distributed but whether it is growing or shrinking in the process.) Careful calculations by Richard Burkhauser and his colleagues show that after 1993 there was no palpable increase of inequality among the bottom 99 percent of the population. Compared to the rapidly rising income of the top 1 percent, everyone else moved at a relatively slow pace.
Looking still deeper into the data, one finds other problems. The common perception that in recent decades the rich have gotten richer while the poor have gotten poorer does not factor in, for example, the impact of taxes, government transfers and the cash value of health insurance benefits as well as adjustments for household size. An arguably more refined measure of disposable income that incorporates these factors yields a somewhat brighter picture of the American experience.
Moreover, the practical meaning attributed to increasing inequality depends in large measure on how it affects people’s lives. Thus, despite the widening difference in income, Tyler Cowen points out that the inequality of personal well-being has narrowed remarkably over the past century. Short of taking a voyage into outer space, there are few forms of travel on this planet—from bicycles and automobiles to jet planes and ocean liners—beyond the reach of the middle classes. From the profile of a Toyota that looks like a Mercedes to Asian knock-offs of designer apparel, modern production has blurred the distinctiveness of up-market goods that once proclaimed an accredited position in society. People enjoy nearly universal access to entertainment and the comforts and conveniences of modern amenities. Even as they struggle to make ends meet, average Americans are instinctively aware that their living standards are pretty good.
Although increasing inequality is generally undesirable, the preference for equality is ultimately contingent on other choices; in other words, one’s sense of economic inequality is largely contextual. As Robert Frank and others have taken pains to point out, where one stands depends on whether the comparison is to neighbors, people in the next state or across the border, millionaires in the state capital or the billionaires on Wall Street. Thus it has been duly noted that most members of the “Occupy Wall Street” movement claiming to be among the 99 percent are really part of the prosperous 1 percent when compared to the world as a whole. Some Americans may be astounded to hear that Hungary has greater equality and less poverty (as measured by the European index) than the United States. Whether Hungary is a more just society than the United States depends on what one makes of the fact that Hungary’s median monthly income of roughly $800 is approximately 55 percent of the poverty line for a two-person family in the United States—and comes to less than half the unemployment benefit in Wyoming. As William Frankena has explained, the spirit of distributive justice involves a vaguely defined but palpable concern for “‘the goodness of people’s lives’, as well as for their equality.”
Writing in the late 1950s, Galbraith detected little interest in inequality as an economic issue for a variety of reasons. At the time, first of all, inequality was on the wane. Moreover, the rich had become a less visible annoyance as the ostentatious display of wealth had lost the power to convey membership in a privileged caste. Increasing prosperity allowed so many people to indulge in the purchase of luxury goods, or copies of them, that they ceased to serve as a mark of distinction. But above all, Galbraith maintained that the material gains of increasing output eliminated the social tensions associated with inequality. At the same time, he had his doubts about the extent to which rising tides could dull envy. The good liberal, as Galbraith explained, was haunted by “the cynical Marxian whisper hinting that whatever he does may not be enough. Despite his efforts the wealthy become wealthier and more powerful.”
Thus did concern about inequality fade during the post-World War II period of optimism and relative prosperity. The sharp economic downturn in recent years has raised simmering resentment over inequality to a boil, fueled in part by the unseemly and wildly disproportionate compensation awarded to the captains of finance and industry even as their boats went down. To many Americans, this reeks of a con job pulled off by an insular corporate and financial elite.
The irony in all this, however, is that resentment over inequality seems to have displaced concern for the poor. When during his 2011 White House address on economic growth and deficit reduction President Obama called for the wealthiest Americans to pay more taxes, he couched his demand in the language of equity for the middle class. There was a passing reference to the poor in this address, a fleeting afterthought at most. While conservatives charged Obama with stirring up class warfare, it was a battle line drawn between the upper and the middle classes, not the classic struggle of rich against poor.
This focus on redistribution to achieve economic equality does little to alleviate the disabilities of the chronically poor. It does not develop opportunity, strengthen family life, educate children, or encourage the civic virtues that are independent of market capitalism. It does nothing to address the acute suffering of those afflicted by case poverty. Instead, it conveys an image of the good society as one dedicated to increasing private consumption. It reinforces the unbridled materialism that Galbraith saw as irrelevant, if not detrimental, to the essential quality of modern life.
To redress what he deemed a disproportionate emphasis on the production of material wares and individual consumption, Galbraith argued that the good society required greater social balance between public and private spending. To this end he advocated the expanded use of state and local sales taxes as the best way to enlarge and enrich public goods and services: recreation facilities, public safety, community services, transportation and most of all education. He was adamant about the advantages of this tax despite its potential impact on the distribution of income and financial costs to low-income people. He was adamant because he understood that it was a means to create what we call today both human and social capital.
Beyond improving the social balance of public and private consumption, Galbraith was also concerned about the balance of work and leisure and how to limit the drudgery of manual labor. Amid modern affluence he observed the rise of a “New Class” for whom agreeable work is a rich source of satisfaction that lends purpose and structure to life. Galbraith’s call for the expansion of the New Class as the major social goal of society reflected the utopian tendencies of the progressive spirit, but at least his utopianism transcended the merely material. Galbraith’s progressive agenda included making work easier, more pleasant and personally satisfying through greater societal investment in human capital and more leisure. His agenda transcended conventional materialistic concerns about poverty and inequality, concentrating instead on the profound issues of what makes a good society, and on the purpose of human labor after survival in modest comfort is no longer at issue. These are the real questions, not those about poverty or inequality, that we as an affluent society have yet to answer. In this, too, Galbraith was right all along.
1A version of this essay with more complete citations is available online at www.the-american-interest.com.
2Hunter, Poverty: Social Conscience in the Progressive Era, Peter d’A. Jones, ed. (Harper Torch Books, 1965; originally published by Macmillan, 1904). See also Edward Allen Brawley, “Finding the Fair Way”, The American Interest (September/October 2007).
3Earlier, H.G. Wells predicted that industrial society would give rise to the “people of the abyss”, a group he described as criminal, immoral, parasitic and those born with disadvantages that would allow no opportunity to enter the world of work. See Wells, Anticipations of the Reaction of Mechanical and Scientific Progress: Upon Human Life and Thought (1901). While Wells flirted with eugenics, which at that time was vaguely fashionable in his circle of Fabian Socialists, Galbraith’s solution emphasized social services and educating children.
4See my Transformation of the Welfare State: The Silent Surrender of Public Responsibility (Oxford University Press, 2004). Incidentally, the United States is often criticized as a miser among high-roller welfare states because it spends a lower percent of its GDP on social welfare than most other countries. According to an August 16, 2007 New York Times editorial, it’s “almost the stingiest among industrial nations”—“long a moral outrage.” This is just not so: Change the metric from the percent of GDP to the actual dollars spent for each citizen and by 2001 the United States stands at the top of the list with the highest per capita net expenditure on the full package of social welfare benefits typically used to compare social spending in the OECD countries. See my “The Least Generous Welfare State: A Case of Blind Empiricism”, Journal of Comparative Policy Analysis (September 2009). Even as a percent of GDP, the United States climbs to sixth place when measured by the OECD’s most rigorous and comprehensive index of net social expenditures.
5This is a conservative estimate based on the official calculation of median income, which is adjusted according to the Consumer Price Index Research Series (CPI-U-RS). The CPI-U-RS is a controversial measure often charged with understating the real increase in household income. An alternative calculation adjusting for this bias reveals a substantially greater rise in the median income. For details see Bruce Meyer and James Sullivan, “American Mobility”, Commentary (March 2012).
6Orshansky, “Children of the Poor”, Social Security Bulletin (1963). A little later she refined the measure to include poverty in all families, not just those with children. See Orshansky, “Counting the Poor: Another Look at the Poverty Threshold”, Social Security Bulletin (1965).
7In a study of welfare recipients in Illinois between 1988 and 1990, Kathryn Edin and Christopher Jencks found that almost 80 percent of their sample worked (in both legal and illegal activities) without reporting their income. Edin and Jencks, “Welfare”, in Christopher Jencks, ed., Rethinking Social Policy (Harvard University Press, 1992). Other studies reveal significant employment rates, though not quite as high. For example, Maureen Marcenko and Jay Fagan report 27 percent in “Welfare to Work: What Are the Obstacles”, Journal of Sociology and Social Welfare (1996); Dave O’Neill and June O’Neill report 49.9 percent in “Lessons for Welfare Reform: An Analysis of the AFDC Caseload and Past Welfare-to-Work Programs” (Upjohn Institute for Employment Research, 1997); and Kathleen Harris reports 51 percent in “Work and Welfare Among Single Mothers in Poverty”, American Journal of Sociology (1993).