Increased income inequality; wage stagnation; skill-biased technological change; productivity growth slowdown; rising college wage premium; labor-market polarization; declining prime-age labor force participation; low intergenerational relative mobility; declining absolute mobility—all of these are concepts developed by economists to describe the dimming prospects for ordinary American workers. Taken together, they inform the consensus view that something is wrong with the American economy that isn’t going away anytime soon.
But if we follow the experts in looking at our problems solely from an economic perspective, we will fail to appreciate the true gravity of our situation. Yes, the relevant data on “real” or inflation-adjusted incomes have been disappointing and worrisome for decades. In particular, the sharp rise in income inequality, created mostly by a rollicking rise in the top 1 percent of incomes, has meant that incomes for typical American households have not kept pace with the overall growth of the economy. Nevertheless, a careful and dispassionate review of the data shows that incomes have continued to inch upward since the 1970s. Indeed, of those who “fell” out of middle-class status over the past 25 years, depending on how one defines it, a good many fell “up” to higher income brackets. Although the Great Recession knocked incomes downward, they have now recovered almost all the ground they lost. When we also consider that comparisons of real incomes can never capture access to new products that previously were unavailable at any price, the reasonable conclusion is that overall material living standards in the United States today are at their highest levels ever. Relative stagnation may frustrate our expectations, but it isn’t the same thing as collapse.
If we pull back from a narrow focus on incomes and purchasing power, however, we see something much more troubling than economic stagnation. Outside a well-educated and comfortable elite comprising 20-25 percent of Americans, we see unmistakable signs of social collapse. We see, more precisely, social disintegration—the progressive unraveling of the human connections that give life structure and meaning: declining attachment to work; declining participation in community life; declining rates of marriage and two-parent childrearing.1
This is a genuine crisis, but its roots lie in spiritual, not material, deprivation. Among whites, whose fall has been from greater heights, the spreading anomie has boiled over into headline-grabbing acts of self-destructive desperation. First, the celebrated findings of Anne Case and Angus Deaton have alerted us to a shocking rise in mortality among middle-aged whites, fueled by suicide, substance abuse—opioids make headlines these days but they hardly exhaust the list—and other “deaths of despair.”2 And this past November, whites in Rust Belt states made the difference in putting the incompetent demagogue Donald Trump into the White House.
What we are witnessing is the human wreckage of a great historical turning point, a profound change in the social requirements of economic life. We have come to the end of the working class.
We still use “working class” to refer to a big chunk of the population—to a first approximation, people without a four-year college degree, since those are the people now most likely to be stuck with society’s lowest-paying, lowest-status jobs. But as an industrial concept in a post-industrial world, the term doesn’t really fit anymore. Historian Jefferson Cowie had it right when he gave his history Stayin’ Alive the subtitle The 1970s and the Last Days of the Working Class, implying that the coming of the post-industrial economy ushered in a transition to a post-working class. Or, to use sociologist Andrew Cherlin’s formulation, a “would-be working class—the individuals who would have taken the industrial jobs we used to have.”
The working class was a distinctive historical phenomenon with real internal coherence. Its members shared a whole set of binding institutions (most prominently, labor unions), an ethos of solidarity and resistance to corporate exploitation, and a genuine pride about their place and role in society. Their successors, by contrast, are just an aggregation of loose, unconnected individuals, defined in the mirror of everyday life by failure and exclusion. They failed to get the educational credentials needed to enter the meritocracy, from which they are therefore excluded. That failure puts them on the outside looking in, with no place of their own to give them a sense of belonging, status, and, above all, dignity.
Here then is the social reality that the narrowly economic perspective cannot apprehend. A way of life has died, and with it a vital source of identity. In the aftermath, many things are falling apart—local economies, communities, families, lives.
This slow-motion catastrophe has been triggered by a fundamental change in how the capitalist division of labor is organized. From the first stirrings of the Industrial Revolution in the 18th century until relatively recently, the miraculous technological progress and wealth creation of modern economic growth depended on large inputs of unskilled, physically demanding labor. That is no longer the case in the United States or other advanced economies. Between automation and offshoring, our country’s most technologically dynamic industries—the ones that account for the lion’s share of innovation and productivity growth—now make little use of American manual labor.
The U.S. economy still employs large numbers of less-skilled workers, of course. They exist in plentiful supply, and U.S. labor markets are functional enough to roughly match that supply with demand for it. But all of this is occurring in what are now the backwaters of economic life. The dynamic sectors that propel the whole system forward, and on which hinge hopes for continued improvement in material living conditions, don’t have much need today for callused hands and strong backs—and will have less need every year going forward.
Economists describe this situation drily as “skill-biased technological change”—in other words, innovation that increases the demand for highly skilled specialists relative to ordinary workers. They contrast the current dynamics to the skill-neutral transition from an agrarian to an industrial economy. Then, workers displaced from farm jobs by mechanization could find factory work without first having to acquire any new specialized expertise. By contrast, former steel and autoworkers in the Rust Belt did not have the skills needed to take advantage of the new job opportunities created by the information technology revolution.
Here again, exclusive reliance on the tools of economics fails to convey the full measure of what has happened. In the heyday of the American working class during the late 1940s, 1950s, and 1960s, the position of workers in society was buttressed by more than simply robust demand for their skills and effort. First, they had law and policy on their side. The Wagner Act of 1935 created a path toward mass unionization of unskilled industrial workers and a regime for collective bargaining on wages and working conditions. And during World War II, the Federal government actively promoted unionization in war production plants. As a result, some three-quarters of blue-collar workers, comprising over a third of the total American workforce, were union members by the early 1950s. The Wagner Act’s legal structure allowed workers to amass bargaining power and direct it in unison against management, suppressing wage competition among workers across whole industries. Unionized workers were thus empowered to negotiate wages roughly 10 to 15 percent above market rates, as well as a whole raft of workplace protections.
It is important to note that the strictly legal advantages enjoyed by labor at the height of its powers have diminished very little since then. There has been only one significant retrenchment of union powers since the Wagner Act, and that occurred with the passage (over President Truman’s veto) of the Taft-Hartley Act in 1947—a few years before organized labor reached its high-water mark. What really transformed labor law from words on a page into real power was the second great prop of the working class’s position in society: collective action. Congress did not unionize U.S. industry; mass action did, never more dramatically than in the great General Motors sit-down strike of 1936–37, which led to the unionization of the U.S. auto industry. And once unions were in place, labor’s negotiating strength hinged on the credibility of the threat of strikes. Coming out of World War II, when strikes had been strongly discouraged, American workers hammered home the seriousness of that threat with a wave of labor actions, as more than five million workers went on strike during the year after V-J Day—the most strike-ridden year in American history.
This militancy and group cohesion paved the way for the 1950 “Treaty of Detroit” between Charlie Wilson’s General Motors and Walter Reuther’s United Automobile Workers. The deal provided the basic template for labor’s postwar ascendancy, in which workers got automatic cost-of-living adjustments and productivity-based wage increases while production schedules, pricing, investment, and technological change were all conceded to fall within the “managerial prerogative.” “GM may have paid a billion for peace,” wrote Daniel Bell, then a young reporter for Fortune, but “it got a bargain.”
The declining fortunes of organized labor are a direct result of workers’ ebbing capacity for collective action. After the great wave of unionization beginning in the 1930s, organizing rates peaked in the early 1950s and then went into long-term decline. As employment in smokestack industries started falling in the 1970s, the number of newly organized workers lagged badly behind and the overall strength of unions progressively waned.
This flagging commitment to union solidarity cannot be explained satisfactorily without reference to the changing nature of the workplace. The unique—and uniquely awful—character of factory work was the essential ingredient that created a self-conscious working class in the first place. Dirty and dangerous work, combined with the regimentation and harsh discipline of the shop floor, led workers to see themselves as engaged in something like war—with their employer as the enemy. Class warfare, then, was no mere metaphor or abstract possibility: it was a daily, lived reality.
“It is a reproach to our civilization,” admitted President Benjamin Harrison in 1889, “that any class of American workmen should in the pursuit of a necessary and useful vocation be subjected to a peril of life and limb as great as that of a soldier in time of war.” At that time, the annual toll of workplace deaths and injuries hovered around one million. Such conditions begat efforts to organize and fight back—often literally. The “Molly Maguires” episode in the Pennsylvania coal fields, the Great Railroad Strike of 1877 that claimed more than a hundred lives, Haymarket, Homestead, Cripple Creek, the Ludlow Massacre—these are just some of the more memorable episodes among countless violent clashes as the agents of capital struggled to keep a lid on the pressures created by the demands they made of their workers.
The best part of working-class life, solidarity, was thus inextricably tied up with all the worst parts. As work softened, moving out of hot, clanging factories and into air-conditioned offices, the fellow-feeling born of shared pain and struggle inevitably dissipated.
But at the zenith of working-class fortunes, the combination of law and collective action gave labor leaders powers that extended far beyond the factory floor to matters of macroeconomic and geopolitical significance. This capacity to affect domestic politics and international relations further bolstered the position and influence of the working class. When steel or autoworkers went on strike, the resulting disruptions extended far beyond the specific companies the unions were targeting. Labor unrest in critical industries affected the health of the overall U.S. economy, and any threat to the stability of America’s industrial might was also a threat to national security and international order. Consider Harry Truman’s decision in April 1952, during the Korean War, to seize the U.S. steel industry just hours before workers were planning to walk out on strike. We generally remember the incident as an extreme overreach of Executive Branch power that was slapped down by the Supreme Court, but the point here is that it illustrates the immense power wielded by unions and the high stakes of any breakdown in industrial relations.
The postwar ascendancy of the working class was thus due to an interlocking and mutually reinforcing complex of factors. It was not just favorable labor laws, not just inspired collective action, but the combination of the two in conjunction with the heavy dependence on manual labor by technologically progressive industries of critical importance to national and global welfare—all of these elements, working in concert—that gave ordinary workers the rapid economic gains and social esteem that now cause us to look back on this period with such longing. And the truly essential element was the dependence of industry on manual labor. For it was that dependence, and the conflicts between companies and workers that it produced, which led to the labor movement that was responsible both for passage of the Wagner Act and the solidarity that translated law into mass unionization.
No sooner was this working-class triumph achieved than it began to unravel. The continued progress of economic development—paced by ongoing advances in automation, globalization, and the shift of output and employment away from manufacturing and into services—chipped relentlessly away at both heavy industry’s reliance on manual labor and the relative importance of heavy industry to overall economic performance.
These processes began in earnest earlier than many observers today remember. U.S. multinational corporations quadrupled their investments overseas between 1957 and 1973—from $25 billion to $104 billion in constant dollars. And back in 1964, the “Ad Hoc Committee on the Triple Revolution” made headlines with a memorandum to President Johnson on the threat of mass technological unemployment as a result of automation. But this was just the beginning. As information technology supplanted smokestack industry at the vanguard of technological progress, and as demand for labor generally shifted in favor of more highly skilled workers, the working class didn’t just go into decline. It eventually disintegrated.
There is a great deal of nostalgia these days for the factory jobs and stable communities of the egalitarian 1950s and 1960s—when working-class life was as good as it ever got. The sense of loss is understandable, as nothing as promising or stable has replaced that way of life now gone. But this lament for what has been lost is the cry of the Children of Israel in the wilderness, longing for the relative comforts of Egypt. We must remember that, even in the halcyon postwar decades, blue-collar existence was a kind of bondage. And so the end of the working class, though experienced now as an overwhelmingly negative event, opens up at least the possibility of a better, freer future for ordinary workers.
The creation of the working class was capitalism’s original sin. The economic revolution that would ultimately liberate humanity from mass poverty was made possible by a new and brutal form of domination. Yes, employment relations were voluntary: a worker was always free to quit his job and seek a better position elsewhere. And yes, over time the institution of wage labor became the primary mechanism for translating capitalism’s miraculous productivity into higher living standards for ordinary people. Because of these facts, conservatives and libertarians have difficulty seeing what was problematic about the factory system.
We can dismiss the Marxist charge of economic exploitation through extraction of surplus value. Meager pay and appalling working conditions during the earlier stages of industrialization reflected not capitalist perfidy but objective reality. The abysmal poverty of the agrarian societies out of which industrialization emerged meant that nothing much better was affordable, or on offer to the great majority of families.
But that is not the end of the inquiry. We need to face the fact that workers routinely rebelled against the factory system that provided their livelihoods—not a normal response to mutually beneficial exchanges. First were the individual mutinies: no-shows and quitting were commonplace. During the early 20th century, absenteeism rates stood at 10 percent or higher in many U.S. industries, and the usual turnover rate for factory employees exceeded 100 percent a year. For those who made it to work, drinking, drug use, monkeywrenching to slow the line, and other acts of small-scale sabotage were ready outlets for sticking it to the man.
More consequential than these acts of private desperation were the incessant attempts to organize collective action in the teeth of ferocious opposition from both employers and, usually, the state. Mass labor movements were the universal reaction around the world to the introduction of the factory system. These movements aimed to effect change not only in the terms of employment at specific workplaces, but in the broader political system as well. Although socialist radicalism did not dominate the U.S. labor movement, it was the rule elsewhere as the Industrial Revolution wrought its “creative destruction” of earlier agrarian ways. Whether through revolutionary or democratic means, elimination of private ownership of industry and the wage system was the ultimate goal.
Since grinding poverty had long been the accepted norm in agrarian economies, what was it about industrial work that provoked such a powerfully negative response? One big difference was that the recurrent want and physical hardships of rural life had existed since time immemorial, and thus seemed part of the natural order. Likewise, the oppressive powers of the landed aristocracy were inherited, and sanctified by ancient custom. By contrast, the new energy-intensive, mechanized methods of production were jarringly novel and profoundly unnatural. And the new hierarchy of bourgeois master and proletarian servant had been erected intentionally by capitalists for their own private gain. There had been solace in the fatalism of the old Great Chain of Being: all the orders of society, from high to low, were equally subject to the transcendent dictates of God and nature. Inside the factory, though, industrialists subjected both nature and humanity to their own arbitrary wills, untethered from any inhibition of noblesse oblige. The traditional basis for the deference of low to high had been wrecked; the bourgeoisie’s new position at the top of the social pyramid was consequently precarious.
Another reason for the restiveness of industrial workers was the factory system’s creation of enabling circumstances. In other words, workers engaged in united resistance because they could. In the agrarian era, highly dispersed and immobile peasants faced nearly insuperable obstacles to organizing on a large scale—which is why peasant revolts were as uncommon as they were futile. The factory system dramatically reduced the costs of organizing for collective action by concentrating workers in large, crowded workplaces located in large, crowded cities. Toiling and living together at close quarters allowed individualized discontent to translate into concerted resistance. Solidarity was a consequence of falling transaction costs.
At the heart of the matter, though, was the nature of the work. According to the cold logic of mechanized production, the technical efficiency of the human element in that process is maximized when it is rendered as machine-like as possible. Machines achieve their phenomenal productivity by performing a sequence of discrete, simple tasks over and over again, always the same, always precisely and accurately, as rapidly as possible. Humans are most productive in filling in the gaps of mechanization when they perform likewise.
The problem, of course, is that people are not machines, and they don’t like being treated as such. By inducing millions of people to take up factory work and creating a social order in which those millions’ physical survival depended upon their doing such work for most of their waking hours, industrial capitalism created a state of affairs deeply inconsistent with the requirements of human flourishing—and, not unrelatedly, a highly unstable one at that.
Adam Smith saw the problem clearly at the very dawn of the Industrial Revolution. He opened his Wealth of Nations with a celebrated discussion of a pin factory, elaborating how the division of labor—breaking down pin manufacturing into numerous simple tasks that can be performed repetitively and speedily—made possible an enormous increase in output. Later in the work, however, he worried about the human toll of this highly specialized efficiency:
The man whose whole life is spent in performing a few simple operations, of which the effects are perhaps always the same, or very nearly the same, has no occasion to exert his understanding or to exercise his invention in finding out expedients for removing difficulties which never occur. He naturally loses, therefore, the habit of such exertion, and generally becomes as stupid and ignorant as it is possible for a human creature to become.
When Smith was observing and for a long time thereafter, this psychological toll was all mixed up with acute physical suffering. But even as pay increased steadily and workplace hazards to life and limb receded over the course of the 20th century, the essential inhumanity of industrial work never changed. Consider these recollections from End of the Line, an oral history of Ford’s Michigan Truck Plant, published in the 1980s just as the industrial era was drawing to a close:
The next day I went in after school and worked ten hours. I thought I had gone to hell. I couldn’t believe what people were doing for money.
Management’s approach is that the simpler the work, the easier it is to train workers and the easier they are to replace. You can’t keep that from sinking into a person’s self-esteem…. Even though it gives us a certain amount of financial freedom, we are prisoners of the assembly line. You’re tied to a machine, and you’re just another cog. You have to do the same thing over and over again, all day long.
The way foremen talked to people, you soon realized you were a serf and the foreman was your master…. In those days ex-athletes, especially prizefighters, were highly valued as foremen in automobile plants, especially at Ford. They were big barroom brawlers, bouncers, scrappers, and fighters, people who could bully people and command respect because of their size.
To have the human body work like a machine—consistently, continuously, hour in, hour out, to produce a product—is inhuman…. It’s like you’re incarcerated from the minute you get there until it’s time to leave…. The first few weeks I was there, I thought the world was going to end.
I was going to quit after that first week. I was so tired. My hands were aching, and my whole body was a wreck. But when I got my first check, it was over $400 and I told myself, “Maybe I don’t hurt as bad as I thought I did.”
To have to mimic an unthinking machine all day, every day, was bad enough at the purely individual level. But to be subjected to this fate wasn’t a merely personal predicament; it was to be relegated to a whole class of people on the wrong side of an invidious social comparison. In pursuing the technical efficiency of mass production regardless of its human costs, the class system created by industrial capitalism divided people along very stark lines: those who work with their brains and those who work with their bodies; those who command and those who obey; those who are treated as full-fledged human beings and those who are treated as something less.
Conservatives and libertarians have tended to dismiss the issue of class. If there is formal legal equality, and if the wage bargain reflects supply and demand rather than expropriation, what could be the problem? The problem eludes them because they are blind to the sociological dimension of economic behavior. Although workers and managers were legally equal, their relationship was one of deep social inequality. If the capitalist class system wasn’t about narrowly defined exploitation or oppression, it was most certainly about domination.
The social inequality of the workplace fed off and in turn sustained other, pre-market sources of inequality. In England, where industrialization originated, a preexisting class hierarchy based on the enormous land holdings of the hereditary aristocracy made it easier for capitalists to think of their workers as a lower order who were useful only from the neck down. In America, a social order noted for its egalitarianism arose while the country remained an agrarian economy—an egalitarianism restricted, though, to white Protestant men. Even that beachhead of equality was lost when the American mass-production economy took off after the Civil War. The country imported a steep social hierarchy by feeding the insatiable demand for factory workers with thronging millions of non-Protestants from Ireland and southern and eastern Europe. Ethnic and religious prejudice by America’s white Protestant business class buttressed its sense of rightful dominance in the workplace, and the association of ethnic and racial minorities with dirty, menial drudgery reinforced the supremacist arrogance of their white-collared, white-skinned “betters.”
Even in the glory days of the Treaty of Detroit, the pact between capital and labor was a Faustian bargain. The wages paid to industrial labor were always a bribe to surrender one’s brain, and part of one’s soul, at the factory gate. In time the physical assaults and indignities of industrial work softened, and the pay packet fattened to afford material comforts earlier workers would never have dreamed of enjoying—but, however sweetened, it was still a deal with the devil. And as mass affluence prompted a cultural turn away from mere material accumulation and toward self-expression and personal fulfillment as life’s highest desiderata, the terms of that deal only grew more excruciating.
The nightmare of the industrial age was that the dependence of technological civilization on brute labor was never-ending. In Metropolis, Fritz Lang imagined that pampered elites in the gleaming towers of tomorrow would still owe their privileges to the groaning toil of the laboring masses. H. G. Wells, in The Time Machine, speculated that class divisions would eventually sunder humanity into two separate species, the Eloi and the Morlocks.
Those old nightmares are gone—and for that we owe a prayer of thanks. Never has there been a source of human conflict more incendiary than the reliance of mass progress on mass misery. In its most destructive expression—the global struggle between capitalism and communism, culminating in the nuclear arms race between the United States and the Soviet Union—it threatened the very survival of humanity. We are lucky to be rid of this curse.
But the old nightmare, alas, has been replaced with a new one. Before, the problem was the immense usefulness of dehumanizing work; now, it is feelings of uselessness that threaten to leach away people’s humanity. Anchored in their unquestioned usefulness, industrial workers could struggle personally to endure their lot for the sake of their families, and they could struggle collectively to better their lot. The working class’s struggle was the source of working-class identity and pride. For today’s post-working-class “precariat,” though, the anchor is gone, and people drift aimlessly from one dead-end job to the next. Being ill-used gave industrial workers the opportunity to find dignity in fighting back. But how does one fight back against being discarded and ignored? Where is the dignity in obsolescence?
The scale of the challenge facing us is immense. What valuable and respected contributions to society can ordinary people not flush with abstract analytical skills make? How can we mend fraying attachments to work, family, and community? There are volumes to write on these subjects, but there is at least one reason for hope.
We can hope for something better because, for the first time in history, we are free to choose something better. The low productivity of traditional agriculture meant that mass oppression was unavoidable; the social surplus was so meager that the fruits of civilization were available only to a tiny elite, and the specter of Malthusian catastrophe was never far from view. Once the possibilities of a productivity revolution through energy-intensive mass production were glimpsed, the creation of urban proletariats in one country after another was likewise driven by historical necessity. The economic incentives for industrializing were obvious and powerful, but the political incentives were truly decisive. When military might hinged on industrial success, geopolitical competition ensured that mass mobilizations of working classes would ensue.
No equivalent dynamics operate today. There is no iron law of history impelling us to treat the majority of our fellow citizens as superfluous afterthoughts. A more humane economy, and a more inclusive prosperity, is possible. For example, new technologies hold out the possibility of a radical reduction in the average size of economic enterprises, creating the possibility of work that is more creative and collaborative at a scale convivial to family, community, and polis. All that holds us back is inertia and a failure of imagination—and perhaps a fear of what we have not yet experienced. There is a land of milk and honey beyond this wilderness, if we have the vision and resolve to reach it.
1For perspectives from contrasting ideological vantage points, see Robert D. Putnam, Our Kids: The American Dream in Crisis (Simon & Schuster, 2015); Charles Murray, Coming Apart: The State of White America, 1960–2010 (Crown Forum, 2012).
2See Anne Case and Sir Angus Deaton, “Mortality and Morbidity in the 21st Century,” prepared for the Brookings Panel on Economic Activity, March 23–24, 2017, final post-conference version dated May 1, 2017.