by Peter N. Stearns
New York University Press, 2012, 269 pp., $35
Contrary to what you may think while surveying the vast cornucopia of our vibrant consumer landscape—fleece Snuggies, Hot Pockets, doggie umbrellas, Gulfstream jets, Blazin’ Jalapeño Doritos—modernity has not turned out to be quite as awesome as some early optimists had hoped. Sure, there’s plenty of stuff to buy, use up, and buy again, but people in modern industrialized societies have higher rates of depression and dissatisfaction than do those in premodern societies, in places like Ghana or Bangladesh. Indeed, as wealth has increased in the West, general happiness (admittedly a slippery mood to quantify) has declined proportionately.
This may be no causal relationship. Perhaps modernity and human happiness don’t actually have much to do with each other at all—except that in modernity you’re expected to be, well, happy. This uneasy situation, bracketed by the amorphousness of what “happy” and “modern” actually mean in variable social contexts, is what George Mason University historian Peter N. Stearns has aptly called modernity’s “happiness imperative.”
Stearns’s latest book, Satisfaction Not Guaranteed: Dilemmas of Progress in Modern Society, takes the reader through the past two centuries of modern change, delving into the ways in which some eternal human practices—eating, sex, childrearing, death—have been transformed from their premodern precursors. Since the end of the 19th century, Stearns explains, residents of industrialized nations have lived longer, healthier lives and enjoyed more education, better working conditions, more recreational sex, fewer children and more leisure time. American income doubled between 1954 and 2004. On the whole, modernity has been kind to many people, and despite some fashionable debates over its “post-” status over the past four decades, “modernity has not failed, and it is not being rejected.” So why, then, are so many people dissatisfied?
It’s hardly a new question. Deuteronomy 16:15 commands its believing readers to be “very happy” at the time of the autumn harvest; even ancient scholars were perplexed about how God could command human happiness. Thirty years ago, Robert Nisbet made the following observation, not directly about happiness but about the conditions conducive to it:
What we know least about as the result of thousands of years of civilized history is affluence. We are the first affluent state, politically, psychologically and sociologically. And there is something about affluence that does not seem to produce community. Poverty will produce community—a sense of spirit, of organization, of working together. Affluence doesn’t seem to.1
And books about the happiness paradox have proliferated ever since, such as Arthur Brooks’s Gross National Happiness, Derek Bok’s The Politics of Happiness, Nick Powdthavee’s The Happiness Equation, Richard Layard’s Happiness: Lessons from the New Science, Robert Lane’s Loss of Happiness in Market Democracies, and the aptly titled American Paradox: Spiritual Hunger in an Age of Plenty, by David G. Myers.
Despite the crowded turf, Stearns’s uncanny ability to carefully examine the quotidian generates some genuinely new insights. Taken-for-granted attitudes toward things like time, the image of women and children, the meaning of work, and portliness and old age blind us to signs of immense social change. These perceptions are not fixed and eternal, but are shifting artifacts of a changing culture. To know how to read the changes is to understand the culture.
The problem of unhappiness amid plenty, Stearns argues, is not just simple ingratitude, that perennial human feature (though there is plenty of that). It is that along with all its momentous gains, modernity also unleashed a “rapid escalation of new expectations, misplaced fears, and misperceptions” that have generated new kinds of problems. These problems have resulted in a “tenuous embrace” between happiness and modernity, by which Stearns means the process of social change unleashed by the Industrial Revolution: the rise of urban centers, advancing technology, the relative decline of agriculture, increased longevity, regularization of the food supply, higher levels of education, and more broadly shared wealth.
Stearns discusses modernity’s maladjustments and “false starts” in Part Two of this comprehensive study. Part Three concludes with a chapter apiece on his three main foci: death, childhood and consumerism. As with his two dozen previous books—ranging from studies of the 1848 European revolutions to the history of emotions to the origins of American cool—Satisfaction Not Guaranteed shows Stearns again as one of America’s most erudite, resourceful and commanding social historians, at once narratively compelling and intellectually intriguing, able to give form to modernity’s many abstractions through clear prose and colorful examples. That’s no easy task.
Let’s start at Stearns’s beginning: As the Enlightenment reached its golden years at the end of the 18th century, a series of novel social, political and moral commitments offered departures from the entire history of Western life. The “pursuit of happiness”, to quote a familiar phrase, was one of them. That phrase was shorthand for a host of subsidiary efforts grouped more or less in two baskets: increasing bodily comforts through science and technology while at the same time reducing illness and premature death; and affirming universal rights and social progress while assailing the political regimes that suppressed them. Taken together, these applied-science and applied-political-philosophy efforts bespoke the Benthamite utilitarianization of happiness—namely, seeking the most happiness for the most people through a law-based society predicated on a free-market economic system. For many deists, particularly in the New World, the affluent, orderly and predictable life that would flow from this arrangement would create such contentedness with earthly pleasures that illusions of an afterlife would simply fade away, or at least have no relevance in the function of the state. (Stearns wisely reminds us of the four million people who were not invited to this party: American slaves.)
These noble dreams spurred the moral incentives and industrial means to keep them alive, but at the same time their vigorous employment wrecked the old bucolic communal ways. From Ned Ludd to a young Thomas Carlyle, and in the writings of countless romantics ever since, critics were not slow to disparage the trade-off. Nostalgia for that mythic past has never left Western culture (currently read: “artisanal” food items and home schooling, the raw-milk movement and neo-folk music). Stearns details the spectrum of social and economic changes and their disruptions of premodern life, including the Industrial Revolution’s introduction of clock time. Bells, whistles and other prods kept workers from dilly-dallying or being late to the factory floor. These measures eventually spread to office work and led the new bourgeoisie to internalize the demand by carrying watches (initially for show, as few knew how to read them). Modern work, Stearns emphasizes, imposes strict regimentation on labor and, by extension, recreation—a shock for premodern people who had been accustomed to living by the natural rhythm of day and night, the change of the seasons, and an internalized ancient sense of what to do and when to do it.
Urban factory labor also drove a wedge between work and home. Unless the premodern man of the house was a journeyman, soldier, hunter or fisherman, his work and his life unfolded mostly in one place: on the farm, in the workshop, or in any case within walking distance of where he lived. When industrialization drew people to urban centers, small family businesses and farms declined or were crushed by industrial competition. Urban conditions, as the novels of Dickens, Eliot and Gaskell made apparent, abounded with sprawling slums, low wages, child labor and occupational injury. “Manual weavers”, Stearns notes, “frequently became permanently deformed because they had to activate looms with their chests.”
As Marx and Engels observed at the time, modern factory work did something even more existentially upsetting: It alienated workers from the things they were making. Premodern craftsmen had a view of their product from beginning to end. Modern work, with its division of labor and its introduction of assembly lines, time pressures and hyper-supervision, turned workers into machines (until actual machines replaced them). All of this was no easy adjustment, Stearns says: “Employers in most industrial societies reported that the first generation or two of factory workers were the hardest to accommodate, since their expectations had been shaped by preindustrial conditions.” Given the right incentives, however, people will acquiesce to almost anything, and so “clock-based work time . . . no longer seemed strange to third-generation personnel.”
Workers justified this toil, humiliation and boredom by the exchange of their labor for money, a concept called instrumentalism, whereby men gained the means to feed their families and basked in the pride that came with it. This exchange was supposed to lead to upward mobility—also a product of modern economics and new class elasticities—and an increased capacity to participate in the world of consumption, its bodily comforts and the social status it bestowed. The trade-off may not have appealed to everyone, but it did offer a degree of wealth, opportunity and mobility hitherto impossible for most.
These new arrangements freed children, previously occupied in family businesses, for the rigors and joys of public education. Though schooling was far from universal or compulsory until the beginning of the 20th century, its eventual triumph revised the idea, and the economics, of what it meant to have children. Having a lot of them used to increase family productivity and earnings; now kids were financial liabilities. Tellingly, birthrates throughout the Western world dropped by half (in the United States, from an average of seven children per woman in 1800 to 3.5 in 1900).
That’s not all that changed. Of the infants born in 1880, a quarter died before age two. Babies in rural areas were often not even named before two because, really, what was the point? If a child did make it past two, a common premodern practice was to give him or her the same name as a predeceased sibling. Premodern children, in other words, were more or less interchangeable, not unique individuals in the way we conceive of them now. Advances in medicine lowered the infant death rate to 8 percent by 1920, but “rather than enjoying the gains over pre-modern conditions—including the effective conquest of infant death”, Stearns writes, “moderns turned distressingly quickly to more demanding definitions of the successful childhood, definitions that have proved hard to achieve with any satisfaction.”
Particularly after “the halcyon 1950s”, for example, 20th-century parents have become increasingly overprotective and overbearing, encouraging far more schooling and extracurricular activity, relying on newly crowned experts and advice-givers for what to do, and directing more attention to each child’s moods and perpetual happiness—in the end raising less self-reliant and more mood-focused children. But it all makes modernist sense, since “a desire to transform childhood was central to the modernist impulse from the Enlightenment onward”, Stearns writes, accompanied by “a belief that traditional childhood had been badly handled.” Whereas childhood had traditionally been tinged with the notion of original sin, John Locke enabled tots to be seen as a tabula rasa; the Romantics, along with the arts and crafts enthusiasts, then laminated childhood with pure innocence and charm, leading to what Stearns calls the “century of the child.”
And what about women? Once an integral part of the home economic unit in the early 19th century, women became increasingly dependent on their husbands for money and, thereby, survival, as increasing numbers of men began to work in factories. This was literally disempowering, and soon tropes about women’s “weakness” and “dependence” flourished—in the latter case because economic reality had made it true. Yet this new distinction between men’s “work life” and women’s “home life” heralded a new valuation of women as symbols of escape, and of the family as a “haven in a heartless world”, as Christopher Lasch once described it. As true heads of their households (even when men blustered to the contrary), women were recast as moral pillars. Breadwinner husbands might have had to enter a fallen for-profit world to bring home the bacon, but women remained morally pure, untouched by the dirty capitalist machine. “Our men are sufficiently money-making”, Stearns quotes the prominent female editor Sarah Josepha Hale, “let us keep our women and children from the contagion as long as possible.”
This moral division unleashed new demands for domestic bliss and encouraged the expectation that women, being unsullied and uncompromised, were to be always cheery housewives and mothers, at once upright, happy and conforming to new ideas of submission—in other words, domestic goddesses, if not outright Stepford Wives. These vast changes in the conception and, in due course, the reality of women’s roles—as well as the new choices, expectations and responsibilities foisted upon them—sometimes gave rise to what was known in the 19th century as “hysterical paralysis”, a condition whereby (primarily) women became paralyzed with no known physical cause. Then suddenly, around 1920, after people had largely adjusted to modernity and its new expectations, not least through the psychological drubbing occasioned by World War I, the condition of hysteria virtually disappeared. Freud labeled it a “conversion disorder” because its physical symptoms derived from the patient’s overwhelming anxiety over some stimulus—in this case, the changes wrought by modernity itself.
The combination of reliable, mass-produced birth control (enabled by the 1844 development of vulcanized rubber, thanks to Charles Goodyear) and the increased cost of having a large brood led to the legitimization of sexual pleasure as an end in itself—a long-arcing trend which, Stearns explains, had been underway since the Protestant injunction against celibacy and, of course, the prurient adventures of the Marquis de Sade. Victorian morality made sure to impose its Christian restrictions on pleasure whenever it could, particularly on the lower orders, lest they do nothing but get randy and boff all day. And because children stemmed primarily from boffing, having too many of them, particularly when birth control was readily available, became a sign of stupidity or bad character. Traditionalists and moralists also fretted over the effects of too much sex on the decent and upright. “Nervous middle-class people now learned that having sex too often, possibly more than once a week”, Stearns writes of Victorian nail-biters, “could induce premature death or insanity.” It fell to good Christian women to steward their husbands’ sexual appetites. And if, God forbid, a good woman did have to engage in lurid behavior, advice columnists suggested she take a deep breath, close her eyes and, for the sake of the nation, “think of England.”
Real sexual freedom for women, of course, began in the 20th century, though not without bringing in its train some very unhappy tendencies toward pornography, teen pregnancy and the abundant use of sex in advertising. In this way, Stearns observes, Victorian moralists and some modern feminists agreed on honoring the female body and keeping it out of consumer culture. But as the 20th century unfurled, no matter how many tried to put the lid back on sexuality, “modernity and just-say-no did not mix.” Despite the complications, modernity’s influence on sex worked out well on the whole, Stearns offers, not least because modernity championed the idea that sex and love were key ingredients of human happiness. That had not been so much the case when financial and class considerations weighed far more heavily on who got hitched to whom.
Modernity also reshuffled our relation to food. After experiencing horrific shortages, Western governments took it upon themselves in the 19th century to secure the food supply. The last naturally occurring widespread famine in the Western world, Stearns notes, was in 1840, which is no small gain. Government-backed agricultural developments sharply increased crop yields and overall food production, creating food surpluses for export. Developments in nutrition science and the creation of the Food and Drug Administration in the United States in 1906 aided consumers in deciding what to put in their bodies. Soon there was more than enough healthy food for everyone, and consumers could choose from a range of products.
The problem in the United States today is not too little food, but too much of the wrong kinds. Increased abundance and decreased need have relentlessly expanded American waistlines. A full third of us are now obese, according to recent NIH statistics, and that number, like the waistlines, is poised to grow. This late 20th-century abundance of food, Stearns explains, revised notions of self-restraint and personal character, of economic might and status, and of health. Obese people experience the opposite of modernity’s gains: poorer health, less sex and shorter lives. Given modernity’s demands for self-control, rationality and happiness, fat people are seen as lazy, uncontrolled and sad—permanent outcasts from modernity’s thin and happy few. And because they are so seen and so represented in art and popular culture, that is what they have often become—a far cry from the jovial image of the portly fellow and matron in premodern times. Likewise, an epidemic of eating disorders has grown up right alongside obesity.
On the whole, modernity’s advances have allowed more people to enjoy more sex and better food for many, many more years, thanks to more effective medicines, hygiene and healthcare, which has, in turn, led to new ideas about the elderly. In pre-industrial societies, Stearns observes, old people accounted for roughly 4 percent of the population. Today, those over age 65 account for a full quarter of Western populations, which is having a well-known effect on demographic projections and economic conditions in Europe.
As with the incredibly elastic meanings of weight, sex and children, so too with the social meaning of age, which, Stearns observes, presents an even greater paradox: Beginning in the 17th century and up to the middle of the 19th, gray wigs and makeup were worn, primarily by people of rank, to make a person look older in order to garner respect and authority. Yet when people began to live longer they began to dress and act in ways that made them look younger, a phenomenon that in our day has morphed into consumer appetites for hair implants, youthful clothing, facelifts, or a Facebook account. This may be in part because, in contrast to Asian cultures, the modern West, Stearns writes, “has not been very kind to old people.” This unkindness might have to do with economics: For the upper and middle classes, the elderly were traditionally the only obstacle between them and their inheritance. In early 19th-century France, old men fell victim to murder more often than any other demographic. Modernity’s insistence on youth, vigor and speed—even though the demands of modern work have become less strenuous—has re-designated old folks as decrepit, sick and socially marginal rather than as individuals with wisdom and insight into life’s mysteries and troubles. Prejudice against the elderly has often generated in them a self-image of worthlessness—the self-fulfilling prophecy again at work.
The answer to this modern dilemma was systematic retirement, which began in the 1870s. Businesses such as American Express agreed with a host of labor organizations that the elderly should not simply be nudged out but should rather be given a pension—an encouragement to make room for the young and less costly. Following the Great Depression, this sentiment led to the creation of the Social Security system. “As societies gained enough wealth to improve support”, Stearns writes, “retirement seemed to be a universally desirable solution to the problem of old age in modern society.” There may have been some sincere concern for the elderly, but mostly, he concludes, “it fit corporate thinking about bureaucratizing the labor force.”
Death, modern and not, marks the next step in the human story, and it is here that modernity, Stearns says, collided with its staunchest adversary. “Modernity and death are not friends”, he writes, because
death fits awkwardly when happiness is phrased in terms of earthly pleasures and pursuits. It interrupts work; it contradicts the normal joys and lures of consumerism. Without question, from the 18th century onward, definitions of modernity worked best when death was simply ignored.
Alas, ignoring death entirely is not exactly possible, particularly for those who have to accompany the dying and bury the dead. Victorians relished the Final Act, giving children dress-up funeral dolls to practice for the real thing and transforming the funeral coffin from a plain pine box into a luxurious deathbed, lined with plush upholstery so that the dead would not experience, Stearns writes, “aches and pains even after sensation had ceased.” Back then, people died “good deaths” at home, surrounded by family and loved ones in known environs. But as hospitals became more technically advanced and ceased to be mere refuges for the poor (doctors used to attend the well-to-do at home), more and more people over the past half-century have come to die in hospitals—their lives sometimes extended for no reason other than the American inability to accept death fully, a theme Stearns treats deftly yet sensitively.
It was also during the Victorian era that what Stearns calls the “suburbanization of death” began: cemeteries, once located in urban areas where people actually lived and could easily visit their deceased loved ones, were now built outside the city. Part of the move had to do with space constraints and the prevention of contagion, but Stearns observes how the entire culture surrounding death came to accentuate its serenity and naturalness rather than the fact of permanent loss. Returning the dead to nature facilitated this narrative, and thus cemeteries reappeared as landscaped memorial gardens. The verdant Mt. Auburn Cemetery, for example, sprang up near Boston in the late 1820s, offering weeping willows, meandering paths and sunken hollows in which to contemplate the affinity between nature and the beyond. Progressively throughout the 19th century, “new cemetery directors saw their calling as a form of art”, whereby all things suggesting sorrow, pain, grief and death were banished. Stearns cites an unnamed landscape architect who wished in his creations to “put such smiles” into cemeteries that their visitors experienced them as “cheerful places.”
The American version of the happiness imperative could even penetrate the metaphysics of the Big Sleep: American churches encouraged the faithful to think of the afterlife as a place to happily reunite with loved ones—a belief, Stearns writes, that is both “unprecedented and highly unorthodox.” Nevertheless, modernity’s actual advances have been nothing short of remarkable: “Not having to mourn the deaths of children, not having to be reminded constantly of death’s threat, even (probably) not having to deal with death directly, at home, are all pretty clear benefits of modernity, despite some downsides.”
“Man will develop more in the twentieth century than he has in the last 1,000 years”, chirped a giddy Los Angeles Times editorial from 1900. And when it came to medicine, technology and buying things, that pre-tweet was right. Accordingly, Stearns ends his sweeping foray with an analysis of the social history of consumerism, primarily because consumerism, he argues, is what drove modernity in the first place.
Acquiring more stuff than you need to survive has been beset by critics from the 18th century onward (if you don’t count similar warnings from Proverbs and Ecclesiastes). Some said that it was bad for your health; that it was “dangerously secular, distracting people from proper religious and spiritual concerns”; that it was bad for class structure because it enabled lower classes to (inappropriately) ape upper classes; that it played to frivolity; that it was a sign of bad taste and foreignness; and, most recently, that it is environmentally unfriendly.
Consumerism was also convicted of creating new, negative feelings: boredom, for example, which prior to the 18th century was interpreted as a kind of spiritual alienation from the community, the antidote for which was more human engagement. The mid-19th century granted recognition (via Dickens’s Bleak House) to this new kind of unsatisfactory state, which now meant that a person was not being appropriately stimulated by his or her environment. At first this was seen as the fault of the bored, with Emily Post advising readers always to avoid showing it. But as consumer culture offered an expanding array of objects to relieve boredom, particularly in children, it fell to parents to intervene with new toys and games—and to repeat as necessary. Boredom in modern consumer culture is cured foremost by consumption.
Despite the hits it has taken, consumerism Bon Marchés on, harder and faster than ever, in part because its advent actually does make people happier than their immediate predecessors. Having nothing when your neighbor has lots of cool things makes it hard to remain happy, and acquiring things of your own helps—but only up to a point. A series of studies Stearns cites shows that, as consumerism spreads, its effects plateau. Consumerism becomes “too consuming”, Stearns writes, when “used to address an impossibly large basket of needs, not only disappointing but becoming a problem in its own right.”
Yet consumerism seems to be a part of, dare we say, human nature—or, at least, human history. While some scholars suggest that consumerism after the Industrial Revolution is an anomaly that survives mainly because of the kicking-spurs of marketing departments, others note that upper classes in all societies, reaching back to hunter-gatherer groups, seem to reach a point of sustainability and then promptly switch to dealing in luxury goods like jewelry, spices, odoriferous oils and other decorative fineries. Agricultural societies engaged in consumerism, a benefit of their creation of food surpluses; classical civilizations created trade routes and sometimes entire empires based on getting new stuff from far away. Goods with symbolic import have always been charged with the task of conveying status and relative prosperity, an enormous part of human civilization too meaningful to be categorized as silly or gauche.
Nevertheless, it’s the dose that makes the poison. “The contemporary phase of consumerism opened a few decades ago”, Stearns offers, and things have become increasingly cartoonish ever since. Modern consumerism survived the Great Depression and surged in the postwar period as the middle class ballooned to prosperity. This was all fine. But by the 1980s, consumerism had become counterproductive, Stearns argues, “no longer responding to needs so much as compelling compliance despite a number of discernible and often truly painful disadvantages”—for example, the pain of debt brought on by the pressure to keep up, even on a paycheck that didn’t allow it. By 1995, 66 percent of American households had three or more television sets, and more than half of all teenagers had one in their rooms. Multicar families became the norm, based partly on work and school needs, but also partly on unaligned patterns of consumption. Advertising made inroads into the family like never before, and by 1997 consumer debt had swollen to $1.25 trillion, over 10 percent of national GDP. As of February 2012, the average American credit-card debt was nearly $16,000, and a full fifth of average post-tax income went toward paying it down. The result: more indebted, disintegrated families who were encouraged to cheer themselves up by buying more stuff. Because, come on, You’re Worth It.
Consumerism’s promises of heightened feelings of newness, confidence and euphoria made it, Stearns writes, “the strand of modernity that provided the clearest path to fulfillment, if not of happiness then at least of a fair semblance thereof.” After all, consumption of new things could distract one from unpleasant thoughts of aging, sickness and death, of unhappy realities better deferred—or dealt with by someone else. In this way, Stearns suggests, consumerism has been expert at heightening the themes of individualism and personal fancy, but not so great at strengthening community or national cohesion. One need only recall the message from the Bush Administration in the days and weeks following 9/11: Get out there and go shopping, or take a vacation—advice Stearns generously characterizes as “unimaginative.”
Increasingly over recent years, the quintessential modern tradeoff between working for money and spending that money on consumer goods has proven to be more of a nightmare than a dream. For a great many Americans, the upkeep of one’s goods and the maintenance of one’s social status have become so demanding that there is little time left over to enjoy one’s accomplishments. People work more hours in multiple jobs they do not enjoy in order to buy more things that are supposed to relieve them from the toil they engage in to get those things. Time pressure and debt create persistent emotional problems that threaten well-being, countering, in Stearns’s words, “any real satisfaction of compensation” that consumption might provide. This, he suggests, reflects a consumerism “running off the rails.”
Satisfaction Not Guaranteed undertakes to highlight modernity’s historical developments by offering “a vantage point on current behaviors and attitudes rather than simply to examine these behaviors and attitudes as static artifacts.” This diligent task—de-naturalizing what we think of as normal and determined—recalls the late Christopher Lasch’s 1979 bestseller, The Culture of Narcissism: American Life in an Age of Diminishing Expectations, which excavated the troubled present to show that things did not have to turn out the way they have. Stearns’s own foray into premodernity and back again to our own era looks carefully at modernity’s quotidian processes to see how things unraveled not quite as planned, but in ways just as unforeseeable.
This book, of course, is no springboard for a “good ol’ days” lament. It is “not intended as an attack on the modern condition”, Stearns writes. “Modern change has brought all sorts of benefits, and where downsides have emerged, they may yet be more successfully addressed.” He’s not out to claim that earlier times were even close to better than ours—unless mass famine, widespread disease, slavery and the subjugation of women are your kind of thing. His purpose is rather to examine critically the stock of advances modernity has achieved, where it got snagged and how, what new complications have arisen to countervail them and, ultimately, how a dose of morally concerned historical insight, peppered with levity, might help us get closer to the ideals some of our forebears thought would take but little trouble to achieve.
1. From Nisbet’s Prejudices: A Philosophical Dictionary (Harvard University Press, 1983).