—Mark Twain,
Travels with Mr. Brown (1866)
When I learned a few years ago that some of my Western colleagues in Saudi Arabia possessed blow-up vinyl sex dolls, I rolled my eyes but resisted the urge to snicker. Odd though this behavior seemed, it made a warped, Saudi kind of sense. In the Kingdom, women are more forbidden to the unwed Western contract worker than alcohol. While the former are genuinely unavailable, the latter is merely proscribed. This combination explains its own consequences: After several months of unrequited natural longing, inebriated men will often lower their standards to the point where, as Tom Waits says, even “the crack of dawn ain’t safe.”
I was less sympathetic this past January when TrueCompanion.com introduced a conversant female sex robot for the domestic American market (a male robot is in the works, ladies, never you fear). The target market for this fembot clearly extends beyond lonely men working on oil rigs out in the middle of nowhere. The manufacturer apparently thinks there is a large number of Americans who, whatever they may think or say, are in fact closer emotionally to their smart phones than to any living, breathing human being—who, in other words, use gadgets as social prostheses.
All this led me to wonder when and why people began thinking, or rather assuming, that humans could replace flesh-and-blood companionship with a connection to plastic and silicon. I soon found that I am not the only one to wonder. Sherry Turkle, a professor in MIT’s Program in Science, Technology, and Society, claims that “we are witnessing a new form of sociality in which the isolation of our physical bodies does not indicate a lack of connectedness.”1 We are at a point, she suggests, where computers and mechanical objects are subsuming human relationships without our being fully aware of it. If so, the online avatar—not to mention the movie Avatar itself—signals a fundamental change in our concept of human connectedness. We may wonder, for example, whether friendship has become less durable now that both “friend” and “unfriend” have become verbs.
There is no doubt that BlackBerrys and iPhones have become our constant companions in a hybrid universe of wireless technology and human mobility. At a time when Iron Man 2 has had one of the largest-grossing premiere weekends on record, there also seems little doubt that the interpenetration of machines and people will grow more pervasive, more ornate, and also more invisible as nanotechnology matures. Americans born years from now may presume that all of this is perfectly natural (word chosen carefully). Already, modern machines can certainly be made to seem more reliable than people or their governments. In Iron Man 2, Tony Stark boasts of the reliability of his “prosthesis”, saying that together they have privatized world peace.
Whether or not these new tech toys are in fact more dependable than human beings, our relationships with them in films are apparently more entertaining. In recent years the two Transformers films idealized a relationship between Sam Witwicky (Shia LaBeouf) and Bumblebee, a robot protector who transforms into a Chevy Camaro. I watched both films with my sons, aged 5 and 15. They understood immediately when Bumblebee invents a one-way system to communicate with Sam using extracts from old radio and television broadcasts. The boys loved it. I was entertained, too, but also left strangely uneasy. When the vestigial noise finally left my head (these are very loud films, crosses between a heavy metal concert and a demolition derby), I recognized the long curve of our relationship with personal machines as a series of increasingly explicit prosthetic friendships.
Bumblebee, of course, is not the first car to be personified. That has been an American cultural constant since the debut of the Tin Lizzie in 1908. We used to name our horses, after all, so it was not a big stretch for people to name their cars. But it is the trajectory of our personification of the automobile that is really interesting. At first car names were merely incidental, examples of casual, playful humor at best. Then the names acquired personalities with their own narratives, as with, for example, the mostly harmless 1965–66 NBC sitcom “My Mother the Car.” Then came feature-length films and television shows about machines with far more elaborate personalities and complex tales to tell, including The Love Bug (1968), Christine (1983), Knight Rider (1982–86), and, of course, Lightning McQueen in Cars (Pixar, 2006).
What accounts for this trajectory? Well, loneliness clearly has something to do with it. Actions and images that anthropomorphize or personify inanimate objects are among the distinctive symptoms of extreme loneliness, and in America loneliness has reached epidemic proportions. In a 2006 survey, one in four Americans claimed not to have a single personal confidant or friend.2 The underlying boredom and loneliness that make us enjoy stories about cars as friends, rely on smart phones as social prostheses, or even “talk” to a sex doll have become driving forces behind the growing gadget-forest of personal technology.
This dynamic speaks volumes about what is, upon some reflection, a remarkable inversion of perception and behavior. We used to be ashamed about sex and sexuality, not so much about being lonely, which was taken to be both an occasional reality and, particularly for women, a condition worthy of sympathy. Traditionally, men were expected to be more stoic, but even they caught a break from time to time. Mark Twain, the 19th century’s premier poet of loneliness, described matters in 1876, when Tom Sawyer and Huck Finn escaped to the woods in The Adventures of Tom Sawyer: a “sense of loneliness”, Twain wrote, “began to tell upon the spirits of the boys. . . . But they were all ashamed of their weakness and no one was brave enough to speak his thought.” Nowadays, we take sex in stride, but loneliness has become more taboo than ever. University of Chicago neuroscientist John Cacioppo observes that today, unlike in the 20th century, we cannot say “I’m lonely” without experiencing shame. In America, loneliness has become the emotion that dare not speak its name. Tellingly, we don’t even have an emoticon for it.
Despite the contemporary power of this inhibition, 20 percent of all Americans—about sixty million people—now confess that they feel “sufficiently isolated for [loneliness] to be a major source of unhappiness in their lives.”3 And it seems fairly clear that this acute lack of human company has stimulated some unusual, if not to say desperate, responses. Some escape their lonely worlds by entrusting the desire for companionship to expensive toys. In a troubling essay about suicide, Karl Marx long ago asked: “What kind of society is it wherein one finds the most profound loneliness in the midst of many millions of people?” But in the age of Facebook, when we have paradoxically become lonelier and sadder than ever before, Marx’s question seems positively naive. According to the 2000 census, we live in a society in which 25 percent of American households are single-person dwellings—more than at any other time in history. How did this happen, what does it mean, and how does it help us to understand the expanding, if mostly unwitting, use of machines either as social prostheses or as analgesics that numb the pain of loneliness?
Themes of human solitude trace back to communication technology’s beginnings. The telegraph required its operator to isolate himself in a room or shed in order to focus on a coded signal from a distant source, ignoring any immediate distractions around him. In the years since the telegraph’s debut, we have become accustomed to paying attention to electronic signals at the expense of our immediate surroundings. Headphones have clearly played a major role in our migration into this technological fifth dimension. Candlestick telephones offered rural Americans a respite from loneliness, but the earpieces of early phones forced callers to focus on a signal coming into their right ear. Most people screened out external noise by hunching over the receiver and plugging their free ear with a thumb.
Distractions were a constant problem in an age that could not amplify incoming calls. In the late 1870s, noise overwhelmed telephone exchange rooms “manned” by young, rowdy boys with big-city manners. In desperation, phone companies switched to female operators, who were quieter and cheaper. After 1880, these women were given headphones to shut out distractions. Operators soon became so oblivious to their surroundings that they could be (and were) packed into tiny switchboard rooms.
As Frederick Winslow Taylor’s scientific management principles took hold in the 1910s, the American workplace became increasingly mechanical. The lifeblood was sucked out of offices and factories in order to raise productivity and reduce labor costs. The management habit of squeezing women into small workspaces expanded. Row upon row of young women supplied with gramophones and stethoscope-like “ear-tubes” were packed into typing rooms where they transcribed an endless flow of business letters. Gossip and interaction were forbidden and, as a result, boredom and loneliness were endemic in this literally crowded yet profoundly isolating mechanized workplace.
The personal devices that are so ubiquitous today were late starters in the development of American technology. Their earliest predecessor may be the Belmont Boulevard Pocket Radio, which debuted in December 1945 in Life magazine. (The Boulevard preceded the fictive two-way wristwatch radio, which appeared a few weeks later in Dick Tracy’s comic strip.) Transistors had not yet been invented, so the Boulevard relied on miniature vacuum tubes and delivered sound through a single monophonic earphone. It didn’t entirely shut out the rest of the world, but the ability to accompany its owner enabled the Boulevard and its successors—transistor radios—to deepen the isolation and loneliness of their users while appearing to ameliorate them.
Perhaps the best way to track the development of prosthetic social relationships is to look not at individual behavior, but rather at how migrant groups seeking a place in American society dealt with the loneliness of the immigrant experience. The explosive growth of America’s industrial centers in the late 19th and 20th centuries resulted mainly from an influx of European peasants and American farm children. Both groups were pushed by economic desperation and pulled by opportunity to the Northeast and the new cities of the Midwest. In the whirlwind of this new environment, every newcomer was rushed, disoriented and probably, to some degree, lonely. Stepping out of a train or off a gangplank into the American mainstream meant joining an urgent competition to de-marginalize and acculturate. From the first moment, city life was packed with interpretive challenges. To satisfy immediate needs like food, shelter, work and rest, every migrant had to orient himself quickly. As never before, Americans merged their sources of information and diversion, expanding them rapidly to fill up the spaces of both their work and private lives. In The Homeless Mind (1973), Peter Berger, a pioneering sociologist of knowledge, suggested that the “mechanisticity” of America’s new technological society demanded “the creation of a private world in which the individual [could] express those elements of subjective identity that were denied in the work situation.” New spaces and new machines helped them do this.
Much of the new entertainment technology was devoted to replacing the tradition of singing at work that faded in the decades of rapid immigration and urbanization following the Civil War. The working culture Whitman describes in “I Hear America Singing” disappeared in the bustle of Northeastern cities where saloons, cinemas screening silent films, player pianos, consumer sports and comic strips accommodated workers’ fragmented schedules. Many of these also preempted the need for sophisticated language. Foreign-language newspapers provided information while immigrants learned to speak English. Gramophone records and English-language dailies taught Europeans how to speak and read American English, while young people from the American countryside used the same rudimentary technology—cheap newspapers produced on steam presses—to learn what to say and when and how to say it. Those new to urban life were vitally concerned with learning what was good and what was bad, how to act and how not to act, what to like and whom to love. At stake was not just learning vocabulary, but also acquiring the social protocols that animated it. Answers, advice and models were everywhere in the newspapers, the movies and, with them, the rising culture of advertising. America was remaking itself outwardly into an industrial giant while simultaneously redesigning the interior of its national psyche. A new kind of American was emerging: the “other-directed” citizen who took his cues about what was good and right from the thoughts of people he had never personally encountered.4
Other-directedness no doubt contributed to the frenetic pace of urban life, because it seemed as though everyone in post-bellum 19th-century American cities thought it was their purpose to work as hard as possible to get as rich as possible. Being a workaholic was the most socially acceptable method of dealing with the loneliness endemic to 19th-century urbanization. As that urbanization intensified through the 1860s, Twain observed that city people had very little time at their “disposal to fool away on matters which do not involve dollars and duty and business.”5 The extended visits that sustain human relationships had become too time-consuming for urban life. It is no coincidence that the retail pet industry dates from this era, as does the manufacture of toys that facilitated the solitary play of modern childhood.
There were also less socially acceptable ways to deal with chronic urban loneliness and isolation. The most conspicuous of these were relatively new vices for which the end of the 19th century has become famous: prostitution, alcoholism and drug abuse. Each of these was influenced by new technology: Mass-manufactured condoms slowed the spread of venereal disease in America’s cities; during the Civil War, vast improvements in refrigeration made cold beer and drinks on-the-rocks available even in the dog days of summer; and the hypodermic syringe, a brilliant innovation that transformed European medicine after 1843, migrated from Scotland to America in 1853, facilitating an epidemic of intravenous drug use after morphine became increasingly available during the Civil War.
There were technological developments in music, too. The first commercially successful automatic or “player” piano—the Pianista—was patented in 1863. In 1877, one year after Alexander Graham Bell invented the telephone, the famous workaholic Thomas Alva Edison announced his working phonograph. Improved “gramophone” technology emerged gradually, but once Louis Glass combined the device with a nickel slot machine in 1889, factory, office and sweatshop workers could all summon music on demand. They no longer needed to painstakingly learn to play an instrument or seek out someone who had. Just as business letters came to the ears of female typists, Edison’s ear tubes delivered popular songs. Droves of paying customers hunched over playback machines in public parlors near ferry, trolley and rail terminals. After the Chicago World’s Fair of 1893, listening to music on gramophone slot-machines became a national pastime, and music changed from do-it-yourself, shared entertainment into a consumer product created by technical specialists as well as by musicians. A new kind of listening—acousmatic listening, or listening to music with no visible source—became the strangest feature of music in the burgeoning machine age. Oxford musicologist Eric Clarke claims it is less peculiar to watch a silent film than to listen to disembodied music because “vision is the socially dominant sense in our culture. . . . To leave that sense ‘dangling’, as acousmatic listening does . . . is perceptually incongruous.”6
In the nickelodeon, people with ear tubes clustered around the new devices while avoiding the awkwardness of eye contact. Relief for this minor social discomfort came soon after, when gramophones became available on easy credit terms. By 1901 most models could be purchased for $1 per week. The uncomfortable sensation of listening to disembodied music in public or in company was suddenly ameliorated by the relative portability of gramophones. Even models with large amplifying horns could be wheeled into separate rooms where listeners could enjoy music all by themselves.
Though it now seems odd to say so, those who listened to short popular songs alone on their gramophones were cultural revolutionaries. They initiated a psychological change as far-reaching as the one Walter J. Ong described in Ramus, Method, and the Decay of Dialogue (1958) as the shift between oral and scribal culture. Indeed, solitary listening had an impact as profound as that of widespread literacy after Gutenberg. Like a reader, the solitary listener is part of an abstract, far-flung audience that reaches far beyond the here-and-now of a live performance. “Solitary listening”, writes the preeminent historian of recording technology, Mark Katz, “is now the dominant type of musical experience in most cultures.”7
We underestimate the meaning of this development at our peril. Where Gutenberg freed middle-class Europeans from the mediation of the educated classes (aristocrats and clergy), recorded music freed lonely Americans from the silent tyranny of their own company. Where reading made people more capable of purposeful and extended linear thought, however, solitary listening reduced their span of concentration and made them less tolerant of boredom. Where reading deepened character, solitary listening ignored character altogether. As texture yielded to the low-fidelity, three-minute span of the gramophone, the idea of music narrowed from the relatively long and complex structures of religious music and classical symphonic composition to that of disposable pop songs.
Still, people could play whatever music they wanted, whenever they wanted it, enabling them to use music as a diversion from the harsh, persistent internal voices that nag those struggling in a difficult new environment. Music lifts one’s spirits, offering diversion, escape and comfort. Industrialization needed music for these purposes if not others, and the gramophone served it up, hot and ready, encouraging people to get up and dance to ragtime’s new sounds. Lonely and bored teenagers fell into dancehalls where they spooned and rubbed themselves against likeminded strangers in the American night.
The star system, too, which film studios had invented to deepen audience loyalty, soon came to promote pop music, first on the gramophone, then on radio. A large number of the Hit Parade high-fliers of the 1930s and 1940s originated in film, “Over the Rainbow” from The Wizard of Oz being only the most famous of many hundreds. From film and radio, music moved eventually to television. Like solitary listening, the stars ameliorated America’s loneliness without offering reciprocity. We became one-way intimate with the faces we saw on the screen and the voices we heard on the airwaves, but we never interacted with them.8
At first, the deeply antisocial character of such solitary listening prevented its widespread acceptance. It implied, as Eric Clarke writes, “a visible withdrawal from the social context and immersion in an intensely private world that people may find unsettling or offensive.”9 In 1923, essayist Orlo Williams observed, “We think, people should not do things ‘to themselves’ . . . they may not even talk to themselves without incurring grave suspicion.” Williams was worried that if he were seen listening to his gramophone alone, others would think him crazy. He wrote, “If I were discovered listening to the Fifth Symphony without a chaperone . . . my friends would fall away.”10
In fact, this is exactly what happened to social circles in the remainder of the 1920s, and throughout the rest of the century. The friends of solitary users of personal technologies fell away, not because they thought users were crazy, but simply because they spent less and less time with them. The shift to the selfish (but lonely) enjoyment of electronically rendered music seemed complete by 1931, when an anonymous writer in Disques celebrated the impersonal experience of guiltless solitary listening:
alone with the phonograph, all . . . unpleasant externals have been removed: the interpreter has been disposed of; the audience has been disposed of; the uncomfortable concert hall has been disposed of. You are alone with the composer and his music.
By the early years of the Depression, solitary listening had become a commonplace in America and elsewhere in the Western world. Unemployed millions made their way through a decade of hard times cheered on by the sponsored entertainments of network radio, exemplified by Yip Harburg’s popular anthem “Brother, Can You Spare a Dime?”
In 1932, car manufacturers began installing optional radios for the first time. Increasingly, people were spending more time driving alone, and some Americans were willing to pay handsomely to listen to a radio in the privacy of an automobile. Two years later, Muzak, Inc. began to provide a similar service, distraction, in the public spaces of Manhattan hotels and restaurants. Muzak has often been called “mindless music”, and that’s right: Mindlessness is exactly the service it was first meant to provide. After all, other-directed Americans, as opposed to the older inner-directed types, were uncomfortable when they were alone. In elevators and in cars, music was deployed either as an analgesic diversion or as a prosthesis for company. Some broadcasts combined both elements perfectly, as did the Dinah Shore Show, sponsored by Chevrolet. Now Americans could hear Dinah sing “See the USA, in your Chevrolet” when they were, in fact, on the road. If you were driving a Ford, it just felt wrong, and that, of course, was the point.
By 1932, too, American radio ownership approached saturation, as Archibald MacLeish observed in “Invocation to the Social Muse”:
Señora, it is true that the Greeks are dead.
It is true also that we here are Americans:
That we use the machines: that a sight of the god is unusual:
That more people have more thoughts: that there are
Progress and science and tractors and revolutions and
Marx and the wars more antiseptic and murderous
And music in every home: there is also Hoover….
In order to sell even more radios, American manufacturers began the “radio in every room” advertising campaign, which specifically promoted listening as a solitary activity as opposed to the communal experience it had been in the 1920s when ownership was rare. The basic pattern persists today and has in fact intensified. Rush-hour radio time slots still cater to a commuting audience, but in addition, people can now plug their iPods into car stereos and listen to individually tailored music downloads or podcasts. It’s unthinkable in 21st-century America to be silent and alone while driving a car. No wonder so many people use their cell phones in traffic. As George Prochnik observed recently in In Pursuit of Silence (2010), “Silence is for bumping into yourself. . . . [P]eople . . . seek to avoid that confrontation.”
The pattern of family ownership preceding the marketing of individual devices repeated itself in the manufacture and sale of televisions, beginning in the late 1940s. It’s worth noting that before the switch to digital television began last year, there was at least one working analog television for every person in America. Of course, such market saturation frightens electronics manufacturers who depend on a constantly growing market. The only solution to saturation is thus to shift formats. America’s shift to digital television is less about delivering improved image quality than about creating a market for electronics manufacturers.
The possibility that a mechanical device might become a surrogate friend didn’t emerge until miniaturization created mobile technology. In 1954, the Regency TR-1, a shirt-pocket radio built around four Texas Instruments transistors, went on sale for $49.95. Over the next decade, as the price of Zenith, Motorola and Sony transistor radios came down, their prevalence increased. But despite their portability, transistor radios were fundamentally social devices. Most offered an external speaker, and many included dual earphone jacks so that rock ’n’ roll, the emergent revolutionary music, could be shared on the go with a friend.
Two decades later, Akio Morita, who had designed Sony’s TR-63 transistor radio in 1957, repeated its dual-jack/dual-sound-control option when his company introduced the Walkman in 1979. The Walkman had no external speaker; its users were isolated from unpleasant city sounds by stereophonic supra-aural headphones. Cliff Richard, an old English rocker, sang enthusiastically about the new device: “Walkin’ about with a head full of music/cassette in my pocket and I’m gonna use it/ster-E-O-/out on the street you know/woh o who . . . I’m wired for sound.”11 One year later, in 1980, as Sony removed the second earphone jack, Pink Floyd sang: “I don’t need no arms around me . . . don’t think I need anything at all.”
Solitary use of personal music devices became normal during the 1980s, but few people projected a personality either onto the Walkman or onto the flurry of personal tech devices that followed it: CD players, Game Boys, MP3 players and cell phones. Ultimately, social acceptance of our projection of personality and friendship onto an electronic device had to wait another generation, until the turn of the millennium. It bears repeating that, remarkably, the 2000 census revealed that one in four American households consisted of a person living alone. That same year, the film Cast Away depicted an island-bound airplane crash survivor using a volleyball to create a companion he named “Wilson.” Perhaps the creation of this personal “fetish” marks a turning point in how America has dealt with chronic loneliness ever since; the film ratified, to an extent, the practice of projecting humanity onto an inanimate totem, a practice our forebears would no doubt have interpreted as outright insanity. The following year saw the debut of Apple’s iPod, a truly totemic device whose sleek, compact design invites fetishization. Both iPod and iPhone are shiny, playful, mass-produced Wilsons whose friendship we can all afford. After all, the first iPod ad emphasized its practicality as a street companion for the solitary listener. It encouraged its users to “Think Different.”
America listened. Its children are listening now, too, but to what effect? Norman Nie, a clever researcher at Stanford University, wanted to resolve whether the Internet is a socially enabling tool or one that displaces real social interaction with technological substitutes. In 2002, Nie demonstrated that “for every hour spent on the Internet at home” his subjects spent “an average of almost 30 fewer minutes with their family.”12 In other words, the more time we spend using technology, the less time we spend in real human interaction. Thus have devices once used to relieve loneliness become, in effect, generators of loneliness.
This simple fact is especially alarming because, according to an astonishing report by the Kaiser Family Foundation, American children in 2010 spend an average of seven hours and 38 minutes a day with entertainment media.13 In previous decades, nearly four hours of this time would have been devoted to interacting with family members. But now the intensity of electronic engagement has a much greater appeal than family life; it is chock-full of sometimes literally addictive games, music and videos.14 And thanks to multitasking, a child actually packs nearly 11 hours of media content into those seven and a half hours. In the online world, distraction is piled on distraction to deliver an intensity of experience that mere reality cannot hope to match.
Case in point: In an experiment conducted last year, so many people were engrossed in cell phone conversations that only 25 percent of them noticed a clown ride past them on a unicycle. For members of the Avatar generation, immediate surroundings appear to be less important—much less exotic, interesting and intense—than whatever happens in cyberspace. And so we have fabricated the means never to feel psychologically alone, but only at the cost of actually being alone. What would Faustus say about a bargain of that sort?
2. Miller McPherson, Lynn Smith-Lovin and Matthew E. Brashears, “Social Isolation in America: Changes in Core Discussion Networks over Two Decades”, American Sociological Review (June 2006).
3. Cacioppo and William Patrick, Loneliness: Human Nature and the Need for Social Connection (W.W. Norton, 2008), p. 5.
4. David Riesman, author of the 1950 bestseller The Lonely Crowd, is usually credited with the idea and the term “other-directed.” A fascinating and comprehensive history of the transformations of the self in the urban setting appears in Philip Cushman’s brilliant account of the American psyche, Constructing the Self, Constructing America (Da Capo Press, 1995).
5. Twain, Travels with Mr. Brown (Knopf, 1940), p. 259.
6. Clarke, “The Impact of Recording on Listening”, Twentieth Century Music (March 2007), p. 45.
7. Katz, Capturing Sound: How Technology Has Changed Music (University of California Press, 2005), p. 189.
8. See David Kirby, “Celebrities R Us”, The American Interest (Spring 2006); and Donald Horton and R. Richard Wohl, “Mass Communication and Para-social Interaction: Observations on Intimacy at a Distance”, in Gary Gumpert et al., eds., Inter/Media: Interpersonal Communication in a Media World (Oxford University Press, 1979), pp. 32–55.
9. Clarke, “The Impact of Recording on Listening”, p. 45.
10. Williams, “Times and Seasons”, Gramophone (April 1923), pp. 38–9.
11. B.A. Robertson and Alan Tarney, “Wired for Sound” (Sony/ATV Music Publishing UK Ltd: R & B Music Limited, 1979).
12. Norman H. Nie and D. Sunshine Hillygus, “The Impact of Internet Use on Sociability: Time-Diary Findings”, IT and Society (Summer 2002).
13. Kaiser Family Foundation, Generation M2: Media in the Lives of 8- to 18-Year-Olds (January 2010).
14. On the addictive nature of electronic games, see Harvey Milkman and Stanley Sunderwirth, Craving for Ecstasy and Natural Highs (Sage, 2010), section IV.