“Social media fatigue”, I said to the dozen tenth-graders in the paneled library of Ft. Lauderdale’s Pine Crest School. “Who’s heard of it? Has anyone felt it?”
The students remained silent. Nothing beats silence to make teenagers blurt out their thoughts. I let the silence hang until a blond, acne-afflicted 15-year-old swathed in a green varsity jacket slowly raised his hand.
“I think you’re talking about my New Year’s resolution”, he said.
It was February 3, but the fact that this teenager had made any kind of resolution piqued my interest. Let’s call him Emerson. His family was evidently well-off enough to afford the $29,000-a-year tuition that Pine Crest commanded. He and his classmates seemed cut from similar cloth: The boys I’d seen in the corridors and commons wore the same slip-on sneakers, jeans, and T-shirts over which they layered a sweatshirt or jacket adorned with the school logo. The girls were likewise reproductions of a template ingénue: long hair parted in the middle, jeans, fitted T-shirts, and school sweatshirts knotted around the waist.
At Pine Crest, conformity was evident in another way, too. Here we were in a gorgeous library, heavily marbled, with a rotunda that reminded one of Thomas Jefferson’s Monticello. Kids lounged in leather wingbacks as if they did nothing but read the Economist all day. But they were heads-down, earbuds in, staring into their latest Apple gear while the books remained in situ on the library’s finely woodworked shelves.
Emerson clarified why he wanted to cut back on social media. “I felt overwhelmed by it, like I was addicted to my phone.” Classmates nodded. He spoke to his fear of missing out—fomo in teenspeak—and complained about the pressure to keep up with the texts and tweets that minutely chronicled lives that were merely one-and-a-half decades old. Already, these high-achieving tenth-graders felt stretched thin. But people have “missed out” for centuries, of course. What I wish I could have impressed on these ambitious youngsters is that attention is a biologically finite resource, and that fomo makes them squander it needlessly.
Emboldened, perhaps, by Emerson’s honesty, a girl I’ll call Charlotte took up his theme: “Your life is out there, visible to everyone. Anybody can check out what you’re doing, what your values are, where you go, who you hang out with.”
“Even the way people shoot a selfie makes them look different from how they look in real life”, said another. “There’s a lot of artifice.”
Alicia chimed in, “People only show you their highlight reel. You don’t see doubts or insecurities. It creates an impression that they’re perfect even when they struggle on the inside like the rest of us.”
Our library conversation came as part of a larger project about digital distractions for which I’ve been interviewing students and teachers around the country from kindergarten through college. As a home to ambitious students, Pine Crest is a good microcosm in which to examine the effects of digital technology on our collective ability to learn and remember, and how it is increasingly leading to the imbecilization of America and beyond. Anecdotal evidence from these interviews suggests that at less privileged schools, where students are not so inwardly driven, the problem is orders of magnitude worse.
I asked the students of the Pine Crest cohort to describe their relationship to their phones, knowing that social media, games, and texting were typical teenage priorities. “I have a very good relationship with my phone”, a smart aleck named Jared said, with the emphasis on very. “I can see my class schedule on my phone. I can study on my phone. I don’t need to take out my laptop.”
When I asked how many in the group multitasked, all hands went up. “Suppose I told you that multitaskers perform far worse than those who focus on one task at a time.”
“Where’d you get your information from?” Jared challenged me, clearly in a snarky mood. “I guess I’d better change my perspective on life because I multitask everything.”
Stanford University, I told him. Research some years back by Dr. Clifford Nass demonstrated that the worst-performing individuals were those, like Jared, who believed themselves most proficient at multitasking. Jared needed to show off in front of his peers. Unsurprisingly, the attention seeker was also the one most obsessed with his screens.
Scientists debate whether we are truly addicted to our devices. They also question whether the internet makes us dumber by weakening memory or smarter by showering us with facts, and whether social apps isolate us more than they connect us. What no one seems to dispute is that our attention spans have gone to hell.
Dr. Hilarie Cash finds the evidence for internet addiction persuasive. She runs a rehab for digital detoxification in Washington state called reSTART, a place to “unplug and find yourself.” Digital addiction shares symptoms with other medically recognized dependencies such as alcohol, drugs, and gambling: increasing amounts of time spent texting, gaming, and online; failed attempts to control and limit behavior; lying to others and oneself about the time spent on digital activities; feeling depressed or anxious when offline; neglecting friends, family, work, and school; withdrawal from exercise, being outdoors, and other pleasurable activities. The reSTART detox restores the idea of a Sabbath, a day of rest to rediscover one’s soul, reconnect with what matters, and reclaim one’s attention span.
Attention and memory are our brain’s two most precious resources. I say this as a neurologist rather than a sociologist, cultural critic, or philosopher of phenomenology—which does not mean that keen and careful observations, such as those offered by Nicholas Carr in The Shallows (2011), are without value. But I approach the problem from the specific issue of energy cost, given that attention is a biologically fixed resource. No amount of exercise, puzzles, diet, supplements, or lifestyle change can increase the limited bandwidth of attention that the brain has to work with.
If attention is in finite supply, then why do we squander it so carelessly? Why can’t we let go of our phones, even when on the toilet? Some Pine Crest first-graders have phones, the admissions director tells me, but usage skyrockets in the second grade. Such early cellphone use is all the more dangerous because digital technology does not just consume attention; it also shapes the brain. Would parents take devices away if they knew they were shaping the neural pathways of their child’s developing brain in harmful and irreversible ways?
At the Institute of Neuroinformatics at the University of Zürich, Arko Ghosh has found that smartphone swiping alters the brain in direct proportion to phone use. Repeated swiping remaps the representation of the hand in the brain’s sensory cortex. The changes measured in this preliminary research were temporary, but there is no reason why they couldn’t become permanent.
We have long known that the brain is highly plastic: Its structural pathways change in response to experience, the definition of learning. Anything we practice changes the brain, particularly a still-developing one. Fixate on screens enough and we become glued to them. Consider string musicians in whom the sensory area devoted to the non-bowing hand grows to invade neighboring areas, or newly blind individuals learning Braille in whom the area devoted to the “reading finger” expands into the now unused visual cortex. Ghosh is not demonstrating plasticity per se but asking which aspects of smartphone use the brain responds to and which it ignores. How does a reshaped brain feed back to and alter the individual? We have not yet conducted the research to answer such a question.
What we know for musicians doesn’t necessarily apply to smartphone users. Musicians practice scales and études for hours a day to hone their technical skills, whereas people use their smartphones throughout the day in a variety of contexts. The swipes may look similar, but electrical brain readings reveal that some yield a high pay-off for the effort while, at other times, the same gesture yields nothing. Research like Ghosh’s remains scarce because neither science nor society yet considers the topic of digital overuse worthy of study—reminiscent in some ways of the years it took scientists to persuade people that television as technology (never mind program content) has neurophysiological effects on viewers. Society must take seriously the likelihood that prolonged and repetitive exposure to digital devices changes the way we think and behave.
If I am wrong about this, then my efforts to limit my exposure will be for naught. But if I am right, then my thinking, attention, and mood will have benefited. For years our household resisted smartphones. We had inexpensive Tracfones mainly for emergencies, and we never used our 90-day allotment of minutes. Two years ago, we finally got smartphones, mainly for the GPS maps. Within an hour of fiddling with my new toy I said, “Now I know why people say they’re addicted.” For the record, I check my email by phone only if I haven’t already opened it on my desktop (and there I check once after breakfast and again at 4 p.m., never on weekends). If I go out and leave my phone at home I don’t freak out. Nothing can be that important. I miss most mobile calls because I don’t carry the device around on my person. I view digital devices as useful tools, but dangerous ones that can easily waste my time and distract me. Like every blessing, this one is decidedly mixed. My advice remains, “User beware; you are being manipulated.”
Want to blame someone or something for this? Corporations? Celebrity culture? Social apps? Go ahead, but the truth has to start with accepting that our stone-age brain is no match for today’s collective state of mental overload. We have exactly the same brain as our distant ancestors. Its attention capacity has profound limitations that modern life taxes past the breaking point. We cannot get past the biological constraint by willpower or by applying external forces.
Here is why. Modern humans have been around for about 200,000 years. During 99 percent of that time we did little but survive and procreate, with a little cave art tossed in. Climatic extremes settled down roughly 10,000 years ago. Agriculture appeared shortly thereafter, and life became more predictable and less of a challenge. Even just a century ago, life was far slower, quieter, and less complicated than it is today.
When not much changed except the seasons, the brain became a change detector meant to be distracted. A sound, a sudden movement, or a strange smell might signal a threat. Because we had to be ready for anything, our change detectors were always on alert and they remain so today, even while we sleep. Maintaining a mandatory, constant state of vigilance eats up a steady slice of the brain’s power allotment, which is likewise limited and fixed.1
In terms of energy use, switching attention incurs a high cost. We are not good at it. Yet consider how many items vie for our attention in a given hour compared to what our ancestral brain evolved to handle. Our brains still operate at speeds of about 120 bits (~15 bytes) per second. It takes roughly sixty bits per second to pay attention to one person speaking, half our allotment right there. Arithmetic shows why multitasking degrades performance. Verizon’s 75 Mbps fiber optic connection shoots data into my home at more than 600,000 times the rate my biological brain can handle. We ask our stone-age brains to sort, categorize, parse, and prioritize torrential data streams they never evolved to juggle, while in the background we have to stay ever vigilant to change in every sensory channel. Is it any wonder people today complain of mental fatigue?
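For readers who want the back-of-the-envelope version, the figures above work out as follows, taking 120 bits per second of attentional bandwidth, 60 bits per second per speaker, and a 75 Mbps line as round numbers:

\[
\frac{60 \text{ bits/s (one speaker)}}{120 \text{ bits/s (total attention)}} = \frac{1}{2},
\qquad
\frac{75 \text{ Mbps}}{120 \text{ bits/s}} = \frac{75{,}000{,}000 \text{ bits/s}}{120 \text{ bits/s}} \approx 625{,}000.
\]

Listening to a single speaker already claims half of the brain’s bandwidth, and the fiber line delivers data several hundred thousand times faster than the brain can attend to it.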
Fatigue makes it even harder to sort the trivial from the salient and navigate the glut of decisions modern life throws at us. Google searches return hundreds of results that force us to make just as many hurried decisions. Screens of all sorts serve up rapidly changing images, jump cuts between scenes, erratic motion, and non-linear narratives that spill out in fragments.
Dr. Arnold Wilkins at the University of Essex has studied the side-effects of screen exposure and the viewing of artificial, or mediated, images. These include migraines, seizures, fatigue, and visual discomfort. With screens, the main culprit is flicker, fluctuations in an image’s brightness that occur when the number of frames per second is too small or the refresh rate too low for persistence of vision. Flicker can induce seizures in sensitive individuals (television viewing, reading, and driving past telephone poles or a stand of trees that produce dark-light flicker are common triggers of such “reflex epilepsy”). With old-fashioned television, 50 percent of children who had such seizures had them only while watching television. Today’s LED screens are much brighter, making the problem of flicker more noticeable.
Ambient lighting increasingly employs LEDs, too, providing yet another source of visual and mental strain—not just for sensitive individuals but for all of us. The predominantly blue light emitted by LED screens also disrupts melatonin rhythms and thus the sleep architecture of an already sleep-deprived nation. Yet many people snuggle up to their devices before bedtime, perhaps while they multitask, watching the news on a second LED screen. One colleague’s son sleeps clutching his phone. Imagine if we all functioned at peak performance instead of being cognitively impaired from chronically poor sleep.
Coupled with rapid content shifts that don’t allow time to make out detail before a scene switches to something else, the strain of all this saps our energy. “We’re designing an environment that is antithetical to what your visual system evolved in”, says Wilkins. The energy drain is not metaphoric: The brain accounts for only 2 percent of body weight yet consumes 20 percent of the daily energy we burn. In children the figure is 50 percent, and in infants 60 percent, far more than one might expect for their relative brain sizes.
Wilkins has discovered that the natural images our visual system evolved to process have a particular spatial structure. They are “scale invariant”, meaning that no matter how much you enlarge them they contain the same amount of detail. The brain can process invariant images quite efficiently using a small number of neurons. Unnatural images, by contrast, are scale variant. The degree of variance determines how uncomfortable an image is. Unnatural images, particularly the stripes and patterns that populate modern urban environments, turn out to be measurably uncomfortable to look at.
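One common way vision scientists make this precise, offered here as my own gloss rather than Wilkins’s wording, is that the average Fourier amplitude of a natural scene falls off roughly in inverse proportion to spatial frequency:

\[
A(f) \propto \frac{1}{f},
\]

where \(A(f)\) is the average amplitude of image content at spatial frequency \(f\). Images whose contrast energy departs sharply from this relationship, high-contrast stripes concentrated at a single frequency being the extreme case, are the ones observers tend to judge uncomfortable.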
In modern environments striped patterns are everywhere—stairways, lighting grids in large stores, corrugated and reticulated surfaces of buildings, the right angles of buildings themselves. One stripe pattern that we look at every day is text. Masking lines speeds up reading because one is covering up uncomfortable stripes already read or yet to be read. Small type size matters, too. Kids have traditionally learned to read from books with large type. Yet, says Wilkins, “text is getting too small for children too early in life.” More alarming, iPad holders are becoming the new norm on car seats, toilet-training seats, and bassinets. My advice is that screens of any sort should be forbidden before age three. If I’m wrong, no harm’s done. If I’m right, the harm to developing brains is irreversible.
Unnatural, strain-inducing images cause abnormally large oxygen uptake in the brain for the simple reason that analyzing them takes more work than analyzing natural images such as trees, clouds, bodies of water, and mountains. Wilkins suspects that visual discomfort is a protective response that dampens the spike in oxygen consumption that taxes brain energy reserves. He has further found that “in nature we don’t get images with large color differences.” That is quite the opposite of what we encounter in our screen, urban, and Disneyfied worlds. “We’ve known for a long time that nature is restorative”, he adds. “It’s nice to go for a walk in the woods or on the beach. It makes you feel better. Part of the reason is that you’re not looking at stripes all the time.”
The other day I stood in an elevator lobby at George Washington University where I teach medical students. Doors opened and a dozen undergrads spilled out, all staring down at their phones, oblivious to my presence as they jostled and bumped into me. The screen lock on their attention created a blind spot that rendered me invisible, a phenomenon called “inattentional blindness.” This literal blindness is not a flaw. It is a consequence of how the brain developed: It ignores everything that isn’t an immediate priority even when it is staring us in the face.
Attention is like a sharp-edged spotlight. What lies beyond its perimeter lies in our cognitive blind spot. By definition, then, we never know what we are missing. As an educator I worry that our current students and future leaders won’t be able to focus, prioritize, delegate, meet deadlines, or shepherd a task through to its end. They have already ingrained slovenly habits that undermine their ability to learn and remember.
The Pine Crest staff echoes my worries. “Kids show up not ready to learn”, says one of the school psychologists, a woman with decades of experience. Her team puts candidates through extensive testing starting with pre-K, because Pine Crest wants its students to succeed. Its evaluations confirm that the best-loved teachers are tough yet fair, ones who push students because they believe they can excel. For that kind of teacher to succeed, student listening skills are critical: “If children can’t focus and are easily distracted, they’re not going to learn.”
Part of what this much-sought-after school confronts concerns temperament. Some four-year-olds “can sit still beautifully” while some six-year-olds can’t. Although Pine Crest expects children to have learned patience and delayed gratification at home, many still show up lacking self-control, the kind illustrated in Walter Mischel’s famous “marshmallow test.” In that experiment, a child is offered a choice between one marshmallow immediately or two if he or she can wait 15 minutes, during which time the examiner leaves the room. Those able to delay gratification rate more highly in later years on measures of success such as education, accomplishment, and income. Alas, the instant, shallow gratification that children get from their screens is not conducive to the patience necessary for long-term success.
Another atrophying skill the Pine Crest staff notices is motor coordination. A teacher draws a straight line, yet the child can’t copy it. “It’s alarming”, a school psychologist says. “Their fingers don’t know how to hold a fat pencil or crayon. They hold it limply or like a spear. Their line is wiggly. Fine motor development simply isn’t there.” Research confirms that different brain parts engage when writing versus typing.2 My current crop of medical students is the last cohort that knows cursive writing. Loss of this mind-hand connection is lamentable. We have always known that personality shows itself in handwriting. The loss is compounded by “busy” teachers who deem penmanship unimportant. In reaction, smart parents are signing their offspring up for penmanship classes. Cursive classes at Fahrney’s Pens in Washington, DC, for example, are regularly oversubscribed.
Another regression relative to earlier cohorts shows up as trouble assembling simple puzzles. “They aren’t playing with things that kids in the past did—modeling clay, jacks, scissors, coloring books, tracing, pickup sticks. They’re not using their fingers to manipulate. They only use them to swipe and type.” Readers have no doubt heard stories of toddlers, perhaps their own, who swipe the pages of open magazines, expecting the pictures to change.
New technology is here to stay, concedes Pine Crest’s admissions director. Yet she sees “more of the downside than the upside.” She worries about long-term effects. “Can you imagine these students doing something as intricate as surgery in the future?” She perceives problems with socialization and emotional intelligence, too. “All this tech, and they can’t carry on a conversation!” It reflects home life, she suggests:
You can almost tell who eats together and talks, and which kids are raised by nannies. The latter can’t sit still, won’t converse with classmates or tablemates. They have a profound sense of entitlement and little initiative. They are used to people waiting on them.
Do kids like these represent the best and brightest, or the future know-nothings of America? We are bound to find out.
A troubling disconnect between parent and child is also evident in the after-school pickup line. Kids jump into the back seat of a car and whip out their iPads or watch television. There is no “How was your day?” because mom and dad are on their own cellphones as the family drives off. Parent and child exist alone together, a scenario that speaks to the isolating influence of digital media that once promised to make us more connected.
Pine Crest recently revamped the evaluation of its youngest candidates by not providing them with chairs. “We wanted them to circulate and see how they interact with one another. How much do they cooperate during a task?” the admissions director said. She lamented how poor interpersonal skills are increasingly evident early on, as is lack of eye contact with teachers and classmates at all grade levels. At home, pupils play online games in the isolation of their bedrooms with far-flung opponents whom they never see. In a world dominated by screens, in which no one looks you in the eye and you look at no one back, ever more young learners lack the experience to grasp subtext, body language, facial expression, and tone of voice. What neuroscience calls “Theory of Mind” is the skill, first acquired in infancy, of inferring another’s intentions, desires, and motives, and imagining how he might react and feel—in short, of recognizing that other minds exist and empathizing with their various states.
Autistic individuals studiously avoid eye contact and live in a self-referential world of their own. They have stark deficits in empathy and social insight. Cambridge psychologist Simon Baron-Cohen describes them as having a faulty theory of mind. Oxford neuroscientist Susan Greenfield further draws a parallel with today’s generation that avoids eye contact and has a hard time with face-to-face negotiations, especially job interviews. The adolescent years were always hard, but now teenagers text rather than speak to someone in person. This kind of avoidance makes bullying easier when one can hide in cyberspace and not be accountable. Another arena free of consequences is videogames, especially violent ones in which you can kill your opponent without repercussion. That our cognition can maintain strict segregation between what is fictive and what is real is not a bet any sober scientist would take.
“The more screen time, the worse they fare in social life”, says a Pine Crest teacher who has seen Generation X and the Millennials march by.3 She is also an athletic coach, which gives her a better perspective than most:
More and more I’ve noticed that they’re unable to discriminate when they are being rude or offensive. Once you say ‘you can’t speak that way’, they’ll tell you they didn’t mean it negatively. They don’t grasp how they come across. If you haven’t practiced looking at a hundred faces and getting live feedback then how will you recognize broadcast emotions when you need to? How does one do that online?
Nearly all of us fail to appreciate the huge emotional investment we have in our devices. The singular insight of Stanford’s Clifford Nass was that we relate to them socially. Their content couldn’t be more personally relevant. Take the film Her, in which the protagonist relates to an operating system with a gamut of feelings as if it were his intimate inamorata.
Shocking? Perhaps, but less so if one is aware that back in 1966 Joseph Weizenbaum at MIT showed how easily human thinking becomes loopy when we interact with a program in natural language as we do with today’s smart devices. In Computer Power and Human Reason, Weizenbaum explained how he composed a demonstration program called eliza “with which one could ‘converse’ in English.” It parodied a Rogerian psychotherapist by simply reflecting back the “patient’s” statements and encouraging him to keep talking.
My boyfriend made me come here. He says I’m depressed most of the time.
I AM SORRY TO HEAR YOU ARE DEPRESSED.
It’s true. I am unhappy.
YOU THINK COMING HERE WILL HELP YOU NOT BE UNHAPPY?
I need some help, that much seems certain.
WHAT WOULD IT MEAN TO YOU IF YOU GOT SOME HELP?
Perhaps I could learn to get along with my mother.
TELL ME MORE ABOUT YOUR FAMILY.
My mother takes care of me.
WHO ELSE IN YOUR FAMILY TAKES CARE OF YOU?
My father.
YOUR FATHER.
You are like my father in some ways.
WHAT RESEMBLANCE DO YOU SEE?
You are not very aggressive but I think you don’t want me to notice that.
WHAT MAKES YOU THINK I AM NOT VERY AGGRESSIVE?
You don’t argue with me.
WHY DO YOU THINK I DON’T ARGUE WITH YOU?
You are afraid of me.
DOES IT PLEASE YOU TO BELIEVE I AM AFRAID OF YOU?
My father is afraid of everybody.
WHAT ELSE COMES TO MIND WHEN YOU THINK OF YOUR FATHER?
Bullies.
DOES THAT HAVE ANYTHING TO DO WITH THE FACT THAT YOUR BOYFRIEND MADE YOU COME HERE?
Eliza’s users typed a message. The program analyzed it, composed a response, and spit it out on a teletype terminal (no video displays back then). For its emulation of a psychiatrist, eliza became famous around MIT. People were eager to “talk” to it. Soon they were confiding their “most private feelings.” Siri, it turns out, has an ancestor.
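To give a flavor of how simple the trick was, here is a minimal sketch, in Python, of Eliza-style reflection: a handful of keyword patterns plus a pronoun swap. It is a toy illustration of the technique Weizenbaum describes, not his actual code, and its rules are invented for the example.

import random
import re

# A toy illustration of Eliza-style "reflection": match a keyword pattern,
# flip first- and second-person words, and hand the statement back as a
# question. Not Weizenbaum's program, only the gist of the technique.

REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

RULES = [
    (r"i am (.*)", ["Why do you say you are {0}?",
                    "How long have you been {0}?"]),
    (r"i need (.*)", ["What would it mean to you if you got {0}?"]),
    (r"my (mother|father|family)(.*)", ["Tell me more about your family."]),
    (r"(.*)", ["Please go on.", "Tell me more."]),  # catch-all keeps the "patient" talking
]

def reflect(fragment):
    # Swap pronouns so the statement points back at the user.
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(statement):
    text = statement.lower().strip(" .!?")
    for pattern, replies in RULES:
        match = re.match(pattern, text)
        if match:
            reply = random.choice(replies)
            return reply.format(*(reflect(group) for group in match.groups()))

if __name__ == "__main__":
    print(respond("I am unhappy."))      # e.g. "Why do you say you are unhappy?"
    print(respond("I need some help."))  # "What would it mean to you if you got some help?"

Weizenbaum’s actual script was more elaborate, with ranked keywords and a memory of earlier statements, but the basic move, reflect and redirect, was not much deeper than this.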
What alarmed Weizenbaum was how quickly people “became emotionally involved with the device and how unequivocally they anthropomorphized it.” Even his secretary, who had seen him laboring for months on the machine, started conversing with it as if it were an actual person. After a few sessions she asked Weizenbaum to leave the room so she could talk to it “in private.” Instead, he shut it down, stunned that “extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.” Psychological reactions have not changed since, as the Media Lab’s Rosalind Picard showed more recently in Affective Computing (MIT Press, 2000), and as current Media Lab researchers continue to explore.
The danger of digitally induced delusions, as Weizenbaum saw it, lies in ceding authority to digital devices. As for emotional attachment, think how agitated people become when they lose their phone. Pine Crest girls asked to place their devices in a “phone box” during two hours of athletic practice act as if they were going through cold-turkey withdrawal. In itself, forging emotional ties with machines is hardly surprising. Since the Stone and Bronze Ages we have regarded the tools we invent as extensions of ourselves. We come to accept them as “natural”, and so do not question it when we reveal our inner selves to a digital device. Yet the danger Weizenbaum pointed out decades ago, when computational devices were orders of magnitude less sophisticated than today’s iterations, is that they make us “cease to believe—let alone to trust—[our] own autonomy.” Do we now defer too much to our devices? In my opinion, we do.
A passive mindset toward both learning and experience seems the norm. Consider what I call “calculator atrophy.” The introduction of electronic calculators in the 1970s made doing math in one’s head an iffy proposition for Boomers like me. Today, Google has spoiled our collective memory as well. Once, everyone shared a cultural body of knowledge—factual dots that any educated person could connect in new contexts. Yet Pine Crest teenagers and my GW medical students both ask, “Why should I memorize when I can Google?”
I’d answer that shallow learning leaves you with no dots to connect. I distinguish between understanding and mere familiarity gained from Googled facts. A student might listen to a lecture or Google web pages about heart sounds, and be familiar with some terms used for various heart murmurs. But asked to explain the causal physiology behind them—the why and the how—the student can’t unless he has delved into the material in depth. When exam time arrives, something as quaint as flash cards, available for modern eyes on apps like Quizlet, quickly reveals the difference between surface familiarity and a rigorous grasp of the material.
Naked information in quantity is frequently untethered from context. Information and knowledge are not synonyms, a once-obvious point increasingly forgotten in a Googlefied world. This matters because memory retrieval rather than storage is the brain’s limiting factor. Context colors all perception, which is stored as memories across the cortex in a web of associations, and retrieved from the cortex in yet different contexts. Each time we remember something it is a creative reconstruction of the original event. Shaded by current context, all biologic memories are therefore in some sense false.4
Google and IBM’s Watson can discover patterns in the big data they vacuum up, as can NSA surveillance programs. But compare the viewpoint of a Google Maps car that indiscriminately records everything as it drives down the street to how two people can walk down the same street and notice entirely different things with respect to shops, restaurants, and passersby. People have different perspectives. Individuals are curious about different things. They assign different values to what they encounter. Just as automobile passengers do not really learn a route until they drive it themselves, we cannot passively download knowledge in the way The Matrix depicts. Learning demands active engagement, not passive consumption. It requires the capacity to concentrate, focus, prioritize, follow a linear stream of thought, or reason one out for oneself. Learning benefits from contemplation, moments of which seem increasingly hard to find.
Decades ago the famous “gondola kitten” experiment demonstrated that one must actively explore if one is to learn. One littermate in the set-up was free to explore its environment while another hung passively suspended in a contraption that moved in parallel with the exploring kitten. The gondola passenger saw everything the exploring kitten did but could not initiate any action. The mobile kitten discovered the world for itself while the passive kitten was presented with a fait accompli world, in the same way that screen images are passively delivered to us. The passive kitten learned nothing. Since this classic experiment we have come to appreciate how crucial self-directed exploration is to understanding the world.
This holds true for humans as well as kittens. In an update of the gondola kitten experiment, researchers recently videotaped an American child’s Chinese-speaking nanny so that a second child saw and heard exactly what the first one did. The second child learned no Chinese whatsoever, whereas the first child picked up quite a lot. One speaks with a baby, not at it. Physical engagement allows the youngster to take in subtle signals such as tone, gesture, the way the other makes eye contact, and the two-way emotional reading that neither side is likely aware of, which is why the remote child and the gondola kitten learned nothing. This is also why there is a significant difference between telling a child a story, with eye contact and gestures, and reading a child a story from the page. Both are good, but they are not the same.
Allow me a personal anecdote, please. In the 1970s I arrived in London on scholarship only to find the lens housing of my camera broken. Since repair was beyond my budget, I decided to see the city through my own eyes instead of through a viewfinder. Fast-forward to this year’s Philadelphia Flower Show, and that lesson is still relevant. Everywhere people snapped pictures with their smartphones despite a crowd density that made it impossible to capture a decently composed shot. No one was looking at the flowers in front of them; people en masse looked instead at mediated images of the flowers on a screen.
This gets back to 15-year-old Emerson’s frustration at feeling addicted to his digital devices. Does using them to store phone numbers rather than remembering them, or relying on Google Maps for directions, free up cognitive resources? Or does doing so make us less self-reliant and unable to fend for ourselves when the power goes out? The reward neurotransmitter dopamine is without question involved in the pleasurable reinforcements that screen hits deliver. Most of us are unquestionably obsessed with our screens, and I believe a segment of us is frankly addicted.
Organizational consultants and proponents of distributed cognition extol the ability to offload certain tasks in order to save effort and energy, given that consciousness incurs a high energy cost. But the flaw in their idea is this: What we offload to save energy is tied to memory, the dots at our disposal to connect. With fewer and fewer dots at hand to connect, we are making ourselves increasingly dumb.
Digital devices discreetly hijack our attention. To the extent that you cannot perceive the world around you in its fullness, to the same extent you will fall back into mindless, repetitive, self-reinforcing behavior, unable to escape. Emerson had the wherewithal and self-control to cut back and eventually cancel his social media accounts so he could study more and work toward his long-term goals of college and a career. Just ten percent of Emerson’s peers can multitask fairly well, but only because they can prioritize, stay on task, and keep their goal in front of them.
We don’t need to wait for researchers like Arko Ghosh to tell us what we intuitively know. We need to act on our insights, as Emerson did, and realize how easily manipulated we are by billion-dollar corporations that have a vested interest in making us addicted to their products in the same way that tobacco companies make their customers addicted. We need to stop treating every new piece of tech gear as an unalloyed good, and start questioning the pernicious effect it has on our brains and our being.
1See my TED-Ed lesson, “What Percentage of Your Brain Do You Use?”
2For a summary see Maria Konnikova, “What’s Lost as Handwriting Fades”, New York Times, June 2, 2014.
3A highly anecdotal, but very entertaining, argument that digital technology conduces to the spread of rudeness is Lynne Truss, Talk to the Hand: The Utter Bloody Rudeness of the World Today, or Six Good Reasons to Stay Home and Bolt the Door (Gotham, 2005).
4See Maryanne Wolf, “Memory’s Wraith”, The American Interest (September/October 2013).