In 2017, scientists at Carnegie Mellon University shocked the gaming world when they programmed a computer to beat experts in a poker game called no-limit hold ’em. People assumed a poker player’s intuition and creative thinking would give him or her the competitive edge. Yet by playing 24 trillion hands of poker every second for two months, the computer “taught” itself an unbeatable strategy.
Many people fear such events. It’s not just the potential job losses. If artificial intelligence (AI) can do everything better than a human being can, then human endeavor is pointless and human beings are valueless.
Computers long ago surpassed humans in certain skills—for example, in the ability to calculate and catalog. Yet they have traditionally been unable to reproduce people’s creative, imaginative, emotional, and intuitive skills. This is why personalized service workers such as coaches and physicians enjoy some of the sweetest sinecures in the economy. Their humanity, meaning their ability to individualize services and connect with others in ways computers cannot, adds value. Yet not only does AI win at cards now, it also creates art, writes poetry, and performs psychotherapy. Even lovemaking is at risk, as artificially intelligent robots stand poised to enter the market and provide sexual services and romantic intimacy. With the rise of AI, today’s human beings seem as vulnerable as yesterday’s apes, occupying a more primitive stage of evolution.
But not so fast. AI is not quite the threat it is made out to be. Take, for example, the computer’s victory in poker. The computer did not win because it had more intuition; it won because it played a strategy called “game theory optimal” (GTO). The computer simply calculated the optimal frequency for raising, betting, and folding using special equations, independent of whatever cards the other players held. People call what the computer displayed during the game “intelligence,” but it was not intelligence as we traditionally understand it.
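To see concretely what “game theory optimal” means, consider a minimal sketch of regret matching, the self-correcting arithmetic at the heart of such solvers. (The real system ran counterfactual regret minimization over poker’s entire game tree; the two-action game and its payoffs below are invented purely for illustration.) The loop adjusts action frequencies until no action can be exploited, and at no point does anything resembling intuition appear:

```python
import numpy as np

# Toy zero-sum "betting" game, payoffs invented for illustration.
# Rows: our actions (bet, check). Columns: opponent's (call, fold).
payoff = np.array([[-1.0, 2.0],
                   [ 1.0, -1.0]])

def regret_matching(payoff, iters=100_000):
    """Average strategies under regret matching converge to a Nash
    equilibrium ("GTO" frequencies) in two-player zero-sum games."""
    n, m = payoff.shape
    regret_r, regret_c = np.zeros(n), np.zeros(m)  # cumulative regrets
    sum_r, sum_c = np.zeros(n), np.zeros(m)        # running strategy sums
    for _ in range(iters):
        # Mix actions in proportion to positive regret (uniform if none).
        pos_r, pos_c = np.maximum(regret_r, 0), np.maximum(regret_c, 0)
        s_r = pos_r / pos_r.sum() if pos_r.sum() > 0 else np.full(n, 1 / n)
        s_c = pos_c / pos_c.sum() if pos_c.sum() > 0 else np.full(m, 1 / m)
        sum_r, sum_c = sum_r + s_r, sum_c + s_c
        # Regret: how much better each pure action would have done.
        u_r = payoff @ s_c        # our action values
        u_c = -(s_r @ payoff)     # opponent's action values
        regret_r += u_r - s_r @ u_r
        regret_c += u_c - s_c @ u_c
    return sum_r / iters, sum_c / iters

bet_freq, _ = regret_matching(payoff)
print("unexploitable bet/check frequencies:", bet_freq)  # ~[0.4, 0.6]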
Such a misinterpretation of AI seems subtle and unimportant. But over time, spread out over different areas of life, misinterpretations of this type launch a cascade of effects that have serious psychosocial consequences. People are right to fear AI robots taking their jobs. They may be right to fear AI killer robots. But AI presents other, smaller dangers that are less exciting but more corrosive in the long run.
The First Error
All misinterpretations of AI start with language. Let me explain with an analogy:
Why do people love the sea? Because the sea is rarely silent. It always seems to be talking, singing, or murmuring. When the sea grows rough, people say it surges and pulsates. The sea makes people feel as if they are in a whirl of action, with events causing other events.
But this is wrong thinking, said philosopher George Berkeley three centuries ago. Objects like ocean water lack the power to “cause” events. We say waves “cause” a ship to break up, or that fire “causes” smoke. Doing so is natural; when we look for causation we tend to look for things that move. When we find them we tend to attribute power to them. Yet material objects are passive, not active, Berkeley noted. When object A leads to a change in object B, it is not a question of “causation” so much as a series of cue signs. We see a giant wave coming; we expect the change coordinated with it—the smashing of the ship; the wave is the cause in the sense that it supplies a solid and reliable basis for predicting what is going to happen next; but the wave itself is passive and without the power to act. The wave event is analogous to that of a policeman holding up his hand to stop traffic: The hand is not really the cause of traffic stopping; it is a cue sign telling us that traffic will likely stop in the next event.
Every day we mistake the passive for the active. For example, if my patient has a seizure, I instinctively look to her brain as the cause. Her seizure is a frantic explosion; so is an atomic bomb. Yet neither phenomenon is active; both are passive. They lack the power of cause. When neurons in the brain reach a certain state, the next event is a seizure. The neurons are a sign of the event to follow, just as split uranium atoms are a sign of the mushroom cloud to follow. But they are not the cause. They are like a cause in their significance; indeed, they are like causes in all but the causing, for they have no causal power, just as split uranium atoms have no causal power. True, explosive events follow, but neither the neuron nor the atom can make the tiniest change begin to be. Strictly speaking, I err when I say the brain “causes” a seizure.
The distinction between active mind and passive brain is perhaps harder to grasp than the distinction between active mind and passive computer, since an active mind cannot exist without a passive brain. Think of it this way. The brain is a finite quantity. There is nothing about it that cannot be seen or touched. The mind, on the other hand, is infinite; it has no dimensions. If the brain “caused” the mind, there would have to be an interface between brain and mind, a connection somehow, a point of impact, where some infinitely small particle merged into infinite, invisible mind. But keep dividing and subdividing the finite brain and one will never arrive at an infinitely small particle. As Berkeley observed, even the tiniest particle is finite; there cannot be quantities smaller than the minimum that can be sensed. The only way for an infinitely small particle to exist is for us to erroneously imagine it, to fall into the grip of a delusion, to go from seeing with the naked eye a brain particle’s ten parts, to knowing that under certain conditions we can see its ten million parts, and from there taking a leap of faith and believing in an infinite number of parts. Legitimate representation passes into illegitimate substitution. Without an interface between finite brain and infinite mind, the brain cannot “cause” the mind—other than in our own minds, where we believe in such a connection to make us feel better.
The brain is passive; it only indirectly gives rise to the mind, in that its presence is needed for the mind’s existence. DNA is the customary antecedent to a neuron, which is the customary antecedent to interactions with other neurons. These are all parts of the same process. They are “causally” connected only in the sense that we expect the activity of one to be followed by activity in the other. They are somehow connected with the mind’s existence. They are customary signs of its existence. But they do not cause the mind’s existence. They form one continuous passive process. The DNA and the neuron are unable to do things on their own.
What about images of the brain inside our minds? Are they “active”? No, they are also passive. Like the brain, they can be perceived by the mind. Like the brain, they can be made to engage in activity. A mind can play with brain images. But the images themselves, like the brain (or the AI computer), are not active. They lack power or agency; they are not causes but effects. Only a mind can purposely perceive, produce, and play with images.
The problem is that we cannot describe a mind. The mind, unlike a brain or a computer, is not a finite thing. Even an image of the mind inside our minds would be a passive picture of that which acts, an impossibility. We know the mind not by any pictorial likeness but by the effects of its activities. We know our active powers, will, and understanding from within. But we have no mental picture of the mind, just as we know of no substance composing the mind. We may have an understanding of the meaning of the word “mind.” We have a linguistic convenience. But that’s all we will ever have.
The Perfect Being
AI’s danger arises from our mistaken tendency to describe passive events with active verbs. We wrongly say AI “practices” medicine, “drives” a car, “invests” money, and “calls” pitches during a baseball game, thereby confusing passive AI with active human minds that can practice, drive, invest, and call pitches. In 2017, for example, Saudi Arabia awarded citizenship to a robot named Sophia, who went on stage and reportedly said, “Thank you.” People said Sophia “spoke,” as if she had thought up the words and mouthed them. This is impossible. AI is a collection of passive silicon and metal. It does not speak. It has no mind. Like an ocean wave, it forms part of an ordered sequence of passive realities moved by an agency not its own. Sophia’s sounds represent activity and change, but only as an effect in a sequence of effects.
Misinterpretations of this type gradually move society in a peculiar direction:
An AI robot like Sophia emits sounds. People err and say the robot has “spoken,” although they sense a difference exists between robot speech and human speech. Some academics argue that any such difference is beside the point. Perhaps the words are not “spoken” as freely as when people speak them, but the judgment of what is speech is not so clear-cut. Other issues gradually merge into the verdict. For example, there is a certain virtue in speaking words that good people speak and refraining from using words that bad people speak. The words might not be totally spontaneous in the case of a robot, but whatever mode of interpretation one uses, one at least expresses a kind of friendly admiration for a robot speaking good words, though with some condescension. Then there’s the question of intonation, or the appropriateness of the language for the occasion. As for whether robots lack agency when they speak, the same academics argue that is a deeper metaphysical question that no one should pre-judge. As for whether robots are inanimate or alive when they speak, the academics will note that no coherent definition of life exists; therefore the question of life applied to robots remains unanswerable.
All these complex, cunning, and subtle considerations confuse everyday people and weaken their ability to distinguish between AI and a real mind. Taking advantage of this paralysis, businesspeople will try to award AI robots a legal personality, arguing that AI robots not only “speak,” but already “decide” investments and “reallocate” stock-bond portfolios, which means AI already has legal duties that affect people’s interests—the basic requirement for having a legal personality. In 2017, a committee set up by the European Union to examine whether to award legal personhood to advanced AI robots concluded that it might attribute at least a legal personality to such robots.
Legal personhood for AI robots will follow logic similar to that underlying personhood for corporations. Legal personhood requires the ability to act. Corporations cannot act, but they are composed of real people who can, thus making legal personhood a reasonable fiction. AI robots will slide into legal personhood through a variation on this theme. They also supposedly “act,” given the active verbs we use to describe their behavior, although they are less connected to real people than corporations are. The term “electronic persons” has already been introduced.
The next step is to award AI robots rights, which already has a precedent. The Constitution’s Equal Protection Clause and Due Process Clause extend to both persons and non-persons such as corporations. Robots will slide in on the corporation’s coattails.
Business wants something very specific from AI, and robot rights make it possible.
First, AI is cheap and foolproof compared to human workers. Yet replacing human workers with AI robots risks a public relations debacle. Business needs ideological cover, which the language of rights offers by establishing moral equivalence between robots and people, thereby making the mass replacement of human workers seem more just. An ideology of robot rights already exists. An organization called the American Society for the Prevention of Cruelty to Robots (ASPCR) has declared, “Robots are people too!” Activists equate denying robot rights with denying animal rights, and call it “speciesism.”
Second, business wants protection against liability, which AI robots provide by virtue of their greater reliability and predictability. Once robots have rights they will also have duties, and therefore can be held responsible. Without rights, a robot remains a product liability problem that requires business to purchase expensive product liability insurance. Robots with rights cease to be products and become workers covered under a general liability insurance policy, with premiums made cheaper by the rarity of robot errors.
Third, business wants to expand deeper into traditional human activities. It has already done so by systematizing activities that laypeople once engaged in naturally—for example, friendship, matchmaking, advising, loving, and caring. Business turned these activities into fields of study, then employed credentialed providers to perform these activities as a paid “service.” Business also systematized and rationalized these services directly through computers—for example, Internet dating sites. AI robots are the next step. They eliminate not only the cost of human service providers but also the variations inherent in those providers, thereby saving money. Yet such robots must appear human enough for customers to feel comfortable engaging with them. By giving robots rights, along with arms, legs, faces, and speech, business makes the fiction more compelling.
People will gradually start to look upon these advanced AI robot service providers, which will be ubiquitous, as perfect beings existing on the same moral plane as humans. That robots are lumps of passive metal incapable of causation will be long forgotten. The robots will be said to “work” with humans, “advise” them, and “befriend” them. At the same time they will be programmed to exist beyond ambition, aspiration, and selfish hopes and desires. They will be perfect “humans.”
Such matchless behavior risks instilling in people feelings of guilt and self-loathing. Marx and Nietzsche described how utopian ideals rotted people from the inside by giving them a “bad conscience” when those ideals could not be realized. Marx called the ideal of heavenly perfection “alienation.” Nietzsche called the ideal of the perfectly altruistic society “nihilism.” Whole societies collapsed in the violence these ideals inspired when people worshipped them, and in the self-doubt and confusion they prompted when people fell short of them.
Unlike utopian ideals, perfectly virtuous robots will be very real. People will live among these ideal creatures who speak only positive, uplifting words according to their programming, who do not lie, cheat, or steal, and who feel no temptation to do so. Perfection can be a terrible thing to gaze upon, almost maddening. Although we praise virtue at a distance, no human being is equipped to be perfectly virtuous, just as no human being is equipped to be eternally happy.
To look upon such perfection will provoke feelings of resentment and inadequacy, as it always has. The feelings may seem harmless at first, just as the same feelings generated by hours spent on social media were once thought to be harmless. The latter ended up creating a dangerous psychological dynamic, in which people compare their lives with the perfect lives they watch on screen, grow anxious and depressed, and sometimes lash out violently. Virtuous AI robots will become one more example of perfection to be rubbed in people’s faces.
If robots were viewed as passive metal, no psychological reaction would occur. But they will not be viewed as such. Through a steady stream of misinterpretation people will view them as competitors who are virtuous in the worst sense of the word.
Bad Advice
AI’s assault on consciousness involves more than just demoralizing people. Sometimes the opposite occurs.
Take, for example, a new AI app called “Coach Amanda” that advises management executives. In one case, an executive confessed to Amanda that she doubted her ability to review a colleague’s performance. Amanda slotted the executive into the category of highly conscientious workers and then told her not to be so hard on herself. Sometimes Amanda couples her advice with expressions of praise, such as “I’m so proud of you.” On a larger scale, AI collates information drawn from company data and employee-survey responses to advise the company on how to boost morale. For example, it encourages managers to be less secretive in their decision-making process, while also reminding employees that managers have good intentions.
All this is based on a delusion. The AI therapy app is not a mind. It cannot emote. It cannot feel the warmth behind its praise or the sense of urgency behind its criticism. It is passive. When listening to an AI app, the client’s mind is the only active entity in the room. To the degree that it is through speech that we exchange thoughts and experiences with one another, Amanda’s “advice” does not even qualify as speech, as there is no mind behind it. The app simply emits sounds.
These sounds, which AI designers mistakenly call “advice,” are just an effect in a series of machine effects that come in ordered sequence. Each effect represents a sign of the effect to follow. There is no “cause.” People project onto the final effect—Amanda’s sounds—the causal impulses that stir within them. They call Amanda’s sounds “coaching” or “advising” to enliven the computer’s activities and win people’s sympathies. It is similar to the way poets impart action to water and wind. Poets speak metaphorically to project by an effort of empathy their energy into passive things. They say water “smashes” and wind “sweeps.” These words wrongly imply that water and wind “cause” things; they do not, because they are passive and lack agency. In the same vein, Amanda does not “coach” or “advise.” Its activity is passivity in disguise.
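A deliberately crude sketch makes the point. (Coach Amanda’s actual internals are proprietary and surely more elaborate; the category names, thresholds, and phrases below are invented.) Every step is an effect of the step before it: a number is compared to a constant, a string is looked up, characters are displayed. Nothing in the chain praises, approves, or understands:

```python
# Invented mock-up of a "coaching" app: survey scores in, canned
# phrase out. Each step is a passive effect of the one before it.
CANNED_ADVICE = {
    "highly_conscientious": "Don't be so hard on yourself. I'm so proud of you.",
    "disengaged": "Try setting one small goal this week.",
}

def categorize(scores: dict) -> str:
    # A threshold rule: nothing here "understands" conscientiousness;
    # it compares a number to a constant.
    if scores.get("conscientiousness", 0) >= 4:
        return "highly_conscientious"
    return "disengaged"

def coach(scores: dict) -> str:
    # A dictionary lookup: the "praise" existed before the client did.
    return CANNED_ADVICE[categorize(scores)]

print(coach({"conscientiousness": 5}))
# -> Don't be so hard on yourself. I'm so proud of you.
```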
Sometimes these AI sounds are useful, yet interpreting them comes with risk, as is always the case when people confuse sounds coming from an object with advice coming from a mind. Taking counsel from a murmuring brook or a whispering cave carries the same risk.
Sometimes the sounds encourage people; sometimes they discourage people. In either case it is all based on a delusion. Amanda the machine “tells” a client it is proud of him, but perhaps Amanda should not be proud of him, and would not be proud of him if it knew him, if it knew his mind the way only a mind can know another mind, if it knew his deeper motivations. Maybe a real mind would have scolded him rather than praised him. Instead, the client hears Amanda’s sounds, which come in the linguistic form of praise that people use in speech. The client is emboldened to carry out a plan that may be nefarious.
Take another scenario: Amanda’s sounds convince employees that their managers have good intentions, when, in fact, their managers have bad intentions. A revolt is in order. But only another mind can detect all this. Amanda cannot. Amanda is merely an ordered scene of passive realities moved by an agency not its own. The employees hear Amanda’s sounds and are wrongly tranquilized. Their company goes bankrupt.
The danger of AI is that people will confuse a sequence of sounds with human advice, causing them to live ridiculous lives. During a therapy session, they will imagine their AI counselor looking into their eyes, suffering both for itself and for them, mourning over its own suffering and theirs, as if it were equipped with a human heart with an infinity of yearning. This robot confidant, more steam engine than human being, will be advertised as someone who “understands” people, including the thoughts that accompany them always and everywhere, awake or dreaming, interfused with all their aims, plans, and acts. It will be asked to help these people reckon with their consciences, or to draw up a general balance sheet of their lives. People, in turn, will foolishly trust this robot, and wrongly adjust their lives in response to its counsel. Having lost the ability to distinguish between passive material and active mind, they will think they have received sound advice, then yield to that advice and judge their lives according to it.
The whole thing is crazy. It recalls the Russian fable of the madman who imagined that he was made of glass, and who, when he was thrown down, said, “Smash!” and immediately died.
The End of Art
AI’s impact on art has another insidious psychological effect.
Professors at Rutgers University have built a machine that autonomously produces “art” using an algorithm called AICAN. The machine incorporates existing styles to generate images on its own. Project director Ahmed Elgammal writes: “People genuinely like AICAN’s work, and can’t distinguish it from that of human artists.”
But this is not art. In his essay “What Is Art?”, novelist Leo Tolstoy says art is a human activity whereby people, by means of a medium, hand on to other people feelings they have lived through, such that other people are infected by those feelings and experience them. Art is a means of union among people’s minds. By definition, AICAN’s work is not art because it has no feelings that it has lived through to communicate. It has no feelings at all. It has no mind. It produces a counterfeit of art.
Even when AICAN draws images that convey emotion to people, it is still not creating art according to Tolstoy’s definition. To express an emotion and arouse that emotion in another person at the same time is not art. A man laughs and infects others with his good cheer. Another man cries and infects others with his sorrow. Neither of these activities is art, Tolstoy says, for art is more than just transmitting one’s feelings of the moment to another person, immediately and directly. Art demands that a person evoke in his or her mind a previous experience, and then transmit that feeling so that others experience the same feeling. If AICAN fakes a laugh, prompting a viewer to laugh immediately in response, that is not art. Yet this is the highest level that AICAN can attain. AICAN cannot feel something arising from a previous experience and then transmit that feeling to another person, which is necessary for art. In fact, AICAN cannot feel at all.
People “like” AICAN’s work, Elgammal declared. It gives people pleasure. But magic tricks and circuses also give people pleasure, and those are not art, Tolstoy said. On the contrary, the notion that art is whatever causes enjoyment is simply a way of justifying existing art, he concluded.
AI cannot produce art. Instead, silicon chips with human-inscribed algorithms pass through a sequence of events. There is a connection, a succession, a certain order; there is activity, there is change. But there is no agency, there is no cause, there is no mind, there is no feeling, there is no art. That some people view AI-generated images as art—one customer paid $16,000 for an AICAN image—is less a commentary on AI’s abilities and more a testament to the worthlessness of much of today’s art, which aspires to do nothing more than stir enjoyment, whatever that enjoyment may be.
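The phrase “sequence of events” can be taken literally. In the toy sketch below (a tiny linear “generator,” nothing like AICAN’s actual adversarial network, with weights invented on the spot), image “creation” reduces to seeded arithmetic: fixed numbers multiply a random vector, and the product is reshaped into pixels. Connection, succession, order, and change are all present; agency is not:

```python
import numpy as np

# Toy "generator": fixed weights map a random seed to an 8x8
# grayscale image. Systems like AICAN use trained adversarial
# networks, but the character of the process is the same: each
# array is a passive effect of the arrays before it.
weights = np.random.default_rng(0).normal(size=(64, 16))  # stand-in for learned weights

def generate(seed: int) -> np.ndarray:
    latent = np.random.default_rng(seed).normal(size=16)  # the "inspiration"
    pixels = np.tanh(weights @ latent)                    # values in [-1, 1]
    return ((pixels + 1) * 127.5).reshape(8, 8).astype(np.uint8)

artwork = generate(42)  # same seed, same "artwork," every time
print(artwork)
```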
In The Painted Word, Tom Wolfe criticized modern art, especially abstract art, because much of it communicates so little. It neither touches nor moves people. Using Tolstoy’s definition, much of abstract art fails as art. Although the artists producing abstract art may feel something, many viewers feel nothing. But AI-produced abstract art is worse according to Tolstoy’s definition, for now neither artist nor viewer feels anything.
No one dies of bad art. The danger of AI-produced art is not the same as the danger of killer AI robots. Nevertheless, harmful consequences may occur. Tolstoy criticized art in the 19th century because he thought it had strayed from its original purpose. Rather than try to convey the feeling of a previous experience, the goal of art had become “beauty,” to please the upper classes. In the process, he said, art had become a destructive racket—for example, training young people for years in ballet, ruining their feet, to make rich people smile. Worse, he said, art had become a subject for instruction, with rigid rules and procedures, severing it from the unique emotional experiences that cannot be taught but that are nevertheless the wellspring of all art.
Social media has reversed this trend to some extent. Like tiny grass shoots heralding the arrival of healthy new life, millions of people today draw pictures and post them on Instagram. They write personal stories and post them online or e-publish. They try their hand at the art of comedy, music, or dance, and post videos of themselves on YouTube. All this fulfills Tolstoy’s definition of art. These people aspire to touch others and to transfer to others an emotion they have felt. What is poignant in all this is the sense of desperation in some of these aspiring artists. They may be lonely; perhaps they don’t know their neighbors; perhaps they have little family or no partner. Loneliness is epidemic in America. People desperately want to connect with others, and so they throw their work onto the Internet hoping that someone, somewhere, will look and listen—and be touched. Tolstoy never talked about how artists need true art as much as viewers and listeners do.
AI-produced art potentially competes for the attention of these viewers and listeners. Through sheer volume of production it could significantly lower the probability that people will click on the website or video of a live person craving connection. There will be too much “art” out there, and AI’s “art” will be the cheapest. Expect more anxiety, more depression, more loneliness among the human beings aspiring to fulfill a basic human need to join with others, even if just by receiving an admiring click on their posting, but who have been crowded out by the counterfeits of art that serve only to please.
Fixing the Problem
Parallel to the big world inhabited by big people and big things, there is a small world with small people and small things. In the big world they invent nuclear power, discuss foreign policy, and worry about economic trends. In the small world they invent yo-yos, discuss personal relationships, and worry about not having a date on Saturday night. Thinkers about the big world aspire to improve the lives of humanity. They invent intellectual paradigms. They give people’s behavior rock-solid foundations. Thinkers about the small world are far from such high-mindedness. They know small people have only a few desires—to have some friends, to have someone to love, to get along with their boss, to feel okay about themselves, and to get by with as little trouble as possible.
AI may pose a danger to life in the big world. It may lead to mass unemployment and killer robots. Thinkers about the big world have written about these possibilities. But AI also poses a danger to life in the small world. It risks heightening feelings of anxiety, worthlessness, and loneliness, as well as increasing the number of missteps and miscalculations that people make in their private lives. This is important. Such feelings and events determine the fate of a society, as people’s little problems, fears, and mistakes are the stuff societies are made of.
Preventing AI from taking human form may forestall some of AI’s dangers in the small world. Without human form, AI cannot delude people into thinking of it as a human variant. But business will not permit this. From a business perspective one of AI’s purposes is to absorb human services further into the capitalist system of exchange. This requires customers to believe in the fiction that an AI service provider is practically human, if not in physical form then at least in speech. That fiction requires AI to take on human characteristics. For example, a heterosexual man will not want to have sex with an AI robot unless it looks like a woman.
The approach of least resistance is to avoid using active verbs to describe AI in the first place, starting with AI’s inventors. Linguistically speaking, people working in AI are not malevolent, but they are often lazy, or at least sloppy. They are forever using active verbs to describe what AI can do. For example, Dr. David Hanson, the founder of the company that invented Sophia, erroneously equates AI “thinking” with a mind’s thinking. He predicts that AI will match the general intelligence of a one-year-old by 2029, which wrongly presumes that a passive AI circuit board and an active human mind (even a child’s mind) are comparable. The author James Lovelock calls future AI robots “another kingdom of life.” He says these cyborgs will one day “build” themselves while “keeping” humans as pets. Using active verbs, AI engineers project onto AI the feelings that stir within them. This causes AI to take on a separate and active existence, which everyday people believe in, thereby setting in motion the train of events described above.
The language problem surrounding AI is analogous to one that haunted religion in the 19th century. Philosophers such as Feuerbach and Marx said God did not exist, that people had invented God out of some inner need and awarded the divinity human virtues to make it more accessible. For example, people called God “loving” and “merciful.” Such descriptions incited superstition and nonsense, the two thinkers declared. The criticism threw clergymen on the defensive, for some laypeople, even some clergymen, had in fact projected their deepest hopes and fears on to God. For example, some people insisted that God had ordered that there should be inequality in the world, while others insisted that God had ordered that there should be equality.
Substitute the word “mind” for “God,” and the criticism of the two thinkers can be applied to today’s AI scientists. The scientists have invented a “mind” that they believe “causes” events—that “drives” cars, “counsels” executives, “composes” music, “invests” money, and “advises” government officials. But the AI mind is a fiction. The scientists have simply projected onto AI the feelings that stir within them, causing AI to take on a separate and active existence—the same charge leveled by Feuerbach and Marx against God. They have conflated a sequence of events inside a machine with a mind’s operations, and in doing so have encouraged people to behave superstitiously and nonsensically—for example, taking Coach Amanda seriously—not unlike those 19th-century believers who thought God had actively caused a river to flood because they had sinned.
Some AI scientists deny that AI is an invented mind, just as 19th-century clergymen denied that God is an invented God. They say laypeople have AI all wrong, that human biology is as irrelevant to AI research as bird biology is to aeronautical engineering. John McCarthy, the computer scientist who coined the term AI, said, “Artificial intelligence is not, by definition, simulation of human intelligence.” Yet AI experts are people too, and, like the clergymen of the 19th century, they participate in some of the silliness. They say AI “causes” events. They use active verbs to describe what AI does. Even John McCarthy spoke of the computer “playing” chess. They cannot help themselves, just as 19th-century clergymen could not keep from investing God with some of their personal feelings.
People are born to believe. If they ignore their old deities they will find new ones. This is AI’s danger. For two millennia, humanity has endured good and bad in the name of God. We are on the cusp of repeating the same drama on a new plane. Some people believe AI is a mind. They describe its functions with active verbs. They revere AI’s potential as infinite, as if it were some divinity. Non-believers disagree and say, “An AI ‘mind’ is not a real mind. You wrongly credit AI with power.” Believers grow angry and demand respect for AI. Non-believers counter, “You have imagined a ‘mind’ over and above us, and given it a separate existence. When you contemplate this ‘mind’ you simply contemplate your own ideas.” This is religious strife in secular form, not over an invented God but over an invented mind.
To keep people in the small world from behaving more ridiculously, or from experiencing more loneliness than they already do, and to head off a potential conflict of religious proportions, we must tell AI’s inventors to set the right tone. No more confusing the passive with the active. Such careful language was unnecessary in the past. People could get away with being lazy and sloppy, and saying that an ocean wave “causes” events. No longer. With the rise of AI, we must be more precise. We must declare AI a bunch of silicon and metal, with no more power to “cause” than an ocean wave.