Several years ago, I had a patient with a full stomach lose consciousness while under spinal anesthesia. Had the anesthetic spread to the patient’s brain, I wondered, or had she just fallen into a deep sleep? If the patient was asleep, I could just let her sleep. But if the anesthetic had climbed toward her brain, she might have lost her cough reflex, which risked exposing her lungs to any food that might slip from her stomach into her windpipe. She would need a breathing tube to protect her airway. But she had a funny airway. Placing the tube would be difficult, and the struggle to do so might cause her to vomit and aspirate.
The BIS monitor that measures brainwaves to distinguish between natural sleep and general anesthesia read “borderline.” I carefully studied my patient’s face. Natural sleep often conveys an image of peace, while an anesthetic-induced sleep conveys an image of fear. In the latter, the cheeks are pale; the veins at the temples stick out repugnantly; the hair is busy and hectic, and dank with sweat; the nose is snotty, as if the owner were too harried to wipe it. When watching someone in real sleep, one has a sense of life reviving, that a tired spirit is putting forth fresh shoots. A person in an anesthetized sleep usually gives off an aura of despair. The difference is subtle, and yet I sensed my patient was naturally asleep. I decided to leave her be. It proved to be the right decision.
An artificially intelligent (AI) system might have decided otherwise, based on the patient’s gross appearance, BIS number, and aspiration risk. It might have tried to insert a breathing tube, causing the patient to vomit—and die. True, AI’s judgment may improve in the future. Many researchers think it will one day surpass natural intelligence, which would make my judgment as a doctor obsolete. At the 2006 “AI at Fifty” conference, 41 percent of attendees expected this to happen sometime after 2056. In a 2013 survey of the 100 most-cited authors in AI, respondents expected machines to “carry out most human professions at least as well as a typical human” around 2070. Only a small fraction of the survey’s respondents thought this would never happen.
But such optimism is a source of danger, as it builds up organizational bias toward replacing human professionals with AI. Such replacement is already widely anticipated. In medicine, AI’s ability to analyze images threatens to push radiologists aside. In law, AI judges in China already decide legal cases. In aviation, engineers expect ground-based AI to fly passenger planes. All professionals, no matter what their field, seem vulnerable to AI, while for AI enthusiasts, the belief in what AI might become is exciting, like a charm, like a whispered promise of mysterious perfection. Business is also excited, since AI costs less than human personnel.
Yet the promise of AI is false. While AI is very useful, it will never provide a complete substitute for a professional’s natural intelligence. We are rushing headlong toward a luminous image of AI perfection, acquiescing in the possible mass replacement of professionals with AI, and risking people’s lives in the process.
Here follows a second anesthesia case to illustrate some of AI’s permanent limits—my effort to give pause to those who would lead us toward disaster.
Just a Regular Day in the Operating Room
My patient was a 65-year-old man going for an upper endoscopy and colonoscopy. Despite having a history of vascular disease, he was in good spirits and kept cracking jokes during the pre-op interview. At one point he said he had swallowed a quarter and that we could keep it if we found it during the operation. When I asked the gastroenterologist what this was all about, he rolled his eyes and told me to ignore the patient’s humor, as apparently the patient was always pushing gags of one kind or another.
Privately, I had another concern—about the gastroenterologist. I had seen him the day before combing through the empty brown boxes that sat as garbage in the hall. When doctors at my hospital left their spouses, they often came looking for these boxes to put their stuff in—hence, this area of the hospital was called the “marriage graveyard.” I had watched him inspect the boxes with bleary eyes and hunched shoulders, then pick up two boxes as though they were very heavy and leave with his head drooping.
We brought the patient into the operating room. I injected the drug Propofol into his intravenous line. Before he fell asleep he reminded me to look for that quarter. I noticed his fingers were a little blue, most likely from vascular disease, or from cold.
At this point, perfect AI would face the same limitations I did. All information about patients comes through the five senses. Perfect AI would gather patient information in the same way—since human beings would build its sensors. During the 1990s, researchers decided that for AI to improve it needed to go beyond information processing and interact with the real world. AI had to be “embodied,” they said. Yet being embodied means obtaining real world information through a body’s five senses—same as for people.
Having analogs to these five senses does not mean, however, that perfect AI “sees” the world as people do. Although first-generation AI visual systems arguably “saw” the world as we do, with precise borders between objects, new machine learning algorithms “see” the world as cloudy and vague. Images generated by advanced machine learning algorithms look quite peculiar. AI enthusiasts praise this digital activity as an example of AI transcending the limited categories of human thought.
Yet AI enthusiasts risk crossing over into the occult when they presume AI “sees” actual reality while humans see only a poor copy. For it implies that something more fundamental exists beyond what people can sense. Such thinking recalls that of 17th-century scholars who believed in an underlying substance called “matter” or “aether,” existing in the universe independent of the senses and pushing the planets around.
Eighteenth-century philosopher George Berkeley is useful in showing us the error in this. Nothing exists if it cannot be perceived through the senses, or through machines that magnify the range of the senses, he argued. A chair exists because it is seen and felt, not because it is seen and felt and also possesses some further existence apart from being perceived. To be seen and felt is the definition of existence. The imagination can play with the chair’s image and create new ones, but that is imagined perception and not images of something that exists beyond the senses, said Berkeley. To think otherwise is to dabble in the occult.
One would assume AI engineers would know that any new AI image generated through a machine is nothing more than an extension of a human being’s senses. But then the philosophers get involved. With their inflated rhetoric they get people to imagine the possibility of transcending the realm of sense. For example, University of Toronto philosopher Brian Cantwell Smith, in his The Promise of Artificial Intelligence, writes:
AI systems need to be able to deal with reality as it actually is, [his italics] not with the way that we think it is—not with the way that our thoughts or language represent it as being. And our growing experience with constructing synthetic systems and deploying them in the world gives us every reason to suppose that “beneath the level of the concepts”—beneath the level of the objects and properties that the conceptual representations represent—the world itself is permeated by arbitrarily much more thickly integrative connective detail.
AI enthusiasts imagine that AI will one day “see” a more fundamental reality beyond what a person can see. They describe this reality with complex phrases—for example, “a vastly rich and likely ineffable web of statistical relatedness,” or “a plenum of surpassingly rich differentiation.” But these phrases serve the same purpose as those of 17th-century scholars who argued for the existence of a mysterious substance occupying all space, and called it “we know not what.” They try to make the unperceivable seem perceivable.
When perfect AI “sees” a patient in the operating room, it “sees” the patient as doctors see the patient. It “sees” all the patient’s colors, contours, and shapes that a doctor sees, and even a few more if it can penetrate cracks and crevices with a built-in microscope. Once AI passively receives the patient image it can break down that image into small lines and pixels, or create blobs to substitute for definite shapes, analogous to what a doctor’s natural intelligence can do with his or her imagination. But like a doctor, AI will never “see” something that doesn’t exist.
Back to my case. My patient’s airway obstructed whenever I deepened the anesthetic, causing his oxygen level to fall. To complicate matters, the oxygen monitor sometimes failed to register, as the man’s blood vessels were too diseased to send a continuous signal. When I finally got a signal, it read a normal “98.” I decided to inject just enough Propofol every few minutes to keep the man lightly anesthetized. Yet this also led him to open his eyes and move around at times. When the nurse dimmed the lights and the gastroenterologist put the scope in the man’s mouth, the man shook his head from side to side, frustrating the doctor. I could tell that the doctor was in a bad mood, the way he growled and muttered, “Damnit!” His facial expression reminded me of how my father used to look when he fought with my mother.
The man sputtered and gagged as the doctor pushed the scope further in—roughly. With my oxygen monitor on the blink once again, I stared at the man’s fingers in the dark. They seemed bluer than before. I sensed something was wrong. I told the doctor to remove the scope so I could give the man extra oxygen. “Come on!” the doctor cried impatiently. “He’s fine.” I repeated my demand, this time more firmly. The doctor grudgingly removed the scope while the nurse turned the lights back on.
I gave the man extra oxygen while waiting for the monitor to start working again. Two minutes later it flashed “90,” or the lowest level of normal. I looked at the man’s fingers. They had the same degree of blueness as when I had stopped the case, but somehow the color was less anxiety provoking—probably because now I could associate it with a normal number. Since an oxygen level of 90 can’t cause cyanosis, and since the bluishness had not really changed, I must have misread the color earlier and imagined more blue. That doesn’t mean the patient’s oxygen level had not dropped earlier. It had. Just not enough to cause cyanosis.
AI enthusiasts might argue that I had made a mistake that perfect AI would not have made. I had seen illusory data—deeply blue fingers. But AI enthusiasts also err here. “Seeing” wrongly does not mean creating illusory data, since seeing cannot create what does not exist. At the same time, seeing involves more than just a camera action, just as hearing involves more than just an audio action. It involves unconsciously expecting, remembering, interpreting, selecting, and understanding. The man’s finger color was a simple sensation, an abstraction from experience, which my mind then integrated into my lived experience. I knew the oxygen monitor was on the blink, which alarmed me. I knew the room’s darkness obscured my view of the patient. I sensed the gastroenterologist was thinking more about his failed marriage than the patient. I knew the patient had serious vascular disease. In my hypersensitive state I misread his blue fingers. I was wrong. Yet I saw what I saw. As Berkeley said, there are no illusory data; there are only true data taken in the wrong way.
I am not a camera. I am not a dead thing. I am a living, seeing, minding, imagining thing, in which sensory and psychological factors mix and become inseparable parts of a whole. This is what caused me to see wrongly. Yet my imagined perception was real to my mind. And it made a difference.
For what would perfect AI have done? First-generation AI was just a camera. It was a dead thing. Today’s AI is more than a camera, as it tracks correlations and makes predictions “underneath” the clear and precise images that people see. But it is still a dead thing. It has all the limitations that people have, for it “sees” only physical reality, yet it lacks the living mind to interpret that reality through emotion and imagination. When my patient sputtered and gagged, AI would have remained silent, for at that moment the oxygen monitor was on the blink, without a flashing number to alarm anyone. AI can be programmed to halt all cases in which an oxygen monitor misfires, but given the man’s bad blood vessels, his case would then be permanently halted, introducing a new risk.
Therefore, AI would let the case continue, as there was no other information about the patient’s oxygen level for AI to “see” besides the sputtering and gagging, which were more intense than usual, but not totally out of the ordinary for these kinds of procedures. The carbon dioxide monitor also suggested that the patient was still taking breaths. Meanwhile, AI’s camera would have correctly judged the man’s finger color to be unchanged. When correlating that color with the last known oxygen level—98—AI would have experienced no sense of urgency to stop the case. As for the gastroenterologist’s bad mood, which added to my sense of urgency, it would have been too complex for AI to grasp.
This means perfect AI would never have responded as I had. It would have waited for the oxygen monitor to start up before sounding the alarm. Nor would perfect AI have exhibited my anxiety-induced pluck and pushed back against the gastroenterologist when he resisted removing the scope.
AI’s delay in stopping the procedure would have caused the patient’s oxygen level to fall even further. For it did fall. Even after giving the man oxygen for two minutes it only rose to 90—from what level I do not know. By the time perfect AI got involved it might have been too late. With his vascular disease, the man could have suffered a heart attack—and died.
The Quarter Appears
We flipped the patient onto his side to begin the colonoscopy. “What’s this?” I heard the gastroenterologist ask. I peered over the patient’s bottom and saw two dimes and a nickel stuck inside the crack between his butt cheeks. Now I got the patient’s joke. You put a quarter in the mouth; the body makes change; you get two dimes and a nickel out of the anus. Ha-ha. Very funny.
The gastroenterologist inserted the scope. To help him navigate the colon, he asked me to shift the patient’s position and for the nurse to press on the abdomen. In the process the blankets bunched up around the patient. Five minutes later, the patient’s heart rate and blood pressure increased. Half awake, he twitched his upper lip, which I interpreted to be a slight wince. I thought he might be in pain, so I told the nurse to stop pressing and for the doctor to stop advancing the scope. They complied, but the patient’s heart rate and blood pressure remained elevated. With the obvious reasons for pain eliminated, I entertained other diagnostic possibilities, such as fever or a heart attack, but then I saw the patient twitch his upper lip again, and an idea came to me.
“Where are the coins?” I asked.
We looked around and found the two dimes hidden in one of the blankets, but we couldn’t find the nickel. Eventually we did. The patient was lying on it, with the nickel propped up on its edge by a blanket, gouging his hip. The nurse plucked out the nickel and the patient’s heart rate and blood pressure returned to normal.
I had performed in a way that perfect AI could never have, because unlike AI, I am a living being who knows pain. Although both artificial intelligence and natural intelligence can passively perceive a nickel, pain is an active experience, as Berkeley observed. Take, for example, strong heat, which the mind senses through touch. Heat is not pain. It can be contemplated at a distance with indifference. A hot flame and the air around it, and even a burnt hand hovering over a flame, can be perceived without the mind doing anything more than receive information. Pain, on the other hand, is actively experienced in the mind. When one goes from heat to pain, one goes from passive to active, and from lifeless to life.
Because perfect AI is not a living thing, it cannot know pain. It can only passively perceive the physical evidence of pain through its sensors. For this reason it lacks a doctor’s sympathetic intelligence and intense vigilance in policing pain. Having felt pain myself and correlated thousands of facial twitches with the pain experience, my imagination refused to let go the possibility that the patient might be in pain, which led me to think about the coins.
Perfect AI would never have persisted in this line of action. Its data bank would have no history of a patient lying on a coin during a colonoscopy as a cause of pain. It would not have even known about the coins, since no one in the operating room had spoken about them, other than to say, “What’s this?” Since a blanket hid the coins, AI’s sensors would not have seen them, unless it routinely X-rayed all blankets for metal objects, which it would not do, since that would expose patients to unnecessary radiation. From my side of the room, the AI camera would have seen the patient’s upper lip twitch, but whether AI would have associated that twitch with pain, absent an obvious pain source, is unknown. Most plausibly, AI would have moved on to other diagnostic algorithms and treated the patient with a heart rate drug, which carries risk; more Propofol, with each additional dose causing more airway obstruction; or a narcotic to cover for general pain—while leaving the nickel pressing into the patient’s side.
The case dragged on for another 40 minutes while the gastroenterologist struggled. The patient’s heart rate and blood pressure rose again, suggesting he was in pain from all the pushing inside his intestines. At this point perfect AI might again have injected more Propofol and thus possibly caused more airway obstruction. I studied the patient’s face and reached a different conclusion.
In pain, time seems longer than it does in pleasure. Because time’s duration feels longer, its existence is longer. This is why the measure of time differs from person to person, and in the same person from moment to moment. I know this because of my experience as an anesthesiologist, but also because I am a person with a mind. Although perfect AI can chart time, it cannot perceive time as we do, as a series of sensations or a succession of ideas.
When my patient thrust his leg straight out in frustration, I suddenly recognized what was going on. He was in pain because he perceived time to be too long. He had grown sick and tired of lying on the gurney. The feeling had made him anxious—a kind of pain—which made time seem that much longer, making him even more anxious, thereby trapping him in a bad spiral. His increased heart rate and blood pressure had nothing to do with physical pain. I sensed this in the way he rolled his eyes and sighed heavily, for I, too, have experienced the perception of time gone on for too long. Rather than give the patient more Propofol, I gave him a drug that treated only his anxiety while leaving his airway intact. AI could never have made this subtle calculation, for AI cannot imagine time the way a person can.
A permanent defect thus haunts AI’s evaluation of all inner human experiences, including pain, time, unhappiness, and the feeling of warmth or cold. Again, Berkeley is useful here.
Berkeley found an error in Sir Isaac Newton’s physics. Newton had envisioned the concept of absolute motion in absolute space, which is inconceivable, Berkeley said. All motion is relative. Because a reference point must exist from which to measure a moving body’s direction and speed, no motion can be determined in a body that exists alone in absolute space. Without a reference point, a body moving a thousand miles an hour would seem no different from a body standing still. At the very least, a person must be watching the moving body; how else, Berkeley asked, would its movement be perceived? The perceiver himself becomes the reference point, with the distance between the perceiver and the moving body measurable. Motion without a reference point is “the purest idea of nothing,” Berkeley said.
This is why AI motion detectors must judge motion according to a reference point. Many AI machines, for example, take pictures every second and infer motion from an object’s change of position in sequential images. Somehow there must be a reference point.
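To make this concrete, here is a minimal sketch in Python of the frame-differencing approach just described. It is an illustration only; the grayscale frames, the pixel threshold, and the synthetic example are my own assumptions, not a description of any particular system.

```python
# Minimal sketch of frame-differencing motion detection (illustrative assumptions:
# grayscale frames as 2-D NumPy arrays, a fixed pixel threshold, a 1% change rule).
import numpy as np

def detect_motion(prev_frame: np.ndarray, curr_frame: np.ndarray,
                  threshold: float = 25.0) -> bool:
    """Return True if enough pixels changed between two sequential frames.

    The earlier frame serves as the reference point: motion is only ever
    inferred as a difference relative to it, never from a single frame alone.
    """
    diff = np.abs(curr_frame.astype(np.float32) - prev_frame.astype(np.float32))
    changed = diff > threshold          # pixels that differ "enough"
    return changed.mean() > 0.01        # motion if more than 1% of pixels changed

# Synthetic example: a bright square shifts five pixels between frames.
prev = np.zeros((100, 100)); prev[40:60, 40:60] = 255
curr = np.zeros((100, 100)); curr[45:65, 40:60] = 255
print(detect_motion(prev, curr))        # True: motion relative to the earlier frame
```

Note that the function can say nothing about a single frame on its own; without the earlier frame to compare against, there is no reference point and hence no motion to report.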
But inner feelings have no reference point—other than the mind that perceives them. Although temperature can be measured, the feeling of warmth or cold—a valuable symptom in medicine—depends on the mind perceiving it. And it is a complicated feeling. The feeling of cold, for example, is largely a surface, or skin, feeling. When our skin is kept warm but our inner body temperature is cooled with intravenous fluids, we still feel warm. In one study, an anesthesiologist lying on a warm mattress pushed cold fluid into his veins.1 His teeth began to chatter ferociously, and yet he didn’t feel cold. When I asked him what he felt, he said he wasn’t sure. It just felt “very weird,” he observed.
Our feelings, like motion, are always relative. Just as motion must have an “up” or “down,” a “left” or “right,” so must feeling have a “more” or “less.” There is no absolute state of feeling that can exist alone without reference to something. Even people who feel “just right” feel according to some scale of “more” and “less.” Since AI is not a mind, it cannot grade an inner feeling, even with a leap of empathy, for it has no empathy, as it has no mind. The ability to estimate people’s inner feelings will forever remain a blind spot for AI.
The Pink Mouse
The patient woke up after the operation. As we exited the room, he drunkenly said, “What’s that funny pink mouse running around over there?” I paused for a moment. “Come on, let’s get going,” demanded the gastroenterologist. I looked around for a pink mouse. As I expected, I didn’t see one. “It’s just another one of his stupid jokes,” insisted the gastroenterologist.
But I was suspicious. I once had a patient in obstetrics suffer a major blood pressure drop from an epidural. The first hint of a problem came when she asked me, “Can I have my red hat?” She had no hat, let alone a red one. A rapid drop in blood pressure can make some people dotty and hallucinate.
I decided we had to go back into the operating room and take the patient’s blood pressure. The gastroenterologist protested, but the blood pressure proved to be dangerously low, probably from a mixture of residual Propofol and a vascular system unusually sensitive to postural change. I treated it with medication and restarted our trek to the recovery room.
AI would never have caught this low blood pressure so quickly. It would have let the patient continue on to the recovery room, which would have exposed the patient to several more minutes of low blood pressure—another heart attack risk. Again, Berkeley helps to explain why.
According to Berkeley, when we imagine things, we use data gathered from our senses as building blocks. Even when patients hallucinate, they hallucinate using real sensory data they have gleaned from life. For example, pink things and the figure of a mouse are both perceivable in real life. Our minds work with these perceivable images to create new perceivable images—such as a pink mouse. Our minds cannot work with unperceivable material.
Curiously, contemporary neuroscience’s “cognitive binding” theory violates this rule. According to the theory, the eyes take in color, motion, and shape, while the brain supposedly binds these data to produce a unified image. The theory assumes the data are separable and in need of being bound. But they cannot be, as Berkeley observed. Color, motion, and shape are never separable; they can never be perceived individually, on their own. My patient’s brain did not combine the color pink and the shape of a mouse to see a pink mouse. He saw a pink mouse from the outset. The color pink can never exist without a shape, and the shape of a mouse can never exist apart from a color. As for motion, something cannot be judged moving fast without being a “something” in the first place. And to be a “something” it must have a shape and a color. The brain does not bind motion, color, and shape in preparation for serving them up to the mind as a single image. The mind sees the image whole.
Some neuroscientists claim that as consciousness fades, cognitive binding weakens and then disappears.2 Does this mean that as my patient fell back to sleep, his image of the fast, pink mouse began to break up into its components of fastness, pinkness, and mousiness? The whole notion is ridiculous, Berkeley would argue.
AI has assimilated this flawed theory. AI audio processing uses a variety of architectural models to represent sound—for example, spectrogram representations that convert sounds into visual spikes. Let’s say perfect AI heard my patient speak of seeing a fast, pink mouse. It would reduce the patient’s phrase to three words, which would then be broken down into three sets of spikes, representing the word “fast,” the word “pink,” and the word “mouse.” In adopting this method, AI assimilates neuroscience’s erroneous belief that visual images can be broken down into unperceivable building blocks—that fastness, pinkness, and mousiness can exist separately, and can be perceived separately.
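For readers unfamiliar with the method, here is a minimal sketch in Python of how a spectrogram is typically computed from an audio signal; the window length, hop size, sample rate, and test tone are illustrative assumptions, not details drawn from any particular speech-recognition system.

```python
# Minimal spectrogram sketch: slice the signal into short windows and measure
# the energy at each frequency in each window (illustrative parameters only).
import numpy as np

def spectrogram(signal: np.ndarray, window: int = 512, hop: int = 256) -> np.ndarray:
    """Return a 2-D array: one row per time slice, one column per frequency bin."""
    frames = []
    for start in range(0, len(signal) - window, hop):
        chunk = signal[start:start + window] * np.hanning(window)  # taper the slice
        frames.append(np.abs(np.fft.rfft(chunk)))                  # magnitude per frequency
    return np.array(frames)

# Usage with a synthetic one-second 440 Hz tone sampled at 16 kHz (an assumption).
t = np.linspace(0, 1, 16000, endpoint=False)
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)   # (61, 257): 61 time slices, 257 frequency bins
```

Whatever a patient says reaches such a system as this grid of numbers, carved into slices and bins, rather than as the single, whole image of a fast, pink mouse.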
In my case, perfect AI might first look with its camera for a fast, pink mouse in the operating room and not find one—although fast, pink mice do exist, in theory. (I have seen a mouse running around in an operating room, and baby mice occasionally do look pink.) AI would take no action. It would then erroneously think of fastness, pinkness, and mousiness as separate phenomena. It would conclude that these are all reasonable abstractions, at least when described linguistically, that are capable of being perceived separately. It would miss the irrationality in the totality of the image the patient claims to perceive. It would miss the fact that the patient is hallucinating. Again, AI would take no action.
As for AI correlating the patient’s speech with other risk factors, such as the residual Propofol in his system and his sensitive vasculature, this would also be impossible—for they are not risk factors. All patients after receiving Propofol have residual Propofol, and the fraction of people who drop their blood pressure post-operatively while lying on a gurney as a result is too small to make it a special risk factor. AI would have no reason to think this particular patient was different from every other patient moved to the recovery room on a gurney without event. As for the patient’s sensitive vasculature, only rarely is residual Propofol sufficient cause to drop blood pressure in this situation. Even patients with vascular disease, after receiving Propofol, are typically moved to the recovery room without any problem. There is no reason why AI would calculate in advance that my patient was at risk for a blood pressure drop. With no reason to suspect anything unusual, and unable to grasp the patient’s hallucination as a tip-off, AI would never have any reason to stop the patient from continuing on toward the recovery room—with dangerously low blood pressure.
The Limits of AI
An Indian philosopher once said that the world is supported by an elephant, the elephant by a tortoise, and the tortoise by…he knew not what. Something similar can be said about perfect AI. AI enthusiasts see artificial intelligence as a new kind of world. They imagine it surpassing humanity and becoming a creation on par with heaven itself. But they overlook the incoherent leaps in logic and humdrum inconsistencies that characterize perfect AI’s intellectual supports. Their AI heaven is held up by an elephant, then by a tortoise, and then, one realizes, by nothing at all.
How did things get so far? How did we allow ourselves to believe in the inconceivable fantasy of perfect AI? Again, Berkeley may have an answer.
The philosopher John Locke once imagined a triangle that was neither oblique nor equilateral nor scalene, but all of these and none of these at once. Berkeley, who was Locke’s contemporary, declared that such a triangle has never existed and can never exist. By abstracting from real triangles, Locke had created a ludicrous concept. The foolishness is not in the abstracting, which we do all the time, Berkeley said, but in continually abstracting to create abstract general ideas disconnected from any particulars of existence. Then we give those ideas names, which convince us that the ideas exist.
Locke’s triangle, absolute motion, and “aether” were misleading abstract general ideas in Berkeley’s time. We have misleading abstract general ideas in our own time.
For example, in my field of anesthesiology the peculiar mental effect caused by the drug Ketamine is called “dissociative anesthesia.” The fact that anesthesiologists give the drug effect a name suggests they have an idea of how the drug works. In fact, they have no idea. Labeling the phenomenon accomplishes nothing. When asked about Ketamine, many anesthesiologists reflexively parrot the phrase “dissociative anesthesia” to sound smart, but besides that phrase they know little more than a layperson does about how the drug actually works.
In consciousness studies, by using their powers of abstraction, some neuroscientists imagine “microtubules” and “entangled states” as the basis for consciousness. Because they evoke no precise image and yet have a name, these abstractions seem real. But they are not real. Other neuroscience philosophers abstract consciousness from our lived experience and imagine it to be a property of both animate and inanimate objects. They say tables and chairs have consciousness. Again, this is ridiculous.
Our bad habit of creating abstract general ideas has crept into the field of intelligence. To enlarge the compass of our minds, we have abstracted the idea of intelligence from the particular human activities that demonstrate intelligence and named it “intelligence.” Then we have applied the idea to machines, analogous to how some philosophers today apply the idea of “consciousness” to inanimate objects. We have created the phrase “AI.” And now we are being led on a futile chase for “perfect AI,” something will-o’-the-wisp, something just round the corner, over the hill, something we know not what.
That so many fields today are involved in the chase for perfect AI, including robotics, engineering, philosophy, adaptive systems, neuroinformatics, and bio-inspired systems, to name just a few, suggests that perfect AI is a credible goal, and that through collaboration we are closing in on its secret. In fact, it is just the opposite. By studying AI from so many different sides, researchers imagine one day seeing AI from all sides, and even from its interior. But every subject has as many sides as there are radii in a sphere; that is to say, they are innumerable. Thus it is impossible to study a subject from all sides. Therefore, we typically establish an order of succession, decide which sides are most important, and create fields out of each. For example, instead of all scientists studying the universe, physicists study light, chemists study gases and liquids, and geologists study rocks, with only the slightest crossover. That so many fields study AI is a testament to the fact that researchers cannot establish an order of succession in AI, and they cannot establish one because they cannot understand AI. AI is an abstract general idea. That AI cannot be confined to a specific field or to a few fields suggests weakness, not strength.
The products of AI research will continue to prove immensely useful. They will make our lives easier and safer. I applaud their coming. But the search for perfect AI is a futile activity that belongs, at most, in a center for theoretical knowledge—for there, no one gets hurt. In contrast, professionals must deal with real life, where people and nature are unpredictable, where machines go on the blink, where abstract general ideas are meaningless, and where a client’s well-being, even life, is at stake. The world of the professional is the world of the particular and the singular, not the world of the universal and the general. This is why AI is destined to remain a useful adjunct to a human professional, but a dangerous substitute for a human professional.
1Steven Frank, MD, et al. “Relative Contribution of Core and Cutaneous Temperatures to Thermal Comfort and Autonomic Responses in Humans,” Journal of Applied Physiology, Vol. 86, Issue 5, May 1, 1999. Also conversation with the author.
2For interviews with such neuroscientists, see John Horgan, The Undiscovered Mind (The Free Press, 2000), chapter 1.