Soul Dust: The Magic of Consciousness
by Nicholas Humphrey
Princeton University Press, 2011, 256 pp., $24.95
Consciousness: Confessions of a Romantic Reductionist
by Christof Koch
MIT Press, 2012, 200 pp., $24.95
Beyond the Brain: How Body and Environment Shape Animal and Human Minds
by Louise Barrett
Princeton University Press, 2011, 304 pp., $35
Cognitive scientists call consciousness the hard problem, the crux of which is the mind-body problem: What mechanistic account can explain how the conscious qualities of experience (qualia, in philosophical jargon) emerge from non-conscious substance? Naysayers aside, there is no reason to think this problem will not break before the relentless march of empiricism, as others before it have. For instance, biochemistry and cellular-molecular biology can now explain life as an emergent property of non-living matter. This was not always so.
Before its ascendance, mechanistic biology, as contemporary biology was once called, was controversial. In 1868, Thomas Henry Huxley wrote: “To many, the idea that there is such a thing as a physical basis, or matter, of life may be novel—so widely spread is the conception of life as a something which works through matter, but is independent of it.” Today, there remains no serious doubt about our capacity to describe living substance through biochemistry. (Incidentally, acceptance of evolution has fared less well in the same timespan. Perhaps worse, 16th-century heliocentrism has yet to catch on with many Americans. According to the General Social Survey, 47 percent of them either do not understand it or do not accept it. Scientific literacy is a long, hard slog.)
Over the 19th and 20th centuries, mechanistic biology won its credibility and popular acceptance through evidence. Crucial discoveries ranged from the simple—the ubiquity of cellular organization—to the sublime—identification of DNA as the substrate of genetic inheritance. For mechanistic consciousness to gain its own purchase, similar progress must be made in the cognitive and neural sciences. Advances in computing and artificial intelligence—from Mark I to the PC, and from Deep Blue to Watson—already make it palatable to think that thinking may one day be explained mechanistically. The next marker in this journey will be reached when AI bots begin consistently defeating Internet-based Turing tests—which test whether a user, human or machine, behaves indistinguishably from a human—used to infer and thus authenticate human agency.1 Such tests typically challenge a user’s aptitude for visual pattern recognition, such as the recognition of distorted text. One need only look at current face-recognition technology, Apple’s Siri or IBM’s Watson to conclude that we are approaching an inflection point. A generation hence, all current tests will be inadequate. Central to the materialist project will be progress in neuroscience, which is still a long, laborious way from explaining the doings of human computation. Nonetheless, in visual pattern recognition, coarse approximations to the brain’s computational architecture already compete successfully with pure-engineering algorithms. In a generation or two, these models will advance to a state where intelligence will not only be explainable in material terms, but our particular human intelligence will be as well.
Consciousness, however, is not merely object recognition, analytic reasoning, statistical learning and other cognitive (or affective) competencies readily amenable to computational framing. Consciousness has a subjective experiential dimension, generally referred to as phenomenal consciousness and often described circularly as “what it is like to be something”—a bat, for example. This may be juxtaposed with most conceptions of what it is like to be a robot: information processing absent subjective sensation—a mental void. It may also be compared with non-conscious awareness. For example, in a condition called blindsight, the initial cortical visual processing area is damaged, but evolutionarily older brainstem processing and some minor visual pathways to higher-order cortical areas are spared. Afflicted individuals report that they are subjectively blind; yet, to their own great surprise, they perform better than chance on certain visually cued tasks. This demonstrates that, at least in some circumstances, brains can have awareness of information and act intelligently upon it without consciousness having had any access to the information.
In Soul Dust, Nicholas Humphrey, emeritus professor of psychology at the London School of Economics and participant in the discovery of blindsight, sketches a philosophically oriented explanation of consciousness. Writing in a colloquial style, he begins with an account of what phenomenal consciousness is. The remainder of his book explores what it is good for.
The first question Humphrey poses is whether phenomenal consciousness can be inferred strictly from observations of conscious beings. Would an observer be compelled to posit subjectivity, given either human behavior or complete information about the nervous system? Further, might complete information not only imply consciousness but also be sufficient for fully reconstructing the rich personal, ineffable inner landscape? Doubly yes, says Humphrey, contingent on commitments to the evolution of consciousness and materialism. Consciousness, he argues, must have an adaptive value, or else natural selection would not have favored it. (Note Humphrey’s presumption that qualia actually confer adaptive advantage, that they are not spandrels, the late Stephen Jay Gould’s term for the non-adaptive byproducts of adaptive traits.) Consciousness, he argues, must have permitted humans to inhabit a special niche. This, then, would be somehow evident in our patterns of behavior.
As to the plausibility of downloading one’s intimate experience through a sufficiently advanced technology (“the Rapture for nerds”, to quote Caltech neuroscientist Christof Koch), Humphrey simply asserts, “miracles do not happen.” Any sufficient theory of the material causes of consciousness ought therefore to allow one to infer, from neural data, not only the presence of consciousness generally but also its instantaneous, and hence specific, states. A consequence of Humphrey’s conclusions is that consciousness would then be a curious phenomenon of information interacting with itself.
Incredulous readers may find it helpful to consider that all modern computers—specifically universal Turing machines—exemplify something similar. They store data and instructions for computation (programs) equivalently in the physical medium. That is, given a certain substrate, some numbers define operations upon other numbers, which, when processed, alter them: information interacting with itself.
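The point can be made concrete in a few lines of Python. The toy machine below is entirely hypothetical (its opcodes and memory layout are invented for illustration): program and data share one memory, and the first instruction rewrites part of the instruction that follows it before that instruction runs, so numbers operate upon, and alter, other numbers.

```python
# A toy von Neumann-style machine: program and data live in one
# shared memory, so instructions (numbers) can rewrite other
# instructions (numbers).
def run(memory, steps=100):
    pc = 0  # program counter
    for _ in range(steps):
        op, a, b = memory[pc], memory[pc + 1], memory[pc + 2]
        if op == 0:              # HALT
            break
        elif op == 1:            # ADD: mem[b] += mem[a]
            memory[b] += memory[a]
        elif op == 2:            # COPY: mem[b] = mem[a]
            memory[b] = memory[a]
        pc += 3                  # advance to the next instruction
    return memory

# Program occupies cells 0-8; data occupies cells 9-11.
# The COPY at cells 0-2 overwrites cell 4 -- part of the NEXT
# instruction -- before that instruction executes.
mem = [2, 9, 4,    # COPY mem[9] -> mem[4] (patches the ADD below)
       1, 0, 11,   # ADD mem[?] -> mem[11]; '?' gets patched to 10
       0, 0, 0,    # HALT
       10, 5, 7]   # data: the patch value, an operand, an accumulator
run(mem)           # afterwards mem[11] holds 7 + 5 = 12
```

Nothing distinguishes the program cells from the data cells except how the machine happens to interpret them at a given moment.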
Humphrey’s account of how information might interact with itself to produce consciousness has three legs. First, he speculates that consciousness is a phenomenon of perspective: It can only be experienced from the driver’s seat. Humphrey analogizes this to an optical illusion in which a sculpture has an impossible geometry, à la M.C. Escher. As an illusion, the sculpture only works when viewed from a specific vantage point.
The second leg posits that consciousness is a constructed reality, an estimate of what transpires in the real world. This is manifestly true. The brain receives information through sensors in the peripheral nervous system, photoreceptors, for example. The perturbation of these cells generates neural impulses that, when they reach the brain, are processed for information that might increase the organism’s odds of survival and fecundity, whereupon action is taken or not. This is a continuous, cyclical process: (i) input is received and interpreted; (ii) plans are formulated; and (iii) actions are executed and coordinated. Optimizing behavior (decisions and actions) requires the evaluation of hypotheses about threats and opportunities and the real-time coordination of motor plans. For these, the brain requires internal representations (estimates) that, in aggregate, attempt to recapitulate the current state of external reality, where external includes everything outside the nervous system, both body and environment.
When representations are subjectively experienced, they are qualia. That qualia are the direct product of the material configuration of the brain is clear from what we cannot sense (infrared light, ultrasonic vibrations and so forth). Absent a peripheral sensory organ capable of detecting a given signal, consciousness does not represent that domain of physical reality. To wit, we have no direct perception of high-energy particles (radioactivity) and instead must infer their existence and presence from instrumentation.
Third, Humphrey speculates on the mechanism that engenders consciousness and how it came to be. Neuroscience theorizes that the brain uses efference copies to memo itself about motor commands it has just authored. These duplicate orders are sent from the frontal lobe of the brain to the parietal lobe, where stored expectations about future states are updated and then reconciled with sensory feedback from the periphery—pitch, for instance, in the case of singing. An error signal is then propagated to the frontal lobe, which is used to update and refine the unfolding motor plan. Humphrey wants to place consciousness somewhere in the dynamics of this loop.2
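A minimal control-loop sketch conveys the flavor of this circuit. The functions and gain below are invented stand-ins, not a neural model: a copy of each motor command feeds a forward model, the model's prediction is reconciled with sensory feedback, and the resulting error refines the next command.

```python
# Toy efference-copy loop for pitch control while singing
# (all parameters are illustrative assumptions).
def forward_model(command):
    # internal prediction of the sensory consequence of a command
    return 0.8 * command + 20.0

def vocal_tract(command):
    # the actual "plant"; here it matches the model exactly
    return 0.8 * command + 20.0

target = 440.0    # intended pitch in Hz
command = target  # naive initial motor plan
gain = 0.5        # error-correction gain
for _ in range(40):
    copy = command                   # efference copy of the order
    predicted = forward_model(copy)  # expectation, sent ahead
    actual = vocal_tract(command)    # sensory feedback, arriving later
    surprise = actual - predicted    # zero when the model is accurate
    error = target - actual          # pitch error drives the update
    command += gain * error          # refine the unfolding motor plan
# The produced pitch converges on the target.
```

Because the internal model here matches the plant, `surprise` stays at zero; a mismatch between the two is what would signal that something other than the self caused the sensation.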
More specifically, Humphrey posits that qualia result from efference copies associated with a vestigial reflex arc. The argument is strained, but the intuition is interesting. Over evolution, as behavior transitioned from being exclusively reflexive to being consciously guided, this motor-control loop would have been at center stage. Further, in such a loop, information could interact with itself in a way that might yield consciousness, and to facilitate conscious guidance, consciousness would need to interface with this circuit.
The remainder of Soul Dust speculates on why consciousness evolved. It is often argued that any intelligent activity can be done just as well without conscious accompaniments. That is, if intelligence is conceived of as the optimality of an agent’s beliefs, decisions and actions, then there is no widely accepted argument for why an agent’s performance would be enhanced through the incorporation of qualia. So why have them? The answer may be that evolution works with what it is given. For consciousness to enhance evolutionary fitness, it need not be necessary for a task, like avoiding bears. It only has to be useful for it. Humphrey argues that the benefit qualia may provide is motivational enhancement. Qualia personalize experience, enrich it with affect and imbue it with value. The intuition is that conscious sensation is not just a sterile rendering of the external world but a constant commentary on how you feel about what you sense.
To illustrate his point, Humphrey quotes liberally from poets, writers and artists. Central is a quote from Wassily Kandinsky: “Color is a power which directly influences the soul. Color is the keyboard, the eyes are the hammers, the soul is the piano with many strings.” It is in this sense that Humphrey argues consciousness is a passionate representation of the world. (Ironically, color may literally have triggered the sensation of sound for Kandinsky, as some speculate he had synesthesia, a neurological condition in which the senses are cross-wired.)
Humphrey’s most illuminating quotation comes courtesy of Alfred North Whitehead, which though supremely apt was originally uttered in jest:
Nature gets credit which should in truth be reserved for ourselves: the rose for its scent: the nightingale for his song: and the sun for his radiance. The poets are entirely mistaken. They should address their lyrics to themselves, and should turn them into odes of self-congratulation on the excellency of the human mind. Nature is a dull affair, soundless, scentless, colourless; merely the hurrying of material, endlessly, meaninglessly.
Beauty, as it were, is in the eye of the beholder. To further argue that feeling is at the heart of consciousness, Humphrey quotes Milan Kundera: “‘I think, therefore I am’ is the statement of an intellectual who underrates toothaches. ‘I feel, therefore I am’ is a truth much more universally valid.”
While Soul Dust is stimulating reading, Humphrey never makes the case for why consciousness is either required for the integration of sensation and affect or is the exclusive site of this integration. One can easily imagine that sensation could be accented with markers of biological value, such as salience and affect, absent qualia. Thus, while it is eminently reasonable to propose that qualia evolved to augment sensation, skewing behavior toward adaptive ends, it would require more work to show that this actually transpired.
At one point Humphrey quotes Steve Jones, a geneticist, on his view of philosophy: “I often think philosophy is to science what pornography is to sex, I mean it is cheaper and easier and some people seem to prefer it.” I have a less dim view of philosophy. It both guides the scientific enterprise and is its muse. Still, I sympathize with Jones. The mind-body problem will not be solved from the armchair. It will take years at the bench.
The aforementioned Christof Koch is one of a handful of serious scientists who aggressively investigate the brain basis of consciousness. His memoir, Consciousness: Confessions of a Romantic Reductionist, is an account of his quest to discover the neural correlates of consciousness. The book recounts his life journey, still in the making, both scientifically and personally. He reminisces on his early career and time spent becoming Francis Crick’s protégé after Crick had left genetics for neuroscience. The memoir is confessional. In mid-life, Koch left both Catholicism and his marriage. His reductionism is a profession that consciousness is born of matter and that the tools are now at hand for unlocking its mystery. Before he dies, Koch, an active researcher, intends to discover the minimal neural states necessary for subjective experience. His project, in part, is to eliminate from consideration brain structures that evince no direct role in consciousness. This is both a medical question—recall Terri Schiavo—and an intellectual one. As Koch puts it, where is the difference that makes the difference?
Much of the brain, Koch concludes, is likely involved in non-conscious processing or only makes qualified contributions to it. This observation is mirrored by neurosurgeons, who describe much of the cerebral cortex as “ineloquent”, meaning capable of being removed without great effect on one’s quality of life. Of the remainder, certain parts of the cerebral cortex appear necessary for different aspects of conscious experience. For instance, persons with lesions to higher-order cortical areas in the visual processing pathway can lose the ability to perceive motion. Koch refers to such areas as essential nodes, which integrate information via computation into configurations that make possible specific dimensions of experience, for instance, seeing a face as a whole object rather than as a scatter of parts. Koch does not offer a grand statement on which brain structures likely sustain conscious sensation, an ambition of his work with Crick, except to say that the cerebral cortex and thalamus, evolutionarily recent structures, are involved.
Koch, however, does posit grandly about what consciousness is. He sees consciousness as a fundamental (that is, bound up in the laws of the universe) consequence of information integration. Consciousness, as it were, just happens, just as electrical charge and mass just happen. Koch’s reductionism is thus not without bounds. He is a rare species of dualist. His project, then, is not to understand a computational mechanism by which subjectivity emerges, as with Humphrey. Rather, Koch seeks an understanding of how brain architectures and states determine the representational content of consciousness. That is, he views brains, or their analogues, as determiners of the varieties of information integration possible within an information-carrying system. Explaining consciousness, he contends, is not to explain the emergence of subjectivity itself. It is rather to explain the emergence of representational complexity and diversity in a system latent with subjectivity.
To this end, Koch is developing methods he hopes will quantify the consciousness of a system. His approach is based on information theory, a branch of mathematics that analyzes signals in noise. Through quantification of the integration of information in a system, Koch expects to be able to characterize a system’s capacity for varieties of subjective experience.
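The measure Koch favors, Giulio Tononi's Φ, is mathematically involved, but the underlying intuition can be caricatured in a few lines of Python. The code below is a crude, hypothetical stand-in, not Φ itself: it computes the mutual information between two halves of a system, which is zero when the halves carry independent information and grows as their states become interdependent.

```python
import itertools
import math

# Mutual information (in bits) between two halves of a system,
# given their joint state distribution. A toy proxy for the idea
# that integration, not mere activity, is what matters.
def mutual_information(joint):
    # joint: dict mapping (state_a, state_b) -> probability
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p   # marginal of half A
        pb[b] = pb.get(b, 0.0) + p   # marginal of half B
    mi = 0.0
    for (a, b), p in joint.items():
        if p > 0:
            mi += p * math.log2(p / (pa[a] * pb[b]))
    return mi

# Two binary halves that vary independently: no integration.
independent = {(a, b): 0.25 for a, b in itertools.product([0, 1], repeat=2)}
# Two halves whose states always agree: maximal integration.
coupled = {(0, 0): 0.5, (1, 1): 0.5}
print(mutual_information(independent))  # -> 0.0
print(mutual_information(coupled))      # -> 1.0
```

The real Φ goes much further, searching over partitions of the system for the one that loses the least information, but the contrast above is the seed of the idea.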
In Beyond the Brain, Louise Barrett, a psychologist at the University of Lethbridge, approaches the mind-body problem from a strikingly different perspective. Her aim is to explain the intelligence of animal behavior, as distinct from human behavior. Barrett calls her topic animal cognition. Her usage of cognition, however, is peculiar at best.
Cognition is the faculty of knowing, as distinct from feeling and volition. Typically, the organ of knowing is understood to be the brain. Barrett, however, thinks cognition is “embodied and distributed”, that body and environment, insofar as they contribute to intelligent behavior, are also substrates of knowing.
Consider, for instance, the contributions of the non-brain parts of the head to hearing. The head’s overall bulk and the cochlea, the part of the inner ear where air pressure waves are transduced into neural impulses, are integral to sensation and perception. The distance between the two ears and the acoustic shadow of the head create subtle differences in sound as it is transduced at each ear. The brain leverages these differences to localize sound sources. The physical structure of the cochlea mechanically transforms incoming waveforms, in real time, from a two-dimensional signal (time and pressure) to a three-dimensional signal (time, frequency and energy)—like the display of a stereo’s equalizer. This pre-processing is fundamental to how sound is subsequently encoded and processed within the nervous system.
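The cochlea's transform has a familiar digital counterpart: a short-time Fourier transform, which turns a one-dimensional pressure signal into a time-frequency-energy map. The sketch below (window size and test tone are arbitrary choices) recovers which frequency dominates at which moment:

```python
import numpy as np

# A spectrogram: from (time, pressure) to (time, frequency, energy),
# like the display of a stereo's equalizer.
rate = 8000                            # samples per second (assumed)
t = np.arange(rate) / rate             # one second of "audio"
# A tone that jumps from 440 Hz to 880 Hz halfway through:
signal = np.where(t < 0.5,
                  np.sin(2 * np.pi * 440 * t),
                  np.sin(2 * np.pi * 880 * t))

def spectrogram(x, window=256):
    # Slice the signal into frames and measure energy per frequency.
    frames = x[: len(x) // window * window].reshape(-1, window)
    return np.abs(np.fft.rfft(frames * np.hanning(window), axis=1)) ** 2

S = spectrogram(signal)                      # rows: time, cols: frequency
freqs = np.fft.rfftfreq(256, d=1 / rate)
early = freqs[S[2].argmax()]                 # dominant frequency early on
late = freqs[S[-2].argmax()]                 # dominant frequency at the end
```

Here `early` lands near 440 Hz and `late` near 880 Hz (to within the 31.25 Hz resolution of a 256-sample window); the cochlea performs an analogous decomposition mechanically, before any neuron fires.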
If cognition is construed to have the broadest of definitions, then yes, the body is part of knowing. If, however, cognition is construed to be fundamentally about the mind, mental states and thought, then no: Signal transformation outside the nervous system is separate from cognition. Most cognitive scientists hold the latter view. Barrett professes the holistic interpretation.
Barrett also argues that cognition is distributed. Her logic is that certain complex behaviors only emerge in the context of an organism’s ecological niche. That is, the organism may express an intelligent behavior, but the behavior is rigidly evolved such that it is evolutionarily adaptive only when certain regularities are present in the environment. Frogs, for instance, will catch and swallow anything with an apparent size, trajectory and velocity similar to that of a fly. As the reflex is immutable, this is a lethal propensity when the context is a laboratory and the objects are BBs. Another case is the brain’s use of cues to trigger the re-expression of memories. For instance, try returning to your high school after an absence of years and witness the deluge of “forgotten” memories. Here, Barrett’s logic is that the environmental cue is part of the apparatus of knowing since, absent the cue, there would be no mechanism for accessing the knowledge. Again, mainstream cognitive scientists will part company with Barrett on this usage of cognition. The nervous system relies on regularities in its inputs for the information it has encoded to be adaptive. This is true for all of its inputs, be they from sensory organs directed at the environment, the internal viscera or the musculoskeletal system. Fundamentally, the nervous system, not the body or the environment, is doing the knowing.
A redeeming emphasis of Barrett’s work is its caution against anthropomorphism: The mechanisms that produce complex, adaptive behavior in animals need not be similar to human cognition in order to produce intelligent behavior. Humans, Barrett argues, may have evolved a predisposition toward anthropomorphizing if, on average, the strategy makes useful predictions about entities in the environment, regardless of the veracity of the attribution of human-like cognition. The strength of this tendency is evident from viewers’ depictions of animated geometric shapes. If a triangle crosses the path of a square, bumping it head-on, viewers describe the triangle as bullying the square. Barrett’s concern, of course, is not the misattribution of intentions to cartoons. Rather, it is that our anthropomorphic tendency can make us fail to see animals on their own terms.
Deciphering what is and is not anthropomorphic misattribution is another hard problem. Clear cases come from insect behavior. For instance, ants march along an optimal path to a food source. Though clearly intelligent behavior, it would be wrong to infer that ants choose their paths from some awareness of path efficiency. Indeed, ant behavior is the result of a random-walk search strategy, updated to bias successful foraging routes. On successful return trips, ants deposit a pheromone trail. Other ants follow it to the food source and leave their own pheromone trail on the return leg. This reinforces the trail, thwarting decay, but it also improves it. As more ants walk the trail, each makes slight deviations. The net result is a smooth, direct path. Thus, a simple algorithm, instinctive and reflexive, explains the intelligent quality of ant foraging. This case is more straightforward than many, but it illustrates the point.
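The pheromone mechanism is easy to simulate. The mean-field sketch below uses two fixed paths and made-up deposit and decay rates rather than a full random walk, but it shows the colony's choice converging on the shorter route through differential reinforcement alone:

```python
# Two paths to a food source; ants distribute themselves in
# proportion to pheromone, shorter round trips lay pheromone at a
# higher rate, and trails decay each cycle. All rates are invented.
lengths = {"short": 1.0, "long": 2.0}
pheromone = {"short": 1.0, "long": 1.0}      # start with no preference
for _ in range(500):                          # 500 foraging cycles
    total = sum(pheromone.values())
    for path in pheromone:
        traffic = pheromone[path] / total     # share of ants on this path
        deposit = traffic / lengths[path]     # faster trips mark more
        pheromone[path] = 0.99 * (pheromone[path] + deposit)

short_share = pheromone["short"] / sum(pheromone.values())
# Nearly all pheromone ends up on the short path.
```

No ant represents path length; the preference emerges from the loop of deposit, traffic and decay.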
Animal behaviorists, Barrett argues, are equally susceptible to anthropomorphism, often perversely invoking Ockham’s razor to buttress their beliefs. The razor, an epistemological maxim, enjoins us to posit only necessary theoretical propositions. It is often misunderstood—including by Barrett—as a statement that the simplest explanation is usually correct. In fact, it is a statement that there is only justification for beliefs that are licensed by data. Einstein put it best: “The supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.”3 In short, posit the least first. Accordingly, we might expect the default position in the study of animal behavior to be presumption of simplistic mechanisms. Barrett, however, reports that the field has instead adopted the anthropomorphic stance.
The rationale offered for presuming human-like cognition and, specifically, intentionality in animals is that complex, adaptive behavior is more parsimoniously explained by representational cognition than through associative learning—the pairwise learning of chains of contingencies. Representational cognition usually denotes abstract, object-level encoding: Sensory impressions are parsed for object-level content and objects are apprehended as coherent, abstract entities, possessing semantic qualities. Associative learning, in contrast, pertains to the shaping of stimulus-response behavior and the mapping of stimulus-stimulus contingencies. By definition, cognition pertains to knowing. Associative processes alone provide impoverished but genuine knowing: probabilistic expectations for future events, hedonic or aversive, given a certain stimulus history. Typically, however, cognition is used to refer to thought, a representational process. Associative learning, or the like, may operate on non-representational, raw sensory impressions, as with machine-learning algorithms. Barrett argues for a rectification of the default stance in animal behavior toward a presumption of non-representational cognition. Given the combined power of innate behavioral tendencies and rudimentary cognitive processes, as with associative learning, her case has strength.
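The standard formal model of associative learning, the Rescorla-Wagner rule, makes the contrast concrete: a stimulus's predictive strength moves toward each observed outcome by a fraction of the prediction error, with no object-level representation anywhere. The training history and learning rate below are arbitrary illustrations.

```python
# Rescorla-Wagner-style error-driven learning of contingencies.
def train(pairs, alpha=0.2):
    strength = {}                       # stimulus -> learned expectation
    for stimulus, outcome in pairs:
        v = strength.get(stimulus, 0.0)
        strength[stimulus] = v + alpha * (outcome - v)  # error-driven step
    return strength

# A tone always precedes food (outcome 1.0); a light never does (0.0).
history = [("tone", 1.0), ("light", 0.0)] * 100
learned = train(history)
# learned["tone"] approaches 1.0; learned["light"] stays at 0.0.
```

The system ends up with genuine, if impoverished, knowledge, a probabilistic expectation given a stimulus, without ever parsing the world into objects.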
An incidental but not insubstantial stake in the debate over animal cognition is animal consciousness. Consciousness is typically thought of as a phenomenon of representational processing. Both Humphrey and Koch see room for animal consciousness. Humphrey suspects several species possess limited consciousness but that human consciousness is somehow unique. Similarly, Koch is convinced that animals possess consciousness, albeit a less embellished form of it. Once, confronted by a woman who objected that he could never convince her animals are conscious, Koch retorted that she would never convince him that she was conscious. The exchange illustrates that today opinions on animal consciousness rest inordinately on faith. Tomorrow, our grandchildren will possess a higher truth.
1For a detailed narrative discussion of Turing tests and formal AI competitions, see Brian Christian, The Most Human Human: What Artificial Intelligence Teaches Us About Being Alive (Doubleday, 2011).
2Douglas Hofstadter makes a related argument in I Am a Strange Loop (Basic Books, 2008).
3Einstein, On the Method of Theoretical Physics (Oxford University Press, 1933).