I really don’t understand what you mean by nonlinearity. Neural networks inherently perform nonlinear computations, but I don’t think that’s what you’re referring to. What is the “nonlinearity,” and why can’t complex networks produce it (is this a software vs. hardware argument)?
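For what it’s worth, here is a toy calculation (plain Python, made-up 2×2 weights — nothing from the original discussion) of what “neural networks inherently perform nonlinear computations” means: stacking linear layers without an activation just collapses into one linear map, while inserting a ReLU between them breaks linearity.

```python
def matvec(W, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(x):
    """Elementwise nonlinear activation: max(0, x)."""
    return [max(0.0, xi) for xi in x]

# Hypothetical weights, chosen only for illustration.
W1 = [[1.0, -2.0], [0.5, 1.0]]
W2 = [[2.0, 0.0], [-1.0, 3.0]]

def two_linear_layers(x):      # no activation between layers
    return matvec(W2, matvec(W1, x))

def two_layers_with_relu(x):   # ReLU between layers
    return matvec(W2, relu(matvec(W1, x)))

x = [1.0, 2.0]
neg_x = [-1.0, -2.0]

# Without the activation, f(-x) == -f(x): the stack is still linear.
print(two_linear_layers(neg_x) == [-v for v in two_linear_layers(x)])   # True

# With ReLU inserted, that symmetry fails: the network is genuinely nonlinear.
print(two_layers_with_relu(neg_x) == [-v for v in two_layers_with_relu(x)])  # False
```

So in the standard sense, even a small network with an activation function is nonlinear — which is why I suspect the post means something else by the term.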
More useful than Terrence Deacon’s work on symbolic representation is Michael Tomasello’s work on shared intentionality. It’ll set you straight on what you missed.
As it happens, Maryanne Wolf quotes Deacon on page one of her book Proust and the Squid: The Story and Science of the Reading Brain. I think it likely that the advent of literacy was the great accelerator of the group-level, or multi-level, selection phenomenon you mention. And that is why I noted, as you may recall, that the earliest movement from patrimonial to more “state-like” polities had to involve literacy, because there is no other way to imagine any leader commanding large and specialized functions.
A second comment: a new book by Louise Barrett, called Beyond the Brain, spends a chapter or two debunking the idea that the human brain is just a computer, and that its essential modus operandi is computational. She does a good job. It’s important, however, to at least consider her adjacent point: Yes, humans can read and use symbols and think rational and abstract thoughts, but that is NOT what most of us do most of the time, and a good thing or we would all starve and be unable actually to do anything. The nonlinearity you mention is real but it does not subsume all of what humans are and do. At a certain basic biological level we are not so malleable–that, at least, is what this suggests, and I think the point may bear on volume 2.
The Symbolic Species is a fascinating book, and represents a serious challenge to the Chomskian “innate language faculty” theory, defended by Steven Pinker, Ray Jackendoff, and others.
There is a recurring problem in the history of science, a trap that is easy to fall into: including the phenomenon to be explained in the explanation itself. Philosophers and scientists call this the homunculus problem.
The phlogiston theory of fire was one of these mistakes. Scientists like Joseph Priestley thought that fire came about when certain particles – the phlogiston – were released from a material. Supposedly, if something got hot enough, it would begin to emit particles that had this fire-like property, and that is what explained the phenomenon of fire.
The problem, of course, is that the ‘fire-like’ properties of the phlogiston are a place where all the mystery of the phenomenon of fire can hide. By postulating a theory of fire that depended on particles with fire-like properties, Priestley and others had merely postponed an explanation of fire. The thing to be explained – fire – had been hidden away in the explanation itself. A fire homunculus.
Deacon argues that Chomsky’s Universal Grammar hypothesis is a homunculus in this fashion. By postulating a language ‘faculty’ or ‘module’ or ‘organ’, the Chomskians have merely hidden the problem away in an imagined speaker in the mind, one that does all the work of explaining language and its unique properties.
One notices that Pinker will occasionally mention that people think in ‘mentalese’, which then gets translated into language. Oh, Professor Pinker? And how does that language work?
Scientific interest in the evolution of groups seems particularly important to me because we don’t seem to learn very well from repeated collective mistakes. We keep re-running the Tulip Bubble. It would be good if we could learn to stop doing that.
“Humans are the only species capable of symbolic reasoning; other forms of communication and social organization among non-human species may be highly complex, but do not involve symbolic representation.”
That may be true, but non-human animals may be capable of symbolic-feeling, which is the basis for symbolic-reasoning among humans. Symbolism is essentially associating one thing with another. Thus, an image or word comes to ‘mean’ or ‘stand for’ something else; it becomes associated with things that are not directly linked.
While non-human beings may not be capable of ‘reasoning’ this way, they are capable of ‘feeling’ this way. Thus, if a dog-owner takes off his coat and places it on the sofa for the dog to lie upon on a daily basis, the dog comes to associate (emotionally and sensorily) the coat with the owner even when the owner is not around. Even after the owner dies, the dog will look at and smell the coat and associate it with its owner. The dog, being mentally limited, cannot turn this into symbolic reasoning, but an element of (associative) symbolic feeling is involved. The dog is emotionally able to associate certain things with other things. As Pavlov demonstrated, a dog can be made to associate the sound of a bell with food even though a bell, in and of itself, has nothing to do with food.
Since humans have bigger brains, more memory, and a greater capacity for logic, they were able to turn associative feeling into associative reasoning. But there had to be associative feeling and/or sensation before it could be mentally organized into associative reasoning.
Nice post. I find Dennett’s views on consciousness compelling. You can say he “define[s] the problem away rather than explaining the phenomenon” of consciousness. But this is begging the question, because his point is precisely that there is really no ontological leap: there is no extra dollop of something special called “consciousness” above and beyond the ensemble of (neural, physical, or even computational) processes which constitute our selves.
This is a great book that can get a little dense and technical at times. The main premise of the text is that what really separates humans from animals and other forms of life is language. Humans use language symbolically as opposed to indexically. The explanation of what this means was one of the hardest parts of the book to get. What it boils down to is that animals, particularly smart animals like chimps and dogs, can map words to specific meanings, but they cannot do things like string words together to form long sentences or use the same symbol (word) for multiple meanings. Chimpanzees struggle with this even though they can learn an impressive vocabulary of word-to-meaning mappings. Another important fact is that languages simply do not exist in nature outside of humans.
The author then proceeds to examine why or how humans can do this complicated trick. Raw intelligence alone is examined and found to be insufficient. He brings up the fact that mice have brain-to-body-mass ratios similar to humans’, and mice are not considered exceptionally smart. He also points out that Chihuahuas have a much higher brain-to-body-mass ratio than other dogs, even close to humans’, but they are not considered smart even for dogs. The reason must have something to do with brain structure, not just size. He goes on to show that several regions of our brain are quite a bit larger than expected if we simply scaled up a chimp brain to human size. Somehow the prefrontal region of the brain was exceptionally expanded.
How do things like this happen in nature? Evolution. And why would evolution select for this language ability? The author supposes it has to do with our social organization (pair bonding within a large group), which also turns out to be unique in nature. Humans needed language to determine who was sexually available and who was not. They also needed communication between the pairs so that the male, who helps with provisioning of the children, can be assured he is in fact provisioning his own child (genetically).