Three of the books under review are about consciousness, one is about
meaning, and one is about language, but the topics are inter-related, as
we shall see. The reader may find it surprising to learn that it has lately
become fashionable to call the problem of consciousness the "hard problem,"
and the problems of meaning and language (and brain function and behavior)
the "easy problems" of cognitive science. Everything is relative. The "easy
problems" may be easier, compared to the "hard one," but that does not
make them any easier than most other scientific problems.
What is the "hard one," then? It's as old as the human mind, it's probably lurking behind our ideas about religion and the immateriality and immortality of the soul, and it has been pondered since the advent of philosophy, where it is usually called the "mind/body" problem. Unfortunately, "mind" is ambiguous here, and "body" a misnomer. Some think it's more useful to call it instead the "mental/physical" problem, but even that doesn't quite do the trick.
The problem itself is in relating one sort of "thing" (mental things) with another sort of thing (physical things). We know that physical things are not just "bodies": They are matter and energy, the stuff that physicists (and chemists and biologists and engineers) study and explain to us with their usual functional, cause/effect explanations. And we know exactly what mental "things" are: They are what is going on in our heads when we are awake: thoughts, experiences, feelings.
The problem is: How do we put those two kinds of things together: Are
they both the same kind of thing? Are thoughts/experiences/feelings just
matter/energy, somehow? If so, how? (I pause to let the reader test whether,
mirabile dictu, he can provide a satisfactory answer to this hard question
where everyone else so far has failed)...
If they are not the same kind of thing, what is the relation between the mental and the physical? We know they are exactly correlated, but that is not an explanation. How does the mental fit into the physical world causally? Is it an extra "force," like gravitation? Those who have taken the path of the paranormal in the face of the hard problem, reply "yes," and boldly proclaim the "telekinetic" power of the mind (rather like Uri Geller's spoon-bending, except that it's telekinesis even when we bend the spoon with our fingers: We move our fingers because we feel like it).
But the trouble with this easy solution to the hard problem is that it has some uneasy consequences: It is at odds with the matter/energy conservation laws of physics, causal laws that have an awful lot of evidence supporting them, all over the universe. To see the mental as a telekinetic force, we have to be ready to believe that some rather remarkable things are going on on our small planet: Things move because they are willed to move, not just because of the usual transfer of energy. And what is the source of this telekinetic force? That's anyone's guess, but it can't be just our brains, because our brains, like our hearts and our livers, are just that ordinary stuff, matter/energy, structure/function. (If our brains were the cause of all our motions after all, i.e., if there were no telekinesis, then it would never be true that we move because we feel like it; it would just feel-like that was how/why we were moving.)
I will not pursue the telekinetic option any further (it is often called "dualism"), because, in exchange for "solving" the hard problem, it seems to raise even harder problems, pitting itself against all the rest of science. Suffice it to say that none of the authors of the books under review would endorse telekinetic dualism. They are all committed to explanations that stay within the natural bounds of matter and energy, structure and function -- bounds set by current theory and evidence in physics, biology and engineering. Yet let us admit that telekinesis certainly feels like the right explanation for our minds, and what they do, and how. It's just that it's an explanation that unfortunately does not fit with the scientific explanation of everything else -- and hence would itself stand in need of scientific explanation.
Before looking at the books in more detail, we must distinguish between the "easy" problems (e.g., meaning, language, intelligence, brain function, behavior) and "easy" solutions to the "hard problem." The books by Fodor and by Tomasello do not venture to take on the hard problem at all. Fodor thinks it would be futile but (unlike McGinn) does not say why (he spends his time instead trying to show why we may not even be able to solve some of the easy problems!). And Tomasello does not even mention consciousness. But Damasio and Edelman & Tononi explicitly state that they will not beg the question.
There are basically two ways to beg the question. One way is to change
the subject, swap an easy problem for the hard one (but keep calling it
the hard one anyway), and then solve that problem instead. The other way
is simply to provide an easy solution, but interpret it as if it
had solved the hard problem. Damasio does the first and Edelman & Tononi
do the second.
Damasio announces that he will not beg the question. He is not merely going to explain intelligence or language or brain function or behavior, for all of those could in principle be explained without there being any hard problem at all: If we had the very same intellectual and linguistic capacities we have, but we were not conscious (no mental states, no thoughts, experiences, feelings), then there would still be the "easy" problem of explaining our capacities in terms of our brain function, but that would just be ordinary (easy) science. Let us call that kind of an explanation a "functional" explanation (shorthand for a structural/functional explanation). Functional explanations are perfectly compatible with the matter/energy explanations of physics, biology and engineering.
What makes the hard problem hard is that that is not all there is to it: We are not just Zombies with certain intellectual and linguistic capacities. We are conscious, that is, we do have mental states: thoughts, experiences, feelings. Let's call what it is that makes mental states mental "feelings," for short. If we were nonfeeling Zombies, there would be no hard problem. What makes the hard problem hard is precisely the mysterious difficulty of explaining feelings functionally. So the "mind/body" problem is actually the "feeling/function" problem.
Why is it so difficult (if not impossible) to explain feelings in terms of function? Because a functional explanation is always a cause/effect explanation, showing how/why something works the way it does. A functional explanation is fine for ordinary, nonfeeling matter/energy: physics, biology, engineering. But every time we try to explain a feeling functionally, we find that the structure/function alone can do the cause/effect job just fine (thank you very much!), and the feeling just falls by the wayside, unexplained.
Here is an example: A functional explanation of "pain" might go something like this: Pain is a signal that indicates that tissue has been injured. It is useful for an organism's survival and reproduction to minimize tissue injury, to learn and remember to avoid what has caused injury in the past, to avoid contact between a currently injured body-part and other objects while the part is still damaged, etc. The sensorimotor and neural machinery for accomplishing all this, including the computational mechanism that would do the learning, the remembering, the selective attention, etc., could all be described, tested, confirmed, and fully understood. The only part that would remain unexplained is why pain feels like something: The functional explanation accounts for the functional facts, but the feeling is left out. Every time you try to give a functional explanation of feeling, the feeling itself turns out to be functionally superfluous (except for telekinetic dualists!).
In short, we know that we are not feelingless Zombies. The hard problem is explaining how and why we are not -- for how/why's are purely functional matters. That seems to leave only two possibilities:
(1) Epiphenomenalism: Feelings are not functional but merely "decorative,"
piggy-backing (for some inexplicable, because nonfunctional, reason) on
certain functions. Or (2) Dualism: Feelings are telekinetic. The
hard problem is finding an explanation for feelings that is neither (1)
nor (2). My own view is that this is simply impossible. How do our authors
fare with this?
Damasio sets out determined not to beg the question. Even his title makes it clear that it is the problem of feeling that he wants to take on directly, not something else: "The Feeling of What Happens: Body and Emotion in the Making of Consciousness." His book provides a great deal of new, insightful and illuminating data and theory about the brain areas correlated with feeling, especially the feeling of the "self," and about the remarkable ways in which they can diminish or break down in states such as sleep, coma, vegetative state, epileptic automatism and akinetic mutism. We are all Zombies when we are in deep, dreamless sleep; are we Zombies in any of these more active states too? These questions and answers are fascinating, but they do not include the hard one.
Maybe we are indeed feelingless Zombies when we are in the grip of an epileptic automatism, maybe we are not. (It is hard to know for sure without being the epileptic undergoing the automatism; and even if you were, you wouldn't be able to speak at the time, and afterwards you wouldn't be able to recall! So, without being telepathic, no neurologist could ever know for sure whether or not a patient was in a Zombie state: This is called the "other-minds" problem, the flip side of the mind/body problem.)
Damasio's functional anatomy of feeling states certainly tells you a good deal about what their brain and behavioral correlates are: When this part of the brain is active, you feel this and you can do that; when you lose this part of the brain, you lose the ability to feel this and to do that. This is of great interest to the clinician trying to do diagnosis, prognosis and treatment. It is also useful to patients, patients' families, and to everyone interested in how their own brain works. In some cases, for example, in the brain anatomy of the "sense of self," Damasio's findings may help theorists come up with functional models for designing a system that has the capacities that go with having a sense of self. But these are all the "easy" problems. Do Damasio's findings cast any light on the hard problem of how/why we feel at all?
Alas, they do not, and I think I can pinpoint exactly where the question gets begged: Damasio is intent on providing a bottom-up explanation of feelings, from the most primitive feeling-state of motionless muteness (akinetic mutism) to the very highest order feeling-states of a philosopher like Descartes when he is reflecting on the nature of mind. But explaining the variations along this hierarchy of feeling states is the easy part; the hard part is explaining why/how any of it is felt at all. The critical transition, in other words, is between nonfeeling and feeling, and this is the transition that Damasio completely overlooks. Instead, he rests his hierarchy on a very nonstandard (and I think, in the end, incoherent) notion of "emotion."
On the face of it, an emotion is just a synonym for a certain kind of feeling. (Other kinds of feelings would be sensations like seeing something blue or hearing something loud, hybrid emotion/sensations like feeling pain, desire-states like wanting something, psychomotor states like willing an action, or complex feeling/knowing-states like believing, doubting, or understanding something.) But Damasio uses emotion in an equivocal way, so as to bridge the unbridgeable gap between nonfeeling and feeling. For his bottom-level "emotions" (readers can confirm this for themselves) are either just motions (movement tendencies and their underlying brain activities), in which case they are no kind of feeling at all, and leave us as clueless as before about how to bridge the gap, or, worse, they are "unfelt feelings," which is a contradiction in terms. Either way, it is only by using this blurred notion of emotion that Damasio gives the (illusory) impression of having made some sort of successful transition from the unfelt to the felt.
Descartes (whom some people wrongly blame for the idea of dualism) was
the subject of Damasio's prior book, "Descartes' Error." According to Damasio,
Descartes had made the mistake of trying to separate what in the brain
is inseparable: the psychic (mind) and the somatic (body). In brain functional
anatomy, there is no such separation. But let us not forget that all of
the brain, both structure and function, is "somatic." And that's precisely
Damasio's error with motions and emotions. For the functional part of emotion,
the somatic part, is indeed just motion! But the felt (psychic) part is
something else, something 100% correlated with brain structure and function,
to be sure, but correlation isn't explanation. Correlations need a causal
explanation, and the only candidate explanation (telekinetic dualism) is
a nonstarter. Hard luck.
But then Edelman & Tononi go ahead and beg the question anyway. They describe some very interesting functional networks -- "distributed, re-entrant" ones -- which they hypothesize to have some powerful functional capacities (some of them experimentally demonstrated, many of them not yet). They also describe how these networks are brainlike in many ways. This is all very important and exciting, but still all just functional: How/why do the feelings come in (other than as the usual mysterious, unexplicated correlation)? For otherwise this is just an exercise in hermeneutics: interpreting a functional mechanism that correlates with feeling as actually being the feeling, and thereby being the functional explanation of the feeling; whereas in reality it is merely the explanation of the functions that are mysteriously correlated with the feeling, nothing more.
We can pinpoint the locus of the question-begging here too: Edelman & Tononi's network model is largely a category-learning mechanism. As such, it will be a very important contribution if it can be shown to have all the functional capacities the authors say it has; but that is not what they show here. In this book they just try to persuade the reader that their network's functions are somehow an explanation of feeling. And their counterpart of Damasio's equivocation on motions/emotions is their treatment of "discrimination." To discriminate is to be able to tell things apart. Psychophysicists speak about the "jnd" or "just-noticeable-difference" -- the smallest sensory difference that we can feel.
Feel? But of course psychophysics, being an ordinary functional science like all the others, really only deals with the smallest sensory difference we can detect and respond to. That could just as well apply to an optical transducer. The fact that it also happens to feel-like something to detect those differences is another matter, and Edelman & Tononi's model comes no closer to explaining the how/why of that than an optical transducer does.
Two other points in passing about Edelman & Tononi: (1) They cast
some of their argument in terms of another fashionable problem, the "Binding
Problem" ("How does the brain manage to 'bind' all the simultaneous sensations
it receives while perceiving an object into one unitary percept of that
object?"). But would there be a Binding Problem at all if there were nothing
it felt-like to perceive an object -- if our brains just went about doing
all their functional business of moving, categorizing, discriminating without
feeling anything while doing it? Might the "Binding Problem" be just another
variant of the (hard) question of how/why we are not Zombies? (2) I personally
did not glean much insight from the authors' paraphilosophical koan "being precedes describing."
The remaining book on consciousness, McGinn's, takes the hard problem head-on, but only to declare it insoluble. As McGinn sees it, either the brain does not cause feelings at all, or it does cause them, but in a way our minds are constitutionally unequipped to grasp.
For surely the latter is true: The brain does somehow cause feelings; no nondualist doubts that. But the hard problem is explaining how/why. Now McGinn's position is interesting in the sense that he is declaring, positively (but nondemonstratively), that there IS an answer, but it just happens to be one we are not equipped to grasp. By way of evidence, he gives examples of other kinds of things our brains are not equipped to grasp: We can't know what it feels like to be a bat (with its extra sonar sense), any more than someone born blind can know what it feels like to see. But this is cheating! It is like saying that there is a feeling missing from our repertoire, and that feeling is: what it feels like to know the solution to the feeling/function problem!
At the very least, to give that point some substance, McGinn would have to say what the solution was that we were incapable of grasping as being the solution -- and how and why it is the solution even though it does not feel like the solution. For, on the face of it, all we are asking for is a functional explanation, a how/why explanation. Such explanations tend to be objective ones, not depending on how they "feel" to you, any more than the truth of a mathematical proof depends on whether or not it feels true to you. If there is indeed a functional explanation of feeling, it ought to be possible to at least state it (and test it, functionally), even if, because of our brain limitations, it will not be sufficient to dispel the attendant mystery about the hard problem from our minds.
But perhaps McGinn means something even stronger than this: Not just that we lack the sense to see that something is a solution to the hard problem even when it is staring us in the face, but that we even lack the means to state that solution. But that would be very odd, because it would be a limit not just on the nature of our brains, but on the expressive power of language and mathematics (both of which, though rooted in our brains, have universal, brain-independent powers too): I may not be able to feel what it is like to be a bat, but surely I should be able to state all the functional facts about it (in fact, that's exactly how we understand the bat's sonar sense, and there is absolutely no mystery there, just a feeling that we know that we lack).
No, I don't think McGinn's conjecture helps us with the hard question at all: If the question is how/why we feel, then his reply that we are not equipped to know how/why simply raises another question, just as hard: How/why not?
Before leaving the hard problem and moving on to the two books that address easier problems, I will venture an answer: It is not because we have the wrong brains. It's because of the nature of functional explanation and the nature of feeling. The only alternative to telekinesis (in which feelings would have an independent causal power of their own) is that feelings do not have an independent causal power of their own. They just are. (We know they exist; that's not in dispute.) Moreover, they pose no problem to the rest of science if they are simply side-effects of matter/energy/structure/function, not causes in their own right.
We are no less mystified by this merely decorative "function" for feelings (called "epiphenomenalism"), but at least it moots any further how/why questions. And it implies that the hard problem is insoluble: Telekinesis itself is false. Feeling is immune to (nontelekinetic) functional explanation (hence it is inexplicable). And we are still left with the sense of mystery about how and why this should be so -- a mystery that could perhaps only be dispelled if we did have an extra sense, a telepathic sense, of the way matter/energy/structure/function causes feeling. But that hypothetical sense is just as self-contradictory, hence impossible, as a functional explanation of feeling, because of the essentially 1st-person nature of feeling: The only feelings you can feel are your own. ("I feel your pain" is just a metaphor.) So any "telepathic" sense I had of how nonfeeling causes feeling could only be an illusion. I can feel only what I feel, not how I (or anyone else) feel(s).
The question Tomasello is trying to answer is unapologetically one of the "easy" ones: How and why does our species, and no other, master language? In the past, other theorists have begged the question of consciousness by suggesting that having consciousness and having language are somehow one and the same thing, but Tomasello will have no part of that view. He recognizes that animals not only have feelings but that they are very smart; so in many ways the question about language is: How and why do we differ from animals in this respect? What is the functional specialization that makes us capable of language, and them incapable?
To answer this question, Tomasello studies the behavioral, social, conceptual and communicative capacities of (1) apes as well as those of (2) children, before (2a) and after (2b) the age at which they acquire language. His comparative studies point to a few critical capacities: the capacity to imitate others, the capacity to "mind-read" (i.e., to sense what others are seeing, wanting, thinking), and the capacity to monitor and coordinate joint attention with others (to sense that both of you are looking at or thinking about the same thing, and to sense that the other one senses that too: Damasio's mechanisms for the sense of self would come in very handy here). No nonhuman species has this set of capacities in full, and not even the human child does until the age when language usually begins. So Tomasello concludes that they are the functional basis of language.
These findings are very important, and, as Tomasello shows, the capacities he has isolated form a basis for human culture. But do they explain language? There is still the separate question of the functional basis of grammatical capacity (another "easy" problem), but let us leave that aside, as a functionally autonomous module, until we get to Fodor's book. Apart from grammar, do we really have the functional basis for language here? I would like to suggest that we do not. For human language is, among other things, the capacity to express any proposition with a string of symbols (e.g., "The cat is on the mat," "Feeling cannot be explained functionally," "2 + 2 = 4") plus the capacity to understand symbol strings as expressing propositions.
But if you look closely at the capacities Tomasello has singled out (and even if you design a functional model that will actually implement those capacities) you will find that you have a mechanism that is capable of producing and sharing social pantomime: Acting out present and future scenes, drawing people's attention to this or that, sharing all the kinds of knowledge that can be shared by this sort of joint activity -- but it doesn't provide a clue about how to get from pantomime to propositions. Even acting out the cat being on the mat is simply that: a pantomime of the cat being on the mat, in much the way that the cat actually being on the mat is that too. But so far we are still in the analog world of events, and copies and re-enactments of those events. This may well be a necessary precondition for language. But it is not language until we make the transition from this analog world of social imitation, to the arbitrary, symbolic world of propositions.
Perhaps Tomasello's functional resources need to be augmented with Edelman
& Tononi's: If their category learning network has the power they say
it has, it should be able to learn to detect and identify cats and mats
and "on-ness." So far that just names them. But if it can also string the
names into propositions that describe events and can be construed as either
true or false, then we may indeed be closer to the functional substrate
for language capacity.
That brings us, finally, to Fodor's book, whose point of departure is Chomsky's Universal Grammar (UG): the rules of grammar that, according to Chomsky, the child cannot possibly be learning from experience, and that must accordingly be inborn.
Now this view of Chomsky's is highly controversial, but it has a great deal of evidence supporting it: It does look as if UG isn't and cannot be learned by the language-learning child (the trial and error possibilities are far too large, and the child's actual learning time and experience far too small); for similar reasons, it is hard to imagine how UG could have evolved in the usual way (but this is perhaps not as firmly based on evidence as the fact that it cannot have been learned in childhood).
Fodor, impressed by the innateness of one function of the mind, UG, generalizes it to other functions that go far beyond the evidence for UG: Fodor thinks that most categories ("cat," "mat," "object," "number," etc.) are innate and unlearned too, just as UG is. All we learn is what names to call them; their meanings are already innately in our heads, like place-holders merely waiting for labels. If this is true, it is bad news for Edelman & Tononi's category-learning networks, because it leaves them very little to do: Most of the category structure of the world would somehow have to be built into them in advance.
But is there any reason to believe it is true? Is there any evidence that there are not examples enough, or time, for children (and adults) to learn all the kinds of things there are in the concrete and abstract world by trial and error, guided by feedback indicating when they get it right and wrong? I think there is no such evidence. But then why does Fodor believe that what is true of grammar might be true of meaning too?
I think the answer is related to yet another ("easy") problem, the symbol-grounding problem: Symbols alone do not mean anything. Ignorant of Chinese, you would look in vain for the meaning of any Chinese word in a Chinese/Chinese dictionary: It's all in there, and yet it isn't! You look up a definition, and it's just more meaningless symbols, even though, for a Chinese speaker who does not know the meaning of that particular word, but does know the meaning of the words used to define it, it's enough to convey the new meaning.
This is at the same time the power and the limitation of language: In principle, you can find out anything and everything from strings of words expressing propositions, but you can't start from scratch: Some of the words have to be "grounded" in something other than just more (meaningless) words. How are those to be grounded? Edelman & Tononi's networks, linked to the world through sensors and effectors, sound like a good start, although we would do well to build in the functions Damasio describes for internal sensorimotor maps and the self, as well as the functions Tomasello describes for social communication.
Such a system would then ground some of its symbols directly, in the capacity to detect, discriminate, categorize and manipulate the things they stand for in the outside world. Other symbols it could then ground indirectly, through propositions that define them in terms of already grounded symbols.
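The direct/indirect grounding scheme just described can be caricatured in a few lines of code. This is purely my own toy sketch (none of the books under review contains anything like it, and the words and definitions below are invented for illustration): a symbol counts as grounded either directly (standing in for a sensorimotor category detector, simulated here as a fixed set) or indirectly (via a definition all of whose words are themselves grounded).

```python
# Toy sketch of direct vs. indirect symbol grounding (illustrative only).

# Words assumed to be grounded directly, by sensorimotor category
# detectors (here just a fixed set standing in for those detectors).
directly_grounded = {"cat", "mat", "on"}

# Hypothetical dictionary-style definitions: each word is defined
# solely in terms of other words.
definitions = {
    "pet":   ["cat"],
    "scene": ["pet", "on", "mat"],
    "zebra": ["horse", "stripes"],   # bottoms out in nothing grounded
}

def grounded(word, seen=frozenset()):
    """A word is grounded directly, or indirectly via a definition
    composed entirely of grounded words (circular chains fail)."""
    if word in directly_grounded:
        return True
    if word in seen or word not in definitions:
        return False  # undefined, or a circular definition: ungrounded
    return all(grounded(w, seen | {word}) for w in definitions[word])

print(grounded("scene"))  # True: reachable from the sensorimotor base
print(grounded("zebra"))  # False: no chain down to grounded symbols
```

The point of the sketch is only the asymmetry: "scene" inherits its grounding from the base set through "pet," whereas "zebra," like every word in the Chinese/Chinese dictionary for someone who knows no Chinese, is defined only in terms of other ungrounded symbols.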
Fodor thinks this sort of mechanism is a nonstarter, for pretty much the same reason that the "associationism" of 17th-century philosophers was a nonstarter: Thought and meaning are not merely the "association" of "ideas." Thought has structure over and above mere association in time and space. (So Fodor would not believe in the Edelman & Tononi network module of such a hybrid symbol-grounding system.)
Symbols and computation can perhaps capture some of the structure of thought, but Fodor, although he is a functionalist and a computationalist (computation is his "language of thought"), doubts that computation can do the whole job. His doubts are based in part on worries about "holism" (symbols are local things, but meanings are not) and in part on "abduction" (how can a symbol system find the best theory to explain any set of data unless the answers are all already built into it in advance?) (So Fodor would not believe in the computational component of such a hybrid symbol-grounding system either.)
(It should be added that Fodor seems to have little more faith in the explanatory usefulness of "modules" -- despite the fact that he himself was responsible for popularizing the notion -- than he does in nets or symbols [or brain function, for that matter, or evolution]. We can define modules, in a theory-neutral way, as functionally independent components of a system, components whose design can be understood and modeled on their own, in isolation from the rest of the system. Perhaps because Fodor's own notion of modularity was inspired by UG [which was originally considered by Chomsky to be a functionally independent component of our language mechanism], the definition of "module" has been saddled with so many additional arbitrary stipulations -- they must be innate, they must be "informationally encapsulated," they must not be influenced by what a person knows -- that the word really has lost all its usefulness.)
Are there grounds for all this scepticism about the only explanatory resources that cognitive science has at its disposal? It is certainly true that cognitive science has not even come close to solving any of its "easy" problems, such as explaining the functional basis of language or of meaning or of any other life-size piece of human intellectual capacity or brain function. But it's also hard to know how fast it ought to be explaining the mind, based on scientific track records elsewhere. Edelman & Tononi have to be given the time to demonstrate whether or not their nets can do what associationism could not do. Their nets are, after all, operating on inputs (sensorimotor and symbolic), not "ideas" (whatever those are). And if symbols have their limitations, they also have their powers. No one can say in advance what hybrid systems can or cannot accomplish if their symbols are grounded in the sensorimotor world via category-learning networks. Changing the definition of just one word in a dictionary already propagates "holistically" to every other definition in the dictionary in which it figures. Change the sensorimotor grounding and the holistic effects could be even more dramatic.
So there's no a priori reason to doubt that the "easy" problems can be solved using cognitive science's current functional tools. But if what you want to know is how and why it feels like something to be a system that has and exercises all those functions, I am afraid you will be disappointed. That is one unsolved mystery we will just have to learn to live with.
Chalmers, D. (1995) Facing Up to the Problem of Consciousness. Journal of Consciousness Studies 2(3): 200-219.
Damasio, A.R. (1994) Descartes' Error: Emotion, Reason, and the Human Brain. Avon Books.
Harnad, S. (2000) Correlation vs. Causality: How/Why the Mind/Body Problem Is Hard. [Invited Commentary on Humphrey, N. "How to Solve the Mind-Body Problem"] Journal of Consciousness Studies 7(4): 54-61.
Harnad, S. (2000) Minds, Machines, and Turing: The Indistinguishability of Indistinguishables. Journal of Logic, Language, and Information 9(4): 425-445. (Special issue on "Alan Turing and Artificial Intelligence") http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad00.turing.html