A preprint of an article which appeared in CONSCIOUSNESS AND COGNITION 2: 364-382 (1993).
This paper supports the basic integrity of the folk psychological conception of consciousness and its importance in cognitive theorizing. Section 1 critically examines some proposed definitions of consciousness, and argues that the folk-psychological notion of phenomenal consciousness is not captured by various functional-relational definitions. Section 2 rebuts the arguments of several writers who challenge the very existence of phenomenal consciousness, or the coherence or tenability of the folk-psychological notion of awareness. Section 3 defends a significant role for phenomenal consciousness in the execution of a certain cognitive task, viz., classification of one's own mental states. Execution of this task, which is part of folk psychologizing, is taken as a datum in scientific psychology. It is then argued (on theoretical grounds) that the most promising sort of scientific model of the self-ascription of mental states is one that posits the kinds of phenomenal properties invoked by folk psychology. Cognitive science and neuroscience can of course refine and improve upon the folk understanding of consciousness, awareness, and mental states generally. But the folk-psychological constructs should not be jettisoned; they have a role to play in cognitive theorizing.
Let us concentrate on the core sense of "conscious." The definition suggested above seems reasonably accurate but not terribly illuminating. "Awareness" is just an approximate synonym of "conscious," and so is "phenomenal." Not much progress is made by providing these synonyms. Is there a definition that gets outside this circle of unrevealing synonyms, while still confining itself to the ordinary grasp of the concept (rather than shifting to the psychological mechanisms of consciousness or its neurological basis)?
Attempts at definition might try to define consciousness (semi-) operationally, by reference to the sort of behavior that would provide public or external evidence for consciousness. For example, one might try to define a conscious state as a state available for verbal report. This proposal, unfortunately, has many defects. Verbal reportability is not a necessary condition for a state to be conscious. First, individuals with speech impairments such as global aphasia may be unable to report their inner states, but this doesn't render those states unconscious. Similarly, the right hemisphere of a split-brain patient might have awareness although its disconnection from the verbal left hemisphere precludes verbal report. Second, some states of awareness may be too brief, too confused, or too temporally removed from report possibilities to link up with the apparatus of verbal report. Dreams, for example, are episodes of awareness. Yet at the time of dreaming there seems to be no engagement with verbal apparatus. Dreams also commonly suffer from a degree of confusion and evanescence that makes them difficult if not impossible to report. Third, no satisfactory definition of consciousness should automatically exclude animals from having states of consciousness, which the verbal reportability definition does. Although it is not certain that animals are conscious, it is surely an intelligible possibility despite their evident lack of report capability. So report capability cannot be required for consciousness. Finally, verbal reportability is not a sufficient condition for consciousness. Machines and robots might be capable of reporting their internal states, but intuitively this would not suffice to confer awareness on those states.
A second approach is to try to define consciousness in terms of its function, e.g., informational accessibility. An example of this approach is the "global workspace" idea of Bernard Baars (1988) and others. A conscious representation, on this view, is one whose message is "broadcast" to the system as a whole, not just to some local or specialized processors. This idea of global broadcast may accurately describe a notable characteristic of human consciousness as studied by cognitive science, but it is unlikely to capture the ordinary grasp of the consciousness concept. Surely ordinary people do not understand consciousness as a set of messages posted on a large blackboard for all cognitive subsystems to read (Baars 1988, p. 87), since the picture of the mind as a collection of intercommunicating subsystems is not part of our naive conceptual repertoire. A similar point holds for other traits of consciousness described by Baars. For example, he notes that consciousness in human beings is typically reserved for messages that are "informative" in the technical sense of reducing uncertainty. When uncertainty is already (close to) zero, messages tend to be removed from consciousness; the most obvious example is the loss of awareness of repeated stimuli in stimulus habituation. But clearly this correlation between consciousness and informativeness is not something generally recognized by ordinary people, so it is not part of the naive understanding of consciousness. Moreover, one can readily conceive of a system in which uninformative or "redundant" thoughts remain vividly conscious. Thus, informativeness can hardly be viewed as an essential property of consciousness as commonly understood. In similar fashion we may note that the combination of global broadcast and informativeness is not sufficient for consciousness.
We can easily conceive of a (nonhuman) system in which informative representations are distributed to all subsystems yet those representations are totally devoid of phenomenal awareness. Baars lists further properties of consciousness, but I doubt that even these, in conjunction with the first two, suffice for phenomenal consciousness.
A third general approach is to try to define consciousness in terms of self-knowledge, self-monitoring, or higher-order reflection. For a state S to be conscious, it might be proposed, the possessor of S must have another state that is conscious or aware of S at the time of its occurrence. Observe, however, that this formulation uses the term "conscious" in defining itself, obviously not terribly satisfactory. Such objectionable circularity can be avoided if we substitute "belief," "thought," or some other term referring to an informational state. This yields something of the following sort: "State S of a system is conscious if and only if the system possesses a 'higher-order' belief that it is in S"; or perhaps "... if and only if the system has another informational state that monitors S." Such proposals are endorsed by philosophers such as David Armstrong (1968), David Rosenthal (1986, 1990, 1993), and William Lycan (1987), as well as psychologists like Philip Johnson-Laird (1988a, 1988b).
Does the higher-order belief or monitoring state required by this proposal itself have to be conscious? If the proposal is so intended, then we are still appealing to the consciousness of a higher-order state to confer consciousness on a first-order state, which leaves the circularity unremedied. It also generates an infinite regress, since each nth-level state must be rendered conscious by an n+1st-level state. Suppose instead that the higher-order belief need not be conscious (it is generally assumed in cognitive science that belief per se, though it requires intentionality or aboutness, need not involve consciousness). Under that construal, clearly stated in Rosenthal (1993), the definition does not get things right. Couldn't there be a robot or "zombie" that totally lacks phenomenal awareness or subjective feeling but nonetheless has higher-order beliefs about its other internal states? In fact, we need not appeal to thought experiments to make this point. Real human beings have nonconscious representational states that are monitored by other nonconscious states. This objection is lodged by Anthony Marcel (1988, p. 140), who observes that we nonconsciously edit nonconscious speech production decisions and motor intentions. Since higher-order monitoring takes place in these nonconscious domains of cognition, the monitoring relationship is by no means sufficient for consciousness. In addition to these counterexamples, the underlying idea here is puzzling. How could possession of a meta-state confer subjectivity or feeling on a lower-level state that didn't otherwise possess it? Why would being an intentional object or referent of a meta-state confer consciousness on a first-order state? A rock does not become conscious when someone has a belief about it. Why should a first-order psychological state become conscious simply because someone has a belief about it?
It is noteworthy that each of the failed attempts thus far examined offers a relational definition of consciousness. Each tries to explain the consciousness of a state in terms of some relation it bears to other events or states of the system: (1) its expressibility in verbal behavior, (2) the transmission of its content to other states or locations in the system, or (3) a higher-order state which reflects on the target state. The failure of such proposals leads one to suspect (though of course it does not prove) that no relational proposal will succeed. Of course, conscious states could still possess significant causal/functional relations to other cognitive events; it is just that consciousness is not definable by such relational characteristics. Our ordinary understanding of awareness or consciousness seems to reside in features that conscious states have in themselves, not in relations they bear to other states.
Call this sort of thesis about the ordinary understanding of consciousness: intrinsicalism. One way intrinsicalists defend their position is through inverted spectrum arguments. They try to produce conceptual "dissociations" between the intrinsic quality of awareness or experience and its functional-relational properties, thereby showing that the former are not simply equivalent to the latter. The traditional inverted spectrum argument tries to show that functional (relational) similarity can in principle be accompanied by qualitative (intrinsic) diversity. Two people, or the same person at different times, might have functionally identical states, i.e., states that interact equivalently with all inputs and outputs, and yet have different experiential "feels" to them, or no feels at all. A second type of conceptual dissociation is presented in Ned Block's (1990) discussion of the "Inverted Earth" example. Block demonstrates that qualitative or intrinsic similarity can in principle be accompanied by functional diversity. Two people, or the same person at different times, might have qualitatively identical states that are functionally-relationally diverse.
For reasons such as this, Block (1991, 1992, 1993) holds that at least one sense of "consciousness" refers to an intrinsic (rather than a relational) property, called phenomenal consciousness. He distinguishes this from a second sense of consciousness, access consciousness, which picks out, roughly, a state's capacity for rational control of speech and/or behavior. A very similar distinction is drawn by Edoardo Bisiach (1988). John Searle (1992) denies that so-called access consciousness is a bona fide concept of consciousness at all (a worry also expressed by Bisiach), and our earlier discussion supports these doubts. Even Block, who defends access consciousness as a legitimate sense of "consciousness," does not take it as a substitute for phenomenal consciousness. Our own discussion strongly suggests that the phenomenal notion of consciousness is the one intended in common usage.
We have not managed to define phenomenal consciousness except through unilluminating synonyms, but this does not necessarily show that anything is amiss. Not all words in the language (perhaps very few) can have "reductive" definitions. There must be exits from the circle of purely verbal definitions. Moreover, definitional problems are regularly encountered with many fundamental concepts, such as "truth" and "existence," which consistently resist non-trivial definition. Finally, it should not be surprising that the meanings of some words, especially those addressed here, should be attached largely to subjective experience rather than behavioral criteria. Why shouldn't words like "conscious," "aware," and "feeling" be associated in common understanding with subjectively identifiable conditions rather than behavioral events? The contrast between awareness and unawareness, for example, might be learned as follows. Someone asks whether you are aware of a certain humming noise. You now notice this noise for the first time, and contrast your new state of awareness (of the noise) with your prior state of unawareness. There are also degrees of awareness -- e.g., being dimly aware, vividly aware, etc. -- that provide clues to the meaning. So why shouldn't the intended meanings be located primarily in subjective experience rather than behavioral dispositions, for example?
Patricia S. Churchland (1988) gives one forceful expression of such worries (also see P. S. Churchland 1983). She offers a scenario in which "consciousness" might "go the way of 'caloric fluid' or 'vital spirit'" (Churchland 1988, p. 277). She does not exclude other possible scenarios, even a smooth reduction of consciousness to neurobiological phenomena. But she seems to lean toward the "outright replacement of the old folk notion of consciousness with new and better large-scale concepts" (p. 302). Although she does not use the word, she apparently contemplates the elimination of consciousness, just as the propositional attitudes (belief, desire, etc.) might be eliminated (see P. M. Churchland 1981, 1988; P. S. Churchland 1986). Eliminativism seems to be implied by the analogy with caloric fluid and vital spirit, though it may not be her intent. (Another interpretation will be considered later.) In any case, let us examine the arguments and see what conclusions they support.
One reconstruction of Churchland's reasoning might proceed as follows. Consciousness is a theoretical concept, which means it is implicitly defined by a network of putative laws. In this case the laws are ones that ordinary folk allegedly accept and regard as essential to consciousness. If these laws are in fact false, there is no phenomenon that instantiates, exemplifies, or realizes this concept. Churchland offers several examples of such concept-impregnating but factually false laws. It is generally assumed to be dead obvious, she says, that if someone can report on some visual aspect in the environment then he must be consciously aware of it. But blindsight reveals the falsity of this assumption. Second, says Churchland, it is generally assumed that the conscious self is an unanalyzable unity, i.e., if the self reports a conscious experience, there is no other part of the self that could be unaware of that experience. But commissurotomized subjects falsify this assumption. Third, it is part of the very concept of consciousness that if one is not having visual experiences then one is aware that one is not having visual experiences. But denial syndromes, such as blindness denial (Anton's syndrome), falsify this generalization. Fourth, it is part of the conventional wisdom that what we are in control of we are also conscious of. But this is refuted by somnambulism: successful negotiation of the environment during nonconscious sleep.
These phenomena would indeed undercut the ordinary concept of consciousness if the folk accepted the assumptions Churchland imputes to them. But do they? That has not been established. How do ordinary folk react when they are initially told about blindsight? Do they conclude that blindsighted subjects must be consciously aware of what they are reporting (or guessing)? That is not how I responded when I originally heard descriptions of blindsight. What is the evidence that other ordinary folk would so respond? Do ordinary people believe that the self is an unanalyzable unity, i.e., that there could not be one part of a self with awareness of a certain experience and another part lacking such awareness? Unity of the mind is, of course, a metaphysical doctrine advanced by philosophers (e.g., Descartes), but is it systematically assumed by ordinary folk? Usually a resolute empiricist, here Churchland provides no evidence about what ordinary folk assume. What about the putative assumption that consciousness and control go hand in hand, which somnambulism allegedly refutes? Somnambulism is hardly an esoteric phenomenon. If it were capable of refuting the existence of consciousness as it is commonly understood, why would that refutation not have been appreciated long ago? Are ordinary folk of the mistaken persuasion that sleep-walkers are aware during their nightly excursions? Churchland provides no evidence in support of this contention. Thus, it remains questionable whether the cited requirements on consciousness are really imposed by ordinary folk. If not, then the ordinary concept of consciousness is not overthrown by the so-called "denormalizing" facts she adduces.
There is a less radical view in Churchland's discussion. Elsewhere in the article she claims merely that consciousness is not a "natural kind" (see Churchland 1988, pp. 284 ff., and P. M. Churchland 1985). It is not a "unitary" phenomenon but a class of phenomena whose subclasses may be amenable to diverse neurobiological explanations. Neuroscience may eventually find little use for the consciousness construct, and prefer to draw classifications rather differently. This idea is also advanced by a second contributor to the Marcel and Bisiach volume: Kathleen Wilkes (1988).
I find the denial of natural-kind status a much more congenial point, especially when it is recognized that it does not entail the nonexistence of consciousness. Observe, for comparison, that there are plenty of terms in ordinary language, e.g., "bush" or "bug," which do not pick out natural kinds as judged by scientific concerns, but still delineate a genuinely existing set of objects (cf. Flanagan 1992, p. 22). Scientists may not find "bush" or "bug" particularly useful classifications; they do not comprise botanically or biologically unitary categories. It does not follow that there are no bushes or bugs. Furthermore, John Dupre (1993) argues persuasively that scientific taxonomies do not normally give us insight into "essences," and are as messy as nonscientific classifications. Applied to the present domain, this would raise doubts about the assumption that neurobiological classifications convey the "real essences" of the mind-brain. Nonetheless, if Churchland only means to make the weaker claim (about consciousness), viz., that it isn't a natural kind, we may have no serious disagreement.
A third contributor to the Marcel and Bisiach volume, Alan Allport, expresses his doubts about consciousness in language somewhat similar to Churchland's: "... there is no unitary entity of 'phenomenal awareness' -- no unique process or state, no one, coherently conceptualizable phenomenon for which there could be a single, conceptually coherent theory" (Allport 1988, p. 161). Allport says that he does not mean to deny the reality of phenomenal awareness, just as he does not deny the reality of life or understanding, which he regards as analogously disunified. Nonetheless, he seems to make a stronger claim than mere disunity when he denies that phenomenal awareness is coherently conceptualizable. A more radical interpretation is also invited when Allport endorses Daniel Dennett's (1988) eliminativist position about qualia in the latter's contribution to the volume. Allport says: "I find his [Dennett's] analysis, or rather his demolition of this incoherent notion, refreshing, and indeed liberating. What qualia, indeed?" (p. 162).
A prime source of Allport's difficulties with the concept of consciousness is his insistence on behavioral criteria. By "criterion" Allport refers to a procedure for telling whether the concept in question applies in a particular case. He presumes that if there is a unitary phenomenon picked out by a concept, there must be a unitary method of verifying the concept's applicability in all cases. A need for different criteria speaks against the unity of the phenomenon. But this methodological viewpoint conflicts with fairly standard treatments in the philosophy of science, which has long since given up the requirement or indeed desirability of unique operational criteria for theoretical concepts. For example, Carl Hempel writes:
[C]onsiderations of systematic import militate strongly against the proliferation of concepts called for by the maxim that different operational criteria determine different concepts. And indeed, in scientific theorizing we do not find the distinction between numerous different concepts of length (for example), each characterized by its own operational definition. Rather, physical theory envisages one basic concept of length and various more or less accurate ways of measuring lengths in different circumstances. Theoretical considerations will often indicate within what domain a method of measurement is applicable, and with what accuracy. (Hempel 1966, pp. 94-95)

In this spirit, all we should expect in the present domain are a variety of tests or indicators of awareness that may be applicable in different contexts, and may not always be wholly accurate, depending on different cognitive tasks confronting subjects or impairments from which they suffer. When Allport finds a multiplicity of criteria of awareness -- ones that appeal to voluntary action, to memory, and to confidence of report (Cheesman and Merikle 1985) -- he should not despair or infer the disunity of the phenomenon. True, the phenomenon may turn out to be disunified, but this does not follow from the necessity for multiple, non-coinciding criteria. To elaborate on Hempel's length example, it is evidently impossible to measure astronomical lengths and sub-atomic lengths by the same operations, but length is still a unitary concept. Nor will it always transpire that two usable criteria coincide. As we saw in the first section, verbal reportability may normally be a good test of awareness but it will obviously yield an inappropriate outcome when there are speech impairments or restricted access to the verbal subsystems.
Allport also errs in restricting criteria of consciousness to behavior; evidence may equally come from the neural direction. If a neural substrate of consciousness can be tentatively identified, it might be used to resolve problematic cases where behavioral criteria differ. We shall see examples of this shortly. Thus, it is wise to expect relevant evidence for consciousness to come from multiple sources, in accord with the general theoretical posture sketched in the passage from Hempel. Precisely this methodology is urged by Owen Flanagan (1992). Start by treating three different lines of analysis, he says, with equal respect, viz., phenomenology, psychology, and neuroscience.
Listen carefully to what individuals have to say about how things seem. Also, let the psychologists and cognitive scientists have their say. Listen carefully to their descriptions about how mental life works, and what jobs if any consciousness has in its overall economy. Finally, listen carefully to what the neuroscientists say about how conscious mental events of different sorts are realized, and examine the fit between their stories and the phenomenological and psychological stories. The object of the ... method is to see whether and to what extent the three stories can be rendered coherent, meshed, and brought into reflective equilibrium. (Flanagan 1992, p. 11)

Flanagan gives three examples of how his method of seeking coherence or meshing would work. I shall exposit one of his examples and then add one of my own.
In studies of dichotic listening, subjects are interviewed and give us a phenomenology. They tell us what they heard in the attended channel and insist that they heard nothing in the unattended channel. But we know that they are in fact influenced by, say, linguistic material that is presented in the unattended channel. One possible explanation of these results is to say that subjects are never conscious, or aware, of the sentences presented in the unattended channel, although the cognitive system is sensitive to this material. A second interpretation is that the material in the unattended channel is conscious for only an instant. The brevity of the conscious episode explains why it can't be remembered, though it was in fact consciously experienced.
Could there be a motivated choice between the two interpretations? Brain science, says Flanagan, may here prove useful. Francis Crick and Christof Koch (1990), for example, have suggested that (visual) subjective awareness is linked to oscillation patterns in the 40-70 Hz range in the relevant groups of neurons, that is, neurons involved in a certain decoding task synchronize their spikes in 40-70 Hz oscillations. The 40-70 Hz patterns can be sustained for very short periods of time, in which case there is rapid memory decay, or they can resonate for several seconds, in which case they become part of working memory, give rise to more vivid phenomenology, and are more memorable. Suppose this hypothesis (or something in a similar vein) turns out to be corroborated across sensory modalities and that short-term 40-70 Hz oscillations are observed to occur when the sentence in the unattended channel is presented. Combining present theories of short-term and working memory with such a finding would lend support to the second hypothesis that the sentence in the unattended channel makes a conscious, but unmemorable, appearance.
Another illustration (not discussed by Flanagan) of the attempt to "triangulate" on the phenomenon of conscious awareness through phenomenology, psychology, and neuroscience is given by Daniel Schacter (1989). Schacter first discusses consciousness in connection with the contrast between explicit and implicit memory, where explicit memory is roughly "memory with consciousness" while implicit memory refers to situations in which previous experiences facilitate performance on tests that do not involve any conscious memory for these experiences. Schacter then turns to studies of brain-damaged patients with specific perceptual and cognitive deficits. In a wide range of cases patients have access to knowledge of which they are unaware. Amnesic patients are a well known case, but other types of brain damage also yield conditions in which patients show implicit knowledge of stimuli that they cannot consciously perceive, identify, recognize, or understand. Prosopagnosic patients have difficulties recognizing familiar faces, and report no (conscious) familiarity with the faces of family, relatives, and friends. Despite the absence of conscious familiarity, however, data indicate that these patients do have implicit knowledge of facial familiarity. Blindsight patients are another well-known case, in which patients can gain access implicitly to information that does not inform conscious visual experience. Similar dissociations are observed in the syndrome of alexia without agraphia, in visual object agnosia, in Broca's and Wernicke's aphasia, and in studies of interhemispheric transfer in split-brain patients.
Schacter stresses two key points concerning these data. First, similar patterns of results have been observed across different patient groups, experimental tasks, types of information, and perceptual/cognitive processes. Second, the failures to gain access to consciousness are selective or domain-specific. Patients do not have difficulty gaining conscious access to information outside the domain of their specific impairment. Building on this evidence, Schacter suggests a framework that posits a distinct subsystem called the Conscious Awareness System (CAS), which interacts with modular mechanisms that process and represent various types of information. In cases of neuropsychological impairment, specific processing and memory modules are selectively disconnected from the conscious system, thereby resulting in a domain-specific deficit of conscious experience. CAS serves three functions in this framework. First, its activation is necessary for the subjective feeling of remembering, knowing, or perceiving. Second, CAS is a "global data base" that integrates the output of modular processes. Third, CAS sends outputs to an executive system that is involved in the regulation of attention and initiation of such voluntary activities as memory search, planning, and so forth. Finally, moving to neuroanatomical possibilities, Schacter draws on work by Dimond (1976) and Mesulam and colleagues (especially Mesulam 1983, 1985) to suggest that regions of parietal cortex have precisely the pattern of interconnections that would be necessary if they constituted part of a larger system with the properties and functions of CAS. Schacter's proposals, then, are an illustration of Flanagan's method of "triangulation" on the phenomenon of consciousness.
Daniel Dennett (1988) is a fourth contributor to the Marcel and Bisiach volume who voices grave doubts about consciousness, at least phenomenal consciousness. Dennett's specific target is the notion of qualia, and his view is bluntly eliminativist: "contrary to what seems obvious at first blush, there simply are no qualia at all" (Dennett 1988, p. 74; also see Dennett 1991, chap. 12, "Qualia Disqualified"). Dennett's arguments, like Allport's, center around the problem of verification. Both in this article and in later treatments (Dennett 1991; Dennett and Kinsbourne 1992), he presents cases where it is allegedly impossible to determine whether or when phenomenal awareness has occurred. His eliminativist conclusion about qualia is primarily based on such verificational indeterminacies. Since one cannot tell which qualia story is correct, there is no true story about qualia at all; in other words, phenomenal consciousness, as ordinarily understood, is an illusion.
In the 1988 paper, "Quining Qualia," Dennett uses the example of a coffee-taster who thinks that he no longer has the same taste qualia from Maxwell House coffee as he used to get when he joined the company six years earlier. The question is whether his taste qualia have really changed or whether his standards of judgment or perhaps his memory have changed. Dennett argues that there is a fundamental verificational indeterminacy among these (ostensibly) competing hypotheses. In later publications (Dennett 1991; Dennett and Kinsbourne 1992) he presents the example of a man who briefly glimpses a lady without glasses running by, and shortly afterwards remembers her as wearing glasses. There are two alternative stories. The "Orwellian" story says that there was a phenomenal experience of a lady with no glasses followed by contamination of this experience by a previous memory of a woman with glasses. (This story is "Orwellian" because history is rewritten.) The "Stalinesque" story says that no such phenomenal experience occurred. Dennett's claim is that there is no way to distinguish between these competing stories either "from the inside" (by the observer himself) or "from the outside," and he appears to conclude that there are no genuine facts concerning the putative phenomenal experience at all. A third such example concerns "metacontrast." A subject gets a short (30 millisecond) presentation of a disk which is immediately followed by a donut whose inner border is just where the outside of the disk was. If the setup is right, the subject reports having seen only the donut. However, there is evidence that information about the disk is represented in the brain. For example, subjects are better than chance at guessing whether there were one or two stimuli. An Orwellian story would say that the subject has a conscious experience of both the disk and the donut, but that the latter wipes out the conscious memory of the disk.
The Stalinesque story is that the disk is subjected to pre-conscious processing, but that consciousness of it is prevented by the donut stimulus that follows. So the Orwellian and Stalinesque stories disagree about whether there was a brief flicker of consciousness of the disk that the subject does not remember. Dennett argues that there could be no matter of fact as between these two stories, because they cannot be discriminated.
There are several problems with these lines of argumentation. First, a philosophical point. Even if it were true that nobody, including the subject, could subsequently determine which of the two stories is right, why does it follow that there is no matter of fact? It may be impossible now for anyone to get decisive evidence about the ornaments (if any) that Julius Caesar wore on his toga when he was slain. It hardly follows that there is no true fact of the matter, independent of our verification. Second, Dennett claims that the experience would "feel the same" on either account (Dennett 1991, p. 123). As Block (1993) points out, however, this assertion is just false, or at least question-begging. If there is such a thing as phenomenal experience, there will be a slight subjective difference between a brief flicker of consciousness of the disk and no brief flicker. Such a flicker may go too quickly, though, for the subject to be able to detect or report it. (Notice that "detecting" is a matter of judging or believing, which should not be equated with the flicker of visual consciousness itself.) Third, Dennett is over-hasty in claiming that there could be no scientific evidence favoring one story over the other (Flanagan 1992; Block 1993). Again, suppose we find evidence from normal contexts (where there are no perceptual or memory tricks) for the Crick-Koch hypothesis that consciousness is related to the 40-70 Hz neural oscillation, or for another Crick-Koch hypothesis that consciousness is fundamentally connected to activity in the larger pyramidal neurons in layer 5 of the neocortex. If we had converging evidence from normal cases to support some such hypothesis, we could use neural information to resolve the phenomenal facts of the case in metacontrast. Whereas Dennett expresses doubts about the resolving power of brain science at this level of grain, I would echo Flanagan's motto, "Never say never" (Flanagan 1992, pp. 14-15).
Consider the old joke about two behaviorists. Just after making love, the first says to the second: "It was great for you, but how was it for me?" Why is this funny? Contrary to behaviorism, there seems to be an informational asymmetry in the knowledge of mental states that favors first-person over third-person knowledge rather than the reverse. People seem to have a different, and better, form of access to their own mental states than to the states of others. Such "privileged access" need not be perfectly reliable or infallible, but it seems to be usually reliable. Indeed, why is verbal reportability a normally reliable indicator of conscious states if not for the fact that people can ordinarily report the existence and content of their (current) conscious states correctly?
The privileged access thesis, of course, has its dissenters. In recent psychological literature, Alison Gopnik (1993) claims that people make inferences to their own mental states using the same theory they use to infer mental states in others. While denying that she is a behaviorist, she agrees with behaviorists that there is no informational asymmetry between first- and third-person mental attributions.
Unfortunately, Gopnik and other psychologists offer few details about the inferential processes that might underpin self-attributions. For possible details of such an account, the best place to look is the philosophical doctrine of functionalism. Philosophical functionalism holds that ordinary people understand each common mental-state descriptor to pick out a distinctive "functional role," i.e., a set of causal-functional relations to stimulus inputs, behavioral outputs, and other mental states. If this is correct, then the task of categorizing one's own mental states must involve deciding which functional roles are instantiated by one's current states. How might this task be executed?
Consider the descriptor "thirsty." According to functionalists, the meaning of "thirsty" is (partly) given by the following properties: (1) it is a state that tends to be caused by not drinking for a while; (2) it is a state that tends to cause a desire to drink. These two conditions are part of the distinctive functional role for thirst. What kind of state is a "desire to drink"? This is understood (among other things) as a state which, when coupled with a belief that a container of potable liquid is in one's hand, will cause one to bring the container to one's lips and drink. Notice that the posited understandings are purely relational, ultimately relating the states in question to peripheral inputs and outputs. They make no reference or commitment to any phenomenal character of the state. So functionalism is attractive to a qualia-skeptic or a qualophobe.
If functionalism were correct, what inferential procedures could a person use who is trying to decide whether he/she is currently thirsty, or currently desires to drink? Since being thirsty is a state that is understood in terms of its relations to inputs, outputs, and other states, presumably one would classify a present target state by trying to determine what inputs preceded it, or what outputs and/or other inner states followed it. In the case of the thirst concept, one might try to recall whether one had not drunk anything recently, or one might wait to see whether the target state is followed by a desire to drink. Insofar as inputs and outputs are the pieces of evidence available, functionalism does not differ from (philosophical) behaviorism. Only the addition of relations to other internal states differentiates functionalism from behaviorism. How much this helps functionalism remains to be seen.
Is it really plausible that people execute tasks of mental self-ascription in the fashion required by functionalism? There are three sorts of difficulties. First consider the case of a morning headache. You wake up to a distinctive sensation state and immediately classify it as a headache. However, you don't recall anything that might have caused it. You don't remember a bout of drinking, a long session of rock music, or anything analogous. So you don't infer the "headache" classification from knowledge of earlier inputs that are typical causes of headaches. Nor have you yet performed any action, such as taking an aspirin, that might help you identify your state as a headache. Is there some other internal state you identify which prompts the "headache" classification? Perhaps you notice a desire to get rid of the state. But this would not distinguish a headache from other unwanted states, like aches in other areas or even thirst. Furthermore, you may well identify the headache before you identify this desire. Finally, appeal to the desire simply transfers the difficulty to that state: How do you classify that state as a desire to be rid of the initial state? At this point, let us just label the current problem as the problem of insufficient evidence. If one could use only relational information of the sort considered thus far, it is doubtful that the classification task could be executed accurately, either at all or as rapidly as it is in fact executed.
Perhaps we have been unfair to functionalism. Our discussion of the morning headache case seemed to assume that only actual events or states preceding or following the target state are usable as evidence for its classification. But functionalism would not restrict relevant evidence to actual events or states. Functionalism says that the identity of a state depends on its subjunctive properties, e.g., on the behavior or other internal states that it would produce. So perhaps one classifies mental states, including morning headaches, by their subjunctive properties. But this introduces a second problem for the functionalist model: ignorance of subjunctive properties. How can a person tell which subjunctive properties a current state has? Suppose you don't in fact believe that you are currently holding a container of potable liquid. How can you tell that if you did have such a belief, it would combine with the current state to cause you to bring the container to your lips and drink? Yet that is just the kind of subjunctive information you need to have, according to functionalism, to classify a current state as a desire to drink.
It may be replied that we often do have the requisite subjunctive information, however it is obtained. In the morning headache case, for example, you probably would know that the state in question is one that would cause you to take an aspirin; and this is something you would know even before you actually got out of bed and went to the medicine cabinet. The problem, however, is how you would know this. Wouldn't you know it by first classifying the state as a headache and then coming to the conclusion that you should take an aspirin? If this is right, then you don't use the subjunctive information in order to classify. Quite the reverse: you use classification information to infer subjunctive properties.
A third problem for the functionalist model is at least as severe as the preceding ones. This difficulty arises from two central features of functionalism: (1) the type-identity of a token mental state depends on the type-identity of the states related to it (its "relata"), and (2) the type-identity of many of the relata (viz., other internal states) depends in turn on their relata. To identify a state as an instance of thirst, for example, one might need to identify one of its effects as a desire to drink. Identifying a particular effect as a desire to drink, however, requires one to identify its relata, many of which would also be internal states whose identities are a matter of their relata; and so on. Complexity ramifies very quickly. There is a clear threat of combinatorial explosion: too many other internal states need to be type-identified in order to identify the initial state.
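The threat of combinatorial explosion can be made vivid with a toy calculation (an illustration of mine, not an argument from the text): suppose, for simplicity, that identifying any one state's functional role requires first type-identifying some fixed number k of its internal relata, each of which makes the same demand in turn. The number of identification subtasks then grows geometrically with the depth of the relational chain:

```python
# Toy model of relational type-identification under functionalism.
# Assumption (hypothetical, for illustration only): each internal
# state has k internal relata, and identifying a state requires
# identifying all of its relata down to a given chain depth.

def identification_subtasks(k: int, depth: int) -> int:
    """Total type-identifications needed: 1 + k + k^2 + ... + k^depth."""
    return sum(k ** d for d in range(depth + 1))

for depth in (1, 2, 3, 4):
    # With k = 3 relata per state: prints 4, 13, 40, 121 subtasks.
    print(depth, identification_subtasks(3, depth))
```

Even with only three relata per state, following the chains four levels deep already requires over a hundred type-identifications, which is the geometric blowup the argument points to.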
In light of these difficulties, it appears that our classification routines cannot rely solely on subjunctive information, or on causal-relational information of the kind functionalism suggests. Rather, our systems must rely on some properties of the target state that are categorical (rather than dispositional) and intrinsic to the state, i.e., properties the state has in itself rather than in virtue of its relations to other states. What might these categorical and intrinsic properties be?
There seem to be two candidates to fill this role: (1) neural properties, and (2) qualitative or phenomenal properties. Every sensation state has some neural properties, and these might be categorical and intrinsic. (Notice that a neural property could be intrinsic to a state as a whole even though it involves relations among constituent neuronal structures, just as temperature is an intrinsic property of an entire volume of gas even though it involves relations among component molecules.) But neural properties are not the sort of properties to which the classification system has access. Certainly the untrained person has no "personal" access to neural properties, and knows nothing whatever about them. Could there be "subpersonal" access to these properties? It goes without saying that neural events are involved in the relevant information processing; all information processing in the brain is, at the lowest level, neural processing. The question, however, is whether the contents (meanings) encoded by these neural events are contents about neural properties, from which subjunctive properties can be inferred. This seems quite implausible.
Obviously a great deal of information processing does occur at subpersonal levels within the organism. When the processing is purely subpersonal, though, it seems that no verbal labels are generated that are recognizably "mental". All sorts of homeostatic activities occur in which information is transmitted about levels of certain fluids or chemicals, e.g., glucose. But we have no folk psychological labels for these events or activities. Similarly, the pupillary response changes continuously in response not only to changes in illumination, but also to the hedonic value of environmental stimuli (Weiskrantz 1988). But there are no mentalistic labels for events concerning pupillary states, apparently because these states are not registered in awareness. Our spontaneous mental naming system does not seem to have access to purely subpersonal information. Only when physiological or neurological events give rise to conscious sensations, such as thirst, felt heat, and the like, or to other conscious mental events, does a primitive verbal label get introduced or applied.
We seem to be left, then, with qualitative or phenomenal properties, i.e., qualia, as the intrinsic properties that permit mentalistic classification. As we argued earlier, these are indeed categorical, intrinsic properties that can be detected or monitored "directly" (though not necessarily infallibly). Thus, it looks as if the most promising psychological model of how one's own mental states are classified is by detecting phenomenal properties of these states, e.g., the "itchiness" of an itch or the "headachy" quality of a headache. (More fully, micro-components of such phenomenal properties may also be utilized. See Goldman 1993.) If this is right, phenomenal awareness has an essential role to play in explaining the execution of a very common cognitive task.
This discussion has focused on the phenomenon of verbal self-ascription of mental states. But the argument might equally be based on a purely internal and non-verbal activity, what Lawrence Weiskrantz (1988) calls a "monitoring" response. Weiskrantz suggests that blindsight patients (at least many of them) are disconnected from a monitoring system. If we had to discriminate between highly distinctive vertical and horizontal gratings, as blindsight patients are asked to do, we could press one of two keys appropriately to indicate "horizontal" or "vertical." But we would also typically have no difficulty in pressing a third key that indicated that we were "seeing" and not "guessing." This is where we would differ, says Weiskrantz, from the blindsight patient, whose third-key response would be "guessing." The best model of this difference is that there is an extra state that we are monitoring -- a phenomenal or qualitative state -- which the blindsight patients do not have at all (or have only in a limited or diminished form). These patients do have implicit informational states, but these are of a different sort, and cannot be monitored in quite the same way. That is why blindsight patients (at least initially) regard the discrimination questions they are asked as a pointless game.
One question that arises at this juncture is whether the foregoing account could be extended from sensations (including perceptual states) to the self-ascription of so-called propositional attitudes: thinking, wanting, doubting, intending, and so on. Philosophers of mind often maintain that only sensations have phenomenal properties. If this were so, the account sketched above would not explain self-application of non-sensational mental descriptors. The prospects for the indicated extension, however, are reasonably good. First, as a terminological matter, we should be prepared to use the terms "phenomenal" and "qualitative" for states with any sort of subjective feel, not just sensory qualities. Next, consider the feeling of familiarity associated with (consciously) recognized faces. In addition to the purely visual quality of seeing a familiar face, there is an additional quality of "seeming familiar." The latter quality is what prosopagnosics presumably lack, though they fully enjoy the visual dimension of seeing faces. Thus, above and beyond the purely sensory (e.g., visual) feeling, there seems to be such a thing as non-sensory feeling. This might hold for feelings of remembering in general, and it is not outlandish to suggest that there are distinctive ways it feels to believe something rather than desire it, to hope for something rather than dread it, and so forth (Goldman 1993; Flanagan 1992, pp. 67-68).
It is sometimes argued that if qualia exist, they have no functional or causal role to play in cognition (Jackson 1982). That is not the position advocated here. As the foregoing arguments indicate, phenomenal states do have causal consequences: they often produce verbal self-attributions, and, purely internally, they trigger monitoring activity. (According to Weiskrantz, this activity is also responsible for integrating and linking one's thoughts.) Our earlier arguments for the conclusion that qualia are not merely equivalent to functional states should not be confused with the thesis that qualia have no functional properties at all. Qualia are not equivalent to functional states because (1) the chosen functional states could ("logically") exist without qualia (Block 1980), and (2) the same qualia could play different roles in other people, or in people differently situated than we are. But in us they do in fact play specific functional roles of the kinds sketched above (among others, no doubt). Similarly, our earlier criticisms of the self-monitoring definition of consciousness do not conflict with the present endorsement of Weiskrantz's idea that conscious states are peculiarly available for monitoring. Our earlier criticisms were only aimed at the thesis that the concept of consciousness is exhausted by, or to be identified with, higher-level monitoring. Rejection of this thesis does not conflict with the claim that, in us, consciousness of a state makes it readily and distinctively available for monitoring. Since phenomenal properties do play a significant functional-causal role in our psychological systems, they deserve to be recognized by cognitive science, not thrown in the trash-bin of theoretically worthless constructs.