On Sun, 8 Apr 2001, Ivady Rozalia Eszter wrote:
> I am quite sure that our mind is really wrongly designed to
> understand consciousness, for the sole reason that we have not
> understood consciousness as yet.
Then is it true for everything we have not understood as yet that our
mind is wrongly designed to understand it?
And does "wrongly designed to understand it" mean "will never be able to
understand it" (as McGinn suggests) or just "will have difficulty, or
will take a long time, to understand it"?
And could it be something about the problem that makes it hard or
impossible to understand, rather than something about our minds?
[A slight ambiguity: We should have been talking about the design of
our BRAINs here, rather than the design of our MINDs. For the "hard
problem" we are discussing here (explaining/understanding
consciousness) is ABOUT the design of our minds; so it is tautological
to say that, if it is hard or impossible to solve this problem, then it
will be because of the design of our mind that makes it hard or
impossible to solve. For that could be either because of the design of
our minds as the content of the problem we are trying to understand, or
the design of our minds as the vehicle for understanding the problem!
So, although it does not eliminate this ambiguity completely, let us say
that McGinn is talking about whether there is something about the design
of our BRAINs that makes it difficult or impossible to explain/understand
the "hard problem" of consciousness.]
> I would not say, that not having
> understood it as yet means not being able to understand it at all.
> In my opinion we are not designed to understand non-Euclidean
> geometry either (I certainly not am :-) ), but not only are there
> people who understand it, there are also some that must have
> discovered (or invented ?!!? depending on your point of view) it
True, but you seem to be suggesting only that solving the problem of
consciousness will be hard for our brains, but not impossible. McGinn
thinks it is impossible, and not because the problem is insoluble, but
because our brains are not designed to solve it.
> 0.1. Materialism
> > MCGINN:
> >"We could know all about the bat's brain as a material system,
> >but that would not give us knowledge of what it is like to be a bat.
> >Therefore, complete knowledge of the brain does not add up to
> >knowledge of the mind, and the thesis of materialism is false. "
This is Thomas Nagel's famous point: Even if we knew everything there is
to know about the bat's brain and behavior, we would no more know what
it FEELS-LIKE to be a bat than a blind man would know what it feels-like
to see from a total knowledge of the brain's visual system.
But is this really the "hard problem"? Do we really have to know what it
feels-like to be a bat in order to have a full causal/functional
explanation of the bat? A cardiologist does not have to know what it
feels-like to be a heart in order to have a full causal/functional
explanation of the heart. And although we will never know what it feels
like to have a sonar system like the bat's, we do not think there is a
problem in principle in explaining how a bat's brain works.
The real problem is not our inability to know what it feels-like to be a
bat, but our inability to explain how or why it feels like anything at
all! And that problem is just as great when we are trying to explain
how/why our own brains feel, even though we know exactly what it
feels-like to be one of us.
I mention this important distinction at the beginning, because I think
it is the basis of an error on McGinn's part. He is arguing that our
brains are not designed to understand how/why any feeling creature feels
(that is the problem of consciousness). But the example he gives is
one of a feeling that our own brains happen to lack (humans do not have
the bat's sonar sense). If that is the kind of "defect" we have, then
what is it that we lack a feeling for? Not being able to know WHAT
another creature feels is just a sensory matter; it is not a hard
problem. If you have a good sense of smell, you can smell certain faint
odors. If you don't, you can't. But there is no MYSTERY there for you.
It's the same with senses you have or lack entirely. In practice, you
don't know what anything you have not felt (or cannot feel) feels like;
but there is no problem in PRINCIPLE there: If you had the feeling, you'd
know what it felt like.
But the problem of consciousness is not a problem of knowing or not
knowing WHAT any given creature feels or doesn't feel, but knowing HOW,
and WHY. If our brain defect, by analogy with our inability to know what
it feels-like to be a bat, is a missing feeling or sense, then what
feeling or sense are we missing if we cannot know HOW/WHY anything feels
anything? It does not make sense to me that there is a "how/why" sense,
analogous to the bat's sonar, that our brains are missing...
But let me also reveal why I am skeptical about McGinn's brain-deficit
theory: Because I don't think the problem is with our brains. I think
there is something about the nature of the mind/body problem, the
problem of explaining how/why we feel, that is insoluble -- not
insoluble by our brains, because they have the wrong design, but
insoluble in principle, for any brain.
But we will get to that...
> Let us accept that bats are conscious
I think we better accept that bats feel. Of course we can't know for
sure, but by the same token we cannot know for sure about one another
either (this is called the "other-minds" problem, the other side of the
mind/body problem). And as far as that is concerned, bats are enough
like us in other respects that it is safe to assume that they feel,
that they are
not just zombie automata with nobody home inside...
> Yet I think that the conclusion is quite early to make as yet...
You have taken the position that it is too early to say that the "hard"
problem is insoluble. Be careful, though, to make sure you are not
talking about other, "easy" problems instead...
> this conclusion is like saying that if we
> knew everything about snarks (or any hypothetical creature), even
> then we could not say for sure if snarks are mammals or reptiles.
You see, you have just substituted an easy problem for the hard one:
It is not explaining all the correlates that predict feelings that is
the problem. It is not even the (other-minds) uncertainty that we will
always have about whether feelings really correlate with these
behavioral and brain functions in other creatures. We can live with all
that. So it's nothing like being able to say whether something is a
mammal or a reptile. Those are the easy problems.
The problem is saying how/why a feeling creature feels. And there, the
full brain/behavioral information, even when it is correct and
complete, and correctly correlates with feelings every time, does not
do the explanatory job, indeed does not even begin to do the job.
> I propose the following idea: we cannot see small particles like
> electrons with our eyes, the same way we cannot feel the
> consciousness of other people. Yet we have an idea of electrons
> and we say we understand the rules that govern them. We can build
> machines that make electrons detectable with those senses that
> evolution has equipped us with. Let us suppose that we understand...
Good try, but this analogy does not work, and it again substitutes an
easy problem for the hard one. The reasons we understand electrons even
though we cannot observe them (actually, as you say, with the aid of
instruments we can observe them, but never mind, suppose they were
completely unobservable) is that they are necessary as parts of our
causal/functional explanation of all the rest. The nature and existence
of electrons is DICTATED by our functional/causal explanation of all
the rest.
But neither the nature nor the existence of feelings is dictated by
our functional/causal explanation of the brain or its behavioral
capacities. And even if we developed a brain-scanner/mind-reader that
could detect feelings with 100% accuracy and specificity, it wouldn't
be like detecting electrons, because the hard problem with feelings is
how and why we feel at all, not merely detecting when, where and what
we feel.
And our functional/causal explanation of the brain and its behavioral
capacities, far from DICTATING that there must be feelings, leaves the
(true) fact that there are feelings a complete mystery (McGinn's
"mysterious flame"). Indeed, it would all be much simpler if we WERE
just Zombie-automata. For then the functional/causal explanations would
be complete, and there would be no hard problem left to solve!
> What if we built a machine that, connected to our brain,
> somehow makes the consciousness of others observable? Would we
> understand consciousness then?
The answer is no. This is the brain-scanner/mind-reader above. It would
be able to predict feelings perfectly, from their correlates (like
weather-prediction) but it would not explain HOW all those correlates
amount to feelings, nor WHY there are feelings. To put it another way,
there would be no answer to the question: "How and why are we NOT
Zombies?"
> Maybe it is too early to talk about these questions as yet;
> maybe we do not know everything nor about the brain, nor about the
> mind. Maybe we are trying to solve the problems of chemistry with
> knowledge of alchemy. It might very well seem mysterious.
No, the problem is harder than that, and the analogy with alchemy and
the early stage of our knowledge does not work. There is a problem IN
PRINCIPLE here: Everything else in the universe (electrons,
gravitation, life, airplanes, hearts), both what we have already
explained, and what we have not yet succeeded in explaining but
probably will, is amenable in principle to a causal/functional
explanation, but feeling is not: For if feelings can be causes
("telekinesis") then they are at odds with all the known conservation
laws of the universe; if they are merely effects ("epiphenomena") then
they are superfluous in any functional/causal explanation.
This is a real problem, and appealing to an analogy with some future
brain chemistry that will dispel it like alchemy is unfortunately
inadequate. There is a problem in principle, related to causality, and
not just a temporary knowledge gap.
> We perceive three dimensions of space and one dimension of time
> directly, but we can conceive a lot more. Does that not mean that
> we have chance to conceive a theory of mind as well?
If mind (feeling) is amenable to the same kind of causal/functional
explanation as space/time. But it looks as if it is not.
> The definition [of consciousness] is having sensations, in
> other words experiencing something.
In a word: FEELING.
> My problem is the same as that of solipsism: how on earth do I
> know if others have these sensations too?
You can't know with the certainty of mathematics. But no doubt the day
will come when we can detect and explain the correlates of feelings so
well that we will be able to predict with 100% accuracy.
Unfortunately, that does not solve the hard problem, of the how/why of
feeling, just the easy questions of when/where/what.
The problem is not one of telepathy. It is one of causality, and causal
explanation.
> To be precise it is
> telepathy we are talking about, being able to find out what
> someone else thinks of. Why did not McGinn include this in his book?
Because parapsychology has not yet succeeded in establishing that there
really are such things as telepathy and telekinesis. And the
overwhelming probability is that most of the reported effects are error
and/or fraud (and the rest natural phenomena that are not really
telepathic at all). In any case, hard, principled problems can hardly
be solved by soft (and probably incorrect) "evidence." Besides,
parapsychology has been so busy (unsuccessfully) trying to demonstrate
that paranormal effects are real, it hasn't ever gotten to the next
stage of explaining them. If it ever does, it may well find itself at
causal odds with all the rest of science. Hardly a promising way to
solve other people's hard problems...
> How can we
> be sure that bats are not smartly designed automatisms put
> together in a way that produces their behaviour?
See above. We can't be sure, but we can be sure enough, from the
functional correlates (brain/behavior). So that's how we know THAT they
have feelings. The problem is HOW and WHY.
> McGinn's argument is that people in coma do not
> have experiences, therefore they are not conscious. This is only
> partly true. Some people after having been woken up from coma
> reflect on their earlier sensations, and we have reason to
> believe, that they are not making having had these experiences up.
But this is all irrelevant to the hard problem. If the person in a coma
is a Zombie while in a coma, no problem. If they feel, then it's the
same problem as with normal awake people (how? why?). We have gained
nothing by considering coma (except, of course, more correlates).
> It seems to me that in the end it all boils down to
> the conclusion, that the only way we can get hold of consciousness
> is via behaviour. Therefore a thousand things could be conscious
> starting with my stockings or the keyboard...
This is the other-minds problem, the "epistemic" side of the mind/body
problem. It is merely a symptom. The problem is not knowing whether or
not, or even what, something feels. It's knowing how and why.
The mind/body problem is really the feeling/function problem.
Mind-reading is not the solution; nor is a complete causal explanation
of brain/behavioral function: That only explains Zombie functions.
How/why are all those functions FELT functions?
> How does he know computers
> are not conscious and that they do not have sensations?
This is the other-minds problem again. (Searle's Chinese Room Argument
gives part of the answer.)
> > MCGINN:
> >"blindsight"... in which a
> >person (or animal) suffers damage to certain regions of the visual
> >cortex, as a result of which sensations of sight disappear, but...
> >if an experimenter requests that the patient try
> >to make a guess about various objects placed before his eyes, then
> >it turns out that the patient performs above the level predicted
> >by chance...
> >Thus, there is a sense in which he is sighted and a
> >sense in which he is blind.
> Could someone tell me why it is that I start to feel more and
> more confused? So consciousness again is not having sensations and
> thoughts, but reacting to the environment in a certain way? Well,
> even a computer equipped with a video camera can handle that
No, the point of blindsight is that such a person seems to be
part-Zombie in that respect -- able to perform "opto-motor" functions
that normally require seeing, but without seeing. But the problem is
the same: How/why are WE NOT like that? Why is there something it
FEELS-LIKE to see? It can't be just the need to have the right
opto-motor interactions with the world, for here is a case where some
of them can happen without the help of any feeling at all. (But
blindsight is still controversial, and may involve low-level feelings
of other kinds, including feedback from true automatisms, such as the
unconscious mechanisms that control eye-movement.)
> Actually it did not surprise me that we are not conscious of
> everything that affects our behaviour. Psychology as it is would
> be a fake science then, and it would not interest anybody. We have
> subliminal perceptions of pain, that come from the articulations
> and make us change the position of those articulations.
Not quite. Yes, if we were conscious of how our brain functions we
could just give all causal explanations by introspection and cognitive
science would be unnecessary, but we can't.
But I would be careful about "subliminal pain". There can be subliminal
FUNCTION. My brain may be able to detect tissue damage without my
feeling anything (it does so before I feel the pain from touching
something hot). But that isn't "subliminal pain." That's simply
ordinary unconscious function. "Subliminal pain" would be an "unfelt
feeling," which is a contradiction in terms.
And as usual, the variant of the how/why-are-we-NOT-zombies question
here is simply: how/why isn't all tissue-damage-detection, etc.,
"subliminal," like blindsight, rather than felt, as it is when we feel
pain?
> if actions (or to put it more scientifically:
> behaviourism) are accepted as signs of consciousness in the case
> of bats, then how can it be that we conclude that not all our
> actions are conscious? Or how can we claim after having concluded
> that not all actions are conscious, that bats (judging from their
> actions!!) are conscious?
No problem. Behavioral correlates are merely predictors. They are not
guarantors. But (as we know from Descartes), when I FEEL a pain, I
cannot be wrong about what it feels-like (although there may be nothing
wrong with the part that is hurting). And the same is true if I say I
DON'T feel something. (So I would trust the blindsight patient if he
tells me he can't see!) Nor does it follow, from the fact that SOME of
our normal and abnormal functions are zombie-like, that therefore
animals (e.g. bats) are all zombies. None of these inferences are
valid.
> > MCGINN:
> >We just don't know how much difference consciousness makes to
> >the behaviour of a system that has it. We don't know how necessary
> >consciousness is to performing certain specific functions.
> Hurrah! Are we really trying to chase a black cat in a dark
> room? By the way: is there a cat in there?
I can't follow your point here. McGinn says quite frankly that all this
evidence for function without feeling makes it even harder to imagine
what the function (how/why) of feeling might actually be. How did the
black cat get into it?
> > MCGINN:
> >I see no good reason to deny that the universe might have
> >existed in some quite different state prior to the Big Bang.
> I do. There is no such thing as "prior to the Big Bang"
And I think we should steer clear of the problems of cosmology and
cosmogony. We have enough problems nearer to home (the brain)...
> relativity theory and quantum theory are mutually exclusive. This
> means that only one of them can be true at a time...
> The problem of McGinn trying to build on these physical grounds
> is that that they themselves are quite shaky...
> The same way we do not know
> enough about the brain, we do not know enough about physics
Yes, physics has some puzzles and unsolved problems. But that doesn't
help cognitive science; and besides, the problems there are not
problems of principle, just the incompleteness and inconsistency of
current physical theory.
> > MCGINN
> > It seems to me that the Big Bang must have had a cause, and
> >that this cause operated in a state of reality that preceded the
> >creation of matter and space.
> Causality operates in a material world, or so do we say. It is
> McGinn himself who says that our concepts of causality might be
> wrong about consciousness, a non-material thing. Why do we want to
> talk about causality about the pre-spatial, pre-material, pre-
> everything world then?
I agree. Leave those irrelevant problems of other fields alone and
focus on the problems at home: what is the causal status of feeling?
> > MCGINN
> >When we reflect on the experience itself, we can see that it
> >lacks spatial properties altogether. Your visual experience of
> >red or my emotion of fear has no particular shape or size. Nor
> >does it stand in spatial relations to other experiences. Your
> >experience of red is not, say, next to your experience of a
> >whistling sound, or four centimeters away from it. There is
> >no clear sense in the question of how great a distance separates
> >a pair of experiences. [...]
> >Only concrete things have spatial properties, not abstract
> >things like numbers or mental things like experiences of red.
> >Numbers and consciousness could not have spatial properties;
> >they are not the kind of thing to be spatially qualified. This is
> >what lies behind the intuition that the mind is not a "thing,"
> >not an extended substance, a space-occupier. There is no
> >question of trying to find room for the mind in the parking lot.
I'm afraid I have no idea what McGinn means by saying that feelings
are "abstract things" like numbers. Surely feelings are the concretest
things of all!
Nor is the mind/body problem the problem of the spatiality or the
spatial location of feelings (unless "spatiality" includes all the
causal/functional properties of space/time). I suspect that all this
"space" stuff of McGinn's is a red herring. The problem is not the
spatiality of feelings but their causality/functionality.
> My other disagreement is about consciousness not being spatial
> and exerting effects on material entities. What does non-spatial mean
> exactly? I see that they do not take up space, but the sole
> reason we see materials taking up space is forces. Not being able
> to put two things in the same place is not really about them being
> material literally, to put it more bluntly it is not because
> molecules collide in space and molecules are solid, therefore they
> cannot take the same place at the same time. The essence here is
> forces: electromagnetic forces keep matter the way it is, they are
> what keep particles at a considerable distance from each other.
> What are forces?
I agree with you here. McGinn's spatial intuitions here seem naive and
irrelevant. Nor do we have to go into the basic physics of forces. The
only relevant question is, what kind of a "force" (if any) is feeling?
> Does saying consciousness is non-spatial mean that it is not
> affected by forces? Because if so, it cannot interact with the
> material world however much it would want to, and that is a
> paradox, since it is an experience of the material world, or is it?
The problem of whether the causal status is telekinetic (an autonomous
"force") or epiphenomenal (a mysteriously dangling "effect" with no
autonomous causal power) or something else IS the hard problem, the
mind/body problem, the problem of consciousness, the feeling/function
problem.
McGinn evokes some aspects of the problem, but then he begs the
question, going off into mysteriously missing, unspecified
understanding-capacities that our brains lack but that, if we had
them, would allow us to solve the feeling/function problem. It seems
to me that
this is just swapping one mystery -- how/why do we feel -- for another
mystery of at least the same magnitude: how/why are brains unable to
explain how/why we feel? The analogies with not knowing what it
feels-like to be a bat, and the allusions to the problems of other
areas of science are irrelevant and hence unhelpful in dispelling any
of the mystery.
> My main point about the book is then: if we still do not give up
> and are to look for a yet undiscovered essence of consciousness,
> we had better try to search in the field of forces, rather than
> trying to figure out non-spatial or temporal dimensions, which we
> might not even be able to grasp.
Lots of luck! There seem to be two possibilities, as usual. Either
feeling is an extra, independent force in the universe (telekinesis),
or it just "piggy-backs" on the other forces, with no independent
causal power of its own (epiphenomenon). Either way leaves profound
how/why questions unanswered. The "mysterious flame" flickers on.
Stevan Harnad firstname.lastname@example.org
Professor of Cognitive Science email@example.com
Department of Electronics and phone: +44 23-80 592-582
Computer Science fax: +44 23-80 592-865
University of Southampton http://www.cogsci.soton.ac.uk/~harnad/
Highfield, Southampton http://www.princeton.edu/~harnad/
SO17 1BJ UNITED KINGDOM
This archive was generated by hypermail 2b30 : Wed Jun 13 2001 - 18:38:03 BST