Re: Searle's Chinese Room Argument

From: Harnad, Stevan (harnad@cogsci.soton.ac.uk)
Date: Mon Jan 22 1996 - 12:45:38 GMT


> From: "Parker, Chris" <Chris.Parker@soton.ac.uk>
> Date: Mon, 11 Dec 1995 08:17:33 +0000
>
> The Gap: Searle says that although we are reasonably confident in using
> grandmother (intentionalistic) high levels of psychology to explain
> behaviour, we just suppose that there are underlying hard science
> levels, because we haven't the faintest idea of how the lower
> (neurophysiological) levels work. The bit between the high and low
> levels is the gap. The main candidate to fill the gap is the mind/brain
> =program/hardware (strong AI) explanation. The consequence of this
> analogy for some, is that "intelligence is purely a matter of physical
> symbol manipulation".

Let me say it even more briefly. The "gap" is the mind/body problem,
which is the problem we ALL have about the relation between the mental
and the physical. "Strong AI" is just computationalism, according to
which mental states are just the physical implementations of certain
symbol systems (programmes).

> The Chinese Room Revisited: Searle uses an example of the Turing test
> to dispute the idea that computers can think in the same way that
> humans can. A person can pretend to be a computer by receiving a
> question in Chinese and giving the correct answer in Chinese simply by
> manipulating symbols using a manual which says what response to make
> for what input (even though they don't understand Chinese). If the
> person now receives the question in English they can understand in a
> way that the computer cannot. The analogy therefore breaks down because
> we have semantics and computers only have syntax. Thus strong AI fails
> to distinguish between syntax and semantics.

Not quite. The idea is that a computer passes the Turing Test
(corresponds with you indistinguishably from a real penpal, for a
lifetime if need be). Computationalism says ANY system that runs
(implements, physically embodies) that TT-passing computer programme (a
huge symbol system, with rules for manipulating input symbols and
producing output symbols based solely on the shapes of the symbols) will
understand, just as a penpal does. So Searle asks you to imagine the
programme as a Chinese penpal (indistinguishable from a real penpal, to
someone who can correspond with it in Chinese), and he proposes that
he himself will implement that programme, just as the computer does.

Clearly he would not understand Chinese if he did so; so if he doesn't,
then neither does the computer (or if it does, it's not just because
it's running the right programme). So computationalism is wrong (unless
you are ready to believe that Searle would have two minds that didn't
know about one another, merely because he had memorised a bunch of
meaningless symbols and manipulation rules).
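
Just to make "manipulation by shape alone" concrete, here is a toy sketch
in Python. The rule table and the symbol strings are invented for
illustration (they stand in for a programme that would in reality be
astronomically larger); the point is only that no step appeals to what any
symbol means:

# Toy illustration only (not Searle's own example): input symbol strings
# are matched purely by their shape and mapped to output symbol strings.
# No step consults what any symbol means.
RULE_BOOK = {
    "X7-Q2": "R9-P4",   # hypothetical shape-to-shape rules
    "M3-K8": "T1-W6",
}

def manipulate(input_symbols):
    # Return the output shape the rule book dictates for this exact
    # input shape; unknown inputs get a default shape.
    return RULE_BOOK.get(input_symbols, "Z0-Z0")

print(manipulate("X7-Q2"))   # prints "R9-P4", understanding nothing

The person in the room, like the computer, just executes the lookup;
whatever "R9-P4" might mean to a reader outside, it means nothing to the
manipulator.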

> [I am unhappy with this, it seems to me that computers could have as
> much semantic information as we have. Anything we know about Chinese,
> they could know too. The difference between us and machines just
> appears to be that machines haven't been programmed to experience
> understanding, and if they have, then they haven't been programmed (or
> rather built) to say that they understand. Searle seems to have a
> homunculus that understands, but what is the understanding experience?
> Surely it can only be an experience of the fit of input with memory.]

What do you mean by "understanding"? No point trying to define it,
because nobody knows what it is yet. But you can point to it: Whatever
it is, you DO understand the symbols I am writing to you now in English,
es most pedig, mikor at terek magyarra, nem erted -- whereas you did
not understand the latter ones (unless you know Hungarian). THAT's the
sole difference that Searle is drawing on when he reports, truly, that
he does NOT understand Chinese, despite how his symbol manipulations
might be interpretable by someone (a real Chinese penpal) who does.

The symbols have NO semantics for Searle; hence they have none for the
computer running the same programme either. They are INTERPRETABLE
by the real penpal, but then the "semantics" is not in the symbols but in
the mind of the real penpal, just as the semantics of a book are not in
the book, but in the mind of the reader. Whatever cognition (thought)
might be, it can't be THAT: It can't be about what it's about merely
because it is interpretable to an outside interpreter. The "aboutness"
of thoughts must be intrinsic, autonomous, independent of an outside
interpreter. When I think something, it means something to me; and not
because it is so interpretable by you!

You go astray when you talk about "semantic information," which is
equivocal as between intrinsic semantics, like the meaning intrinsic to
my thoughts, and extrinsic semantics, like the meaning in a book.

As I suggested, I think the intrinsic "aboutness" of thoughts derives
completely from the fact that there is someone home in a thinking
system, and its thoughts are about something TO him; they mean something
to the thinker; they are not merely interpretable as meaning something
by an external interpreter.

> The Brain and its Mind. Searle reckons that brain function is all about
> "variable" non-digital rates of neuronal firings, as opposed to
> all-or-nothing firings, and that understanding brain function is all
> about understanding systems of neurons in networks, circuits, slabs,
> columns, cylinders etc. All mental phenomena are caused by processes in
> the brain and all causal processes are internal to the brain even
> though "mental events mediate between external stimuli and motor
> responses there is no essential connection". Mental phenomena, like
> pain, are features of the brain.

Searle is just speaking platitudes here. We all know that mental states
have SOMETHING to do with the brain -- perhaps even EVERYTHING to do
with the brain. The question is, what is the nature of this
relationship? Searle has shown it is not just that the brain is
implementing the right computation. But then there are still many
possibilities: There may still be non-natural non-brain systems that
also have minds, not because they are running the right computations,
but because they are the right kind of physical system (the brain is one
too). But what kind of physical system is that?

I had suggested that any physical system that could pass the Total
Turing Test (T3) -- having not just all of our penpal capacities, but
all of our robotic capacities as well, able to interact with all the
things in the world that its internal symbols are interpretable as being
about in a way that is indistinguishable from the way we do -- would have
GROUNDED symbols: i.e., they would not be open to Searle's objection
that their meaning was only in the mind of the interpreter, for the
robot's interaction with what the symbols were interpretable as being
about would be autonomous, independent of any interpreter. The big
question, though, is whether grounded semantics guarantees intrinsic
semantics: Is there anyone home in a T3 robot, thinking thoughts,
understanding things, and meaning things?

I have no idea, and no one has or can have (because of the other-minds
problem); I just think it would probably be easier for a camel to pass
through the eye of a needle than to build a robot that was
indistinguishable from you or me, yet with nobody home in there. For if
it were possible to have all those capacities without a mind, why
shouldn't some of us be Zombies without minds too? The evolutionary
forces that shaped us are no better at mind-reading than you or I are:
Like us, evolution is guided only by what the system can DO, not by
what or whether it FEELS anything while doing so...

See:
ftp://cogsci.soton.ac.uk/pub/harnad/Harnad/harnad89.searle

ftp://cogsci.soton.ac.uk/pub/harnad/Harnad/harnad90.sgproblem

ftp://cogsci.soton.ac.uk/pub/harnad/Harnad/harnad91.otherminds

> Macro- and Micro-properties. This comes down to: brains experience pain
> but neurons don't, although the latter cause or realise pain in the
> former. This is like: water freezes but molecules don't, although the
> latter (micro, low-level) features cause or realise higher-level
> features in the former.

Searle tends to finesse the mind/body problem (which, let me remind you,
has to do with the puzzle of how to see what look like TWO KINDS of
things -- physical and mental -- as ONE KIND of thing, or at least how
to sort out their relation without falling into a dualism that tampers
with the laws of physics) by masking the two-ness inherent in the
physical and the mental with a long double-barreled verb:
"The mind is caused-by-and-realized-in the brain." Trouble is, "caused
by" involves two things (cause and effect) whereas "realized in" involves
only one. So that doesn't solve the numbers problem. (Elsewhere Searle
says he has a simple cure for Cartesian dualism: "Don't count!" -- i.e.,
don't count the "kinds" of things there are; just describe and explain
them all in a unitary way. Easy to say; harder to do, without a sense of
begging the question...)

> The Possibility of Mental Phenomena. Consciousness now is like "life"
> used to be when we didn't understand the biology of life. We simply
> need to wait until we understand the characteristics of brain processes
> and the micro-macro-analogy. This is also true of intentionality,
> subjectivity, and intentional causation. Mental states are physical
> states of the brain.

An old, bad argument: Yes, people used to think life was special, and
couldn't be explained physically. They turned out to be wrong. But if
you had asked them at any time: "What IS it about life that could not be
physical?" they could not have replied. If they said a "vital principle"
they'd just be using an empty phrase that merely restates the question.

This is not true of the mind/body problem, because one can most
definitely say what would be missing from any physical system that did
not have a mind: There would be nobody home in there, experiencing
experiences, thinking thoughts, feeling feelings.

In fact, I bet the reason people thought life was special was because
they had the mind/body problem in mind all along: They assumed that
there was something home in every living creature. In that case, the
biology of life STILL hasn't solved the problem...

> Consequences for the Philosophy of Mind. Searle presents the principle
> of neurophysiological sufficiency - "what goes on in the head must be
> causally sufficient for any mental state whatsoever." He disputes
> Wittgenstein's "an inner process stands in need of an outward
> criterion".

This is not kid-brother talk. But if it were reduced to kid-brother talk
it would not convince a kid brother.

> [While it is true that mental events can take place with no observable
> behaviour/output and mental events can take place with no observable
> stimulation/input it still seems a polarised view when you consider
> that most of our everyday life is spent receiving input and producing
> output via mental mediation. Searle seems to over-concentrate on
> programs and semantics. While programs are transportable between
> machines, they are not independent of them directly or indirectly. Try
> running Windows 95 on a Sinclair Spectrum. It seems conceivable that
> computer architecture will become more complex, and will have more
> special-function modules. The main difference between computers and
> brains is that brains are alive, and can change or adapt their
> architecture both phylogenetically and ontogenetically.]

You could have an artificial system that was self-modifying, so that in
itself doesn't help. Yes, transportability of programmes (the
implementation-independence of computation) is at the heart of
computationalism, which Searle has been at pains to puncture (to my
mind, successfully). The problem is not one of "silent" inner events
going on between input and output, exactly; in my view it's the problem
of producing a functional capacity to DO with inputs everything and
anything we can do. Whatever it takes to do THAT should turn out to be
the functional substrate of the mind, the physical mechanism with the
right "causal powers." Or that's the best we can home for, in any
case...

Chrs, Stevan


