Re: Searle's Chinese Room Argument

From: Harnad, Stevan (harnad@cogsci.soton.ac.uk)
Date: Mon Feb 17 1997 - 22:19:58 GMT


> From: McKee, Alex <am196@soton.ac.uk>
>
> So the Chinese room is saying that the mind cannot just be symbol
> manipulation because there is no necessary understanding involved. But
> then, this person in the room, moving around symbols, is effecting a
> process, yes? Therefore, in my mind, there must be some measure of
> understanding involved, not of the symbols, but of how to work the
> process. So isn't it symbol manipulation up front with a process which
> necessitates understanding in the background? Can that fit into an
> explanation of language? We don't understand the individual syllables,
> such as 'fo' or 'neem', but we do achieve an understanding through
> processing it. So Searle in his dojo doesn't understand what he is
> doing, but he understands what he must do?

The question was: If a computer passes the Turing Test -- i.e., if it
is indistinguishable from a real (Chinese) pen-pal, exchanging letters
(in Chinese) -- then does that mean it understands the letters?
Searle shows it does not, because he can do what the computer did
(manipulate symbols according to the rules he is given) without
understanding a word of Chinese. Now unless you think there is someone
else in the Chinese room who DOES understand Chinese (and there isn't)
then implementing a symbol system is not what it takes to understand.

In brief: knowing how to manipulate symbols is not = understanding.
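
To see how little is going on, here is a minimal sketch of all that
the man in the room is doing (a toy Python illustration; the rule
table and the symbol names are invented for the example):

    # A purely illustrative rulebook: each incoming "squiggle" is
    # paired with the "squoggle" to send back. The tokens are
    # meaningless marks; only their shapes matter.
    RULES = {
        "squiggle-1": "squoggle-7",
        "squiggle-2": "squoggle-3",
    }

    def reply(incoming):
        # Receive symbol, look up symbol, give out symbol. At no
        # step does anything here know what any symbol means.
        return RULES[incoming]

    print(reply("squiggle-1"))  # -> squoggle-7

The lookup succeeds whether or not anyone knows what the symbols mean;
that, and nothing more, is what Searle is doing in the room.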

> When I said this to Stevan, it was argued that there was no
> understanding present, only knowledge of the content of the process,
> yes? As in:
>
> 1 - Receive symbol.
> 2 - Translate symbol.
> 3 - Give out symbol.
>
> Just manipulating symbols of the process to manipulate symbols with the
> process. No understanding.
>
> This would be a major change in my world view were I to accept that a
> process could be simple knowing and not as I had thought necessary
> understanding. Before, content was to knowledge as process was to
> understanding. So where is understanding? I am going to have to try and
> re-establish myself though possibly not where I had previously been. I
> guess that is one of the aims of Cognitive Science or knowledge in
> general.

I couldn't follow what you were saying about content/process there,
but it's simple: I understand you, and you understand me. We are both
people, and that understanding is some sort of a state of our brains. We
have no idea what. Then along came a candidate explanation: maybe
understanding is a computational state. Computational states are (as I
said in the lecture) independent of the specific details of the
hardware they are running on. So if understanding (e.g., Chinese) is
just the implementation of the right symbol system, then Searle
should be understanding Chinese when he is implementing the (Chinese)
understanding system.

He doesn't. So it isn't.
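
(A toy illustration of implementation-independence, again with
invented symbols: two quite different pieces of machinery can
implement the very same symbol system, in that they compute the very
same input/output mapping.)

    # Two different "implementations" of one and the same symbol
    # system. The machinery differs; the computation does not.
    def implementation_a(symbol):
        return {"squiggle": "squoggle"}[symbol]   # table lookup

    def implementation_b(symbol):
        if symbol == "squiggle":                  # explicit rule
            return "squoggle"
        raise KeyError(symbol)

    assert implementation_a("squiggle") == implementation_b("squiggle")

If understanding were just the computation, then anything implementing
that computation -- Searle included -- would have to be understanding.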

> It wasn't said that Searle didn't understand the process, only that he
> need not have. Perhaps a test to see if understanding existed would
> manifest itself in the idea that if he understood the process, any
> change or new element into the process would lead to him adapting his
> behaviour in order to progress with the task. Is this fair?

Not sure what you mean: Sure Searle understands that according to the
rules he has memorised, when he gets a squiggle, he should reply with a
squoggle. He understands the process of getting squoggles from
squiggles. But that wasn't what we were asking about: The question was:
does manipulating squiggles and squoggles = understanding? The answer is
no.

> For example, a Japanese symbol is received. With no understanding,
> what would happen? Presumably, a continuous search going on to
> infinity, a shut down (staring at the symbol), or ignoring the
> symbol. It would
> depend on what other parameters were written into his behaviour. No
> matter how pedantic the argument over contents of the test can get
> however, could it not be said that as long as adaptation or some form of
> learning was in effect, understanding existed? Perhaps ignoring the
> symbol is the closest of the three previous choices. Recognition of
> non-ability, not simply non-ability, implies understanding. Yes?

You are focusing on the wrong process. Computers can manipulate symbols on
the basis of rules and people can manipulate symbols on the basis of
rules. That is not at issue. What is at issue is whether manipulating
symbols according to rules = understanding (Chinese). It doesn't unless
you think there's something else in the Chinese room, understanding
Chinese. For Searle can certainly be believed when he says HE doesn't
understand what the squiggles and squoggles are about.

And learning is irrelevant. Even if Searle somehow eventually managed to
figure out the code and learn Chinese from this exercise, that would be
irrelevant, because no one said the symbol system was LEARNING to
understand Chinese. It was supposed to BE understanding Chinese by
BEING the implementation of the right symbol system.

> Okay, so now we're back with a similar view I had of the world. This
> time, understanding, what it is to have a mind, is not a process, but a
> self-adapting process. Like a self-processing process. A dynamic
> process. This does resemble Kant's differentiation between Applied and
> Pure Logic in his Theory of Mind. Such that the former is the content
> of the process and the latter the processing of that same process.
> Can we feel the homunculus problem of infinite regression
> though? If it is a process with the ability to process itself, then why
> not a process of processes able to process themselves and so on?
> Understanding not explained but just put back a step and then another
> etc...

Kid-sib's getting dizzy here...

> But, instead of stopping short, why not play circumnavigation? If we
> allow infinite progression and always facilitate a place of
> understanding, then are we not in a loop? A cycle of understanding? The
> progression shows us a process adapting and learning, as long as new
> information exists, forever. Or at least for an unimaginably large
> number. What else is infinity? So if no matter where we sail we find a
> relative degree of adaptive processing, why sail further than a
> self-processing process? What does it matter how many times you can say
> processing and process in one sentence, only that you can say them once
> and effect understanding? If this process of understanding is dynamic
> and relative, then would not the process of communicating it, inwardly
> or outwardly, supply a clearer picture than what is actually
> communicated?

Alas, kid-sib understood almost all the words you used up there, but
cannot make head or tail of them...

> With the Frame Problem, is there any significance in the fact that
> babies too have to develop the concept of object permanence? Are we born with
> understanding or with the ability to understand how to understand? Can
> we pick up ourselves?

First I'd like to make sure we both mean the same thing by the "Frame
Problem." It is unfortunately not connected to anything about babies
and object permanence. It is a point that always comes up sooner or
later in a symbol system that we think understands, when instead of
showing artificial intelligence it suddenly shows artificial stupidity.
Why? Because a symbol system is a system of symbols and rules for
manipulating them. It is all supposed to be intelligent, so when it
falls down, you just pick it up, shake off the dust, give it a new rule
that will cover that case, and send it on its way again. Until the next
point you had not anticipated (because you cannot anticipate everything
in a symbol system), at which point it again stumbles and falls.

Here is an example: I build a simple symbol system that can "understand"
"the cat is on the mat." You say that, and show it a cat on a mat, and
it says "true." Than you say "the mat is on the cat" and show it a cat
on a mat, and it says "false." etc. Then you say "the cat is on the cat,"
and you show it a cat, and what does it do? It does nothing because it
only "knows" that X on Y when X is not = Y. When X = Y, it doesn't
know what to do. So you add another rule: When X is on Y and Y = X,
there are two X's, one on top of the other. Then it is told "the mat is
on the mat" and it is shown a mat curled on itself. What does it do?
It says "false." But that's not right. So you add the extra "knowledge"
that certain things, when curled, can be on top of themselves. And so it
goes, until it stumbles yet again.
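
Here is that ad hoc patching in miniature (a Python sketch; the scene
representation and the rules are invented for the example):

    # A toy truth-evaluator for "the X is on the Y", patched case by
    # case, exactly as in the story above.
    CURLABLE = {"mat"}          # patch 2: things that can self-curl

    def is_true(x, y, scene, counts):
        # scene: a set of (top, bottom) facts, e.g. {("cat", "mat")};
        # counts: how many of each object there are.
        if x != y:
            return (x, y) in scene        # the original rule
        if counts.get(x, 0) >= 2:
            return (x, x) in scene        # patch 1: two X's stacked
        if x in CURLABLE:
            return (x, x) in scene        # patch 2: curled on itself
        raise LookupError(f"no rule for {x} on {y}")  # the next stumble

    # "the cat is on the mat", one cat on one mat:
    print(is_true("cat", "mat", {("cat", "mat")}, {"cat": 1, "mat": 1}))

Each patch covers exactly the case that last tripped the system up,
and nothing more.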

The problem is that this is not what knowledge is. It is not just a
growing pile of squiggle-squoggle rules.


