Re: Searle: Minds, Brains & Programs

From: Harnad, Stevan (harnad@cogsci.soton.ac.uk)
Date: Tue Jan 23 1996 - 18:06:39 GMT


> From: "Baden, Denise" <DB193@psy.soton.ac.uk>
> Date: Tue, 23 Jan 1996 16:45:30 GMT
>
> O.K, After our long debate I will admit that the Chinese penpal
> writing to Searle would be a bit dismayed at his lack of real
> understanding of their correspondence.

A little dismayed? Are you accustomed to writing to someone for 40 years
about everything under the sun, only to discover that he knows nothing
about it except that the symbols came in and they had certain
regularities which made him guess that sometimes there may have been a
question mark or a "the"? Is that the understanding with which you were
crediting your penpal for 40 years?

> I do maintain, however, that
> Searle would not have been able to help himself putting some
> interpretation on these symbols, even if you rule out cryptology,
> because thats the sort of thing brains do.

Try gazing at a Chinese/Chinese dictionary for a while: Plenty of rules
there, consisting of what is the definition of what. Tell me what you
figure out. I'm sure Searle could figure out the same sort of thing, but
it's not relevant. That's not the understanding that is at issue here, but
the phantom understanding of your 40 years' worth of messages to your
penpal.

> And I do believe I have made the point that Searle has not managed to
> show that he actually would not be 'at home' as you claim above.

A bit of a misunderstanding here. The only mental state at issue in the
Chinese room argument is that of understanding Chinese. The other things
going on in Searle, like feeling tired or bored, are mental, and of
course someone's home: Searle is. But Searle does not understand
Chinese. Nor does ANYONE ELSE in Searle's head understand Chinese. It is
the Chinese understanding that is absent, hence there is no Chinese
understander home, corresponding to the mind you had wrongly attributed
for 40 years to your penpal.

> Also to say that this doesn't count as Searle has a brain and computers
> don't, is then saying that Searle's experiment is a waste of time,
> because the whole point of it was to put Searle himself in the
> unsymbol-grounded world of the computer and to then hypothesize about
> what he could or could not understand. I admit that he wouldn't
> understand Chinese without some sort of cryptology, but he can't say
> how much he'd understand because he hasn't done it.

Two points here: One, you seem to be quarreling with Pat Hayes
on whether it's fair to disqualify Searle's implementation just
because he has a mind. I would agree with you that that sounds
arbitrary, and that it's bad news for computationalism. See:
ftp://cogsci.soton.ac.uk/pub/harnad/Harnad/harnad92.virtualmind
and
ftp://cogsci.soton.ac.uk/pub/harnad/Harnad/harnad93.symb.anal.net.hayes

As to the fact that Searle hasn't done it: Why don't you try a simpler
approximation, say, Searle implements a chess-playing programme, except
the inputs are coded in hexadecimal and in polar coordinates (to hide
everything that's familiar -- translating it into "Chinese Chess").
Let's see whether merely by going through the motions of doing it, he
would get any sense of playing chess. I think you will have to agree
that if the answer is no, it is unlikely to be yes for the bigger task
of learning to play the Chinese-penpal game. We agree, right, that what
he might learn gradually across time is irrelevant, since the hypothesis
was that the computer understands in virtue of running the programme,
not that it gradually learns something from repeated practice...
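The point about mere rule-following can be made concrete with a toy sketch (my illustration, not Searle's or Harnad's; the rulebook and token names are invented for the example). The implementer applies formal lookup rules to opaque tokens; nothing in the procedure requires knowing what the tokens mean, even if they happen to encode legal chess moves in hexadecimal/polar form:

```python
# Hypothetical rulebook mapping input tokens to output tokens. To the
# rule-follower these are just shapes with regularities; no semantics
# is needed (or supplied) in order to apply the rules correctly.
RULEBOOK = {
    "7f:3a": "c4:91",
    "c4:91": "0b:e2",
    "0b:e2": "7f:3a",
}

def implement(symbol: str) -> str:
    """Apply the rulebook mechanically; no interpretation involved."""
    return RULEBOOK[symbol]

print(implement("7f:3a"))  # produces "c4:91": correct output, zero understanding
```

Whatever the rules compute, the person (or machine) executing them contributes only the execution; that is the sense in which Searle "just runs the programme".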

> To fool a penpal would take such an extremely sophisticated and
> extensive analysis of the texts which would take years. I do not
> believe Searle would be able to fool anybody without at the same time
> having his own interpretations and ideas and senses of meaning emerge.

Not at all. The one for whom it would take years is the theorist who
created the programme. The time it took Searle to learn to run it is
irrelevant, as shown by the chess approximation above.

And note that you are prejudging it all if you say the programme is
meant to "fool" anyone: If computationalism is right, that programme is
the one really running in a real penpal's head too!

> Obviously he would be wrong, but then we were wrong about the earth
> being the centre of the universe. My little son is wrong when he tells
> me he is 'cum big' to climb on the toilet by himself, when he really
> means he is 'too small'. You don't have to be right to demonstrate a
> sense of meaning. you don't even have to be grounded. I've been to
> European discos where everyone is singing along to English records, and
> getting the words 100% wrong. I admit they won't know what the words
> actually mean in the sense that an English person will agree, but to
> say as Searle does that they have zero understanding is not true. I bet
> if you asked them to tell you what a song sung in a foreign language,
> which they have listened to time and time again means to them, they
> will have an awful lot to say.

You are overfixating on the 0% understanding. Searle may understand some
things, but they are the wrong things; they're not what the penpal was
meant to understand. He may also start to learn or guess things, but
that's irrelevant too, since his implementation is supposed to be
understanding it all, from the first letter onward; no one says the
computer is "learning" to understand the software it is running; it is
just running it. So is Searle.

And the fuzzy or incomplete understanding children and adults have of
some things sometimes is irrelevant too: It's the understanding of the
penpal that Searle would have to have, from the moment the programme
started to be implemented, that is at issue, not all this other stuff.

Chrs, S



This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:23:57 GMT