Re: Searle's Chinese Room Argument

From: HARNAD, Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Thu Mar 16 2000 - 12:55:40 GMT


On Wed, 15 Mar 2000, Edwards, Dave wrote:

> > SEARLE:
> > I am simply an instantiation of the computer program.
>
> Searle is the hardware that the instantiation of the computer
> program runs on; he is not the program.

He didn't say he was the program. He said he was the implementation of
the program. If he is not the implementation of the program, who/what
is?

> The hardware (Searle) that the program is running on can have no possibility
> of understanding what it is doing, just as our neurons have no understanding
> of what we are thinking. But the program could have understanding, just like our
> minds.

Yes, but just exactly what does that MEAN? Who/what (if not Searle) are
we talking about here? (Keep in mind that all there is in the room is
Searle, and everything that is going on is going on inside Searle's
head.)

> Can Searle not understand that there are different levels of complexity? The
> door can't understand anything because it has not been programmed to. But a
> suitably complex program could (not necessarily will) understand.
> See the ant-calculator argument later.

How much more "complex" must something be than an automatic door for
an understanding mind to kick in? Why? What makes you think so? How
would you show it was true? How would you show it was false? Why should
anyone believe it's true?

> Even assuming it is possible for Searle (or anyone) to memorize
> [and mentally execute all the rules and symbol manipulations], it will
> make no difference to the argument. Searle is the hardware and as such
> can have no possibility of understanding what the program is doing.
> The program could have understanding, but Searle would not know.

But what on earth does that MEAN when the only one in the room is
Searle? What is a "program"? It's either code, inert on paper, or it's
code actually running. Well, Searle IS the running code. So if Searle is
not understanding, who/what could be? There's no one else in sight!

> Searle is trying to get the hardware to understand, not
> the program, which is impossible.
>
> According to computationalism, an AI (or mind) is implementation independent,
> so it will not matter whether it is run on a brain, a computer, or even a set
> of water pipes.

Correct. But a poor, helpless Chinese T2-passing computer is
powerless to tell you that you are making a big mistake if you think it
understands Chinese, whereas Searle is able to tell you so. First, is
there any reason you should not believe Searle? If not, then if it's
true in his case that there's no Chinese-understanding whatsoever going
on in his head, just meaningless -- but computational, hence
symbol-manipulation rule-based and systematically interpretable BY US --
squiggling, as in any computation, whether T2 or payroll-calculation,
then, by the TRANSITIVITY of implementation-independence, there's no
understanding going on in the computer (or any other implementation)
either.
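
Just to make this concrete, here is a toy sketch in Python -- all the
names and "rules" below are invented for illustration only, not
anyone's actual T2 program:

    # A toy stand-in for the Chinese Room rulebook: a pure
    # symbol-to-symbol lookup table. The rules operate on the shapes
    # of the symbols alone -- syntax without semantics.
    RULEBOOK = {
        "SQUIGGLE SQUOGGLE": "SQUOGGLE SQUIGGLE",
        "SQUOGGLE SQUIGGLE": "SQUIGGLE SQUIGGLE SQUIGGLE",
    }

    def reply(input_symbols):
        # Return whatever output the rulebook pairs with the input;
        # nothing here refers to anything.
        return RULEBOOK.get(input_symbols, "SQUIGGLE")

    print(reply("SQUIGGLE SQUOGGLE"))  # -> SQUOGGLE SQUIGGLE

Whether that table is executed by a CPU, by water pipes, or by Searle
working through it in his head, the input/output behaviour is
identical; so, by implementation-independence, whatever is true of the
hand-executed implementation (no understanding) is true of every other
implementation too.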

> >Boardman:
> >[Searle] also touches on the fact that we ascribe intentionality to animals
> >suggesting that's because we can't make sense of the animals' behaviour and
> >they are made of similar stuff to ourselves.
>
> How far do you have to go before a computer/robot is
> 'ascribed intentionality'?
> Making the components out of neurons? Or will circuits do? What about a
> simulation of neurons? That's a computer program, and so is implementation
> independent.

Good question. My own candidate for an answer would be: passing T3. The
material (T4, T5) is irrelevant. What do others think?

> >Boardman:
> >Searle then mentions 'The other minds reply', which states that we can never
> >know that another being has a mind except by being the other being, so if you
> >consider another human to have a mind you must consider a computer to have
> >one too. He counters this by saying that we know that simple computational
> >processes don't have minds, so why should complex ones?
>
> The Ant-Calculator argument:
> Does an ant have a mind? It is made of the same stuff as ours, and we assume
> we have a mind. I think we all agree that a calculator does not have a
> mind. But, just as a more complicated ant's brain (a human brain) has
> a mind, why can't a more complicated calculator (a computer) have a mind?

You are putting your money on some sort of generic "complexity". But who
says mind is just a matter of some level of "complexity"? Maybe we can
design much more complex systems than the human mind, yet they would lack
a mind. And certainly nature can design MUCH simpler systems than the
human mind (and simpler even than the ant mind), and they still have a
mind. (Some of the most primitive invertebrates can do practically
nothing; even a car may be more complex than they are.)

So it's not about arbitrary levels of "complexity." It's more specific
functional (and perhaps structural) properties that are needed to
generate minds. No one knows what those are. We just know that, whatever
they are, organisms have them (and calculators don't).

So how to figure out what they are? Just by arbitrarily complexifying a
calculator? But you could quickly get to orders of magnitude greater
complexity than a calculator, and still bypass the mind of an ant!

Enter T3, and the hypothesis that it is the functional demands of
generating T3-scale performance capacity that will filter out the
winners and the losers in the mind-game (just as they did in our own
evolution).

> Imagine a program which exactly models a human brain (to the degree that
> matters, atoms perhaps). This program then runs an AI program. Everyhing
> that a human brain has, the simulated brain has. Can there be anything
> missing? If it is a physical item (e.g. eyes, ears) then include them in
> the model as well.

Imagine a program which exactly models a human heart (to the degree that
matters, atoms perhaps). Everything that a human heart has, the simulated
heart has. Can there be anything missing? If it is a physical item (e.g.
valves, hemoglobin) then include them in the model as well.

Will this simulated heart beat, pump blood? Of course not. It is
squiggling only, and the squiggles are systematically interpretable BY
US as a heart beating and pumping.

By the same token, will the simulated brain think, understand? Or is it
just squiggles that are systematically interpretable BY US as a mind,
thinking and understanding?
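
For concreteness, here is a toy "virtual heart" in Python -- a made-up
sketch, not anyone's real model: a loop that updates numbers WE can
interpret as beats and pumped volume, while no blood moves anywhere:

    # A toy "virtual heart": arithmetic on floats that we can
    # interpret as beats and pumped blood volume. Nothing is pumped.
    def simulate_heart(beats, stroke_volume_ml=70.0):
        total_ml = 0.0
        for _ in range(beats):
            total_ml += stroke_volume_ml  # just adding numbers
        return total_ml

    print(simulate_heart(60), "ml 'pumped' in one simulated minute")

However atom-exact you make the model, its output is still only
squiggles that are systematically interpretable BY US.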

> > SEARLE:
> > "Could a machine think?"
> > The answer is, obviously, yes. We are precisely such machines.
> > "Yes, but could an artefact, a man-made machine think?"
> > Assuming it is possible to produce artificially a machine with a nervous
> > system, neurons with axons and dendrites, and all the rest of it,
> > sufficiently like ours, again the answer to the question seems to be
> > obviously, yes.
>
> How about a program which models these parts? It will still be a program, not
> hardware.

You are talking here about the difference between a SYNTHETIC mind (no
problem) and a VIRTUAL mind (big problem: it's just interpretable
squiggling).

> > SEARLE:
> > If you can exactly duplicate the causes, you could
> > duplicate the effects. And indeed it might be possible to produce
> > consciousness, intentionality, and all the rest of it using some other
> > sorts of chemical principles than those that human beings use. It is, as
> > I said, an empirical question.
> > "OK, but could a digital computer think?"
> > If by "digital computer" we mean anything at all that has a level of
> > description where it can correctly be described as the instantiation of a
> > computer program, then again the answer is, of course, yes, since we are
> > the instantiations of any number of computer programs, and we can think.
>
> Searle has just admitted that a set of computer programs can think.
> A human is a 'digital computer'; so is a computer, and so it can run
> these programs, and thus can think.

No, Searle has just admitted that he accepts the Church/Turing Thesis
that (just about) everything can be simulated computationally. So, yes,
a computer program can be written that simulates my cognition, my
neurons, my behavior, whatever you like. So the same computer program
describes both of us: my brain, and the simulation of my brain.

But the simulation of my brain does not have a mind.

And (in case you forgot), there is more to me than there is to my
simulation. For my simulation is just implementation-independent
squiggles, whereas I am also a lot of noncomputational hardware,
structures and processes. I am a hybrid system (just as a heart,
airplane, brain are). And my having a mind DEPENDS on the
noncomputational components and processes as much as on the
computational ones.

A simulated hybrid system, by the way, is not a hybrid system...

> > SEARLE:
> > Could instantiating a
> > program, the right program of course, by itself be a sufficient condition
> > of understanding?"
> > This I think is the right question to ask, though it is usually confused
> > with one or more of the earlier questions, and the answer to it is no.
> > "Why not?"
> > Because the formal symbol manipulations by themselves don't have any
> > intentionality; they are quite meaningless; they aren't even symbol
> > manipulations, since the symbols don't symbolize anything. In the
> > linguistic jargon, they have only a syntax but no semantics. Such
> > intentionality as computers appear to have is solely in the minds of
> > those who program them and those who use them, those who send in the
> > input and those who interpret the output.
>
> >Boardman:
> >This sounds pretty good; the symbols are ungrounded. The best way of
> >enabling the computer to understand the meaning of its symbols is to do it
> >the human way, learning. Get your computer to evolve and learn, start it as
> >an amoeba and work its way up, wouldn't it then have a mind?
>
> Yes, I agree that this method could also work.

Not so fast. This is the symbol grounding problem. This will be next
week's topic. The answer is that the symbolic (T2) meanings have to be
grounded in robotic (T3) capabilities. This is not just a matter of
learning; being sensorimotor, it involves hybridism in an essential,
fundamental way. The grounding cannot be computational all the way
down.
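
But here is a toy illustration (invented for this message, as a
preview): in a purely symbolic system every symbol is "defined" only
by pointing at more symbols, so chasing definitions never gets you out
of the symbol system:

    # A toy ungrounded "dictionary": every symbol is defined only in
    # terms of other symbols.
    UNGROUNDED = {
        "zebra": ["horse", "stripes"],
        "horse": ["animal", "four", "legs"],
        "stripes": ["pattern", "lines"],
    }

    def lookup(symbol, depth=3):
        # Chasing definitions only ever reaches more symbols.
        if depth == 0 or symbol not in UNGROUNDED:
            return symbol
        return [lookup(s, depth - 1) for s in UNGROUNDED[symbol]]

    print(lookup("zebra"))

A grounded (T3) system has to bottom out somewhere else -- in the
robot's sensorimotor transactions with the things the symbols are
about -- and that part is not itself just more symbol manipulation.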

> >Boardman:
> >It seems entirely plausible that the act of memorising such a program might
> >well teach one Chinese, or at least the part to do with restaurant stories.
>
> I disagree: could you understand how a calculator works by memorising its
> machine code? No.

No, but even if you could figure out what Chinese means from the T2
program, it would be irrelevant. Computationalism doesn't say (T2)
computers figure out what their codes mean; it says that (T2) computers
understand purely in virtue of running the right codes.

> A rainstorm is wet. A simulated rainstorm has simulated wetness. A simulated
> person will get a simulated drenching. A simulated rainstorm is not supposed
> to be really wet, that's not the purpose of it. If you wanted to create a real
> rainstorm, you would have to duplicate it - buckets of water?

Umm, now would you run that by us again, substituting "understanding"
for "wet" please...?

> According to Searle, a brain has something a computer doesn't. Why can't a
> computer just simulate the brain? It would not be missing anything then,
> would it?

Try substituting a computer simulation of a kidney for an ailing
kidney: What would it be missing? Consider instead a synthetic, rather
than a virtual kidney. A synthetic kidney could be hybrid
computational/noncomputational. The same is true of the brain.

Stevan


