Re: Searle: Minds, Brains and Programs

From: Button David (drb198@ecs.soton.ac.uk)
Date: Tue May 01 2001 - 12:26:00 BST


Subject: Searle, John R.: Minds, Brains and Programs (1980)

Button:
In the abstract of this paper, Searle states the aim of the discussion:

> SEARLE:
>This article can be viewed as an attempt to explore the consequences
>of two propositions. (1) Intentionality in human beings (and animals)
>is a product of causal features of the brain. I assume this is an
>empirical fact about the actual causal relations between mental
>processes and brains. It says simply that certain brain processes are
>sufficient for intentionality. (2) Instantiating a computer program
>is never by itself a sufficient condition of intentionality. The main
>argument of this paper is directed at establishing this claim. The form
>of the argument is to show how a human agent could instantiate the
>program and still not have the relevant intentionality.

Button:
This has become known as Searle's Chinese Room Argument. In essence, it
argues that what happens in the brain cannot simply be the execution of a
program, and that, as a result, an entity capable of intentionality must
possess the causal powers of the brain:

>SEARLE:
>Any attempt literally to create intentionality artificially (strong AI)
>could not succeed just by designing programs but would have to duplicate
>the causal powers of the human brain.

Button:
Searle's next comments set out the territory in which he wishes to argue.
His main concern is thinking: he argues in particular that the strong AI
approach cannot produce a thinking, intelligent system.

>SEARLE:
>"Could a machine think?" On the argument advanced here only a machine
>could think, and only very special kinds of machines, namely brains
>and machines with internal causal powers equivalent to those of brains.
>And that is why strong AI has little to tell us about thinking, since
>it is not about machines but about programs, and no program by itself
>is sufficient for thinking.

>SEARLE:
>But according to strong AI, the computer is not merely a tool in the
>study of the mind; rather, the appropriately programmed computer really
>is a mind, in the sense that computers given the right programs can
>be literally said to understand and have other cognitive states.

Button:
To aid in his argument, Searle uses the work of Roger Schank (Schank and
Abelson 1977). He explains that this work simulates the ability of the
human brain to understand stories:

>SEARLE:
>It is characteristic of human beings' story-understanding
>capacity that they can answer questions about the story even though
>the information that they give was never explicitly stated in the
>story.

Button:
Using a restaurant example, Searle shows the function of the program.
Simply, the program uses a representation of the sort of information that
humans have about restaurants and can therefore answer the sort of
questions that are asked.
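
To make this concrete, here is a minimal sketch (in Python, my own toy
illustration rather than anything from Schank's actual system) of the kind
of script-based lookup being described. The "restaurant script" and its
entries are invented; the point is that the answers come from rule lookup
over a canned representation, not from any understanding of restaurants:

    # Hypothetical toy "restaurant script": each event carries the default
    # assumptions a human reader would bring to it.
    RESTAURANT_SCRIPT = {
        "orders hamburger": {"ate the hamburger": True},
        "storms out without paying": {"ate the hamburger": False},
        "leaves a large tip": {"ate the hamburger": True},
    }

    def answer(story_events, question_fact):
        """Answer a yes/no question by accumulating script defaults;
        later events override earlier ones. No understanding is involved."""
        facts = {}
        for event in story_events:
            facts.update(RESTAURANT_SCRIPT.get(event, {}))
        if question_fact in facts:
            return "yes" if facts[question_fact] else "no"
        return "unknown"

    # "A man ordered a hamburger; it arrived burned; he stormed out without paying."
    story = ["orders hamburger", "storms out without paying"]
    print(answer(story, "ate the hamburger"))  # -> "no", by lookup alone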

>SEARLE:
>Partisans of strong AI claim that in this question and answer sequence
>the machine is not only simulating a human ability but also
>
>1. that the machine can literally be said to understand the story and
>provide the answers to questions, and
>
>2. that what the machine and its program do explains the human ability
>to understand the story and answer questions about it.

Button:
This claim, I believe, is totally unfounded. While the program is capable
of answering questions, it is, in essence, simply an expert system. It
cannot therefore be understanding the story; it is merely interpreting it
according to some representation of the data. Also:

>SEARLE:
>Now Schank's machines can similarly answer questions about restaurants
>in this fashion. To do this, they have a "representation" of the sort of
>information that human beings have about restaurants, which enables them
>to answer such questions as those above, given these sorts of stories.

Button:
This particular quote suggests that the machine requires a human
representation of the restaurant and the questions that could be asked.
I therefore think that the machine cannot be understanding but using the
representation that it is given to answer some form of questions. (?)
In order to show that these two claims were unfounded, Searle introduces
a Gedankenexperiment - his Chinese Room Argument.
The Chinese Room Argument involves a native speaker of English. Suppose
this person is placed in a room with a set of Chinese symbols (the
language) which they do not understand. Suppose they are then given a
second and a third set of Chinese symbols (again symbols that they cannot
understand, or possibly even recognise as Chinese) that correlate with the
first set; these three sets can be considered a script, a story and some
questions about the story. The person is then given some rules in English
that link the three sets of symbols and allow them to form a new set of
Chinese symbols that can be considered an answer to the questions about
the story.
This means that the person is simply following a formal set of
instructions to generate an output from an input without understanding
either (as they do not know Chinese).
This suggests that the set of instructions can be considered as just a
program that is followed. This is a realistic possibility, as the point
of an algorithm is to take inputs and, based on some set of rules,
generate some outputs.
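
A minimal sketch of this purely formal manipulation might look as follows
(Python again; the rule book is an invented stand-in, not anything taken
from Searle's paper). Everything here operates on the shapes of the
symbols alone, which is exactly the point:

    # Hypothetical rule book: shapes of input symbols -> shapes of output symbols.
    # The person applying it need not know what any of the symbols mean.
    RULE_BOOK = {
        ("他", "吃", "了", "吗"): ("吃", "了"),
    }

    def follow_rules(input_symbols):
        """Look up the output purely by the shape of the input; no meaning is used."""
        return RULE_BOOK.get(tuple(input_symbols), ("?",))

    print(follow_rules(["他", "吃", "了", "吗"]))  # hands back symbols the operator does not understand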

>SEARLE:
>Now just to complicate the story a little, imagine that these people
>also give me stories in English, which I understand, and they then ask
>me questions in English about these stories, and I give them back answers
>in English. Suppose also that after a while I get so good at following
>the instructions for manipulating the Chinese symbols and the programmers
>get so good at writing the programs that from the external point of view --
>that is, from the point of view of somebody outside the room in which I
>am locked -- my answers to the questions are absolutely indistinguishable
>from those of native Chinese speakers. Nobody just looking at my answers
>can tell that I don't speak a word of Chinese.

Button:
It is at this point that the argument states it's claim. While the idea of
the program is possible, the concept of a program capable of producing
outputs with 'meaning' from some infinite combination of inputs is very
unlikely. However, it is not this, I believe, that Searle wishes to point
out. It is the claims made by strong AI that he wishes to discuss:

>SEARLE:
>Now the claims made by strong AI are that the programmed computer
>understands the stories and that the program in some sense explains
>human understanding. But we are now in a position to examine these claims
>in light of our thought experiment.

Button:
As shown in his description of the experiment, the manipulation of the
Chinese symbols in no way uses an understanding of the language, i.e. the
person does not know Chinese. I agree with Searle's argument that the
view of strong AI, with reference to this machine, is unfounded and that
the program cannot understand.
Searle then turns to the second claim of strong AI:

>SEARLE:
>2. As regards the second claim, that the program explains human
>understanding, we can see that the computer and its program do not
>provide sufficient conditions of understanding since the computer and
>the program are functioning, and there is no understanding. But does
>it even provide a necessary condition or a significant contribution
>to understanding?

Button:
I do not think that the program explains human understanding. Searle
continues with the point that, to agree with this claim, it must be said
that in the Chinese Room the person is doing the same thing in their
brain with the Chinese symbols as they are with the English questions.
This, although perhaps very difficult to confirm or deny, seems a
ludicrous suggestion to me. The English answers require the person to use
their memory and intuition in order to form meaningful answers. In
addition, the English questions could be answered in many different ways
while still maintaining Turing indistinguishability, whereas the Chinese
answers are fixed by a rigid set of rules (the program).
Searle's next discussion concerns the word 'understanding' and what it
means. He covers the point that understanding comes in degrees, and that
it is not always simply a case of something being understood or not.
However, he also says that this is not the issue:

>SEARLE:
>There are clear cases in which "understanding" literally applies and
>clear cases in which it does not apply; and these two sorts of cases are
>all I need for this argument. I understand stories in English; to a
>lesser degree I can understand stories in French; to a still lesser
>degree, stories in German; and in Chinese, not at all. My car and my
>adding machine, on the other hand, understand nothing: they are not in
>that line of business. We often attribute "understanding" and other
>cognitive predicates by metaphor and analogy to cars, adding machines,
>and other artifacts, but nothing is proved by such attributions.

Button:
This is a very important point for the argument, and his next comments
explain why we often make these attributions:

>SEARLE:
>The reason we make these attributions is quite interesting, and it has
>to do with the fact that in artifacts we extend our own intentionality;
>our tools are extensions of our purposes, and so we find it natural to
>make metaphorical attributions of intentionality to them; but I take it
>no philosophical ice is cut by such examples.

Button:
I agree. There are many occasions on which humans attribute understanding
or blame to machines that are simply doing what they are programmed to
do - the only thing they can do.
According to the claims of strong AI, the person in the Chinese Room
understands Chinese (the program understands Chinese). Searle claims that
this understanding amounts to:

>SEARLE:
>I will argue that in the literal sense the programmed computer
>understands what the car and the adding machine understand, namely,
>exactly nothing. The computer understanding is not just (like my
>understanding of German) partial or incomplete; it is zero.

Button:
Searle next turns to the counter-arguments made against his claims. The
Robot Reply claims that if Schank's program were part of a robot (its
brain), with devices attached that allowed it to see and act, then it
would have genuine understanding.

>SEARLE:
>The first thing to notice about the robot reply is that it tacitly
>concedes that cognition is not solely a matter of formal symbol
>manipulation, since this reply adds a set of causal relations with
>the outside world.

Button:
This reply to Searle's argument is interesting as it does raise the
issue that we cannot become the whole system as we can in the Chinese
Room Argument. Searle's comments about this reply extend the Chinese
Room Argument to include symbols fed to the person by a television screen
from the robot's 'eyes' and also that the output may drive motors in the
robot's legs. This extension reveals that the same core to the system
remains - a symbol manipulation system with no understanding.
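
As a rough sketch of that extension (Python again; the camera and motor
pieces are stubs I have invented purely for illustration), the part in the
middle is still nothing more than meaning-free rule following:

    # Hypothetical rule book, as before: shapes of symbols in -> shapes of symbols out.
    RULES = {("symbol-A",): ("symbol-B",)}

    def camera_to_symbols(frame):
        """Stub: pretend a perceptual device has turned an image into input symbols."""
        return ("symbol-A",)

    def symbols_to_motors(symbols):
        """Stub: pretend the output symbols are wired to motor commands."""
        return ["motor command for " + s for s in symbols]

    def robot_step(frame):
        # The core is unchanged: outputs are looked up purely by the shape of the input.
        output_symbols = RULES.get(camera_to_symbols(frame), ("?",))
        return symbols_to_motors(output_symbols)

    print(robot_step(frame=None))  # ['motor command for symbol-B']
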
I question this reply because, while Searle's extension of the Chinese
Room does enhance the argument, the person in the Chinese Room no longer
contains the entire system. While it seems easy to say that this makes no
difference to the overall effect, it does raise doubt. How can we be sure
that the robot has no understanding if we are not the entire system? The
central fact that makes the Chinese Room irrefutable is that the person,
who is the whole system, does not understand Chinese. In fact, in his
response to the first reply (the Systems Reply), Searle extends the
Chinese Room so that the person incorporates the whole system:

>SEARLE:
>My response to the systems theory is quite simple: let the individual
>internalize all of these elements of the system. He memorizes the rules
>in the ledger and the data banks of Chinese symbols, and he does all the
>calculations in his head. The individual then incorporates the entire
>system. There isn't anything at all to the system that he does not
>encompass. We can even get rid of the room and suppose he works outdoors.
>All the same, he understands nothing of the Chinese, and a fortiori
>neither does the system, because there isn't anything in the system that
>isn't in him. If he doesn't understand, then there is no way the system
>could understand because the system is just a part of him.

Button:
As a result, Searle's comments about the Robot Reply become confusing: we
can no longer become the whole system and therefore cannot be certain of
the system's understanding.
This second reply is an extension of the Systems Reply, except that it
involves additions to the system that cannot be encapsulated within the
person. On a personal level, Searle's reply to the Systems Reply seems
quite sensible:

>SEARLE:
>Actually I feel somewhat embarrassed to give even this answer to the
>systems theory because the theory seems to me so implausible to start
>with. The idea is that while a person doesn't understand Chinese,
>somehow the conjunction of that person and bits of paper might
>understand Chinese.

Button:
The Many Mansions Reply raises a different objection:

>SEARLE:
>Your whole argument presupposes that AI is only about analogue and
>digital computers. But that just happens to be the present state of
>technology. Whatever these causal processes are that you say are
>essential for intentionality (assuming you are right), eventually we
>will be able to build devices that have these causal processes, and that
>will be artificial intelligence.
>I really have no objection to this reply save to say that it in effect
>trivialises the project of strong AI by redefining it as whatever
>artificially produces and explains cognition.

Button:
Searle's response here is quite reasonable. The argument of the Many
Mansions Reply, if I have understood it correctly, is that if we were
capable of building the right machines then strong AI would be possible.
This is an odd reply: Searle's argument is based on what is possible at
present, so how can we consider machines of the future? While the Many
Mansions Reply does draw attention to the fact that Searle confines the
problem to a particular domain - the computational, rule-based program -
this is a reasonable constraint.
The remainder of the paper is a question and answer section.
An interesting part of this deals with the idea of simulations.

>SEARLE:
>The idea that computer simulations could be the real thing ought to
>have seemed suspicious in the first place because the computer isn't
>confined to simulating mental operations, by any means. No one
>supposes that computer simulations of a five-alarm fire will burn
>the neighbourhood down or that a computer simulation of a rainstorm
>will leave us all drenched. Why on earth would anyone suppose that a
>computer simulation of understanding actually understood anything?

Button:
I agree with his views on simulations: in all these circumstances the
core is simply a program executing an algorithm on some inputs.
It is the comments on pain and feelings that must be considered further.
I do not know how the brain works (obviously), but I find the concept of
pain strange. Pain, I believe, is not simply a matter of some inputs
being converted into some feeling. There is also the issue of 'pain
thresholds'. What are these? Some people find a given stimulus very
painful while others do not. This could be a case of different people
having different pain algorithms, as sketched below.
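
Purely as an illustration of that idea (the function and numbers are
invented, and this is in no way a claim about how pain actually works),
the same stimulus can yield different 'pain' depending on a per-person
threshold parameter:

    def reported_pain(stimulus_intensity, threshold):
        """Toy model: no pain below the personal threshold, growing pain above it."""
        return max(0.0, stimulus_intensity - threshold)

    same_stimulus = 5.0
    print(reported_pain(same_stimulus, threshold=2.0))  # 3.0 - finds it quite painful
    print(reported_pain(same_stimulus, threshold=4.5))  # 0.5 - barely notices it
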
While I largely accept what Searle has argued, the above remains an
interesting question. I think that while the Chinese Room Argument holds
for the initial case, the issue raised by the Robot Reply is quite valid:
we can be certain that the program does not understand if we can become
the whole system, but if we cannot be the whole system we cannot be
sure - even if we think we know.

David Button - drb198@ecs.soton.ac.uk


