Re: Searle: Is the Brain a Digital Computer?

From: HARNAD Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Mon Mar 19 2001 - 18:29:26 GMT


On Mon, 5 Mar 2001, McIntosh Chris wrote:

> http://cogsci.soton.ac.uk/~harnad/Papers/Py104/searle.comp.html
>
> Mcintosh:
> First a definition of digital computer: A device capable of solving
> problems by processing information in discrete, binary form. Most
> man-made computers are of this kind.
>
> In contrast, analog computers operate on directly measurable
> (amounts of) quantities on a continuous scale, such as electrical
> signals, with applications in areas such as simulation and robotics.

You should comment on the MacLennan paper about analog computers too!

> Mcintosh:
> although the brain
> must be more than just a digital computer, it is worth investigating
> whether such a computer could still be a crucial component.

And here (in my opinion) Searle goes wrong. His Chinese Room Argument
showed quite persuasively that cognition cannot be ALL computation. But
nothing he says in that article or this one shows that it cannot be
computation AT ALL.

I will quickly point to where Searle goes wrong. He is right to say that
the symbols in a symbol system (computation) don't mean anything; like
the sentences in a book, their meaning is parasitic on the meaning in the
mind of an external interpreter. (This is the symbol grounding problem.)

But then Searle over-reaches: He says not only that there is no meaning
in a "thinking" programme, only in the mind of the interpreter (he's
right about that), but that there really is no such thing as a programme
or computation, except in the minds of interpreters. (That is half-right;
interpretable code really exists; usually it is created deliberately by
people, but maybe it could be created by nature too, who knows? The only
part that is sure is that the interpretation itself is not part of the
code; the code is ungrounded.)

But having persuaded himself that the interpretation of the code is in
the mind of the interpreter rather than intrinsic to the code, Searle
goes on to imply that the code itself, too, is only in the mind of the
user: Something is code only if I treat it as such;
otherwise it's just physics. So not only is the meaning of the
computation in the head of the user rather than intrinsic to the
computation, the computation itself is just in the head of the user, and
not intrinsic to the physical system that is (supposedly) "implementing"
the computation. Therefore the brain could not be doing computation even
IN PART (except if the "user" is doing conscious computation, as in doing
long division!)

This conclusion is, I think, wrong. Not only can there be interpretable
codes sitting on the static pages of books written by people with minds,
and implemented on dynamical systems (computers) by people with minds,
but they could also happen by chance (as in the case of the genetic
code) or by "natural design" (as in the case of whatever brain codes
there might be, which were in turn "shaped" by genetic evolution).

What was the human genome project, if not a project in deciphering
(hence interpreting) the genetic code, which was itself a natural
product of natural selection? The genetic code did not turn into a
symbol system only at the moment that humans deciphered it. It was
already a symbol system. The same is true of the brain's sensory, motor
and cognitive codes that are being deciphered in brain research. The
codes are already in the brain, and do not require us as outside
interpreters in order to make them into real computational codes.

Why not? Partly because it is only the grounding of meaning that
requires an external interpreter; ungrounded (but interpretable) codes
are not a problem in principle. But in the case of the genetic and the
neural codes, they are "dedicated" codes. Dedicated codes are somehow
"married" to their inputs and outputs, making an external interpreter
unnecessary. DNA is both a physical macromolecule and a code for
constructing proteins. So, in its protein-building power, it is not
"hardware-independent": that power comes from its biophysics and
biochemistry too, and not just from the algorithms it codes. But the algorithms are
nevertheless really coded there, and not just in the minds of the
biochemists and geneticists who are interpreting it now.

By the same token, the brain could have a good deal of computational
code ("syntax") in it too, and not just because neuroscientists
interpret it as such. The neural code is also a dedicated one, being
implemented in neural processes that can not only code, but, for
example, move muscles.

So be selective as you read Searle. Remember that he is right about the
fact that the MEANING of the code is not in the code, but in the head of
the user/interpreter. But the code itself, the syntax, the algorithms,
are not just in the user's head (except if it happens to be your brain's
code we are talking about!): That syntax is really there, controlling
sensation, movement and other aspects of cognition in the case of the
brain, and protein-synthesis, growth and development in the case of the
genome.

Here is an exercise: Could there be NON-dedicated codes in Nature? Are
there any?

> > SEARLE:
> > It is clear that at least some human mental abilities are algorithmic.
> > For example, I can consciously do long division by going through
> > the steps of an algorithm for solving long division problems.
>
> Mcintosh:
> Which mental abilities might not be algorithmic? How could
> non-algorithmic algorithms be represented to allow brain simulation?

Your question is awkwardly put ("non-algorithmic algorithms" is a
contradiction in terms). What you mean is: which brain processes are
nonalgorithmic (noncomputational)? Here are a few: sensation, movement,
and any analog or parallel/distributed processing that the brain might
be doing (as you noted).
 
> > SEARLE:
> > Computationally speaking, on this view, you can make a "brain" that
> > functions just like yours and mine out of cats and mice and cheese or
> > levers or water pipes or pigeons or anything else provided the two
> > systems are… "computationally equivalent". You would just need an
> > awful lot of cats, or pigeons or waterpipes, or whatever it might be.

This is just a colourful (and confusing) way of pointing out the
implementation-independence (hardware-independence) of computation.

> Mcintosh:
> It is rather doubtful that a collection of pipes or pigeons could
> reproduce consciousness. Syntactical similarity does not imply that
> the implementation or its physical effects will also be similar.

It's almost as improbable that a brain (or heart or liver) could produce
consciousness. So this is a red herring. The only relevant question is
whether whatever it is that DOES produce consciousness could be, all or
part, hardware-independent computation.

The answer is: All? No. Part? No reason why not. (Searle certainly gives
none.)

> Mcintosh:
> whether something is a computer is
> determined by its syntactical properties, permitting construction
> from any physical components.

The Turing Machine.

> Mcintosh:
> Since any object could be described syntactically in terms of 0's and
> 1's, everything could be described as a digital computer.

Here you have alas picked up a faulty message from Searle. When I
simulate an airplane using a computer, I am not "describing the airplane
as a computer." I am simply creating code that is interpretable as an
airplane. And the computer is the device that implements that code. Now
a real airplane is also (trivially) "interpretable as an airplane," but
that neither makes it into a computer, nor does it support the idea that
everything can be "described as a computer."

Everything (just about) can be simulated by a computer. That's the
Church/Turing Thesis and Turing Equivalence (not the Turing Test,
though!). That does not mean everything is a computer. The computer is
the simulator, the ungrounded symbol system that is interpretable (by
the user) as the thing that is being simulated. But the thing that is
being simulated is just a thing, not a computer.

(It isn't so easy to interpret something as something else. You need
powerful algorithms to do that in a nontrivial sense. Those algorithms,
i.e., that systematically interpretable code, are real, no matter where
they came from, even if their interpretation is ungrounded and needs to be
mediated by the mind of an external interpreter -- unless it is part of
a hybrid "dedicated" system, in which case at least its APPLICATION, if
not its interpretation, is grounded autonomously, with no need of
mediation.)

> Mcintosh:
> Furthermore, since syntax is not intrinsic to physics, a computational
> interpretation must be ascribed to and can never be discovered in the
> physical world. Searle explains why this is a problem:

The differential equations of physics are not computational algorithms;
far from being hardware-independent, they describe the hardware
properties of physics. When we say that a computation is
hardware-independent, we could just as well say it is
differential-equation-independent (i.e., the dynamics of the physical
system implementing the computation are irrelevant: it would be exactly
the same computation even if it were implemented by radically different
physical systems, obeying radically different differential equations).

So let us distinguish the exact differential equations that describe,
say, a real airplane in flight, and the computational simulation of that
airplane in flight, implemented on a computer. The "syntax" of the
computer simulation is certainly not "intrinsic" to physics. But the
dynamics (differential equations) of the real airplane, flying,
certainly are. Humans discovered them, to be sure, just as they
discovered the genetic code. But that does not make those dynamics, and
the differential equations they "obey", figments of the human
imagination.

(Computing the solutions to differential equations is of course
computation, but that's another matter.)
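To make that parenthetical concrete, here is a minimal Python sketch
(the falling object, the step size and the function name are purely my
own illustrative assumptions, not anything from Searle's paper): the
loop computes approximate solutions to the equations dy/dt = v,
dv/dt = -g. That computation is hardware-independent, whereas the
dynamics of a real falling object are not.

    # Hypothetical sketch: numerically integrating a simple differential
    # equation. The physical system (a dropped object) obeys dv/dt = -g
    # and dy/dt = v because of its physics; the loop below merely COMPUTES
    # approximate solutions to those equations, and would be the same
    # computation on any hardware that runs it.

    def simulate_fall(y0=100.0, v0=0.0, g=9.81, dt=0.01, t_end=10.0):
        """Euler integration of dy/dt = v, dv/dt = -g."""
        y, v, t = y0, v0, 0.0
        trajectory = []
        while t < t_end and y > 0.0:
            trajectory.append((t, y))
            v -= g * dt          # update velocity from acceleration
            y += v * dt          # update position from velocity
            t += dt
        return trajectory

    if __name__ == "__main__":
        path = simulate_fall()
        print(f"object reaches the ground after roughly {path[-1][0]:.2f} s")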

> > SEARLE:
> > "Is there... a fact of the
> > matter about brains that would make them digital computers?"
> > It does not answer that question to be told, yes, brains are digital
> > computers because everything is a digital computer.
>
> Mcintosh:
> Searle's Chinese Room argument demonstrated that semantics is not
> intrinsic to syntax, by showing that manipulation of symbols
> according to the syntax can be achieved without understanding. His
> new point that syntax is not intrinsic to physics, as tokens are
> assigned not discovered

And his new point is wrong (whereas his old point is right). I assigned
this paper so you would all see the difference between a valid argument
and an invalid one. (Yes, they sometimes come from the same minds!)

> > SEARLE:
> > to say that something is functioning as a computational process is
> > to say something more than that a pattern of physical events is
> > occurring. It requires the assignment of a computational
> > interpretation by some agent.

No. The computational interpretation is for what the computation MEANS,
not for the fact THAT it is a computation. Code is code. And
interpretable code is interpretable code. The only part that is missing,
ungrounded, is the interpretation itself.

Example. Numerical calculations are interpretable as quantities in the
real world: 2 + 2 = 4, etc. That interpretation is not part of the
symbol system itself, but it is definitely a fact about that symbol
system (and not about the symbol system for, say, baking a cake) that
its symbols can be systematically interpreted as numbers, additions,
etc., but not as the ingredients for a Boston Cream Pie.
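A toy illustration of that point, as a hedged Python sketch (the unary
"|" notation and the function name are my own assumptions): the rule
below just concatenates squiggles; interpreting the squiggles as numbers
and concatenation as addition is our doing, but the fact THAT the rule
is systematically interpretable that way is a property of the rule.

    # Hypothetical illustration: a pure symbol-manipulation rule that never
    # "knows" it is doing arithmetic. The rule just rewrites strings of
    # squiggles ("|"); WE interpret "|||" as 3 and concatenation as addition.
    # That interpretation is ours, but the fact THAT the system is
    # systematically interpretable as addition (and not as a cake recipe)
    # is a fact about the rule itself.

    def combine(squiggles_a: str, squiggles_b: str) -> str:
        """Concatenate two strings of '|' marks, nothing more."""
        return squiggles_a + squiggles_b

    two = "||"
    three = "|||"
    result = combine(two, three)
    print(result, "-> interpretable as", len(result))   # ||||| -> interpretable as 5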

And a recipe for Boston Cream Pie would be just as much a recipe for
Boston Cream Pie if it grew on a tree or miraculously fell out of the
sky as it is if someone deliberately wrote it.

Note that most things under the sun CANNOT be interpreted as the code
for baking Boston Cream Pie. The small number that really can be so
interpreted have something in common, and that is, that they are the
algorithms for baking Boston Cream Pie. That usually is because some
mind has composed the code, knowingly; and it certainly requires a
knowing mind to interpret the code to bake a cake (unless the tree the
code grew on also grows it into a dedicated baking machine). But the
fact THAT the code is systematically interpretable in the way that it
is, the fact that sets those few things that can be systematically
interpreted as the recipe for Boston Cream Pie apart from everything
else under the sun -- THAT fact is intrinsic to the code, and not dependent
on the mind of any outside user or interpreter.

> > SEARLE:
> > Analogously, we might discover in
> > nature objects which had the same sort of shape as chairs and which
> > could therefore be used as chairs; but we could not discover objects
> > in nature which were functioning as chairs, except relative to some
> > agents who regarded them or used them as chairs.

This is a bad analogy by Searle. The only thing that "chairs" share is
that they afford "sittability-upon" to human bottoms. So in that sense,
any bum-shaped concave surface is a potential chair (so we may as well
call it a chair: chairs don't become chairs only after being baptized
by bottoms). Nothing interesting is at issue in the contrast between
purpose-built chairs and opportunistic "chairs," actual and potential
(except if we want to further define chairs as human artifacts, again not
very interesting).

In the case of interpretable code: If I have a piece of interpretable
code, it is of no interest (for present purposes) whether or not some
"agent" has written it, read it, or used it. The only relevant thing is
that it is interpretable code (interpretable as the recipe for Boston
Cream Pie, for example).

> Mcintosh:
> Searle now considers another difficulty with cognitivism - the
> homunculus fallacy, which postulates a 'little man' in the mind to
> help explain mental abilities.

If you are trying to explain how a computer is detecting shapes, it
will not do to say: "The computer displays the shape on a pixel matrix
and then a little man inside looks at it and identifies what shape it
is." You would be right to reply: "That's no explanation: Now you have
to explain how the little man does it. And don't tell me that there's
another little man inside him!"

That's the homunculus fallacy (actually an infinite regress). At some
point, if you want to explain a function, you have to replace the
homunculus by a causal component whose function is transparent and
self-explanatory.
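For instance, here is a deliberately transparent, hypothetical Python
sketch of a shape "detector" with no inner viewer (the grid format and
the function name are my assumptions): each step is a mechanical
comparison whose function is self-explanatory, which is what discharging
the homunculus requires.

    # Hypothetical, deliberately transparent "shape detector": no inner
    # viewer, just a mechanical comparison of how the lit pixels spread
    # across rows and columns. Each step is a self-explanatory causal
    # component; no little man looks at anything.

    def classify_bar(grid):
        """Return 'horizontal', 'vertical' or 'other' for a binary pixel grid."""
        lit = [(r, c) for r, row in enumerate(grid) for c, v in enumerate(row) if v]
        if not lit:
            return "other"
        row_spread = max(r for r, _ in lit) - min(r for r, _ in lit)
        col_spread = max(c for _, c in lit) - min(c for _, c in lit)
        if row_spread == 0 and col_spread > 0:
            return "horizontal"
        if col_spread == 0 and row_spread > 0:
            return "vertical"
        return "other"

    print(classify_bar([[0, 0, 0],
                        [1, 1, 1],
                        [0, 0, 0]]))   # horizontal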

Searle (rightly) invokes the homunculus fallacy when someone tries to
say that it's enough to have interpretable code, executing. He points
out that we still need someone to interpret the code (except in a
dedicated system -- or a grounded one, which is a special form of
dedicated system).

> Mcintosh:
> Dennett and others have tried to discharge the homunculus as follows:
>
> > SEARLE:
> > Since the computational operations of the computer can be analyzed
> > into progressively simpler units, until eventually we reach simple
> > flip-flop, "yes-no", "1-0" patterns, it seems that the higher-level
> > homunculi can be discharged with progressively stupider homunculi,
> > until finally we reach the bottom level of a simple flip-flop that
> > involves no real homunculus at all. The idea, in short, is that
> > recursive decomposition will eliminate the homunculi.
>
> Mcintosh:
> Cognitivists will admit that higher levels of computation, such as
> multiplication, are purely syntactical, and therefore they are
> observer relative and not intrinsic to the physics. But at no lower
> levels does computation ever become suddenly intrinsic, so the
> homunculus fallacy cannot be escaped so easily.

Correct.
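Here is a minimal sketch of the kind of decomposition Dennett has in
mind, in Python rather than flip-flops (the function names are
illustrative assumptions): multiplication is reduced to repeated
addition, and addition to single-bit AND, XOR and shift operations. At
no level does the bit-twiddling become "intrinsically" multiplication;
that reading remains the interpreter's.

    # Hypothetical sketch of recursive decomposition: multiplication reduced
    # to addition, addition reduced to single-bit AND/XOR/shift operations,
    # the software analogue of flip-flop-level events.

    def bit_add(a: int, b: int) -> int:
        """Add two non-negative ints using only single-bit AND, XOR and shifts."""
        while b:
            carry = (a & b) << 1   # bits that generate a carry
            a = a ^ b              # bitwise sum without carries
            b = carry
        return a

    def bit_multiply(a: int, b: int) -> int:
        """Shift-and-add multiplication built on bit_add."""
        product = 0
        while b:
            if b & 1:                      # lowest bit of the multiplier set?
                product = bit_add(product, a)
            a <<= 1                        # shift the multiplicand
            b >>= 1                        # shift the multiplier
        return product

    print(bit_multiply(6, 7))   # 42, under the usual numerical interpretation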

> Searle's next difficulty with cognitivism is that syntax has no
> causal powers. Just as DNA causes particular inherited traits and
> germs cause disease, so cognitivists would like to argue that
> programs underlying brain processes cause cognition. But,
>
> > SEARLE:
> > The implemented program has no causal powers other than those of the
> > implementing medium because the program has no real existence, no
> > ontology, beyond that of the implementing medium. Physically speaking
> > there is no such thing as a separate "program level".

Again, this is a combination of correct and incorrect points. Just code
running on a digital computer, with no peripherals, even if it is
running a simulation of the entire universe, has no "causal powers"
(other than those of the dynamic hardware that is implementing the
code). More important, even though it is interpretable as representing
the universe, that interpretation is not itself part of the code. The
code is just squiggles and squoggles (0's and 1's) at EVERY level,
from the object level to the highest programming language level.

However, if the code is running on a dedicated system (one connected to
telescopes as inputs, say, and anti-meteorite bombs as output, ready to
be launched to destroy a meteorite before it hits the earth) then in
that dedicated system the code DOES have further causal "powers."

The same is true of any computations taking place inside the head of a
T3 robot.
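A purely hypothetical sketch of such a dedicated system, in Python
(there is no real telescope or interceptor API here; every name is
invented for illustration): the computation itself is ungrounded symbol
manipulation, but wired to sensors and actuators its verdicts have
effects beyond the computer implementing them.

    # Purely hypothetical sketch: the same ungrounded code acquires causal
    # consequences only because it is "married" to input and output
    # transducers in a dedicated system.

    from dataclasses import dataclass

    @dataclass
    class SkyReading:
        object_id: str
        distance_km: float
        approach_speed_kms: float

    def collision_imminent(reading: SkyReading, threshold_km: float = 50_000) -> bool:
        """Pure, hardware-independent computation over symbols."""
        return reading.approach_speed_kms > 0 and reading.distance_km < threshold_km

    def run_dedicated_system(sensor_readings, launch_interceptor):
        """The dedication: sensor in, actuator out. Only here does the code's
        verdict have effects beyond the computer implementing it."""
        for reading in sensor_readings:
            if collision_imminent(reading):
                launch_interceptor(reading.object_id)

    # Toy usage with stand-in peripherals:
    readings = [SkyReading("2088-QX", 40_000, 12.3),
                SkyReading("harmless", 900_000, 0.1)]
    run_dedicated_system(readings, launch_interceptor=lambda oid: print("intercept", oid))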

> > SEARLE:
> > The human computer is consciously following rules, and this fact
> > explains his behavior, but the mechanical computer is not literally
> > following any rules at all. It is designed to behave exactly as if it
> > were following rules, and so for practical, commercial purposes it
> > does not matter. Now Cognitivism tells us that the brain functions
> > like the commercial computer and this causes cognition. But
> > without a homunculus, both commercial computer and brain have only
> > patterns and the patterns have no causal powers in addition to those
> > of the implementing media. So it seems there is no way Cognitivism
> > could give a causal account of cognition.

Humans can consciously follow rules, true. Turing Machines follow rules
mechanically. If they are dedicated machines, this may have causal
consequences beyond just the computer (as in the meteorite-destroyer).
No homunculus needed for that. And if it is a grounded T3 robot, the
same is true. No homunculus; and whatever computation is going on, is
really going on, and really has (or shares) the causal "powers" of the
T3 robot.

By the way. If you sit in front of a screen and follow the rule: "Press
the left button if you see red and the right button if you see green,"
you are certainly following the rule consciously, but I challenge you to
explain to me HOW. The explanation of how you do that will have to
invoke an unconscious causal mechanism -- whether or not it is
computational, in whole or in part. Will that mechanism be following a
rule? Does it even matter? YOU are following a rule, and IT is the
causal mechanism underlying your rule-following.
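As a trivially hedged illustration (the function and colour names are my
assumptions), the whole causal content of that rule could be realised by
a mechanism like the one below, and nothing is added to the causal story
by asking whether the mechanism is "following" the rule:

    # Minimal hypothetical sketch: a causal mechanism that implements the
    # red/green button rule without "consciously following" anything.

    def button_for(stimulus_colour: str) -> str:
        if stimulus_colour == "red":
            return "left"
        elif stimulus_colour == "green":
            return "right"
        return "no response"

    for colour in ("red", "green", "blue"):
        print(colour, "->", button_for(colour))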

> Mcintosh:
> A mechanical computer could not literally be following rules since it
> doesn't know what rules are. It behaves in a certain way given its
> input, and this is usually interpreted as rule-following.
> The computer needs a homunculus, the user, in addition to its
> implementing hardware, to perform meaningful computation. For the
> brain to operate as a digital computer it therefore would also require
> a homunculus.

No, it is not a homunculus that is needed, but a causal mechanism, both
for the computer and for the brain. "Rule-following" is a red
herring (introduced by Wittgenstein):

http://krypton.mnsu.edu/~witt/
http://www.cogsci.soton.ac.uk/~harnad/Hypermail/Explaining.Mind97/0184.html

> > SEARLE:
> > how do we reconcile the fact that syntax, as such,
> > has no causal powers with the fact that we do give causal
> > explanations that appeal to programs?...
> > if you know that a certain pattern exists
> > in a system you know that some cause or other is responsible for
> > the pattern. So you can, for example, predict later stages from
> > earlier stages.
>
> Mcintosh:
> So when a machine is implementing a program, it is implementing the
> intentions of the homunculus. Causal explanations can now be given,
> as the programmer has determined how the program will operate. The
> program should not be interpreted as determining its own behaviour.

Chris, you have been too uncritical of Searle. Remember the cake-baking
recipe. It could have been implemented as a dedicated cake-baking
system. All the causality there is autonomous and intrinsic; the
programmer, if any, is irrelevant. It is only when an interpreter is
needed to mediate that there are problems with causality.

> > SEARLE:
> > We try to discover the programs being implemented in the brain by
> > programming computers to implement the same programs. We do this in
> > turn by getting the mechanical computer to match the performance of
> > the human computer (i.e. to pass the Turing Test) and then getting the
> > psychologists to look for evidence that the internal processes are the
> > same in the two types of computer… to test the hypothesis we look for
> > indirect psychological evidence, such as reaction times.
>
> Mcintosh:
> This is the research project that seeks to understand brain processes
> by analysing the performance of computers over similar processes.
> Searle disapproves though. If we actually knew the processes, the
> explanation given via computers could be ignored. Also, the
> explanation would not be acceptable for other sorts of systems that
> we could simulate computationally.
> Successful simulation of the weather would not give us a perfect
> understanding of the underlying physical processes.

You have accepted Searle too uncritically again. "If we actually knew
the processes, the explanation given via computers could be ignored."
Of course; everything else could then be ignored. But we are talking
about how to go about getting to know the process! Searle seems to
think looking inside the brain is the right way, but in general the
brain does not wear its functional principles on its sleeve, nor even
inside its fabric. Cognitive science can use all the help it can get,
including help from AI, both weak and strong!

> > SEARLE:
> > you cannot explain a physical system such as a typewriter or a brain
> > by identifying a pattern which it shares with its computational
> > simulation, because the existence of the pattern does not explain how
> > the system actually works as a physical system.

According to the Church-Turing Thesis (and the experience of
generations of computer-modellers) you should be able to understand
how any physical system works using computer modelling, if you manage
to come up with the right model.

> > SEARLE:
> > In the brain computer
> > there is no conscious intentional implementation of the algorithm as
> > there is in the human computer, but there can't be any nonconscious
> > implementation as there is in the mechanical computer either,
> > because that requires an outside homunculus to attach a
> > computational interpretation to the physical events.

I hope by now you see how both these points are at best irrelevant, at
worst, just plain wrong.

> > SEARLE:
> > The most we
> > could find in the brain is a pattern of events which is formally
> > similar to the implemented program in the mechanical computer, but
> > that pattern, as such, has no causal powers to call its own and hence
> > explains nothing.

This is merely repeating that the ungrounded computer simulation will
not actually be thinking, but not that it cannot simulate, and help us
get a fully causal understanding of the (hybrid) system (the grounded
T3 robot) that WILL actually be thinking; nor even that the computation
itself cannot be part of what actually goes on in that T3 robot.

> > SEARLE:
> > In the case of the brain, none
> > of the relevant neurobiological processes are observer relative
> > (though of course, like anything they can be described from an
> > observer relative point of view) and the specificity of the
> > neurophysiology matters desperately.

We should see by now that it is not the "observer-relativity" of
computation that is the problem, but its ungroundedness. We need a
system with the "causal powers" of a grounded T3 robot, not necessarily
those of a T4 synthetic brain or a T5 real brain.

http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad00.turing.html

> Mcintosh:
> In general, the most accurate simulations are achieved by knowing how
> something works in the first place. However, what if we couldn't
> discover a complete neurobiological account of the brain? Simulations
> may then be the only way to draw new inferences.

Correct. Nor is it clear that we even need all of T5 or T4. T3 should be
enough.

> Mcintosh:
> Intuitively, brain processes that seem to be non-computational, such
> as emotions perhaps, suggest that the brain could not be just an
> information processing system.

Why just emotions? Does it not FEEL like something to think, understand,
mean?

> Mcintosh:
> But aren't there still conscious algorithmic brain processes that could
> be interpreted as information processing? What if one area of the brain
> directs another specialist area to process information, giving it the
> necessary symbolic inputs and then making use of the computed results?
> It is hard to accept Searle's suggestion that the brain does no
> computation.

Correct. Searle has shown cognition can't be ALL computation, but not
that SOME of it cannot be.

> Mcintosh:
> It is worth
> noting, however, that the brain excels at parallel tasks such as
> face-recognition, and is weaker at sequential tasks, especially if
> they involve numbers. So in any event it seems we already have an
> intuitive basis for denying that the brain could be a digital computer.

For denying that it could be JUST a digital computer.

Stevan Harnad


