Re: Searle: Is the Brain a Digital Computer?

From: HARNAD Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Mon Mar 19 2001 - 15:42:38 GMT


On Fri, 23 Feb 2001 wrote:

> Salcedo:
> what if the mind itself, the thing we call
> consciousness, is nothing but reactions to some more complex set of
> procedures or operations, that can actually be defined formally and
> described in some analogous way to computer programs, syntactically?

If it is, it is certainly not at all obvious how or why it is. (Why
should it FEEL like something to be the reaction to a set of
procedures?) And, even more important, what if it's not?

The logic is: If the moon is nothing but a hunk of green cheese, then
the moon is just a dairy product. But why should I think the moon is
nothing but a hunk of green cheese? Just because it looks like a hunk
of green cheese?

> Salcedo:
> When asking if we
> can actually simulate brain operations on a digital computer, we have
> to remind ourselves that a simulation is not by itself a complete
> representation of what it simulates.

Kid-sib is confused by this language of "operations," "simulations" and
"representations."

He just wants to know whether or not that computer over there, whatever
code it is running (and whether it's an operation, simulation, or
representation), is really thinking, the way his brain is really
thinking. It's the same as asking whether or not something is
really flying, the way a plane is really flying.

> SEARLE:
> > At some level of description brain processes are syntactical; there are
> > so to speak, "sentences in the head".
>
> Salcedo:
> If at that level of description we can get a syntactical representation
> of the process, then by Church's Thesis we can define them on a digital
> computer. And as such, the brain itself would be a digital computer.
> We would need to understand how they are semantically related and why
> they mean what they do, and how and what they actually mean.

But wouldn't exactly the same thing be true of sentences on a piece of
paper? Yet we would never ask ourselves whether the paper is thinking or
understanding. What difference does it make if the sentences are
implemented dynamically? The question still remains: is that really what
thinking is? Is there any understanding going on there?

> SEARLE:
> > Now it seems reasonable to suppose there might also be a whole lot of mental
> > processes going on in my brain nonconsciously which are also computational.
>
> Salcedo:
> But what is consciousness? How can we distinguish something that is
> conscious to me from something that is not?

Now that (for a change) is an easy one (as long as you don't confuse
the ontic question -- Am I really having a conscious thought? -- with
the epistemic question -- How can someone else tell whether or not I am
having a conscious thought?).

The answer is simple, and exactly the same as the answer to the question
"Does this pinch really hurt?" You know the answer to that for sure. You
can't be wrong. And no one else can know for sure, either what you felt,
or that you felt.

That is consciousness. The pinch you did not feel, you are not conscious
of.

> Salcedo:
> A thought is itself a conscious
> process because it's part of reasoning, or is it unconscious because it
> happens naturally?

It is conscious because you feel yourself thinking it. An "unconscious
thought" is a contradiction in terms. (If I didn't think it,
consciously, who/what did?) No such problem with Searle's unconscious
PROCESSES, because no one is THINKING them, they are simply happening,
in the same way that your balance and your heart-rate and your breathing
are maintained by unconscious processes. But if it weren't for the
conscious part, you would be a Zombie, and thinking would equal T2 or T3
by definition (because there would be nothing else for it to be!). But
there IS something else, and each of us knows exactly what that
something else is, and hence it is not as easy to answer whether or not
computation alone is thinking as it is to answer whether or not
computation alone is flying (it's not).

> Salcedo:
> No-one ever pre-defines what thought he/she is going
> to have. If they did, they would be processing a thought as well.
> What matters here is which of these can be algorithmically defined. If
> assuming that both could be described as an algorithm, they would then be
> computational and thus we could simulate them on a digital computer.

One gets out of an assumption exactly what one puts into it. If we
ASSUME thought is just computation, then of course it follows that
thought is just computation. The question we are addressing here,
however, is whether or not that assumption (a hypothesis, really) is
correct.

> Salcedo:
> How can we actually compare internal representations between something
> that, for now, is completely nonconscious as a computer program, and
> something that can either be done consciously/unconsciously by a human
> brain?

We are not "comparing internal representations." We are asking whether or
not the system really thinks.

> Salcedo:
> If different people think in different ways, and therefore have
> completely different outcomes even while having exactly the same
> internal processes, how can we actually say that the computer mirrors
> the brain computer?

The differences between people are irrelevant. They all think (and they
all pass T2.) The question is whether or not the T2-passing computer
really thinks.

> Salcedo:
> So what if I believe in the existence of a soul and that mere belief
> could be possibly transcribed algorithmically to a set of computational
> processes?
> Then a mechanical computer that would mirror these computational
> processes would believe he had a soul?
> A belief doesn't mean it is the truth. So would a computer be fooled
> into thinking he had a soul, just as I was?

You are just going in circles. The same question can be asked of
"belief" as of thinking: Does it really believe, or is it just symbols
that are interpretable as beliefs (just like sentences on paper)?

> SEARLE:
> > Computational states are not discovered within the physics, they are assigned
> > to the physics.

A physical system is a certain dynamical system (e.g., a planet, a
spring, a plane). Physical states can be simulated on a computer, but
the computer (which is also a dynamical, physical system) is not a
planet, a spring, or a plane. It lacks all those physical properties. It
is just a symbol system that is interpretable as having those properties
(just like a sentence on paper).

> Salcedo:
> Computational states are thus not intrinsic to the physics. This is why
> this is so important to the discussion whether or not the brain
> processes are computational. They are not said to be computational
> because of the physical and chemical processes that occur in the brain,
> but only because they are actually equivalent to a simple symbol
> manipulation, be it neurons changing states or 1s and 0s manipulation.
> It is completely relative to the observer who assigns the process as
> being computational.

Kid-sib is confused upon hearing this: I understand that gravitational
attraction is a physical property that is "intrinsic" to physical
bodies. Same is true of chemical bonds. So far we have covered planets
and brains (gravitation, chemistry). Where do thinking and computation
come in?

A programme can be run on an infinity of different hardwares. The
physics of the hardware is irrelevant ("implementation-independence") as
long as it is implementing the right programme. So the question about
the brain is whether, in generating real thinking, it is just being the
(irrelevant) hardware for running a certain programme, or is there
something more to generating thinking than just implementing the right
programme?
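The point about implementation-independence can be sketched in code (a
toy illustration of my own, not anything in Searle's paper): the same
computation, incrementing a binary numeral, realised in two physically
unlike ways. What makes them "the same programme" is the input/output
mapping over the symbols, not the substrate that carries it out.

```python
# Two unlike "implementations" of one and the same computation:
# incrementing a string of "0"s and "1"s. (Illustrative toy only.)

def increment_by_rewriting(bits: str) -> str:
    """Implementation 1: pure right-to-left symbol rewriting with a carry."""
    out, carry = [], 1
    for b in reversed(bits):
        total = (b == "1") + carry
        out.append("01"[total % 2])  # append the low bit as a symbol
        carry = total // 2
    if carry:
        out.append("1")
    return "".join(reversed(out))

def increment_by_arithmetic(bits: str) -> str:
    """Implementation 2: a detour through the host machine's integers."""
    return format(int(bits, 2) + 1, "b")

# Physically different processes, identical computation:
for n in ["0", "1", "1011", "1111"]:
    assert increment_by_rewriting(n) == increment_by_arithmetic(n)
```

The question about the brain is then whether, in generating thinking, it
is merely one more interchangeable substrate in this sense, or whether
something about its physics matters.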

> Salcedo:
> This is by far the main debate between science and philosophy.
> Philosophers believe in the meta-physical mind, something that belongs
> to us but doesn't at the same time.

Not at all. Philosophers are asking exactly the same question about
whether thinking is just computation as we can ask about whether flying
is just computation.

> SEARLE:
> > So the puzzle is, how do we reconcile the fact that syntax, as such, has no
> > causal powers with the fact that we do give causal explanations that appeal to
> > programs?

What Searle really means to say here with this unfortunate "causal
power of the brain" business is this: We know that the right
computations, implemented on a computer, will have certain "causal
powers": They can solve problems, answer questions, simulate physical
systems like planes, flying, etc. But we also know that there are
certain "causal powers" they lack: They can simulate flying, but they
cannot actually fly. Could this be true about thinking too? For then the
brain has the "causal power" to think, whereas the computer does not,
just as the plane has the "causal power" to fly, whereas the computer
does not.

The reason thinking is trickier is that anyone can observe whether or
not there is flying going on, whereas the only one who can observe
whether or not thinking is going on is the thinker!

> SEARLE:
> > So how do we get computation into the brain without a homunculus?

> Salcedo:
> When considering the brain, who is the homunculus? It's obviously the
> owner of the brain. If we now consider a brain without a homunculus
> what would it be? How can it still compute? Would it compute?

This reverse question is of no interest: We were asking about whether a
computer, computing, is really thinking.

We know a brain, thinking, is really thinking. Whether it is also doing
some computation as part of thinking is not an interesting question. (Of
course it is doing some: at the very least, when I am calculating
something, my brain is doing it too.)

And the "homunculus" is merely the thinker. We know that the brain
generates a thinker, but we don't know how. What is on trial here is
whether or not the answer to that "how" is merely: by implementing the
right computations.

Stevan Harnad



This archive was generated by hypermail 2.1.4 : Tue Sep 24 2002 - 18:37:24 BST