Re: Searle: Is the Brain a Digital Computer?

From: HARNAD Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Mon Mar 19 2001 - 20:45:43 GMT


On Thu, 8 Mar 2001, Bon Mo wrote:

> Searle: Is the Brain a Digital Computer?
> http://cogsci.soton.ac.uk/~harnad/Papers/Py104/searle.comp.html

> Mo:
> These thoughts are from your
> associative memory banks. These calls are often from long term memory.
> Most thoughts are from routines that you have performed many times.
> The unconscious thought process may actually be a layer behind the
> conscious level. It could be more abstract and recall more random memory
> sources,

You have translated ordinary experiences into computer language. You
have to set that aside to think of these questions, because if you
substitute computational theories for your previous descriptions of
your thought processes -- if you start to say "I computed" instead of
"I thought", for example -- then you will simply answer the question
of whether or not cognition is just computation by your choice of
vocabulary!

> Mo:
> this is why people believe that they are not really thinking, as
> these thoughts have no relevance to what they are doing.

Better to pick a more intense mental state: Can people believe they
are not really feeling pain when they really are, or believe they are
really feeling pain when they are really not? This is back to
Descartes' question about what you can and cannot be sure about.

Can I say "I am thinking about a delicious food now" and someone can
say (correctly): Wrong, you are not!

Or, vice versa: "I am not thinking about food now" and someone says
(correctly): Wrong, you are!

> Mo:
> Assuming that consciousness exists,

Assuming? When I am feeling sad, am I just ASSUMING I feel sad? Can I
be wrong about what I feel (I don't mean wrong about feeling sad,
given that everything is going so well, I mean wrong THAT I am feeling
sad)?

> Mo:
> There may literally be billions of ways a human brain can be
> modeled.

T3/T4-scale?

> Mo:
> So how could they be generalised to such an abstract level that
> more than one brain may work the same?

How is that different from modeling the liver?

> Mo:
> The main worry is that with such
> abstraction there may be over-generalisation, which removes functionality.

That's certainly a worry. But T3 sounds like a safe target. T2 and t1
may be underdetermined (Granny Objection #10) and T4 and T5 may be
overdetermined (irrelevant extra functionality).

> Mo:
> What functions would need to be generalised from the human brain?

The ones that help pass T3.

> Mo:
> how would they interact to give any reasonable representation of how
> the human brain works?

Don't know what you mean here....

> Mo:
> Most of the brain is just sensory material connected to external sensory
> devices.

And motor.

> Mo:
> So the brain must be the organ that gives us intelligence.

That was never in doubt. The question is how? And what sort of
artificial system could do it too? For example, can a purely
computational system do it?

Stevan Harnad


