Re: Harnad: Cognition Isn't Computation

From: Boston Robert (rtb198@ecs.soton.ac.uk)
Date: Tue May 01 2001 - 13:33:50 BST


>HARNAD:
>With operations as elementary as these, everything that mathematicians,
>logicians and computer scientists have done so far by means of logical
>inference, calculation and proof can be done by the machine. I say "so
>far," because it is still an open question whether people can "compute"
>things that are not computable in this formal sense: If they could, then
>CTT would be false.

Boston:
It seems to me that the set of things called computable is defined as the
set of things a Turing Machine (TM) can do: logical inference,
calculation, proof, and so on. If something is computable, it can be
derived, proved, or logically followed from something else. Anything
people could "compute" that is not computable this way on a TM cannot be
called calculation. If it cannot be logically derived on a TM, it cannot
be semantically interpreted in any systematic way; it would be a branch of
computation not connected to the rest of the tree.
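To make that concrete, here is a minimal sketch (my own illustration, not
from Harnad's paper) of what "computable" amounts to: a Turing machine is
nothing but a finite transition table applied to symbol shapes on a tape,
and "computable" just means "producible by some such table". The example
machine and its names are hypothetical, chosen only to illustrate.

    # A minimal Turing machine simulator in Python: purely shape-based
    # symbol manipulation. Transitions map (state, symbol) to
    # (new symbol, head move, new state); meaning appears nowhere.
    def run_tm(tape, transitions, state="start", blank="_", max_steps=1000):
        tape = dict(enumerate(tape))        # sparse tape: position -> symbol
        head = max(tape) if tape else 0     # start at the rightmost cell
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = tape.get(head, blank)
            new_symbol, move, state = transitions[(state, symbol)]
            tape[head] = new_symbol
            head += 1 if move == "R" else -1
        cells = range(min(tape), max(tape) + 1)
        return "".join(tape.get(i, blank) for i in cells).strip(blank)

    # Hypothetical example machine: binary increment. Scanning left, 1s
    # become 0s until a 0 (or blank) is found, which becomes 1.
    INCREMENT = {
        ("start", "1"): ("0", "L", "start"),
        ("start", "0"): ("1", "L", "halt"),
        ("start", "_"): ("1", "L", "halt"),
    }

    print(run_tm("1011", INCREMENT))  # "1100" -- interpretable as 11 + 1 = 12

Anything people could do that no such transition table can reproduce
would, by this definition, fall outside computation altogether.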

>HARNAD:
>Meaning does not enter into the definition of formal computation.
>So although it is usually left unstated, it is still a criterial, if not a
>definitional property of computation that the symbol manipulations must
>be semantically interpretable.

Boston:
It is undesirable to start using meaning in the deffinition of computation.
The power of computation comes from the fact its symbopls are
abritrarily given meaning, to start inflicting this on the deffinition of
computation is an unneeded constraint. It is true however that meaning
is needed in order to use computation otherwise it is "a meaningless
syntactic game" but that is done by the interpreter and is not a property
of the computation.
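A small sketch of that last point (my own illustration, not Harnad's): one
and the same piece of syntax can be handed to different interpreters, each
of whom reads a different meaning off it, while the symbol manipulation
itself stays untouched.

    # One purely syntactic operation, two readings supplied by the
    # interpreter. The operation itself knows nothing about either.
    def combine(x, y):
        return x + y          # juxtapose two strings of "|" strokes

    a, b = "|||", "||"        # shapes only

    # Reading 1: strokes as unary numerals -> combine is addition (3 + 2)
    as_number = len(combine(a, b))    # 5

    # Reading 2: strokes as beats in a bar -> combine joins two rhythms
    as_rhythm = combine(a, b)         # "|||||"

    print(as_number, as_rhythm)

The meaning sits entirely with whoever does the reading; it need not be
built into the definition of computation.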

>HARNAD:
>In other words, the set of semantically interpretable formal symbol
>systems is surely much smaller than the set of formal symbol systems
>simpliciter, and if generating uninterpretable symbol systems is
>computation at all, surely it is better described as trivial computation,
>whereas the kind of computation we are concerned with (whether we are
>mathematicians or psychologists), is nontrivial computation: The
>kind that can be made systematic sense of.
>And I suspect that the difference will be an all-or-none one, rather than a
>matter of degree.

Boston:
I don't believe it is wise or useful to label some computations trivial
and others non-trivial. I am not sure there is a simple classification for
deciding or detecting trivial computations: there are several different
types of formal language with increasingly complex grammars, and each
grammar builds on a simpler one. Trivial computations are what non-trivial
computations are built from; by examining a sub-part of a non-trivial
symbol system you may find a trivial one.

>HARNAD:
>a wall can be excluded from the class of computers (or included only as
>a trivial computer).

Boston:
I think it best to include the wall as a trivial computer and a possible
component of a non-trivial computer.

>HARNAD:
>So we are interested only in nontrivial computation.

Boston:
We only suspect that non-trivial computation is insufficient, so we shall
consider non-trivial computation, which may be composed of trivial
computation.

>HARNAD:
>So we are interested only in nontrivial computation. That means symbols,
>manipulated on the basis of their shapes only, but nevertheless amenable
>to a systematic interpretation
>the symbols of natural language likewise have this property of
>arbitrariness in relation to what they mean

Boston:
Computation is implementation independent: the symbols used are arbitrary
with respect to what they mean. Useful computation is systematically
interpretable, otherwise it is pointless; this requirement has been called
the cryptographer's constraint.

>HARNAD:
>For example, in formal Peano arithmetic, the equality symbol "=" is
>manipulated purely on the basis of its shape

Boston:
"=" was chosen as the symbol for equality because it composed of two
equal length paralell straight lines, nothing more equal could be thought
of.
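A toy sketch of how that shape-based manipulation looks (my own encoding,
purely illustrative): the two Peano rules for addition, x + 0 -> x and
x + S(y) -> S(x + y), can be applied to numeral strings without the "="
glyph carrying any meaning at all; swap it for any other shape and the
computation is unchanged.

    # A toy fragment of Peano arithmetic as pure symbol manipulation.
    # Numerals are strings built from "0" and the successor "S(...)".
    def add(x, y):
        # rule: x + 0 -> x
        if y == "0":
            return x
        # rule: x + S(y') -> S(x + y'), peeling one successor off y
        inner = y[2:-1]
        return "S(" + add(x, inner) + ")"

    def equation(lhs, rhs, eq_glyph="="):
        # the "equality" symbol is an arbitrary shape supplied by the user
        return lhs + " " + eq_glyph + " " + rhs

    two, three = "S(S(0))", "S(S(S(0)))"
    five = "S(S(S(S(S(0)))))"
    print(equation(add(two, three), five))        # interpretable as 2 + 3 = 5
    print(equation(add(two, three), five, "#"))   # same derivation, new glyph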

>HARNAD:
>A cat on a mat can be interpreted as meaning a cat on the mat, with the
>cat being the symbol for cat, the mat for mat, and the spatial
>juxtaposition of them the symbol for being on. Why is this not
>computation? Because the shapes of the symbols are not arbitrary in
>relation to what they are interpretable as meaning, indeed they are
>precisely what they are interpretable as meaning.

Boston:
A cat on a mat is not computation: it is not implementation independent,
because the symbols used to describe it are in fact the things being
described. If the cat on a mat were moved to different hardware and the
cat were replaced by a dog, the interpretation would change to a dog on a
mat.

>HARNAD:
>Perhaps it was quite natural to conclude under the circumstances that
>since (1) we don't know how cognizers cognize, and since (2)
>computation can do so many of the things only cognizers can do,
>cognition is just some form of computation (C=C). After all, according
>to the CTT, computation is unique and apparently all-powerful; and
>according to the CTTP, whatever physical systems can do, computers
>can do.

Boston:
It is tempting to say cognition is computation because computation can
do so much.

>HARNAD:
>Consider the difference between a causal/functional system that
>adaptively avoids tissue damage and another that does the same thing,
>but feels/avoids pain in so doing: one is tempted to speak of the
>functional/causal role of the pain, but whatever functional/causal role
>one assigns it, one could just as well assign to its physical substrate,
>and
>then one could just as well subtract the pain and refer only to the
>functional causal role of its physical substrate. And what is true -- or
>untrue -- of pain, is true of belief and desire too, indeed of all mental
>states. They all have this peekaboo relation to their physical substrate

Boston:
Where would a symbol for pain be grounded? A T3 robot may be able to
token cats, mats and fat bats, but how could pain or any other mental
state be grounded? Grounding symbols is an important ability of a device
that may cognise, but it fails where there is no ground for the symbols.
Pain (and other mental states) could be grounded in the physical
substrate, but that isn't good enough. Even cats on mats are not merely
feline mammals upon woven floor coverings; there are mental states
attached (when someone sees a cat on a mat, most people instinctively go
"ahhh, aren't you a sweetie, here puss puss puss", and so on).

>HARNAD:
>Looking for meaning in such a system is analogous to looking for
>meaning in a Chinese/Chinese dictionary when one does not know any
>Chinese: All the words are there, fully defined; it is all systematic and
>coherent. Yet if one looks up an entry, all one finds is a string of
>meaningless symbols by way of definition, and if one looks up each of
>the definienda in turn, one just finds more of the same.

Boston:
Looking for meaning in a system with grounded symbols is analogous to
looking for meaning in a Chinese/Chinese dictionary with pictures in it,
when you don't know any Chinese. You can find out what a cat is, but what
picture goes next to pain? The symbol grounding problem states the
necessity of robotic interaction with the real world, but there is also a
need for an emotional connection to the real world. Emotions are probably
hardwired into us (the first thing babies do is scream) as a result of
evolution; maybe this could be hardwired into a T3?

I am not aware of any interpretation going on in my head as I think. I
think SGR=C is false (symbol grounded robots can't cognise): mental states
have no formal system or logic to them, and neither do thoughts. Mental
states are not grounded and not computational; they are influenced by
anything and everything. We can be in two opposite states simultaneously,
happy and sad at the same time; this is illogical and therefore
uncomputational (even if happiness and sadness can be grounded).


