Re: Computation

From: Lucas, Melody (MFL93PY@psy.soton.ac.uk)
Date: Sun Feb 04 1996 - 16:09:47 GMT


Computation is the manipulation of symbols on the basis of their shapes
(not their meanings), in a way that is systematically interpretable.
Computational systems are symbol systems. The success of AI in mimicking
human cognition has encouraged some psychologists to think that, because
a computer and a human can be functionally indistinguishable, they must
be one and the same. The Church-Turing thesis is taken to support this:
anything that is effectively computable can be computed by a Turing
machine (T1), a machine which reads and writes symbols on an indefinitely
long tape (binary codes suffice). The machine moves along the tape,
writing, erasing and halting on the basis of rules which dictate every
action depending on the symbol under its head and on its internal state.
Turing proposed that the question of whether computation is cognition
could be tested by seeing whether a computer (T2) could make someone
believe it was a person: in the test he proposed, subjects correspond
with a 'penpal' which is in fact a computer. But, contrary to popular
belief, passing this test would not prove C=C.
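
To make the tape-and-rules picture concrete, here is a minimal sketch in
Python (my own illustration, not part of the original argument) of a
Turing machine whose rule table adds 1 to a binary number written on its
tape; the state names and rules are invented for the example.

    # Each rule maps (state, symbol read) -> (symbol to write, head move,
    # next state).  This machine adds 1 to a binary number; the head
    # starts on the leftmost digit.
    RULES = {
        ('right', '0'): ('0', +1, 'right'),  # scan right to the end
        ('right', '1'): ('1', +1, 'right'),
        ('right', ' '): (' ', -1, 'carry'),  # ran off the end: start adding
        ('carry', '1'): ('0', -1, 'carry'),  # 1 + 1 = 0, carry leftwards
        ('carry', '0'): ('1',  0, 'halt'),   # absorb the carry and stop
        ('carry', ' '): ('1',  0, 'halt'),   # carried past the leftmost digit
    }

    def run(tape, state='right', head=0):
        tape = list(tape)
        while state != 'halt':
            symbol = tape[head] if 0 <= head < len(tape) else ' '
            write, move, state = RULES[(state, symbol)]
            if head < 0:                 # the tape is unbounded: grow on demand
                tape.insert(0, write)
                head = 0
            elif head >= len(tape):
                tape.append(write)
            else:
                tape[head] = write
            head += move
        return ''.join(tape).strip()

    print(run('1011'))                   # prints '1100' (binary: 11 + 1 = 12)

Everything the machine does is dictated by the shape of the symbol under
its head and by its current internal state, which is exactly the sense
in which computation is shape-based symbol manipulation.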

A characteristic shared by computation and cognition is
implementation-independence (Pylyshyn disagrees with this). We are
investigating the formal properties, not the physical ones: different
hardware running different software may nevertheless be performing the
same computation. Likewise, the cognition performed by a human being is
independent of the particular biological composition of the brain
(although obviously correlated with it). Further evidence offered for
C=C is that computers can effectively simulate events; for example, a
computer can simulate an aeroplane in flight. However, although the
simulation is computation, the simulated plane is not a real plane and
is not really flying.
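
As a small illustration of implementation-independence (my own, with
made-up function names): the two procedures below differ in how they
work, and could run on entirely different hardware, yet they implement
the same computation because their formal input/output mapping is
identical.

    # Two physically and procedurally different implementations of one
    # computation (addition of non-negative integers); only the formal
    # input/output mapping defines the computation.

    def add_arithmetic(a, b):
        return a + b                 # native machine addition

    def add_by_counting(a, b):
        total = a
        for _ in range(b):           # repeated increment instead
            total += 1
        return total

    assert all(add_arithmetic(a, b) == add_by_counting(a, b)
               for a in range(20) for b in range(20))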

Searle presented the Chinese Room argument to illustrate what a
computer actually does and to argue against T1 and T2. He suggested that
a person could manipulate a set of symbols correctly without
understanding their meaning, simply by following rules of the form 'if
X, then Y; if P, then not Q', and so on. The man in the Chinese Room can
only use the Chinese language as if he were consulting a Chinese-Chinese
dictionary: one symbol just leads to an explanation made of yet more
squiggles and squoggles. This leads to the symbol grounding problem:
people understand because they ground the symbols in sensory experience.
Even a robot with light transducers and pressure sensors (T3) does not
thereby cognize, because transduction is not computation. And even if
such a robot could ground symbols, Searle would deny it intrinsic
intentionality, i.e. it would not be capable of autonomous thought; an
external interpreter would always be necessary.
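
To see how rule-following can look like understanding while involving
none, here is a sketch of my own (with an arbitrary toy rule book, not
Searle's example):

    # A Chinese-Room-style rule book: purely shape-based rules of the
    # form 'if you receive this squiggle, send back that squoggle'.
    RULE_BOOK = {
        '你好吗': '我很好',
        '今天天气如何': '天气很好',
        '谢谢': '不客气',
    }

    def chinese_room(incoming):
        # Match the shape of the input and emit whatever shape the
        # rules dictate; no meaning is consulted anywhere.
        return RULE_BOOK.get(incoming, '请再说一遍')

    print(chinese_room('你好吗'))    # replies '我很好' without understanding

From the outside the replies look competent, but inside there is only
shape-matching, which is the point of the Chinese Room.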

To accept cognition and computation as one and the same would be
homuncular: if it held, humans would merely manipulate symbols and would
need a 'little man' inside to read and understand them. This cannot be
right, since we do understand the meaning behind symbols, and accepting
a homunculus only passes the buck. The confusion arises because
computers can appear to be understanding symbols. Computation can
describe part of the function of cognition, but it is not the complete
story.


