Re: Harnad (2) on Computation and Cognition

From: Pentland, Gary (gp397@ecs.soton.ac.uk)
Date: Fri Apr 07 2000 - 16:48:01 BST


On Mon, 27 Mar 2000, Terry, Mark wrote:

> COMPUTATION IS JUST INTERPRETABLE SYMBOL MANIPULATION; COGNITION ISN'T
> HARNAD, Stevan
> http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad94.computation.cognition.html
>
> > HARNAD:
> > not everything is
> > a computer, because not everything can be given a systematic interpretation;
> > ... mental states will not just be the
> > implementations of the right symbol systems, because of the symbol grounding
> > problem: The interpretation of a symbol system is not intrinsic to the
> > system; it is projected onto it by the interpreter. This is not true of our
> > thoughts.
>
> TERRY:
> There may be some argument here along the lines of "we just interpret
> our thoughts from some internal symbol system, and project a meaning
> onto them".
> This extra layer of abstraction doesn't actually matter though, as even
> if we give meaning to internal squiggles and squoggles, the
> interpretation is still intrinsic in the system (our brains).

Yes, but the fact that symbols are grounded, even if only in our own
minds, still means that they are grounded. Through language we can
convey the same mental state to another person; for that to work, the
grounding of the symbols must be similar in both minds and the
interpretation must be the same, so the systems (brains) must be
similar.
 
> > HARNAD:
> > We must accordingly be more than just computers. My guess is that
> > the meanings of our symbols are grounded in the substrate of our robotic
> > capacity to interact with that real world of objects, events and states of
> > affairs that our symbols are systematically interpretable as being about.
>
> TERRY:
> And computers must therefore be less than us. It is interesting that
> Harnad supposes that interaction is key. Defining what level this
> interaction must occur at would seem an important problem. ie, is being
> told what a donkey looks like enough, or do we have to see a donkey, or
> do we have to see a donkey in the correct context to be able to
> correctly identify another donkey.

I can identify something from a good description, but I suppose I would
link that description to my own interactions with the world. How many
interactions do you need to successfully ground all symbols?

> > HARNAD:
> > Let me declare right away that I subscribe to
> > what has come to be called the Church/Turing Thesis (CTT) (Church 1956),
> > which is based on the converging evidence that all independent attempts to
> > formalise what mathematicians mean by a "computation" or an "effective
> > procedure," even when they have looked different on the surface, have turned
> > out to be equivalent (Galton 1990).
>
> TERRY:
> So do I, if only for the reason that no one has been able to disprove
> it.
> This is just to remind us that if we accept this, we know the limits of
> computation, and can't make brash claims about what computers "may be
> able to do". I'll assume we are all familiar with the Turing machine's
> operation.
> Regarding this formal model of computation:

Yes, I agree. However, if this is a fixed limit, does this subject (AI)
have a purpose, or should it be renamed to something more realistic?

Alternative Intelligence?
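
Since Terry assumes we are all familiar with the Turing machine's
operation, here is a minimal sketch of one in Python (my own
illustration, not from Harnad's paper): a finite rule table that reads
and writes symbols on a tape. The particular rule table below merely
increments a binary number, but the point is that every step is driven
purely by the shape of the current symbol and the current state.

    # Minimal Turing machine: a finite rule table plus a tape of symbols.
    # This example machine increments the binary number on the tape.
    def run_turing_machine(tape, rules, state="start", blank="_", steps=1000):
        tape = dict(enumerate(tape))          # position -> symbol
        pos = 0
        for _ in range(steps):
            if state == "halt":
                break
            symbol = tape.get(pos, blank)
            state, new_symbol, move = rules[(state, symbol)]
            tape[pos] = new_symbol
            pos += 1 if move == "R" else -1
        return "".join(tape[i] for i in sorted(tape)).strip(blank)

    # Rules for binary increment: scan to the right end, then carry leftwards.
    rules = {
        ("start", "0"): ("start", "0", "R"),
        ("start", "1"): ("start", "1", "R"),
        ("start", "_"): ("carry", "_", "L"),
        ("carry", "1"): ("carry", "0", "L"),
        ("carry", "0"): ("halt",  "1", "R"),
        ("carry", "_"): ("halt",  "1", "R"),
    }

    print(run_turing_machine("1011", rules))  # prints 1100

The CTT is the conjecture that anything a mathematician would call an
"effective procedure" can be carried out by some such rule table.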

> > HARNAD:
> > it is still an open question whether people can "compute" things
> > that are not computable in this formal sense: If they could, then CTT would
> > be false. The Thesis is hence not a Theorem, amenable to proof, but an
> > inductive conjecture supported by evidence; yet the evidence is about formal
> > properties, rather than about physical, empirical ones.
>
> TERRY:
> It's good to keep the above in mind - CTT isn't a theorem. It has not
> yet been disproved, and subscribers to it believe it never will.

If we can understand a Goedel sentence (which is not computable), then
CTT does not apply to us. But if you added that sentence to the system
as a new fact, another Goedel sentence would result, and the system
would have to be infinitely complex to deal with them all.

Is there a point at which a system becomes so complex that it no longer
matters whether it is provable or not?
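
For concreteness, the sentence in question has roughly the following
shape (this is the standard construction, not something from Harnad's
paper): within a consistent formal system F one can build a sentence
G_F which, read through the arithmetical coding, says of itself that
it is not provable in F:

    G_F  \leftrightarrow  \neg \mathrm{Prov}_F(\ulcorner G_F \urcorner)

F cannot prove G_F (assuming F is consistent), yet we can see that it
is true; and adding it as a new axiom just yields a stronger system
with its own Goedel sentence, which is the regress described above.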

> > HARNAD:
> > There is a natural generalisation of CTT to physical systems (CTTP).
> > According to the CTTP, everything that a discrete physical system can do (or
> > everything that a continuous physical system can do, to as close an
> > approximation as we like) can be done by computation. The CTTP comes in two
> > dosages: A Weak and a Strong CTTP, depending on whether the thesis is that
> > all physical systems are formally equivalent to computers or that they are
> > just computers.
>
> TERRY:
> Harnad points out that much of his following argument is reliant on his belief
> in both CTT and CTTP.

It would be nice if Harnad would define what HE means by "equivalent to
a computer".
 
> > HARNAD
> > shape-based operations are usually called "syntactic" to contrast them with
> > "semantic" operations, which would be based on the meanings of symbols,
> > rather than just their shapes.
>
> TERRY:
> As we know. Just keep it in mind below:
>
> > HARNAD:
> > Meaning does not enter into the definition of formal computation.
>
> TERRY:
> This is clearly the crux of the argument. Harnad then uses the example
> of the first time you were formally taught arithmetic, or similar.

But isn't computation "semantically interpretable", attaching a meaning
to the squiggles and squoggles?
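
That is exactly the distinction Harnad is drawing: the rules mention
only shapes, yet the whole game is nonetheless systematically
interpretable. A toy illustration in Python (my own, not from the
paper): the single rule below refers only to the shapes "|" and "+",
never to numbers, yet under the reading "n strokes means the number n"
the system is doing unary addition.

    # A purely syntactic rule: delete every '+' sign.  The rule mentions
    # only symbol shapes, never numbers or meanings.
    def rewrite(expression):
        return expression.replace("+", "")

    # Under the interpretation "a string of n strokes means the number n",
    # the shape-shuffling turns out to be addition:
    print(rewrite("|||+||"))   # "|||||"  -- interpretable as 3 + 2 = 5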

> > HARNAD:
> > At no time was the meaning of the
> > symbol used to justify what you were allowed to do with it. However,
> > although it was left unmentioned, the whole point of the exercise of
> > learning formal mathematics (or logic, or computer programming) is that all
> > those symbol manipulations are meaningful in some way ("+" really does
> > square with what we mean by adding things together, and "=" really does
> > correspond to what we mean by equality). It was not merely a meaningless
> > syntactic game.

> TERRY:
> When we are given some new symbol, the first thing we want to know is
> what it means. The meaning of the symbol was entirely used to justify
> what we could do with it. The first time I was taught algebra, and the
> notion of "value X" we were taught that it's any number we like, and
> should be treated as such. Maybe I was just taught in a strange way. I
> agree that it isn't just syntax, but I think meaning was crucial in the
> teaching.

Meaning is essential: many people find algebra difficult because they
are unable to take a symbol without attaching a definite meaning to it.
The ability to abstract away from any fixed meaning, which
understanding algebra requires, is a skill that has to be learnt.

> > HARNAD:
> > definitional property of computation that symbol manipulations must be
> > semantically interpretable -- and not just locally, but globally: All the
> > interpretations of the symbols and manipulations must square systematically
> > with one another, as they do in arithmetic, at the level of the individual
> > symbols, the formulas, and the strings of formulas. It must all make
> > systematic sense, in whole and in part (Fodor & Pylyshyn 1988).
>
> TERRY:
> This is restating another of the requirements for computation, as
> defined in class. The symbols must be interpretable systematically,
> throughout the system, and they must make sense. As Harnad states, this
> is not trivial.
>

Symbol grounding problem again?

> > HARNAD:
> > It is easy to pick a bunch of arbitrary symbols and to
> > formulate arbitrary yet systematic syntactic rules for manipulating them,
> > but this does not guarantee that there will be any way to interpret it all
> > so as to make sense (Harnad 1994b).
>
> TERRY:
> The definition of 'make sense' would be interesting. What makes perfect
> sense to one person may make no sense to the next. Chinese doesn't make
> sense to me, but it does to someone who speaks it. Should the above
> read "make sense to somebody" ?

Yes, but isn't Harnad stating that the interpretation must be
consistent? An inconsistent system will never make sense to anyone.
 
> > HARNAD:
> > the set of semantically interpretable formal symbol systems
> > is surely much smaller than the set of formal symbol systems simpliciter,
> > and if generating uninterpretable symbol systems is computation at all,
> > surely it is better described as trivial computation, whereas the kind of
> > computation we are concerned with (whether we are mathematicians or
> > psychologists), is nontrivial computation: The kind that can be made
> > systematic sense of.
>
> TERRY:
> So it's pointless to consider symbol systems that make no sense as they
> don't do anything useful. We are only concerned with the sort that
> make sense. Further definitions of a trivial symbol system:

Trivial systems: why mention them? We are only interested in systems
that have a use, that are interpretable, aren't we?

> > HARNAD:
> > Trivial symbol systems have countless arbitrary "duals": You can swap the
> > interpretations of their symbols and still come up with a coherent semantics
> > . Nontrivial symbol systems do not in
> > general have coherently interpretable duals, or if they do, they are a few
> > specific formally provable special cases (like the swappability of
> > conjunction/negation and disjunction/negation in the propositional
> > calculus). You cannot arbitrarily swap interpretations in general, in
> > Arithmetic, English or LISP, and still expect the system to be able to bear
> > the weight of a coherent systematic interpretation (Harnad 1994 a).
>
> TERRY:
> Clearly, if I learn Chinese and randomly swap the meaning of words
> about, I will still be talking Chinese, but not making any sense. Thus
> Chinese is non-trivial.
> Harnad makes a stronger claim:

Surely if you swap the meanings of words you would be speaking a new
language (TERRYESE?), or merely speaking Chinese incorrectly; depending
on the number of words you change, someone may still understand you.
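
The special case Harnad concedes (swapping conjunction and disjunction)
can even be checked mechanically. A small sketch in Python (again my
own illustration): read "&" as OR and "v" as AND, and a standard
propositional law still comes out true under every assignment, which is
what makes the swapped reading a coherent "dual".

    from itertools import product

    # Two readings of the connectives '&' and 'v': the standard one, and
    # the "dual" one in which their interpretations are swapped.
    standard = {"&": lambda a, b: a and b, "v": lambda a, b: a or b}
    dual     = {"&": lambda a, b: a or b,  "v": lambda a, b: a and b}

    # A law of the propositional calculus: '&' distributes over 'v'.
    def distributes(ops):
        f, g = ops["&"], ops["v"]
        return all(f(a, g(b, c)) == g(f(a, b), f(a, c))
                   for a, b, c in product([True, False], repeat=3))

    print(distributes(standard))  # True
    print(distributes(dual))      # True -- the swapped reading still works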

> > HARNAD:
> > It is this rigidity and uniqueness of the
> > system with respect to the standard, "intended" interpretation that will, I
> > think, distinguish nontrivial symbol systems from trivial ones. And I
> > suspect that the difference will be an all-or-none one, rather than a matter
> > of degree.
>
> TERRY:
> Things aren't generally classified as being "a bit trivial" or "half
> trivial".

I agree with Terry here.

> > HARNAD:
> > The shapes of the
> > symbol tokens must be arbitrary. Arbitrary in relation to what? In relation
> > to what the symbols can be interpreted to mean.
>
> TERRY:
> I think most people would assume that the shape of letters and numbers
> are arbitrary in relation to what they actually mean (apart from maybe
> the numbers 1 and 0). As Harnad points out.

But what does "0" mean? Does it have a value, or is it just
relative to 1 and -1?

> TERRY:
> Harnad then addresses my earlier question about interpretation:
>
> > HARNAD:
> > We may need a successful human interpretation
> > to prove that a given system is indeed doing nontrivial computation, but
> > that is just an epistemic matter. If, in the eye of God, a potential
> > systematic interpretation exists, then the system is computing, whether or
> > not any Man ever finds that interpretation.
>
> TERRY:
> Isn't it possible that every symbol system has the potential to be
> systematically interpretable? Can we ever say "there is no systematic
> interpretation to system X" and be guaranteed correctness ?

Just like the CTT: not disproved, but not proven either. I like time as
an example. Is time a relative dimension that, from outside the
universe, appears along an axis, or is it only defined within the
universe? No one will ever be able to answer this, but does time's
interpretation exist from outside the system (the Universe)?

Does the symbol grounding problem solve itself by symbols being grounded
from within the system but interpretable from outside? Our mind can
ground a symbol, but to a neurologist that grounding cannot be seen as
sense, merely as the collection of cells that make up the system, with
an apparent hint of how it might work.

> > HARNAD:
> > It would be trivial to say that every object, event and
> > state of affairs is computational because it can be systematically
> > interpreted as being its own symbolic description: A cat on a mat can be
> > interpreted as meaning a cat on the mat, with the cat being the symbol for
> > cat, the mat for mat, and the spatial juxtaposition of them the symbol for
> > being on. Why is this not computation? Because the shapes of the symbols are
> > not arbitrary in relation to what they are interpretable as meaning, indeed
> > they are precisely what they are interpretable as meaning.
>
> > Another way of characterising the
> > arbitrariness of the shapes of the symbols in a formal symbol system is as
> > "implementation independent": Completely different symbol-shapes could be
> > substituted for the ones used, yet if the system was indeed performing a
> > computation, it would continue to be performing the same computation if the
> > new shapes were manipulated on the basis of the same syntactic rules.
>
> So now we also have the implementation independence part of
> computation.
> If the symbols in a system are not shape independent it is not
> computation.
>
> > HARNAD:
> > The power of computation
> > comes from the fact that neither the notational system for the symbols nor
> > the particulars of the physical composition of the machine are relevant to
> > the computation being performed. A completely different piece of hardware,
> > using a completely different piece of software, might be performing exactly
> > the same formal computation. What matter are the formal properties, not the
> > physical ones. This abstraction from the physical particulars is part of
> > what gives the Universal Turing Machine the power to perform any computation
> > at all.
>
> This is, of course, all leading us towards the hybrid system idea.
> Could our thoughts really be independent from our bodies?
> Harnad then presents some arguments for Computationalism (C=C).
> He talks of the mind-body problem, "a problem we all have in seeing how
> mental states could be physical states" and offers how computation and
> cognition seemed related (computers can do many things only cognition
> can also do, and CTTP states that whatever physical systems can do
> computers can).
> Harnad mentions Turing's test and his interpretation:
>

Are our thoughts independent from our bodies? I don't think so, as once
we are dead we no longer think, or at least no longer give any evidence
of thinking. This could end up as a religious argument about eternal
life, souls, etc.
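
The implementation-independence point quoted above can also be made
concrete. In the sketch below (my own illustration) the same
shape-based rule is run over two completely different sets of symbol
shapes, and it performs the same computation, unary addition, just
written with different marks.

    # The rule "delete the plus-shaped symbol", stated over whichever
    # shapes we happen to have chosen for strokes and for plus.
    def add_unary(expression, plus_shape):
        return expression.replace(plus_shape, "")

    # Two 'implementations' with entirely different symbol shapes.
    print(add_unary("|||+||", "+"))   # "|||||"
    print(add_unary("###@##", "@"))   # "#####"  -- the same computation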

> > HARNAD:
> > So I see Turing as championing machines in general that have functional
> > capacities indistinguishable from our own, rather than computers and
> > computation in particular. Yet there are those who do construe Turing's Test
> > as support for C=C. They argue: Cognition is computation. Implement the
> > right symbol system -- the one that can pass the penpal test (for a
> > lifetime) -- and you will have implemented a mind.
>
> This view is what we discussed in the first part of the course. Harnad
> then gives Searle's chinese room argument as refuting the above view. I
> had problems accepting Searle's test - it always seemed like a trick
> (Can we actually say we understand how _our_ minds process input and
> produce output? No.
> So we no more understand the symbol system going on in our heads than
> we do the memorised pen-pal program. So why is our symbol system the
> only mind present?)
> Anyway, Harnad defends the Turing test:
>
But the Turing test is flawed: if you pass the penpal test, do you have
a mind, or just a piece of software that can pass the penpal test?
Turing had a good insight, but it is not enough to prove that a system
has a mind; in fact, can you prove that at all?

> > HARNAD:
> > But, as I suggested, Searle's Argument does not really impugn Turing Testing
> > (Harnad 1989); it merely impugns the purely symbolic, pen-pal version of the
> > Turing Test, which I have called T2. It leaves the robotic version (T3) --
> > which requires Turing-indistinguishable symbolic and sensorimotor capacity
> > -- untouched (just as it fails to touch T4: symbolic, sensorimotor and
> > neuromolecular indistinguishability).
>
>
> > meaning, as stated earlier, is not contained in the symbol system.
>
> > Now here is the critical divergence point between computation and cognition:
> > I have no idea what my thoughts are, but there is one thing I can say for
> > sure about them: They are thoughts about something, they are meaningful, and
> > they are not about what they are about merely because they are
> > systematically interpretable by you as being about what they are about. They
> > are about them autonomously and directly, without any mediation. The symbol
> > grounding problem is accordingly that of connecting symbols to what they are
> > about without the mediation of an external interpretation (Harnad 1992 d,
> > 1993 a).
>
> At this point I'd like to point out my previous problems with Searle's
> CRA are well and truly wiped out - this is the difference between
> Searle's mind and the program he's memorised.

I agree, but a simulation of a mind (or robot) could pass T2 or T3;
would it be real, though? The Chinese Room argument has fallen over at
this point.

> > HARNAD:
> > One solution that suggests itself is that T2 needs to be grounded in T3:
> > Symbolic capacities have to be grounded in robotic capacities. Many
> > sceptical things could be said about a robot who is T3-indistinguishable
> > from a person (including that it may lack a mind), but it cannot be said
> > that its internal symbols are about the objects, events, and states of
> > affairs that they are about only because they are so interpretable by me,
> > because the robot itself can and does interact, autonomously and directly,
> > with those very objects, events and states of affairs in a way that coheres
> > with the interpretation. It tokens "cat" in the presence of a cat, just as
> > we do, and "mat" in the presence of a mat, etc. And all this at a scale that
> > is completely indistinguishable from the way we do it, not just with cats
> > and mats, but with everything, present and absent, concrete and abstract.
> > That is guaranteed by T3, just as T2 guarantees that your symbolic
> > correspondence with your T2 pen-pal will be systematically coherent.
>
> > But there is a price to be paid for grounding a symbol system: It is no
> > longer just computational! At the very least, sensorimotor transduction is
> > essential for robotic grounding, and transduction is not computation.
>
> Harnad then goes over the old "a virtual furnace isn't hot" argument
> and points out:
>
> > HARNAD
> > A bit less obvious is the equally valid fact that a
> > virtual pen-pal does not think (or understand, or have a mind) -- because he
> > is just a symbol system systematically interpretable as if it were thinking
> > (understanding, mentating).
>
> Harnad goes on to point out that we could simulate a T3 robot, but it
> still wouldn't be thinking, it would still be ungrounded symbol
> manipulation. Only by interacting with the real world and grounding its
> understanding in what it interacts with can something be said to be
> cognizing. This seems to fit in with my understanding of how people
> work. We can of course imagine worlds different from our own,
> inventions not yet real etc. However, all these things must be based on
> the world we know. Otherwise, such things would make no sense to us.
>
> > HARNAD
> > I actually think the Strong CTTP is wrong, rather than just vacuous,
> > because it fails to take into account the all-important
> > implementation-independence that does distinguish computation as a natural
> > kind: For flying and heating, unlike computation, are clearly not
> > implementation-independent. The pertinent invariant shared by all things
> > that fly is that they obey the same sets of differential equations, not that
> > they implement the same symbol systems (Harnad 1993 a). The test, if you
> > think otherwise, is to try to heat your house or get to Seattle with the one
> > that implements the right symbol system but obeys the wrong set of
> > differential equations.
>
> At this point you may well be thinking "But flying / being hot are
> physical states. Thinking is a mental state". So what is a mental state
> if it is anything more than a physical thing? This is back to the Turing
> test, and if there is indeed some other thing present, we will never be
> able to produce machines that think.

The physical state could be simulated, so if a mental state is a
physical state it could also be simulated. Is simulation good enough?
It may well pass the Turing test (T3), but if we know it's a simulation
then we know it's not real and therefore doesn't have a mind. Again,
how do you prove that something HAS a real mind?

> > HARNAD:
> > For cognition, defined by ostension (for lack of a cognitive scientific
> > theory), is observable only to the mind of the cognizer. This property --
> > the flip-side of the mind/body problem, and otherwise known as the
> > other-minds problem -- has, I think, drawn the Strong Computationalist
> > unwittingly into the hermeneutic circle. Let us hope that reflection on
> > Searle's Argument and the Symbol Grounding Problem, and especially the
> > potential empirical routes to the latter's solution (Andrews et al in prep;
> > Harnad 1987, Harnad et al 1991,1994), may help the Strong Computationalist
> > break out again. A first step might be to try to deinterpret the
> > symbol system into the arbitrary squiggles and squoggles it really is (but,
> > like unlearning a language one has learnt, this is not easy to do!).
>
> It becomes eminently clear why we keep coming back to "It's just
> squiggles and squoggles" in class now. There was an interesting program
> about robots, where scientists had designed a system that used sonar
> (like bats) to recognise objects. They could learn the name of a human
> face, and if presented with the same face, could identify it again.
> This initially seems exciting, but you quickly realise that in order to
> learn concepts we need to be able to break the world into categories,
> and a signal-wave was completely incapable of doing this. So visual
> interpretation of the world (to the same level of detail as ours, to be
> as intelligent) would seem necessary. I think more than visual
> interaction is only necessary to identify things in different ways.
> Having said that, certain things by their nature are only identifiable
> to us in one way (a smell, a noise). It's interesting to note that there
> would be no need to stop at our 5 senses when designing a robot - why
> not incorporate the bat's sonar as well?
>
> Terry, Mark <mat297@ecs.soton.ac.uk>
>
I agree, it is just squiggles and squoggles. I don't think this paper
has helped a great deal; it has merely confused my understanding of what
has a mind. Is the Turing test adequate? If I were designing a robot, I
too would like to make the best one possible, with say a cat's night
vision, a bat's sonar, etc. If all of this were included, would it still
pass T3, given that it would be distinguishable, superior even? I wish
that someone could work on the Turing test and improve on it, or find
another way of suggesting that something has a mind; proof, I think,
will be impossible.

Pentland, Gary
GP397@ECS.SOTON.AC.UK



This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:36:28 GMT