Re: Harnad (1) on Symbol Grounding Problem

From: HARNAD, Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Fri Mar 24 2000 - 15:49:27 GMT


On Fri, 24 Mar 2000, Grady, James <jrg197@ecs.soton.ac.uk> wrote:

> A goldfish seems to me to be a
> creature which performs very little computation. Perhaps it may be
> possible that we could do everything a goldfish does using only
> connectionism. Could we then scale up to a very clever fish and
> eventually to a human.

Two questions there: Could a neural net pass the goldfish T3, and if it
could, could it scale up to the human T3?

> It seems to me a little fishy that the brain follows some explicit
> algorithm (Symbolic Manipulation) rather than an implicit one (Neural
> Nets).

The difference between symbol systems and neural nets (if there is one)
is not the difference between explicit and implicit algorithms.
(Actually, what do you mean by explicit vs. implicit algorithms?
Hard-wired vs. software-coded?)

> With this in mind I will float the idea that symbolic
> manipulation is a fool's gold T3. It seems a little premature to commit
> ourselves to thinking a hybrid system will do the job.

Not clear what you mean. On the face of it, symbol systems alone could
not pass T3 because sensorimotor transduction (essential to T3) is
nonsymbolic. Could everything past the sensorimotor surface (inside) be
symbolic? In principle, possibly; but in practice?

> Just because many of the things we do appear to be symbolic, don't
> forget that a lot aren't. A symbolic system must 'rulefully combine'
> symbol tokens and also be 'semantically interpretable'. We have already
> established that this will be incomplete as we are inconsistent, are
> often irrational, illogical and make mistakes. This to me sounds more
> like implicit computation. (It also sounds a lot like the neural network
> I programmed for Bob Damper's AI 1!)

Please tell us more about the implicit/explicit distinction, and how it
relates to the definition of computation.

Inconsistency, irrationality, illogicality, and errors are a piece of
cake for ANY system (e.g., a symbol system), so I'd hate to try to base
anything fundamental on the capacity to generate any of those easy
outcomes!

> Is the language of thought important, since all languages are equivalent?
> (Any squiggle or squoggle will do.)

Yes, and yes. All languages are intertranslatable, and all notational
systems are arbitrary, hence there are many equivalent ones (this is
related to implementation-independence and the arbitrariness of the
shape of the symbol tokens).

But the language of THOUGHT, unlike, say, ENGLISH, is not just
squiggle/squoggle. Our thoughts are meaningful: intrinsically meaningful
(there's an "intrinsic" for you); English (like a book, or computer
code) is only EXTRINSICALLY meaningful; it becomes intrinsically
meaningful in the head (and hence the mind) of an English speaker, but
that's because it has been translated into the language of thought.

The symbol grounding problem applies to the language of thought: It
can't be (and isn't) just squiggles and squoggles.

> It might be good to mention that there are two different ways of being
> able to categorize a horse as a horse. We either found it out the hard
> way or we were given the information by others.

Correct. And for us to be able to get the information from others, the
symbols in the message have to be grounded (either directly, in prior
robotic interactions with the members of the categories they name, or
in information from others, that was grounded in.... -- but it can't be
hearsay all the way down).

> > Brown
> > So, in essence, we can use inheritance to ground symbols. If someone is
> > able to identify a horse(1), and identify stripes(2), and is then told that
> > the symbol "Zebra" is a stripey horse, then they can recognise one, without
> > ever having seen it.
>
> So interestingly we can use known symbols to ground unseen symbols.

Correct. Except what you mean is: to ground (seen) new symbols that stand
for unseen new categories.
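
For concreteness, here is a toy sketch of my own (Python, with made-up
feature names; not anything from the discussion itself) of what that
grounding-by-composition amounts to. horse() and stripey() stand in for
whatever categorisers were earned the hard way, by sensorimotor toil:

    # Toy illustration: grounding a new symbol by composing two
    # already-grounded category detectors. The detectors below are
    # placeholders for categorisers acquired through direct experience.

    def horse(percept):
        return percept.get("shape") == "horse"

    def stripey(percept):
        return percept.get("pattern") == "stripes"

    # Symbolic theft: "zebra" is defined entirely out of grounded symbols,
    # so a zebra can be recognised without ever having been seen.
    def zebra(percept):
        return horse(percept) and stripey(percept)

    print(zebra({"shape": "horse", "pattern": "stripes"}))  # True
    print(zebra({"shape": "horse", "pattern": "plain"}))    # False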

> Which raises the question we looked at in class. What is the minimum
> number of grounded symbols we need to be able to understand a language?

This question surely has a scientific answer (although it might be
that, as words refer to more and more abstract things, they need some
"new sensorimotor blood" every now and again, instead of just being
built on the same bottom-level minimal set of symbols). (Suppose you
were sent into the jungle: would it be enough if someone gave you a
list of the features of all the kinds of creatures you have to be
careful not to step on? Even if the list were complete and unambiguous,
and you knew it by heart?)
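
One way to make the question concrete (this is my own toy sketch, with
a made-up three-word dictionary, not anything we settled in class):
treat the dictionary as a set of definitions and ask which words a
candidate grounded set lets you reach by definition alone.

    # Toy dictionary: each word is defined out of other words.
    toy_dictionary = {
        "zebra": {"horse", "stripes"},
        "pony":  {"horse", "small"},
        "tiger": {"cat", "stripes"},
    }

    def reachable(grounded, dictionary):
        """Words graspable from the grounded set by definitions alone."""
        known = set(grounded)
        changed = True
        while changed:
            changed = False
            for word, parts in dictionary.items():
                if word not in known and parts <= known:
                    known.add(word)
                    changed = True
        return known

    print(reachable({"horse", "stripes", "small", "cat"}, toy_dictionary))
    # All seven words come out; drop "cat" and "tiger" is lost.

Finding the minimum grounded vocabulary is then a search over candidate
sets; whether the more abstract words also need that periodic "new
sensorimotor blood" is the part such a sketch leaves out.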

> Following on from this there must also be a ceiling where having more
> grounded symbols doesn't help.

I wonder if that's ever true? What do others think? Do we ever get to a
stage in life where, as long as people can tell us about it in advance,
there is nothing left that we need to learn by direct experience? Do we
ever get to a stage of dictionary knowledge where only definitions are
needed, and no further sensorimotor experience helps?

And does vocabulary ever come to a close, so that we no longer need to
lexicalize new signals (like the "Peekaboo-unicorn"), because strings of
existing symbols are enough?

> To conclude I would like to mention the mushroom game from class
> yesterday. It was proposed that there are 2 species of mushroom
> eaters. One has the ability to learn to classify mushrooms into edible
> and inedible, the 'hard way' by eating and learning which ones are
> nourishing and which are poisonous. The other knows that if they hear
> one of the other species eating a mushroom the mushrooms around it are
> going to be edible. We will call the first species the 'toilers' and
> the other the 'thieves'. The problem is that in this ecosystem there
> will be periodic success of one species after the other as one species
> becomes too successful and starts to starve or is eaten by the other.
>
> Supposedly the toilers represent the categorization of knowledge the
> hard way and the other the theft of knowledge, rather similar to a person
> being informed that a zebra is a horse with stripes.
>
> I propose a couple of ways to control this environment to bring it to
> an equilibrium. How valid they are is another question.
>
> 1. The thieves are only allowed to hunt alone. This would cure the
> problem of having packs of thieves following a toiler.

It would also mean symbolic theft was not possible. (Remember
sensorimotor toil is categorisation learning with error-corrective
feedback, and it's the trial-and-error part that the theft is supposed
to spare us. If, in order to learn which mushrooms are edible, I must
accompany toilers who know, but I cannot gobble up the mushrooms they
identify as edible, then I have really become just a vicarious toiler
when I go along on their jaunts, not allowed to hunt myself; I still
have to figure out which features make the toilers eat some mushrooms
and avoid others; figuring that out by trial and error is
toil, and at least as hard as the toilers' toil. So where's the theft?)
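
To see the asymmetry in miniature, here is a toy simulation of my own
(made-up features and rates, not the model from class): the toiler has
to earn the edibility rule through error-corrected trials; the thief
gets the same rule in a single grounded message.

    import random

    def mushroom():
        # Two features per mushroom; the world's hidden rule is that the
        # spotted ones are the edible ones.
        return {"spotted": random.random() < 0.5,
                "tall": random.random() < 0.5}

    def learn_by_toil(n_trials=500):
        """Trial and error with corrective feedback: costly tastings."""
        scores = {"spotted": 0, "tall": 0}
        for _ in range(n_trials):
            m = mushroom()
            edible = m["spotted"]            # feedback from actually eating it
            for feature in scores:
                scores[feature] += 1 if m[feature] == edible else -1
        return max(scores, key=scores.get)

    rule_by_toil = learn_by_toil()           # 500 risky tastings
    rule_by_theft = "spotted"                # one message: "eat the spotted ones"
    print(rule_by_toil, rule_by_theft)

The theft pays only because "spotted" is already a grounded symbol for
the thief; take the message away, as proposal 1 does, and the thief is
back in the trial-and-error loop, which is to say, back to toil.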

> 2. Introducing a third species called 'God's Wrath' or the 'predators'
> to be more PC. This species is sent as divine judgment to feed on the
> thieves and thus curbs their success. This would leave us with quite a
> natural looking food chain. Possibly the best solution.

But it would still just be an oscillating equilibrium, with the number
of parasitic thieves kept in check by the predators. Look around you:
We're ALL thieves, and there is not a single toiler left! How was that
possible, if thieves' numbers were just kept in check by God's wrath
predators?
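
For what it's worth, here is a toy predator-prey sketch of my own
(arbitrary rates, not a model from class) of what proposal 2 buys you:
the thief population is kept in check, rising and falling indefinitely,
but it never goes to fixation the way symbolic theft evidently has in
our own species.

    # Standard predator-prey dynamics: thieves as prey, "God's Wrath" as
    # predator; small Euler steps, arbitrary toy parameters.
    dt = 0.01
    thieves, predators = 100.0, 20.0
    thief_history = []
    for _ in range(20000):
        d_thieves = (0.5 * thieves - 0.02 * thieves * predators) * dt
        d_predators = (-0.5 * predators + 0.01 * thieves * predators) * dt
        thieves, predators = thieves + d_thieves, predators + d_predators
        thief_history.append(thieves)

    print(f"thieves oscillated between {min(thief_history):.0f} "
          f"and {max(thief_history):.0f}")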

> 3. Allow the thieves to suppress the toilers and breed them
> artificially. The thieves would then be responsible for keeping the
> equilibrium.

Well, that's rather the way the French use dogs and pigs to find
truffles for them, but it doesn't sound like a recipe for ever being
able to steal them on the basis of symbolic information alone...

More hypotheses, please!

Stevan


