Re: Harnad (1) on Symbol Grounding Problem

From: HARNAD, Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Wed Mar 22 2000 - 17:41:34 GMT


On Tue, 21 Mar 2000, Brown, Richard wrote:

> The fact that no one has been able to develop a non-"toy" implementation of
> these models suggests that they are incapable of providing anything other
> than toys, despite Chalmers's attempts to show computational sufficiency
> for cognition.

That's true, but of course it could also be because of a lack of creativity
in modelers so far, no?

> Here we see what I feel is a very good argument against connectionism: many
> of the things we do are symbolic, so a model of the mind should also be
> symbolic. Harnad argues that this may be a reason for the limited
> successes of neural nets. Rather than propose that only symbol systems be
> used instead, Harnad introduces the symbol grounding problem (TSGP), which
> may, in turn, explain the toy-like results that are achieved with symbolic
> AI.

In other words, the best model, the one most likely to be able to go
the distance to T3, is a hybrid symbolic/nonsymbolic one.
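
To make the hybrid idea concrete, here is a very rough toy sketch in
Python (purely illustrative, not anyone's actual model; all the names,
features and thresholds are invented): a nonsymbolic sensorimotor front
end, iconic then categorical, feeds grounded labels to a symbolic stage
that combines them.

    from typing import Dict, List

    def sense(stimulus: Dict[str, float]) -> Dict[str, float]:
        # iconic stage: an analog copy ("shadow") of the proximal projection
        return dict(stimulus)

    def categorize(icon: Dict[str, float]) -> List[str]:
        # categorical stage: learned feature detectors map the icon to labels
        labels = []
        if icon.get("legs", 0) == 4:
            labels.append("horse")
        if icon.get("stripe_contrast", 0.0) > 0.5:
            labels.append("stripes")
        return labels

    def compose(labels: List[str]) -> str:
        # symbolic stage: grounded labels can now enter combinatory strings
        return " & ".join(labels) if labels else "unidentified"

    print(compose(categorize(sense({"legs": 4.0, "stripe_contrast": 0.9}))))
    # -> "horse & stripes"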

> Okay, so we have to generalise in order to identify: icons tell us that two
> horses are different, but the reason we know they are horses is not that
> they match some internal icon of a horse but that a horse has certain
> features, such as four legs and yellow teeth, that make it a horse.

In other words, it can't all just be analog copying and throughput. The
invariant features in the sensorimotor projections of objects have to
be detected so we can identify (and respond appropriately to) the KIND
(category) of object they are.

Don't think of icons or even category representations as internal
"pictures," though; think of them as learned, dynamic feature-detectors
that "filter" the "shadows" of objects on our sensorimotor surfaces
onto their appropriate response (of which an arbitrary symbol, the name,
would be the most abstract and general; others are things like eating it,
running from it, manipulating it, etc.).
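
As a rough illustration of that "learned feature-detector" idea (a toy
sketch of my own, not an actual proposal; the class, feature names and
thresholds are all made up), a category can be implemented as a filter
over the sensorimotor "shadow" rather than as a stored picture:

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    Projection = Dict[str, float]        # the sensorimotor "shadow" of an object
    FeatureTest = Callable[[Projection], bool]

    @dataclass
    class CategoryDetector:
        name: str                        # the arbitrary symbol: the category's name
        invariants: List[FeatureTest]    # learned invariant-feature tests

        def identify(self, shadow: Projection) -> bool:
            # identification = detecting invariant features in the shadow,
            # not matching it against an internal picture
            return all(test(shadow) for test in self.invariants)

    horse = CategoryDetector(
        name="horse",
        invariants=[
            lambda s: s.get("legs", 0) == 4,
            lambda s: s.get("tooth_yellowness", 0.0) > 0.6,
        ],
    )

    print(horse.identify({"legs": 4, "tooth_yellowness": 0.8}))   # True
    print(horse.identify({"legs": 2, "tooth_yellowness": 0.1}))   # False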

> > Harnad
> > Iconic representations
> > no more "mean" the objects of which they are the projections than the
> > image in a camera does.
>
> It is good to stop here and realise that we aren't using symbols yet in this
> system; discrimination is akin to comparing two photographs, the images of
> which are in our heads.

And not even category names are symbols yet; they are just part of a
static taxonomy (in computer science these have come to be called an
"ontology," but they're really just a hierarchy of labels).

We get to symbols when they are part of a combinatory symbol system
(e.g., sentences rather than just words).
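
A toy way to see the difference (again my own illustration, with invented
structures): a taxonomy is just labels arranged in a hierarchy, whereas in
a symbol system the same tokens can be combined, by explicit rules, into
new well-formed strings.

    # A static taxonomy ("ontology"): a hierarchy of labels, nothing more.
    taxonomy = {
        "animal": {
            "equid": {"horse": {}, "zebra": {}},
            "canid": {"dog": {}},
        },
    }

    # A rudimentary combinatory symbol system: the same tokens become symbols
    # once a syntax lets them be composed into new well-formed strings.
    def conjoin(*tokens: str) -> str:
        return " & ".join(tokens)

    def predicate(subject: str, description: str) -> str:
        return f"{subject} = {description}"

    print(predicate("zebra", conjoin("horse", "stripes")))  # "zebra = horse & stripes"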

> > Harnad
> > What is the representation of a zebra? It is just the symbol string
> > "horse & stripes." But because "horse" and "stripes" are grounded in
> > their respective iconic and categorical representations, "zebra"
> > inherits the grounding, through its grounded symbolic representation.
>
> So, in essence, we can use inheritance to ground symbols. If someone is
> able to identify a horse (1) and to identify stripes (2), and is then told
> that the symbol "Zebra" means a stripey horse, then they can recognise one
> without ever having seen it.

Fine. Now how general is that? Can it cover anything/everything? Can
anyone think of exceptions, alternatives, counterexamples?
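
For concreteness, here is how that inheritance step might be sketched,
reusing the hypothetical CategoryDetector above (still just my toy
illustration, not a claim about how grounding is actually implemented):
the new detector is defined purely symbolically from already-grounded
parts, yet it can identify a zebra it has never encountered.

    stripes = CategoryDetector(
        name="stripes",
        invariants=[lambda s: s.get("stripe_contrast", 0.0) > 0.5],
    )

    def conjunction(name: str, *parts: CategoryDetector) -> CategoryDetector:
        # the composite inherits grounding: its invariants are just the
        # pooled invariants of its already-grounded parts
        pooled = [test for part in parts for test in part.invariants]
        return CategoryDetector(name=name, invariants=pooled)

    zebra = conjunction("zebra", horse, stripes)

    never_seen = {"legs": 4, "tooth_yellowness": 0.7, "stripe_contrast": 0.9}
    print(zebra.identify(never_seen))   # True, without prior exposure to zebras

Whether such simple conjunctions of grounded categories can cover
everything is, of course, exactly the open question above.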

Stevan


