Re: Ziemke on "Rethinking Grounding"

From: HARNAD, Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Wed May 17 2000 - 12:30:57 BST


On Tue, 16 May 2000, Shaw, Leo wrote:

> Shaw:
> My understanding of Ziemke's argument was that, while some approaches
> can provide grounding of symbols, the problem still remains that the
> agent isn't doing things for its own reasons, but because it has been
> told to do so. For example, Regier's system can recognise faces, but
> has no reason to do so. On the other hand, if it were recognising
> faces because one person 'switched it off' and another 'fed it', it
> would have a reason.

These worries only arise with sub-total, sub-Turing toy fragments (like
disembodied "face-recognizers"). Reaching T3-scale takes care of all of
this, and you no longer have to worry about what's going on inside (any
more than we worry about one another).

> > Harnad:
> > The symbol problem is real enough (how do we connect symbols to their
> > meanings without the mediation of an external interpreter's mind?), but
> > where does the "degree" come in? A symbol system whose meanings are
> > autonomously connected to the things they are about is grounded, but
> > only nontrivial symbol systems are worth talking about. (An "on/off"
> > system, whose only two symbolic states are "I am on" and "I am off" is
> > grounded if it's on when it's on and off when it's off, but so what?)
>
> Shaw:
> But surely the point is that the on/off action isn't grounded: an
> amoeba moves away from sharp objects, for which it has a good reason.
> It may be the simplest kind of behaviour, but it could be a step in the
> right direction. An on/off switch has nothing.

The words (squiggles) "on" and "off" are ungrounded. But if the
squiggles (which could be anything) happen to be the settings of a light
switch, then "on" and "off" ARE "grounded," but only in a trivial sense
(like the "Life is Like a Bagel" joke). It is T3-scale that makes
grounding nontrivial.

To put it another way: a "grounded" on/off switch is a toy. A trivial
robot is a toy too. Their problem (unlike that of a nontrivial symbol
system) is not that they are ungrounded, but that they are subtotal,
hence trivial.

> > Harnad:
> > But to meet this condition, to be grounded, all a system needs is
> > autonomy (and T3 power). With that, it's grounded, regardless of
> > whether it is integrated or modular, and regardless of whether (or how)
> > its transducers are "designed."
> > ...
> > The only requirement for groundedness is
> > that there should be no human mediator needed in the exercise of its T3
> > capacity. How it got that capacity is irrelevant.
>
> Shaw:
> Perhaps Ziemke's argument could be interpreted as meaning that trying to
> allow a system to define its own behaviour is a SENSIBLE way to go about
> creating an artificial intelligence, not the only way.

I agree. But the test of whether it is sensible is whether it succeeds
in generating performance where other attempts fail. Otherwise it is
merely speculation.

> Shaw:
> It seems to me
> that creating an agent that could pass T3 is a colossal task,
> especially if the only way of measuring success is to subject the final
> product to a Turing test.

True (although approximate way-stations will no doubt serve as
milestones along the way: ant, turtle and mammal "pseudo-T3"s).

> Shaw:
> Surely, human cognitive capacity evolved
> because it provided an advantage over the competition. As time progressed,
> the capacity got greater. Maybe what we consider 'thought' is just an
> extension to this and the best way to produce a system with similar
> cognitive capacity to our own is to try to allow it to 'evolve' rather
> than attempting to define it artificially.

Again, speculations about how to pass T3 are welcome, but only
successful implementations can carry any real weight...

Stevan
