Re: Ziemke on "Rethinking Grounding"

From: HARNAD, Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Sat May 13 2000 - 12:30:51 BST


On Fri, 12 May 2000 Grady, James <jrg197@ecs.soton.ac.uk> wrote:

> Grady:
> Enactivism holds that cognition requires three things. Firstly, that any
> intelligent creature must have a body, such that it has individuality
> and functionality.

Robots have to have bodies, to be sure, otherwise they would not be
robots, and couldn't pass T3. But what (over and above being one robot and
having T3-power) are "individuality" and "functionality"?

> Grady:
> Secondly, this body is embedded in an environment:
> a biological, psychological and cultural context.

Doctrines aside, a T3 robot has to be able to deal with our T3 world:
that's what T3 means. Not sure what the biology has to do with it.
The psychology, I suppose, is the capacity. The "cultural context" just
means it needs to be able to do what we can do, under whatever conditions
we can do it. In other words, it's all covered by T3.

> Grady:
> And thirdly, the body must be able to interact within this environment.

What else do robots' bodies do? (Ziemke seems here to be solemnly
re-stating the obvious.)

> In contrast to the cognitivist robot's extrinsic 'central' personality,
> the enactivist robot has a number of parallel subsystems from which
> its behaviour emerges.

It seems to me that we want T3 power. The "-isms" and the "central
personality" don't have much to do with anything...

> >Shaw:
> >Incidentally, although this is not really the point of the paper, the
> >enactivist approach seems a little unnatural. Although the idea of an
> >agent's behaviour 'evolving' as more components are added is sound, there
> >is no central 'intelligence' that could think about, for example, which
> >actions to take.
>
> Grady:
> A bit like a zombie, perhaps: a creature which displays life-like
> characteristics but which has no real intentions or reasons.

No. T3 itself is our only protection from Zombies, not legislation about
whether the innards have to be modular or integrated, etc.

> >>ZIEMKE
> >> Accordingly, for the above labelling act to make sense to an agent, that
> >> agent would have to be able to at least use its spatial labels in some
> >> way, to profit in some way from developing the capacity to do so, etc.
>
> Grady:
> This raises the issue of where such a creature is going to get any kind
> of intentionality or motivation from. Even if it has functionality, why
> would it use it? What is to stop our creature being a couch potato?

Doesn't T3 already cover that? It has to have T3 capacity,
indistinguishable from ours; but it can be a couch-potato just
as much (and as little) as we can.

> >>ZIEMKE:
> >>most cognitivist approaches follow the tradition of neglecting action
> >>and attempt to ground internal representations in sensory invariants
> >>alone.
>
> Grady:
> Surely a creature must have a clearly grounded sensorimotor capacity,
> without which any model would struggle to react or to initiate
> interaction. It can't just absorb information; it must interact.

Correct, but Ziemke is probably stereotyping the "cognitivist" approach:
If it is to scale to T3, it must be based on sensorimotor invariants and
not just sensory ones.

> Grady:
> If the robot's actions are intrinsic they could be said to be a
> fundamentally inseparable part of the robot, coming naturally from
> within, not extrinsically imparted by a designer. Any 'separate'
> (externally created) routines given to the robot by the designer
> would be, by nature, a fundamentally separate part of the machine,
> and so could not be described as intrinsic. It follows that entirely
> intrinsic behaviour needs to be derived by the robot for itself from
> the environment. The designer's job is not to create but to facilitate
> creation.

Seems to me T3 has all that covered. (T3 includes the need to be
autonomous, and to learn.) The rest of the constraints sound arbitrary:
surely the only justifiable constraint is "whatever it takes to pass T3"
(without cheating).

> Grady:
> It does seem that, to create intelligent beings, evolution is the most
> natural way: evolution used as a creative tool.

And for flying too. So can we not design an artificial flying system,
even one that flies the way a bird does -- without having to go through
the whole evolutionary process to get us there?

Recapitulating or imitating evolution is only relevant to reverse
bioengineering inasmuch as it helps get us to a successful model.
Otherwise, its constraints are as arbitrary as those of the (irrelevant)
aspects of brain function (whatever they are: T3 is the only filter for
relevance).
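
(To be concrete about what "evolution used as a creative tool" amounts to in
engineering, here is a minimal sketch, in Python, of an evolutionary search
that "designs" a bit-string by mutation and selection alone. The fitness
criterion, parameters and names are all stand-ins of my own, purely for
illustration, not anything taken from Ziemke's paper.)

import random

# A toy illustration of evolution as a design tool: evolve a bit-string
# "design" toward a fitness criterion by mutation and selection alone,
# without recapitulating any biological detail.

DESIGN_LENGTH = 32     # size of each candidate design
POP_SIZE = 50          # candidates per generation
MUTATION_RATE = 0.02   # chance of flipping each bit
GENERATIONS = 200

def fitness(candidate):
    # Stand-in performance test: count of 1-bits in the design.
    return sum(candidate)

def mutate(candidate):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in candidate]

population = [[random.randint(0, 1) for _ in range(DESIGN_LENGTH)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == DESIGN_LENGTH:
        break
    # Keep the better half; refill with mutated copies of the survivors.
    survivors = population[:POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

print(generation, fitness(population[0]))

The point is only that the designer sets up the selection criterion and lets
the search do the creating; nothing biological need be recapitulated.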

> Grady:
> Developing on the extrinsic/intrinsic debate: could a robot
> with extrinsic mathematical ability substitute for one able to grasp
> the concept of simple addition?

Our mathematical capacity is just a (toy) subset of our Total T3
capacity. I don't know what "extrinsic mathematical capacity" means, but
it is true that maths can be done by mindless symbol-manipulation rules
as well as through conscious understanding. We'll never know whether
anyone else but ourselves is conscious of ANYTHING, but I'm ready to
assume someone understands the factoring of quadratic equations (rather
than merely being a Zombie implementing an ungrounded symbol system) if
his maths is grounded in his overall T3 capacity.
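
(A minimal sketch, in Python, of what mindless symbol-manipulation means
here: a rule for factoring a monic quadratic x^2 + bx + c over the integers,
applied blindly. The function name and the examples are mine, purely for
illustration.)

# Blind rule-following: factor x^2 + bx + c as (x + p)(x + q) by searching
# for integers p, q with p*q = c and p + q = b. No understanding anywhere.

def factor_monic_quadratic(b, c):
    if c == 0:
        return (0, b)                  # x^2 + bx = (x + 0)(x + b)
    for p in range(-abs(c), abs(c) + 1):
        if p != 0 and c % p == 0:
            q = c // p
            if p + q == b:
                return (p, q)
    return None                        # no integer factorisation

print(factor_monic_quadratic(5, 6))    # (2, 3):  x^2 + 5x + 6 = (x + 2)(x + 3)
print(factor_monic_quadratic(-1, -6))  # (-3, 2): x^2 - x - 6 = (x - 3)(x + 2)

The program gets the right answers, but nothing in it understands quadratics;
that is the contrast with maths grounded in an agent's overall T3 capacity.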

Stevan


