Ziemke on "Rethinking Grounding"

From: Shaw, Leo (las197@soton.ac.uk)
Date: Tue May 09 2000 - 03:16:08 BST


    Ziemke, T. (1997) Rethinking Grounding.
    In: Riegler, Alexander & Peschl, Markus (Eds.)
    Does Representation need Reality? Proceedings of the International
    Conference 'New Trends in Cognitive Science' (NTCS 97) Perspectives
    from Cognitive Science, Neuroscience, Epistemology, and Artificial
    Life, pp. 87-94, Austrian Society for Cognitive Science, ASoCS Technical
    Report 97-01, Vienna, Austria, May 1997.
    http://www.cogsci.soton.ac.uk/~harnad/Temp/CM302/ziemke.htm

In his paper 'Rethinking Grounding', Ziemke argues that previous
attempts to produce a grounded system have been insufficient because
they have failed to yield a 'fully' grounded system. Two paradigms of
cognitive science are introduced, offering different approaches to the
grounding problem:

> Cognitivism, ... can be said to be "dominated by a 'between the ears',
> centralized and disembodied focus on the mind" ... characterized by the
> assumption of a stable relation between manipulable agent-internal
> representations ('knowledge') and agent-external entities in a pre-given
> external world. Hence, the cognitivist notion of cognition is that of
> computational, i.e. formal and implementation-independent, processes
> manipulating the above representational knowledge internally.

And the alternative:

> The enaction paradigm ... emphasizes the relevance of action, embodiment
> and agent-environment mutuality. Thus, in the enactivist framework,
> cognition is not considered an abstract agent-internal process, but
> rather embodied action, being the outcome of the dynamical interaction
> between agent and environment and their mutual specification during the
> course of evolution and individual development.

The explanation of cognitivism seems fairly straightforward - the
processes of transduction (percepts -> internal representations) and
cognition (manipulation of those internal representations) are distinct.
(Interestingly, this still permits cognitive processes to be
implementation-independent.)
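
To make that separation concrete, here is a minimal sketch of my own
(nothing from Ziemke or Regier; all names invented) of a cognitivist
agent, keeping transduction and the purely formal manipulation of
symbols distinct:

    def transduce(percept):
        """Transduction: map a raw percept onto internal symbols."""
        # here the 'symbols' are just a set of (object, relation, surface) facts
        return {(percept['object'], percept['relation'], percept['surface'])}

    def cognise(symbols, rules):
        """Cognition: purely formal manipulation of the symbols; nothing
        here depends on how the symbols are physically realised, which is
        why the process can be implementation-independent."""
        for fact in symbols:
            if fact in rules:
                return rules[fact]
        return 'do-nothing'

    rules = {('cup', 'on', 'table'): 'grasp-cup'}
    percept = {'object': 'cup', 'relation': 'on', 'surface': 'table'}
    print(cognise(transduce(percept), rules))   # -> grasp-cup

Everything 'cognitive' happens in cognise, which never touches the world,
only the symbols handed to it, so it could in principle run on any hardware.
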
Enactivism, on the other hand, is based on the less intuitive concept of
cognition as a function of 'embodied action'. This term refers to the
belief that cognition is inseparably linked to processes of perception
and action that are experienced through sensorimotor faculties. To
elaborate, a typical enactive system according to Ziemke might comprise:

> a number of behavioural subsystems or components working in parallel
> from whose interaction the overall behaviour of a system emerges. Hence,
> each of these subsystems (as well as the overall system) can be viewed
> as transducing sensory input onto motor output, typically more or less
> directly, i.e. without being mediated by internal world models.
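
To picture the sort of architecture this describes, here is a toy sketch
of my own (assuming a simple Braitenberg-style robot, nothing from the
paper): a few behavioural modules each map sensor readings directly onto
a motor contribution, and the overall behaviour is just their interaction:

    # Steering convention: positive means turn right, negative means turn left.

    def avoid_obstacles(sensors):
        # turn towards the side with more free space
        return sensors['right_range'] - sensors['left_range']

    def seek_light(sensors):
        # turn towards the brighter side
        return sensors['right_light'] - sensors['left_light']

    BEHAVIOURS = [avoid_obstacles, seek_light]   # parallel behavioural subsystems

    def steering(sensors):
        """Each subsystem maps sensing directly onto a motor contribution;
        their interaction (here a plain sum) is the agent's overall
        behaviour, with no internal world model anywhere."""
        return sum(b(sensors) for b in BEHAVIOURS)

    print(steering({'left_range': 0.2, 'right_range': 0.9,
                    'left_light': 0.7, 'right_light': 0.1}))

Nothing in this agent represents the world or deliberates; the behavior is
simply whatever falls out of the modules' interaction with the environment.
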

Incidentally, although this is not really the point of the paper, the
enactivist approach seems a little unnatural. While the idea of an
agent's behavior 'evolving' as more components are added is sound, there
is no central 'intelligence' that could deliberate about, for example,
which actions to take.

The crux of Ziemke's argument is that neither paradigm produces a
sufficiently grounded system. For cognitivism, there are two problems.
The first is ungrounded behavior, in the case of Regier's system (which
learns spatial relations between two objects, e.g. 'on', 'into'):

> Accordingly, for the above labelling act to make sense to an agent, that
> agent would have to be able to at least use its spatial labels in some
> way, to profit in some way from developing the capacity to do so, etc.
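
Regier's actual system is a trained connectionist model, not a
hand-written rule; the crude rule below is only my stand-in, with
made-up geometry, to make the 'labelling act' concrete:

    def spatial_label(trajector, landmark):
        """Label the relation between two boxes given as (x, y, width, height),
        with y increasing upwards."""
        tx, ty, tw, th = trajector
        lx, ly, lw, lh = landmark
        inside = (lx <= tx and tx + tw <= lx + lw and
                  ly <= ty and ty + th <= ly + lh)
        if inside:
            return 'in'
        if ty == ly + lh and lx <= tx <= lx + lw:   # resting on the landmark's top
            return 'on'
        return 'near'

    print(spatial_label((2, 5, 1, 1), (0, 0, 6, 5)))   # -> on

The label comes out, but nothing in the agent can use it or profit from
it, which is exactly the first complaint.
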

The second problem is imposition of artificial design ideas:

> ...could we then speak of a fully grounded system? No, we still could
> not, since the transducer (Regier's labelling system) itself (its
> structure, organization, internal mechanisms, etc...) is not grounded in
> anything but Regier's design ideas...

The first point seems robust: an agent would certainly have to
understand its actions, so they would need to be 'intrinsic' to the
system. The second point is less clear - the concept of a 'fully
grounded system' hasn't really been justified, and it isn't obvious why
the organisation of the transducer shouldn't be determined by the
designer.

In the case of enactivism, Ziemke has one fundamental objection: the
modules from which the agent is formed are artificially engineered:

> this approach to constructing the agent function could as well be
> characterized as incremental trial and error engineering, bringing with
> it ... the limitations of designing / engineering the transducer which
> we already noted in the discussion of Regier's work: The result of the
> transduction (i.e. the system's actions) could be considered grounded,
> the transducer itself however (i.e. the agent function as composed of
> the behavioural modules and their interconnection) is in no way
> intrinsic to the system.

This argument against imposing artificial design on the system makes
more sense in the context of the enactive system, because the functions
of the transducers define the behavior of the whole agent. Hence, by
artificially constructing 'behavior modules', the behavior of the system
is made extrinsic.

Ziemke describes several interesting solutions that involve using
(sometimes several) connectionist networks to link perception and
action, which can therefore 'learn' agent functions. Once again,
artificial choice of architecture comes under scrutiny:

> The problem of design, however, remains to some degree, since by choice
> of architecture (including number of hidden units, layers, etc.) the
> designer will necessarily impose extrinsic constraints on the system

In light of the enactivist belief that cognition is a result of
interaction between agent and environment, and crucially for this point,
their mutual specification, it does seem fair to require that designer
input be minimised. A couple of methods for achieving this are
mentioned.
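
To picture the connectionist version, here is a minimal sketch of my own
(not Ziemke's or anyone's published model): the agent function from
sensors to motors is held in weights that learning could adapt, but the
architecture itself, here the number of hidden units, is still a
constraint imposed from outside:

    import math
    import random

    N_SENSORS, N_HIDDEN, N_MOTORS = 4, 3, 2   # N_HIDDEN is an extrinsic design choice

    def make_weights(n_in, n_out):
        return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

    w_hidden = make_weights(N_SENSORS, N_HIDDEN)
    w_motor = make_weights(N_HIDDEN, N_MOTORS)

    def layer(inputs, weights):
        return [math.tanh(sum(w * x for w, x in zip(row, inputs))) for row in weights]

    def agent_function(sensors):
        """Sensor readings in, motor commands out; the mapping lives in the
        weights, which learning (not shown here) would adapt from experience
        rather than being written in by hand."""
        return layer(layer(sensors, w_hidden), w_motor)

    print(agent_function([0.1, 0.9, 0.3, 0.0]))   # two motor commands in [-1, 1]

The weights could become intrinsic through learning, but N_HIDDEN (and the
choice of tanh units, the layering, and so on) remains the designer's.
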
In the section 'Grounding Complete Agents', mention is made of the fact
that the only known intelligent systems are the result of millions of
years' co-evolution between individual systems and their environment.
In light of this, it is suggested that attempts to produce artificial
agents should pay greater attention to factors like 'physiological
grounding', of which one example is:

> ... a perfect match/correspondence between the ultraviolet vision
> of bees and the ultraviolet reflectance patterns of flowers.

Followed by the observation:

> Compare this natural pre-adaptation ... to that of the typical robot
> which is rather arbitrarily equipped with ultrasonic and infrared
> sensors all around its body, because its designers or buyers considered
> that useful (i.e. a judgement entirely extrinsic to the robot).

Again, this could be seen as taking the idea of minimising designer
input too far. Although the sensory inputs of biological organisms are
the result of evolution, it is hard to see how the presence of
ultrasonic sensors on a robot would hinder its cognitive capacity.
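
As a toy sketch of the alternative Ziemke gestures at (entirely my own
construction, with an invented fitness function): let a genome specify
both what the agent can sense and how strongly it responds, and let
selection against the environment, rather than the designer, decide
which combinations survive:

    import random

    TARGET_WAVELENGTH = 0.35   # a fixed 'ultraviolet' feature of the toy environment

    def random_genome():
        return {'sensor_tuning': random.random(),      # what the agent can sense
                'gain': random.uniform(-1.0, 1.0)}     # how strongly it acts on it

    def fitness(genome):
        # agents whose sensing matches the environment, and which actually
        # respond to what they sense, do better; nothing is tuned per agent by hand
        match = 1.0 - abs(genome['sensor_tuning'] - TARGET_WAVELENGTH)
        return match * abs(genome['gain'])

    def mutate(genome):
        child = dict(genome)
        key = random.choice(list(child))
        child[key] += random.gauss(0, 0.05)
        return child

    population = [random_genome() for _ in range(20)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]
        population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

    print(max(population, key=fitness))   # sensor_tuning drifts towards 0.35

Even here the fitness function and the layout of the genome are still
design decisions, so designer input is pushed further back rather than
removed.
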

To summarise, some of the ideas presented in the report seem very solid:
the idea of producing grounded behavior by allowing the agent to 'learn'
functions in an evolutionary style is sensible, and the enactivist
concept of cognition as embodied action has merit. On the other hand,
treating the removal of designer contribution as the Holy Grail seems
overly stringent, bearing in mind that the goal is to produce 'Artificial
Intelligence'. In addition, while constructing a system in a 'bottom-up'
fashion could be attractively simple, in my opinion a central
intelligence is a definite requirement for cognition.


