From: Kyriacou Elias (firstname.lastname@example.org)
Date: Thu Mar 01 2001 - 15:45:39 GMT
This paper by Ziemke re-examines the problem of 'grounding' and in particular
discusses the difference between intrinsic and extrinsic representation.
Both Searle's and Harnad's arguments are introduced by Ziemke in order to
support the claim that grounding is essential for the synthesis and
modelling of intelligent behaviour.
The paper then discusses different approaches to overcoming the problem by
reviewing Regier's work on perceptual grounding of spatial semantics and
Brooks' work on 'physical grounding'.
Ziemke begins his paper by first discussing what the grounding problem is,
giving the examples of Searle's Chinese Room argument and of Harnad's
extension and refinement of Searle's analysis of the problem.
> A number of approaches to grounding have been proposed, all of which
> basically agree in two points:
> 1) Escaping the internalist trap has to be considered "crucial to the
> development of truly intelligent behaviour"(Law & Miikkulainen, 1994).
> 2) In order to do so, machines have to be 'hooked' (Sharkey & Jackson,
> 1996) to the external world in some way, i.e. there have to be causal
> connections, which allow the internal mechanisms to interact with their
> environment directly and without being mediated by an external observer.
> The question of what exactly has to be hooked to what and how, however,
> divides the different approaches, as will be discussed in this section.
> For the purpose of this paper different approaches to grounding can be
> categorised into two groups according to whether they follow the
> cognitivist or the enactive paradigm in cognitive science.
Even though machines are 'hooked' to the external world, which allows their
internal mechanisms to interact with their environment directly, without
being mediated by an external observer, this does not automatically make
them intelligent systems. This is because the things that are 'hooked' have
to be implemented and have to follow a specific set of rules which they use
to interact with their environment. The system will still not be conscious
of what it is doing, and it will be in the same predicament as Searle's
Chinese room.
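The point about rule-following without understanding can be sketched in a few lines of (my own, purely illustrative) Python: a program that is causally 'hooked' to its input and answers by lookup alone, with no access to the meaning of the symbols it manipulates.

```python
# Illustrative sketch (not from Ziemke's paper): a symbol-to-symbol rule
# book, like Searle following instructions in the Chinese room. The rules
# are hypothetical; the system applies them with no grasp of their meaning.

RULES = {
    "ni hao": "ni hao ma?",   # to the system these are just strings
    "xie xie": "bu ke qi",
}

def respond(symbol: str) -> str:
    """Return the rule-book reply; understanding exists only in the observer."""
    return RULES.get(symbol, "wo bu dong")  # default reply, equally meaningless

print(respond("ni hao"))  # looks like conversation, but is pure lookup
```

However sophisticated the rule book, the program only ever maps strings to strings; any semantics is supplied by us, the external observers.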
> Cognitivism can be said to be "dominated by a 'between the ears',
> centralised and disembodied focus on the mind". In particular,
> cognitivism is based on the traditional notion of representationalism,
> characterised by the assumption of a stable relation between manipulable
> agent-internal representations ('knowledge') and agent-external entities
> in a pregiven external world. Hence, the cognitivist notion of cognition
> is that of computational, i.e. formal and implementation independent,
> processes manipulating the above representational knowledge internally.
> The enaction paradigm on the other hand, emphasises the relevance of
> action, embodiment and agent-environment mutuality. Thus, in the
> enactivist framework, cognition is not considered an abstract agent-
> internal process, but rather embodied action, being the outcome of the
> dynamical interaction between agent and environment and their mutual
> specification during the course of evolution and individual development.
> Hence, the enactive approach
> ...provides a view of cognition capacities as inextricably linked to
> histories that are lived, much like paths that only exist as they are
> laid down in walking. Consequently, cognition is no longer seen as
> problem solving on the basis of representations; instead, cognition in
> its most encompassing sense consists in the enactment or bringing forth
> of a world by a viable history of structural coupling.
The enaction paradigm may then be considered as giving an insight into how
modern man has reached his current status.
If one looks back at prehistoric man, when we first evolved as
Homo sapiens, one may consider him inferior to modern man and
significantly less intelligent.
However, modern man considers himself superior only because of all the
knowledge and memories that he possesses. As with the example given by
Professor Harnad, if we are all shown the proof of Fermat's last theorem,
or even Einstein's equations of relativity, then we may consider ourselves
slightly more intelligent than we were yesterday because we understand
these mathematical results. This, however, is not necessarily true, because
we only obtain an understanding; the true intelligence lies with the person
who actually solved or discovered them in the first place.
The problem I gave above can be better explained if one considers how
modern man gains his knowledge, which in most cases happens to be through
schooling. Without schooling, a modern man would not be as intelligent or
possess as much knowledge as he does today.
The term schooling used here covers anything from basic pre-school,
nursery, primary, secondary and so on.
Thus, taking all of the points made above, if one were to take a
prehistoric newborn baby and a modern-day newborn baby, then it is my
belief that they would both be identical in terms of intelligence. If they
were both raised in exactly the same environment and were given access to
exactly the same knowledge, and after a period of 21 years the Turing test
was carried out on both of these subjects, then I believe that they would
be Turing-indistinguishable from each other.
Hence, the above statement suggests that if a prehistoric man's brain and a
modern-day man's brain are indistinguishable in terms of power and what
they can both do, then intelligence is not knowledge; rather, it is being
conscious, being aware of what is around you and knowing what is happening
around you.
Thus, only if these traits are present can learning begin and knowledge be
acquired.
Ziemke then discusses grounding atomic representations, where a causal
connection between agent and environment is made by hooking atomic internal
representations to external entities or object categories.
> Harnad himself suggested a possible solution to the symbol grounding
> problem which mostly fits into the cognitivist framework. Harnad proposed
> a hybrid symbolic/connectionist system in which symbolic representations
> are grounded in nonsymbolic representations of two types: Iconic
> representations, which basically are analog transforms of sensory
> percepts, and categorical representations, which exploit sensorimotor
> invariants to transduce sensory percepts to elementary symbols
> (e.g. 'horse' or 'striped') from which again complex symbolic
> representations could be constructed (e.g. 'zebra' = 'horse' + 'striped').
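Harnad's 'zebra' example can be sketched as follows. The feature invariants and thresholds below are my own hypothetical stand-ins for his iconic and categorical representations; the point is only to show elementary symbols being transduced from a percept and then composed into a complex symbol.

```python
# Toy sketch of the hybrid proposal as I read it: categorical
# representations transduce a sensory feature vector into elementary
# symbols, from which complex symbols are composed. The detectors and
# thresholds are hypothetical illustrations, not from the paper.

def categorise(percept):
    """Map a sensory feature vector to elementary symbols via invariants."""
    symbols = set()
    if percept[0] > 0.5:          # assumed 'horse-shaped' invariant
        symbols.add("horse")
    if percept[1] > 0.5:          # assumed 'striped' invariant
        symbols.add("striped")
    return symbols

# A complex symbol grounded by composition of elementary ones:
COMPOSITES = {frozenset({"horse", "striped"}): "zebra"}

def name(percept):
    return COMPOSITES.get(frozenset(categorise(percept)), "unknown")

print(name([0.9, 0.8]))  # both invariants fire -> composed symbol 'zebra'
```

The elementary symbols here are at least causally connected to the (simulated) percept, which is what distinguishes this scheme from a purely symbolic system.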
Ziemke then discusses how some approaches deny the need for symbolic
representations, as suggested in Lakoff's (1993) paper; for a detailed
account of the differences between symbolic and connectionist computational
engines and grounding approaches, see Sharkey & Jackson (1998).
> Let us have a closer look at Regier's system. Do we have a fully grounded
> system here, i.e. a system whose function and all of whose internal
> mechanisms, elements, etc. are intrinsic to the system itself? Of course,
> we don't. Anything that goes on in the system, except for the produced
> labels, is still completely ungrounded: the system has no concept of what
> it is doing or what to use the produced labels for. That means, for
> Regier's system to be considered fully grounded, there are at least two
> things missing, which will be discussed in the following.
I agree completely with the statement made above by Ziemke, and I believe
that a basic computer that is merely carrying out predefined specific rules
will never truly understand what it is doing and will never be aware of the
significance or meaning of what it produces.
The following quote by Ziemke outlines the two things that are missing in
order to make a system fully grounded.
> Firstly, the created labels (i.e. the results of the transduction) could
> possibly be considered grounded. The act of labelling itself however,
> since it does not have any functional value for the labelling system,
> sure cannot be intrinsic to it. That means, a semantic interpretation of
> the system's behaviour is of course possible, it is however definitely
> not intrinsic to the system itself, it is just parasitic on the
> interpretation in our heads.
> Secondly, and more importantly, assuming there were such central systems,
> that made the act of transduction intrinsic to the overall system, could
> we then speak of a fully grounded system? No, we still could not, since
> the transducer itself is not grounded in anything.
Ziemke then gives an account of grounding behaviour in terms of robotic
agents with physical grounding, which only offers a pathway for hooking
an agent to its environment, but does not ground behaviour or internal
mechanisms.
> Most commonly the grounding of behaviour is approached as a matter of
> finding the right agent function, i.e. a mapping from sensory input
> history to motor outputs that allows effective self preservation. There
> are basically two different ways of achieving this, which will be
> discussed in the following:
> 1) engineering/designing the agent function
> 2) learning the agent function
> Engineering Agent Functions: The classical example for the engineering of
> agent functions is Brooks' subsumption architecture, in which the overall
> control emerges from the interaction of a number of hierarchically
> organised behaviour producing modules, e.g. the control of a simple robot
> that wanders around avoiding obstacles could emerge from one module
> making the robot go forward and a second module which, any time the robot
> encounters an obstacle, overrides the first module and makes the robot
> turn instead.
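The two-module example just quoted can be sketched in a few lines; the module names and the boolean sensor interface are my own hypothetical simplifications of Brooks' architecture.

```python
# Minimal sketch of the two-layer subsumption example described above
# (my own illustration): a higher-priority avoidance layer subsumes
# (overrides) a lower layer that simply drives forward.

def forward_module() -> str:
    """Lowest layer: always propose going forward."""
    return "forward"

def avoid_module(obstacle_ahead: bool):
    """Higher layer: active only when an obstacle is encountered."""
    return "turn" if obstacle_ahead else None

def control_step(obstacle_ahead: bool) -> str:
    # When the avoidance layer is active, its output suppresses the
    # lower layer; otherwise the lower layer's behaviour emerges.
    override = avoid_module(obstacle_ahead)
    return override if override is not None else forward_module()

print(control_step(False))  # forward
print(control_step(True))   # turn
```

The 'wandering' behaviour is nowhere programmed as such; it emerges from the interaction of the two modules with the environment, which is the point of the architecture.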
> Learning Agent Functions: The typical approach to learning an agent
> function is to connect sensors and actuators with a connectionist
> network. The approach has some obvious advantages, since the agent
> function can now be learned through adjustment of connection weights
> instead of having to be programmed entirely, i.e. the weights in a
> trained network and the resulting behaviour generating patterns could be
> considered grounded. The problem of design, however, remains to some
> degree, since by choice of architecture the designer will necessarily
> impose extrinsic constraints on the system, in particular when designing
> modular or structured connectionist networks.
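The learning alternative can likewise be sketched with a single artificial neuron mapping sensor readings to a motor command; the training data, learning rate and perceptron-style update rule are my own hypothetical choices, meant only to show the agent function being acquired by weight adjustment rather than hand-programmed.

```python
import random

# Sketch (my own illustration) of learning an agent function: a single
# neuron maps two proximity sensors to a motor command, and its weights
# are adjusted from hypothetical experience instead of being programmed.

random.seed(0)
w = [random.uniform(-0.5, 0.5) for _ in range(2)]  # initial random weights
b = 0.0
lr = 0.1

def act(sensors):
    """Motor output: 1 = turn, 0 = go forward."""
    s = sum(wi * xi for wi, xi in zip(w, sensors)) + b
    return 1 if s > 0 else 0

# Hypothetical experience: turn whenever the left sensor reads high.
data = [([1, 0], 1), ([0, 1], 0), ([1, 1], 1), ([0, 0], 0)]

for _ in range(20):                # perceptron-style weight updates
    for sensors, target in data:
        error = target - act(sensors)
        for i in range(2):
            w[i] += lr * error * sensors[i]
        b += lr * error

print([act(x) for x, _ in data])   # learned sensor-to-motor mapping
```

As the quoted passage notes, the learned weights might be considered grounded in the agent's experience, but the choice of architecture (here, one neuron with two inputs) is still an extrinsic design decision.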
> If we aim for fully grounded systems, i.e. systems in which every aspect,
> element or internal mechanism is intrinsic to the whole, then we have to
> start looking at systems which as a whole have developed in interaction
> with their environment.
> In fact, the only truly intelligent systems we know of are (higher)
> animals, i.e. biological systems whose genotype has evolved over millions
> of years, and who in many cases undergo years of individual development
> before achieving full intelligence. Thus, animals are grounded in their
> environments in a multitude of ways, whereas most grounding approaches
> rather aim for hooking pregiven agents to pregiven environments, by means
> of representations or effective behaviour.
This then ties in slightly with my point made previously about when
learning can begin.
> AI and cognitive science, in their attempt to synthesise and model
> intelligent behaviour, have always been based on high-level abstractions
> from the biological originals. The grounding problem, in its broad
> interpretation as discussed in this paper, seems to suggest, that in fact
> 1) we have to be very careful about such abstractions, since any
> abstraction imposes extrinsic design constraints on the artefact we
> develop, and
> 2) we will have to reexamine some of the 'details' which perhaps
> prematurely have been abstracted from earlier.
> One of these 'details' is what might be called physiological grounding as
> provided through the coevolution and mutual determination of agents /
> species and their environments. Two simple examples:
> 1) As Varela (1991) notes, there is a perfect match/correspondence between
> the ultraviolet vision of bees and the ultraviolet reflectance patterns
> of flowers.
> 2) Similarly, the sounds your ears can pick up are exactly those sound
> frequencies which are relevant for you in order to be able to interact
> with your environment.
Ziemke concludes that to successfully synthesise and model fully grounded
and truly intelligent agents, one would probably have to carry out what is
called 'evolutionary and developmental situated robotics', i.e. the study
of embodied agents/species developing robotic intelligence bottom-up in
interaction with their environment, and possibly, on top of that, a 'mind'
and 'higher-level' cognitive capacities.
I personally agree with this statement, and I think that for any system to
be classified as intelligent it must be conscious, for this is probably
what a mind is.
This archive was generated by hypermail 2.1.4 : Tue Sep 24 2002 - 18:37:18 BST