
Robotics and the Common Sense Informatic Situation

Murray Shanahan

Department of Computer Science,
Queen Mary & Westfield College,
Mile End Road,
London E1 4NS, England.

© 1996 M. P. Shanahan
ECAI 96. 12th European Conference on Artificial Intelligence
Edited by W. Wahlster
Published in 1996 by John Wiley & Sons, Ltd.

Abstract. This paper proposes a logic-based framework in which a
robot constructs a model of the world through an abductive process
whereby sensor data is explained by hypothesising the existence,
locations, and shapes of objects. Symbols appearing in the resulting
explanations acquire meaning through the theory, and yet are
grounded by the robot's interaction with the world. The proposed
framework draws on existing logic-based formalisms for
representing action, continuous change, space, and shape.

INTRODUCTION
Without ignoring the lessons of the past, the nascent area of
Cognitive Robotics [Lespérance, et al., 1994] seeks to reinstate the
ideals of the Shakey project, namely the construction of robots
whose architecture is based on the idea of representing the world by
sentences of formal logic and reasoning about it by manipulating
those sentences. The chief benefits of this approach are,

• that it facilitates the endowment of a robot with the capacity
to perform high-level reasoning tasks, such as planning, and

• that it makes it possible to formally account for the success
(or otherwise) of a robot by appealing to the notions of
correct reasoning and correct representation.

This paper concerns the representation of knowledge about the
objects in a robot's environment, and how such knowledge is
acquired. The main feature of this knowledge is its incompleteness
and uncertainty, placing the robot in what McCarthy calls the
common sense informatic situation [1989]. The treatment given in
the paper is rigorously logical, but has been carried through to
implementation on a real robot.

1 ASSIMILATING SENSOR DATA

The key idea of this paper is to consider the process of assimilating
a stream of sensor data as abduction. Given such a stream, the
abductive task is to hypothesise the existence, shapes, and locations
of objects which, given the output the robot has supplied to its
motors, would explain that sensor data. This is, in essence, the map
building task for a mobile robot.

More precisely, if a stream of sensor data is represented as the
conjunction Ψ of a set of observation sentences, the task is to find
an explanation of Ψ in the form of a logical description (a map) ΔM
of the initial locations and shapes of a number of objects, such that,

ΣB ∧ ΣE ∧ ΔN ∧ ΔM ⊨ Ψ

where,

• ΣB is a background theory, comprising axioms for change
(including continuous change), action, space, and shape,

• ΣE is a theory relating the shapes and movements of objects
(including the robot itself) to the robot's sensor data, and

• ΔN is a logical description of the movements of objects,
including the robot itself.

The exact form of these components is described in the next
three sections, which present formalisms for representing and
reasoning about action, change, space, and shape. In practice, as
we'll see, these components will have to be split into parts for
technical reasons.
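
To give this specification a concrete shape, the following sketch (purely
illustrative) recasts it as generate-and-test: candidate maps ΔM are proposed
and kept only if, together with the robot's motor activity, they reproduce the
observed sensor stream. The assumption that entailment can be replaced by
forward simulation, and all of the names used (simulate, candidate_maps, and
so on), belong to this sketch, not to the paper.

```python
# A minimal generate-and-test reading of the abductive specification above.
# Assumption: `simulate` plays the role of the background theory and the
# narrative, producing the sensor stream a candidate map would give rise to;
# equality with the observed stream stands in for entailment of the
# observation sentences.  All names here are illustrative.

def explains(simulate, narrative, candidate_map, observed):
    """A candidate map explains the observations if simulating the narrative
    over it reproduces exactly the observed sensor events."""
    return simulate(candidate_map, narrative) == observed

def assimilate(simulate, narrative, candidate_maps, observed):
    """Return every candidate map that explains the sensor data."""
    return [m for m in candidate_maps
            if explains(simulate, narrative, m, observed)]

# Toy usage: maps are labels, the "simulator" just looks up a predicted stream.
predictions = {"empty world": [], "wall ahead": [("Bump", 5.5)]}
simulate = lambda m, narrative: predictions[m]
print(assimilate(simulate, [], predictions.keys(), [("Bump", 5.5)]))  # ['wall ahead']
```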

The provision of a logic-based theoretical account brings issues
like noise and incompleteness into sharp focus, and permits their
study within the same framework used to address wider
epistemological questions in knowledge representation. It also
enables the formal evaluation of algorithms for low-level motor-perception
tasks by supplying a formalism in which these tasks can
be precisely specified.

2 REPRESENTING ACTION

The formalism used in this paper to represent action and change,
including continuous change, is adapted from the circumscriptive
event calculus presented in [Shanahan, 1995b]. However, it employs
a novel solution to the frame problem, inspired by the work of
Kartha and Lifschitz [1995]. The result is a considerable
simplification of the formalism in [Shanahan, 1995b].

Throughout the paper, the language of many-sorted first-order
predicate calculus with equality will be used, augmented with
circumscription. Variables in formulae begin with lower-case letters
and are universally quantified with maximum scope unless
indicated otherwise.

In the event calculus, we have sorts for fluents, actions (or
events), and time points. It's assumed that time points are
interpreted by the reals, and that the usual comparative predicates,
arithmetic functions, and trigonometric functions are suitably
defined. The formula HoldsAt( f, t) says that fluent f is true at time
point t. The formulae Initiates( a, f, t) and Terminates( a, f, t) say
respectively that action a makes fluent f true from time point t, and
that a makes f false from t. The effects of actions are described by a
collection of formulae involving Initiates and Terminates.

For example, if the term Rotate( r) denotes a robot's action of
rotating r degrees about some axis passing through its body, and the
term Facing( r) is a fluent representing that the robot is facing in a
direction r degrees from North, then we might write the following
Initiates and Terminates formulae.

Initiates(Rotate(r1), Facing(r2), t) ←    (2.1)
HoldsAt(Facing(r3), t) ∧ r2 = r3 + r1

Terminates(Rotate(r1), Facing(r2), t) ←    (2.2)
HoldsAt(Facing(r2), t) ∧ r1 ≠ 0

Once a fluent has been initiated or terminated by an action or
event, it is subject to the common sense law of inertia, which is
captured by the event calculus axioms to be presented shortly. This
means that it retains its value (true or false) until another action or
event occurs which affects that fluent.

A narrative of actions and events is described via the predicates
Happens and Initially. The formula Happens(a, t) says that an action
or event of type a occurred at time point t. Events are instantaneous.
The formula Initially( f) says that the fluent f is true from time point
0. A theory will also include a pair of uniqueness-of-names axioms,
one for actions and one for fluents.

The relationship between HoldsAt, Happens, Initiates, and
Terminates is constrained by the following axioms. Note that a
fluent does not hold at the time of an action or event that initiates it,
but does hold at the time of an action or event that terminates it.

HoldsAt(f, t) ← Initially(f) ∧ ¬Clipped(0, f, t)    (EC1)

HoldsAt(f, t2) ←    (EC2)
Happens(a, t1) ∧ Initiates(a, f, t1) ∧ t1 < t2 ∧
¬Clipped(t1, f, t2)

¬HoldsAt(f, t2) ←    (EC3)
Happens(a, t1) ∧ Terminates(a, f, t1) ∧ t1 < t2 ∧
¬Declipped(t1, f, t2)

Clipped(t1, f, t2) ↔    (EC4)
∃a, t [Happens(a, t) ∧ [Terminates(a, f, t) ∨ Releases(a, f, t)] ∧ t1 < t ∧ t < t2]

Declipped(t1, f, t2) ↔    (EC5)
∃a, t [Happens(a, t) ∧ [Initiates(a, f, t) ∨ Releases(a, f, t)] ∧ t1 < t ∧ t < t2]

These axioms introduce a new predicate Releases [Kartha &
Lifschitz, 1994]. The formula Releases( a, f, t) says that action a
exempts fluent f from the common sense law of inertia. This non-inertial
status is revoked as soon as the fluent is initiated or
terminated once more. The use of this predicate will be illustrated
shortly in the context of continuous change.

Let the conjunction of (EC1) to (EC5) be denoted by EC. The
circumscription policy to overcome the frame problem is the
following. Given a conjunction of Happens and Initially formulae
N, a conjunction of Initiates, Terminates and Releases formulae E,
and a conjunction of uniqueness-of-names axioms U, we are
interested in,

CIRC[N ; Happens] ∧
CIRC[E ; Initiates, Terminates, Releases] ∧ U ∧ EC

This formula embodies a form of the common sense law of
inertia, and thereby solves the frame problem. Further details of this
solution are to be found in [Shanahan, 1996a]. The key to the
solution is to put EC outside the scope of the circumscriptions, thus
ensuring that the Hanks-McDermott problem is avoided [Hanks &
McDermott, 1987]. In most cases, the two circumscriptions will
yield predicate completions, making the overall formula
manageable and intuitive.
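
To illustrate how (EC1) to (EC5) behave under this policy, here is a naive
executable reading of the axioms for the Rotate/Facing example, under the
assumption that the two circumscriptions simply say "the listed events and
effect clauses are the only ones". The encoding (tuples for fluents and
actions, directions wrapped into [0, 360), Releases and Declipped omitted, and
the names INITIALLY, HAPPENS and so on) is an assumption of this sketch, not
part of the formalism.

```python
# A naive reading of (EC1)-(EC5) for the Rotate/Facing example, assuming the
# circumscriptions amount to: the listed events and effect clauses are the
# only ones.  Fluents and actions are tuples; times are numbers.

INITIALLY = {("Facing", 0)}                  # Initially(Facing(0))
HAPPENS = [(1.0, ("Rotate", 90))]            # Happens(Rotate(90), 1)

def initiates(action, time):
    """Fluents made true by the action at this time, as in axiom (2.1)
    (new direction wrapped into [0, 360) for convenience)."""
    if action[0] == "Rotate":
        return {("Facing", (facing_at(time) + action[1]) % 360)}
    return set()

def terminates(action, time):
    """Fluents made false by the action at this time, as in axiom (2.2)."""
    if action[0] == "Rotate" and action[1] != 0:
        return {("Facing", facing_at(time))}
    return set()

def clipped(t1, fluent, t2):
    """(EC4), ignoring Releases: terminated strictly between t1 and t2."""
    return any(t1 < t < t2 and fluent in terminates(a, t) for (t, a) in HAPPENS)

def holds_at(fluent, t):
    # (EC1): held initially and not clipped since time 0.
    if fluent in INITIALLY and not clipped(0, fluent, t):
        return True
    # (EC2): initiated by some earlier event and not clipped since; under the
    # completion assumption, returning False plays the role of (EC3).
    return any(t1 < t and fluent in initiates(a, t1) and not clipped(t1, fluent, t)
               for (t1, a) in HAPPENS)

def facing_at(t):
    return next(r for r in range(360) if holds_at(("Facing", r), t))

print(holds_at(("Facing", 0), 0.5))   # True: inertia from Initially(Facing(0))
print(holds_at(("Facing", 90), 2.0))  # True: initiated by Rotate(90) at time 1
print(holds_at(("Facing", 0), 2.0))   # False: terminated by the same rotation
```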

3 DOMAIN CONSTRAINTS AND CONTINUOUS CHANGE

Two additional features of the calculus are important: the ability to
represent domain constraints, and the ability to represent continuous
change.

Domain constraints are straightforwardly dealt with in the
proposed formalism. They are simply formulated as HoldsAt
formulae with a single universally quantified time variable, and
conjoined outside the scope of the circumscriptions along with EC.
For example, the following domain constraint expresses the fact that
the robot can only face in one direction at a time.

HoldsAt(Facing(r1), t) ∧ HoldsAt(Facing(r2), t) → r1 = r2

In the event calculus, domain constraints are used to determine
values for fluents that haven't been initiated or terminated by
actions or events (non-inertial fluents) given the values of other
fluents that have. (Domain constraints that attempt to constrain the
relationship between inertial fluents can lead to inconsistency.)

Following [Shanahan, 1990], continuous change is represented
through the introduction of a new predicate and the addition of an
extra axiom. The formula Trajectory( f1, t, f2, d) represents that, if the
fluent f1 is initiated at time t, then after a period of time d the fluent
f2 holds. We have the following axiom.

HoldsAt(f2, t2) ←    (EC6)
Happens(a, t1) ∧ Initiates(a, f1, t1) ∧ t1 < t2 ∧
t2 = t1 + d ∧ Trajectory(f1, t1, f2, d) ∧
¬Clipped(t1, f1, t2)

Let CEC denote EC ∧ (EC6), and U denote the conjunction of a
set of uniqueness-of-names axioms. If R is the conjunction of a set
of domain constraints and T is the conjunction of a set of formulae
constraining Trajectory, then we are interested in,

CIRC[N ; Happens] ∧
CIRC[E ; Initiates, Terminates, Releases] ∧
T ∧ R ∧ U ∧ CEC.

Notice that we are at liberty to include formulae which describe
triggered events in N. Here's an example of such a formula, which
describes conditions under which the robot will collide with a wall
lying on an East-West line 100 units north of the origin.

Happens(Bump, t) ←
HoldsAt(Moving, t) ∧ HoldsAt(Facing(r), t) ∧
−90 < r < 90 ∧ HoldsAt(Location(Robot, ⟨x, 90⟩), t)
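
As a back-of-the-envelope check of this trigger axiom, the following sketch
computes when the collision condition would be met for a point-sized robot
travelling at one unit of distance per unit of time (the speed assumed in
Section 5). The threshold northing is a parameter, and all names and numbers
are illustrative.

```python
# Hypothetical worked instance of the trigger axiom above, assuming a
# point-sized robot travelling at unit speed.  Illustrative names only.
import math

def bump_time(start_y, facing_degrees, trigger_y=100.0):
    """Time at which the trigger fires for a robot that sets off from northing
    start_y facing the given direction; None if it never reaches trigger_y."""
    if not -90 < facing_degrees < 90:
        return None                      # heading away from the wall
    return (trigger_y - start_y) / math.cos(math.radians(facing_degrees))

print(bump_time(0.0, 0.0))    # 100.0 time units: heading due North
print(bump_time(0.0, 60.0))   # about 200 time units: heading 60 degrees East of North
print(bump_time(0.0, 120.0))  # None: the wall is never reached
```
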
4 REPRESENTING SPACE AND SHAPE

The formalism used in this paper to represent space and shape is
adapted from [Shanahan, 1995a]. Space is considered a real-valued
co-ordinate system. For present purposes we can take space to be
the plane ℝ × ℝ, reflecting the fact that the robot we will consider
will move only in two dimensions. A region is a subset of ℝ × ℝ. A
point is a member of ℝ × ℝ. I will consider only interpretations in
which points are interpreted as pairs of reals, in which regions are
interpreted as sets of points, and in which the ∈ predicate has its
usual meaning.

Objects occupy open, path-connected regions. For example, the
following formula describes an open circle of radius z units centred
on the origin.

p ∈ Disc(z) ↔ Distance(p, ⟨0,0⟩) < z    (Sp1)

Distance is a function yielding a non-negative real number, defined
in the obvious way.

Distance(⟨x1, y1⟩, ⟨x2, y2⟩) = √((x1 − x2)² + (y1 − y2)²)    (Sp2)

The function Bearing is also useful.

Bearing(⟨x1, y1⟩, ⟨x2, y2⟩) = r ←    (Sp3)
z = Distance(⟨x1, y1⟩, ⟨x2, y2⟩) ∧ z ≠ 0 ∧
Sin(r) = (x2 − x1)/z ∧ Cos(r) = (y2 − y1)/z

Using Distance and Bearing we can define a straight line as
follows. The term Line(p1, p2) denotes the straight line whose end
points are p1 and p2. The Line function is useful in defining shapes
with straight line boundaries.

p ∈ Line(p1, p2) ↔    (Sp4)
Bearing(p1, p) = Bearing(p1, p2) ∧
Distance(p1, p) ≤ Distance(p1, p2)
Spatial occupancy is represented by the fluent Occupies. The
term Occupies(w, g) denotes that object w occupies region g. No
object can occupy two regions at the same time. This implies, for
example, that if an object occupies a region g, it doesn't occupy any
subset of g nor any superset of g. We have the following domain
constraints.

[HoldsAt(Occupies(w, g1), t) ∧    (Sp5)
HoldsAt(Occupies(w, g2), t)] → g1 = g2

HoldsAt(Occupies(w1, g1), t) ∧    (Sp6)
HoldsAt(Occupies(w2, g2), t) ∧ w1 ≠ w2 →
¬∃p [p ∈ g1 ∧ p ∈ g2]

The first of these axioms captures the uniqueness of an object's
region of occupancy, and the second insists that no two objects
overlap.

The term Displace( g, áx, yñ) denotes the result of displacing the
region g by x units east and y units north. The Displace function is
primarily used to describe motion: if an object moves, the region it
occupies is displaced.

⟨x1, y1⟩ ∈ Displace(g, ⟨x2, y2⟩) ↔ ⟨x1 − x2, y1 − y2⟩ ∈ g    (Sp7)
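
For illustration, the spatial vocabulary of (Sp1) to (Sp4) and (Sp7) can be
read computationally by representing a region as a membership predicate on
points; the encoding below is an assumption of this sketch, not part of the
theory.

```python
# A point-membership reading of axioms (Sp1)-(Sp4) and (Sp7): a region is
# represented as a predicate on points.  Illustrative encoding only.
import math

def distance(p1, p2):                     # (Sp2)
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

def bearing(p1, p2):                      # (Sp3): degrees clockwise from North
    return math.degrees(math.atan2(p2[0] - p1[0], p2[1] - p1[1]))

def disc(z):                              # (Sp1): open disc of radius z about the origin
    return lambda p: distance(p, (0.0, 0.0)) < z

def line(p1, p2, tol=1e-9):               # (Sp4): segment from p1 to p2 (naive about the +/-180 wrap)
    def member(p):
        return (abs(bearing(p1, p) - bearing(p1, p2)) < tol
                and distance(p1, p) <= distance(p1, p2))
    return member

def displace(region, v):                  # (Sp7): translate a region by the vector v
    return lambda p: region((p[0] - v[0], p[1] - v[1]))

robot = displace(disc(0.5), (1.0, 1.0))   # the robot of Section 5: Disc(0.5) centred on (1, 1)
print(robot((1.2, 1.2)))                  # True: the point lies inside the robot's region
print(robot((2.0, 1.0)))                  # False: the point lies outside it
print(line((0, 0), (2, 2))((1, 1)))       # True: (1, 1) lies on the segment
```
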
The final component of the framework is a means of default
reasoning about spatial occupancy [Shanahan, 1995a]. Shortly, a
theory of continuous motion will be described. This theory insists
that, in order for an object to follow a trajectory in space, that
trajectory must be clear. Accordingly, as well as capturing which
regions of space are occupied, our theory of space and shape must
capture which regions are unoccupied.

A suitable strategy is to make space empty by default. It's
sufficient to apply this default just to the situation at time 0; the
common sense law of inertia will effectively carry it over to later
times. The following axiom is required, which can be thought of as
a common sense law of spatial occupancy.

AbSpace(w) ← Initially(Occupies(w, g))    (Sp8)
The predicate AbSpace needs to be minimised, with Initially
allowed to vary.

Where previously we were interested in CIRC[N ; Happens], it's
now convenient to split this circumscription into two, and to
distribute the Initially formulae between two places. Given,

• the conjunction O of Axioms (Sp1) to (Sp8),

• a conjunction M of Initially formulae which mention only
the fluent Occupies,

• a conjunction N of Happens formulae and Initially formulae
which don't mention the fluent Occupies, and

• conjunctions E, T, R, U, and CEC as described in the last
section,

we are now interested in,
CIRC[O ∧ M ; AbSpace ; Initially] ∧
CIRC[N ; Happens] ∧
CIRC[E ; Initiates, Terminates, Releases] ∧
T ∧ R ∧ U ∧ CEC.

5 SENSORS AND MOTORS: THE THEORY

We now have the logical apparatus required to construct a formal
theory of the relationship between a robot's motor activity, the
world, and the robot's sensor data. The present paper assumes
perfect motors and perfect sensors. The issue of noise is dealt with
in [Shanahan, 1996b].

The robot used as an example throughout the rest of the paper is
one of the simplest and cheapest commercially available mobile
robotic platforms at the time of writing, namely the Rug Warrior
described by Jones and Flynn [1993] (Figure 1). This is a small,
wheeled robot with a 68000 series microprocessor plus 32K RAM
on board. It has a very simple collection of sensors. These include
three bump switches arranged around its circumference, which will
be our main concern here. In particular, we will confine our
attention to the two forward bump switches, which, in combination,
can deliver three possible values for the direction of a collision.

Figure 1: The Rug Warrior Robot from Above (drive wheels, caster,
and bump switches Switch1, Switch2, and Switch3)

Needless to say, each different kind of sensor gives rise to its
own particular set of problems when it comes to constructing ΣE.
The question of noise is largely irrelevant when it comes to bump
sensors. With infra-red proximity detectors, noise plays a small part.
With sonar, the significance of noise is much greater. The use of
cameras gives rise to a whole set of issues which are beyond the
scope of this paper.

The central idea of this paper is the assimilation of sensor data
through abduction. This is in accordance with the principle,
"prediction is deduction but explanation is abduction" [Shanahan,
1989]. To begin with, we'll be looking at the predictive capabilities
of the framework described.

The conjunction of our general theory of action, change, space,
and shape with the theory ΣE, along with a description of the initial
locations and shapes of objects in the world and a description of the
robot's actions, should yield a description of the robot's expected
sensory input. If prediction works properly using deduction in this
way, the reverse operation of explaining a given stream of sensor
data by hypothesising the locations and shapes of objects in the
world is already defined. It is simply abduction using the same
logical framework.

In the caricature of the task of assimilating sensor data presented
in Section 1, the relationship between motor activity and sensor data
was described by ΣE. In practice, this theory is split into parts and
distributed across different circumscriptions (see Section 3).
First, we have a collection of formulae which are outside the
scope of any circumscription. Let B be the conjunction of CEC with
Axioms (B1) to (B6) below. Axioms (B1) and (B2) are uniqueness-of-
names axioms. The robot is assumed to travel at a velocity of one
unit of distance per unit of time.

UNA[Occupies, Facing, Moving, Blocked, Touching]    (B1)

UNA[Rotate, Go, Stop, Bump, Switch1, Switch2]    (B2)

Trajectory(Moving, t, Occupies(Robot, g2), d) ←    (B3)
HoldsAt(Occupies(Robot, g1), t) ∧ HoldsAt(Facing(r), t) ∧
g2 = Displace(g1, ⟨d·Sin(r), d·Cos(r)⟩)

HoldsAt(Facing(r1), t) ∧ HoldsAt(Facing(r2), t) → r1 = r2    (B4)

HoldsAt(Blocked(w1, w2, r), t) ↔    (B5)
∃g1, g2 [HoldsAt(Occupies(w1, g1), t) ∧
HoldsAt(Occupies(w2, g2), t) ∧
w1 ≠ w2 ∧ ∃z1 [z1 > 0 ∧ ∀z2 [z2 ≤ z1 →
∃p [p ∈ g2 ∧ p ∈ Displace(g1, ⟨z2·Sin(r), z2·Cos(r)⟩)]]]]

HoldsAt(Touching(w1, w2, p), t) ↔    (B6)
HoldsAt(Occupies(w1, g1), t) ∧
HoldsAt(Occupies(w2, g2), t) ∧ w1 ≠ w2 ∧
∃p1, p2 [p ∈ Line(p1, p2) ∧ p ≠ p1 ∧ p ≠ p2 ∧
∀p3 [[p3 ∈ Line(p1, p) ∧ p3 ≠ p] → p3 ∈ g1] ∧
∀p3 [[p3 ∈ Line(p, p2) ∧ p3 ≠ p] → p3 ∈ g2]]

The fluent Blocked(w1, w2, r) holds if object w1 cannot move any
distance at all in direction r without overlapping with w2. The
fluent Touching(w1, w2, p) holds if w1 and w2 are touching at point
p. According to (B6), this is true if there is a straight line from some
point p1 to some point p2 passing through p, such that every point
strictly between p1 and p lies in the region g1 occupied by w1, and
every point strictly between p and p2 lies in the region g2 occupied
by w2.
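
As an illustration, here is the special case of (B5) and (B6) for two circular
objects, where touching and blockedness reduce to simple tests on the centres
and radii; the (centre, radius) representation and the function names are
assumptions of this sketch, not part of the theory.

```python
# Illustrative special case of (B5) and (B6) for two open discs: they touch
# when the gap between their boundaries is zero, and disc 1 is blocked in
# direction r exactly when they touch and r points towards disc 2.
import math

def touching(c1, r1, c2, r2, tol=1e-9):
    """Discs touch when the distance between centres equals the sum of radii."""
    return abs(math.dist(c1, c2) - (r1 + r2)) < tol

def blocked(c1, r1, c2, r2, heading_deg):
    """Disc 1 cannot move even a little way on this heading without overlapping disc 2."""
    if not touching(c1, r1, c2, r2):
        return False
    u = (math.sin(math.radians(heading_deg)), math.cos(math.radians(heading_deg)))
    towards = (c2[0] - c1[0], c2[1] - c1[1])
    return u[0] * towards[0] + u[1] * towards[1] > 0   # heading has a component towards disc 2

print(blocked((0, 0), 0.5, (0, 1), 0.5, 0))     # True: touching, heading due North into disc 2
print(blocked((0, 0), 0.5, (0, 1), 0.5, 180))   # False: heading away from disc 2
print(blocked((0, 0), 0.5, (0, 2), 0.5, 0))     # False: the discs are not touching
```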

Next we have a collection of Initiates, Terminates, and Releases
formulae. Let E be the conjunction of the following axioms (E1) to
(E6). A Bump event occurs when the robot collides with something.

Initiates(Rotate(r1), Facing(r1 + r2), t) ←    (E1)
HoldsAt(Facing(r2), t)

Releases(Rotate(r1), Facing(r2), t) ←    (E2)
HoldsAt(Facing(r2), t) ∧ r1 ≠ 0

Initiates(Go, Moving, t)    (E3)

Releases(Go, Occupies(Robot, g), t)    (E4)

Terminates(a, Moving, t) ←    (E5)
a = Stop ∨ a = Bump ∨ a = Rotate(r)

Initiates(a, Occupies(Robot, g), t) ←    (E6)
[a = Stop ∨ a = Bump] ∧ HoldsAt(Occupies(Robot, g), t)

Now we have a collection of formulae concerning the narrative
of actions and events we're interested in. This collection has two
parts. Let N be N1 ∧ N2. The first part concerns
triggered events. The events Switch1 and Switch2 occur when the
robot's forward bump switches are tripped (see Figure 1). Let N1 be
the conjunction of Axioms (H1) to (H3) below.

Happens(Bump, t) ←    (H1)
[HoldsAt(Moving, t) ∨ Happens(Go, t)] ∧
HoldsAt(Facing(r), t) ∧
HoldsAt(Blocked(Robot, w, r), t)

Happens(Switch1, t) ←    (H2)
Happens(Bump, t) ∧ HoldsAt(Facing(r), t) ∧
HoldsAt(Occupies(Robot, Displace(Disc(z), p1)), t) ∧
HoldsAt(Touching(Robot, w, p2), t) ∧
r − 90 ≤ Bearing(p1, p2) < r + 12

Happens(Switch2, t) ←    (H3)
Happens(Bump, t) ∧ HoldsAt(Facing(r), t) ∧
HoldsAt(Occupies(Robot, Displace(Disc(z), p1)), t) ∧
HoldsAt(Touching(Robot, w, p2), t) ∧
r − 12 ≤ Bearing(p1, p2) < r + 90

The term Occupies( Robot, Displace( Disc( z), p1)) is employed in
Axioms (H2) and (H3) to obtain the centre p1 of the region
occupied by the robot, which can be thought of as its location. Note
that Axiom (H1) caters for occasions on which the robot attempts to
move when it is already blocked, as well as for occasions on which
the robot's motion causes it to collide with something. In the former
case, an immediate Bump event occurs, and the robot accordingly
moves no distance at all.
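
The case analysis performed by (H2) and (H3) can be illustrated as follows:
given the robot's facing and the bearing from its centre to the contact point,
the angular thresholds −90, −12, +12 and +90 determine which of the forward
switches fire. The wrap-around normalisation and the names used are
assumptions of this sketch.

```python
# Which forward switches fire, per the bearing tests in (H2) and (H3).  The
# angular thresholds are taken from the axioms; the normalisation into
# [-180, 180) is an assumption of this sketch.

def switches_tripped(facing_deg, bearing_to_contact_deg):
    """Return the subset of {Switch1, Switch2} tripped by a bump whose contact
    point lies at the given bearing from the robot's centre."""
    # Offset of the contact point relative to the robot's facing, in [-180, 180).
    offset = (bearing_to_contact_deg - facing_deg + 180) % 360 - 180
    tripped = set()
    if -90 <= offset < 12:      # (H2)
        tripped.add("Switch1")
    if -12 <= offset < 90:      # (H3)
        tripped.add("Switch2")
    return tripped

print(switches_tripped(80, 80))    # head-on: both switches
print(switches_tripped(80, 40))    # contact well to the left: Switch1 only
print(switches_tripped(80, 130))   # contact well to the right: Switch2 only
```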

For present purposes, the Bump event is somewhat redundant. In
Axioms (E5) and (E6) it could be replaced by Switch1 and Switch2
events, and in Axioms (H2) and (H3) it could be simplified away.

But abolishing the Bump event would violate a basic principle of
the present approach, according to which the assumption of an
external world governed by certain physical laws, a world to which
the robot's sensors have imperfect access, is built into the robot. The
robot's task is to do its best to explain its sensor data in terms of a
model of the physics governing that world. In any such model,
incoming sensor data is the end of the line, causally speaking. In the
physical world, it's not a sensor event that stops the robot but a
collision with a solid object.

Figure 2: A Sequence of Robot Actions (the robot and obstacle A,
plotted on axes running from 0 to 4)

The second component of N is a description of the robot's
actions. Suppose the robot behaves as illustrated in Figure 2. Let N2
be the conjunction of the following formulae, which represent the
robot's actions up to the moment when it bumps into obstacle A.

Happens(Go, 0)    (5.1)
Happens(Stop, 2.8)    (5.2)
Happens(Rotate(−90), 3.3)    (5.3)
Happens(Go, 3.8)    (5.4)

The final component of our theory is O ∧ M, where M is a map
of the robot's world and O is the conjunction of Axioms (Sp1) to
(Sp8). Like N, M is conveniently divided into two parts. Let M be
M1 ∧ M2, where M1 is a description of the initial locations, shapes,
and orientations (where applicable) of known objects, including the
robot itself. For the example of Figure 2, M1 would be the
conjunction of the following formulae.

Initially(Facing(80))    (5.5)
Initially(Occupies(Robot, Displace(Disc(0.5), ⟨1,1⟩)))    (5.6)

The form of M2 is the same as that of M1. However, when
assimilating sensor data, M2 is supplied by abduction. For now
though, let's look at the predictive capabilities of this framework,
and supply M2 directly. Let M2 be the following formula, which
describes the obstacle in Figure 2.

∃g [Initially(Occupies(A, g)) ∧    (5.7)
∀x, y [⟨x, y⟩ ∈ g ↔ 1 < x < 3 ∧ 3.5 < y < 4.5]]

The following proposition says that, according to the
formalisation, both bump switches are tripped at approximately
time 5.5 (owing to a collision with obstacle A), and that the bump
switches are not tripped at any other time.

Proposition 5.8.

CIRC[O ∧ M1 ∧ M2 ; AbSpace ; Initially] ∧
CIRC[N1 ∧ N2 ; Happens] ∧
CIRC[E ; Initiates, Terminates, Releases] ∧ B ⊨
Happens(Switch1, Tbump) ∧
Happens(Switch2, Tbump) ∧
[[Happens(Switch1, t) ∨ Happens(Switch2, t)] → t = Tbump]
where Tbump = 2.5 + 2.8·Cos(80)/Cos(−10) + 3.8.

Proof. In the full version of the paper.

The process of assimilating sensor data is the reverse of that of
predicting sensor data. As outlined in Section 1, the task is to
postulate the existence, location, and shape of a collection of objects
which would explain the robot's sensor data, given its motor
activity.

Let Ψ be the conjunction of a set of formulae of the form
Happens(Switch1, t) or Happens(Switch2, t), where t is a time point.
What we want to explain is the partial completion of this formula,
for reasons that will be made clear shortly. The only-if half of this
completion is defined as follows.

Definition 5.9.

COMP[Ψ] ≡def
[Happens(a, t) ∧ [a = Switch1 ∨ a = Switch2]] →
∨⟨α,τ⟩∈Γ [a = α ∧ t = τ]

where Γ = {⟨α, τ⟩ | Happens(α, τ) ∈ Ψ}.

Given Ψ, we're interested in finding conjunctions M2 of
formulae in which each conjunct has the form,

∃g [Initially(Occupies(w, g)) ∧ ∀p [p ∈ g ↔ P]]

where r is a point constant, w is an object constant, and P is any
formula in which p is free, such that O ∧ M1 ∧ M2 is consistent
and,

CIRC[O ∧ M1 ∧ M2 ; AbSpace ; Initially] ∧
CIRC[N1 ∧ N2 ; Happens] ∧
CIRC[E ; Initiates, Terminates, Releases] ∧ B ⊨
Ψ ∧ COMP[Ψ].

The partially completed form of the Happens formula on the
right-hand side of the turnstile eliminates anomalous explanations
in which, for example, the robot encounters a phantom extra
obstacle before the time of the first event in Ψ. If Ψ on its own were
used instead of this partially completed formula, it would be
possible to construct such explanations by shifting all the obstacles
that appear in a proper explanation into new positions which take
account of the premature interruption in the robot's path caused by
the phantom obstacle.
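
The role of COMP[Ψ] can be illustrated as follows: an explanation must
account for exactly the observed switch events, neither missing any nor
predicting phantom ones. Representing both streams as sets of (event, time)
pairs is an assumption of this sketch.

```python
# The point of conjoining COMP[Psi], sketched: Psi alone would only require
# the observed events to be among the predicted ones; the partial completion
# additionally rules out predicted switch events that were never observed.

def explains_exactly(predicted_switch_events, observed_switch_events):
    """An explanation must predict the observed switch events and no others."""
    return predicted_switch_events == observed_switch_events

observed = {("Switch1", 5.5), ("Switch2", 5.5)}
phantom  = {("Switch1", 2.0), ("Switch1", 5.5), ("Switch2", 5.5)}
print(explains_exactly(observed, observed))   # True: exactly the observed events
print(explains_exactly(phantom, observed))    # False: a phantom collision at time 2.0
```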

Clearly, from Proposition 5.8, if Ψ is,

Happens(Switch1, Tbump) ∧ Happens(Switch2, Tbump)
then (5.7) is an explanation that meets this specification. Note that
the symbol A in (5.7) (or rather its computational counterpart in the
actual robot), when generated through the abductive assimilation of
sensor data, is grounded in Harnad's sense of the term [Harnad,
1990], at the same time as acquiring meaning through the theory.
Furthermore, the theoretical framework within which such
explanations are understood,
• links the symbols that appear in them directly to a level of
representation at which high-level reasoning tasks can be
performed, and

• licenses an account of the robot's success (or otherwise) at
performing its tasks which appeals to the correctness of its
representations and its reasoning processes.

However, (5.7) is just one among infinitely many possible
explanations of this Ψ of the required form. In the specification of
an abductive task like this, the set of explanations of the required
form will be referred to as the hypothesis space. It's clear, in the
present case, that some constraints must be imposed on the
hypothesis space to eliminate bizarre explanations. Furthermore, the
set of all explanations of the suggested form for a given stream of
sensor data is hard to reason about, and computing a useful
representation of such a set is infeasible. This problem is tackled in
the full paper by adopting a boundary-based representation of shape
(see [Davis, 1990, Chapter 6]). Space limitations preclude further
discussion of this topic here.

CONCLUDING REMARKS

A great deal of further work has already been completed, including
a treatment of noise via non-determinism and a consistency-based
form of abduction [Shanahan, 1996b]. This has led to the design of
a provably correct algorithm for sensor data assimilation, which
forms the basis of a C implementation which has been used in a
number of experiments with the robot. All of this is described in the
full paper, which is available from the author.

ACKNOWLEDGEMENTS

The inspiration for Cognitive Robotics comes from Ray Reiter and
his colleagues at the University of Toronto. Thanks to Neelakantan
Kartha and Rob Miller. The author is an EPSRC Advanced
Research Fellow.

REFERENCES

[Davis, 1990] E. Davis, Representations of Commonsense Knowledge,
Morgan Kaufmann (1990).

[Hanks & McDermott, 1987] S. Hanks and D. McDermott, Nonmonotonic
Logic and Temporal Projection, Artificial Intelligence, vol 33 (1987), pages
379-412.

[Harnad, 1990] S. Harnad, The Symbol Grounding Problem, Physica D, vol
42 (1990), pages 335-346.

[Jones & Flynn, 1993] J. L. Jones and A. M. Flynn, Mobile Robots: Inspiration
to Implementation, A. K. Peters (1993).

[Kartha & Lifschitz, 1994] G. N. Kartha and V. Lifschitz, Actions with
Indirect Effects (Preliminary Report), Proceedings 1994 Knowledge
Representation Conference, pages 341-350.

[Kartha & Lifschitz, 1995] G. N. Kartha and V. Lifschitz, A Simple
Formalization of Actions Using Circumscription, Proceedings IJCAI 95,
pages 1970-1975.

[Lespérance, et al., 1994] Y. Lespérance, H. J. Levesque, F. Lin, D. Marcu,
R. Reiter, and R. B. Scherl, A Logical Approach to High-Level Robot
Programming: A Progress Report, in Control of the Physical World by
Intelligent Systems: Papers from the 1994 AAAI Fall Symposium, ed.
B. Kuipers, New Orleans (1994), pages 79-85.

[McCarthy, 1989] J. McCarthy, Artificial Intelligence, Logic and
Formalizing Common Sense, in Philosophical Logic and Artificial
Intelligence, ed. R. Thomason, Kluwer Academic (1989), pages 161-190.

[Shanahan, 1989] M. P. Shanahan, Prediction Is Deduction but Explanation Is
Abduction, Proceedings IJCAI 89, pages 1055-1060.

[Shanahan, 1990] M. P. Shanahan, Representing Continuous Change in the
Event Calculus, Proceedings ECAI 90, pages 598-603.

[Shanahan, 1995a] M. P. Shanahan, Default Reasoning about Spatial
Occupancy, Artificial Intelligence, vol 74 (1995), pages 147-163.

[Shanahan, 1995b] M. P. Shanahan, A Circumscriptive Calculus of Events,
Artificial Intelligence, vol 77 (1995), pages 249-284.

[Shanahan, 1996a] M. P. Shanahan, Solving the Frame Problem: A
Mathematical Investigation of the Common Sense Law of Inertia, MIT Press
(1996), to appear.

[Shanahan, 1996b] M. P. Shanahan, Noise and the Common Sense Informatic
Situation for a Mobile Robot, Proceedings AAAI 96, to appear.