Re: Morgenstern: Frame Problem

From: Hosier Adam (adam_s_hosier@hotmail.com)
Date: Sat May 12 2001 - 17:44:55 BST


Hosier: <ash198@ecs.soton.ac.uk>
Hudson: <jh798@ecs.soton.ac.uk>
MORGENSTERN: <www.citeseer.nj.nec.com/morgenstern95problem.html>

Hosier:
The Morgenstern paper attempts to address and clarify some aspects of
the 'frame problem'. Hudson succinctly explains the frame problem as
shown below.

Hudson:
>1/ The problem of knowing which variables in a situation change and which
>don't from one moment in time to the next.
>
>2/ As 1/ but without using an unbounded list of frame axioms (rules about
>particular causes and their effects).
>
>3/ The problem of identifying the relevant or salient factors in, and the
>context of, a situation so that a sensible decision about what action (if
>any) to take can be made.
>
>4/ The problem of replicating 'common sense' reasoning.

Hosier:
Morgenstern's paper talks in detail about how a prolog like computer
language is not suitable for common sense reasoning. For instance using
this language Morgenstern shows that there is no link between moving
two colored blocks from a flat floor, on top of each other and how the
computer 'understands' what has happened to the blocks in the
situation.

MORGENSTERN:
>>The problem is that this inference is not sanctioned by the theory. The
>>theory as it stands says nothing about how a block's color is - or is not
>>- affected by the occurrence of actions.

Hudson:
>I don't understand what Morgenstern means by "The problem is that this
>inference is not sanctioned by the theory." Why not? 'Result', [the
>function used to calculate a situation result], should be more clearly
>defined.

Hosier:
I think what Morgenstern is implying here is that although you could
define 'Result()' more precisely in this situation, so that the
computer could 'interpret' that the blocks keep their color even
though one has been moved on top of the other, it is not possible,
and certainly not practical, to explicitly state all the variations
and effects of every single action that can occur within the world.
Instead of being told explicitly what changes in the world whenever
any action occurs, the computer needs a sense of 'reasoning' and
'common sense' with which to deduce the logical effects of each
action.
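
To make this concrete, here is a toy sketch of why the inference is
not 'sanctioned by the theory'. It is my own illustration in Python
rather than the logical notation of the paper, and all the fact and
function names are my own. The theory's only effect axiom for
move(x, y) says that x ends up on y; deduction from it can prove
where block a sits in the resulting situation, but it can prove
nothing at all, true or false, about a's color there:

# A toy situation-calculus theory (my own illustration, not Morgenstern's).
# Facts are tuples tagged with the situation in which they hold;
# result(action, s) names the situation produced by doing the action.

def result(action, s):
    return ("result", action, s)

S0 = "s0"
theory = {
    ("on", "a", "table", S0),
    ("on", "b", "table", S0),
    ("color", "a", "red", S0),
    ("color", "b", "blue", S0),
}

def effect_axioms(action, s):
    """Everything the theory says about move(x, y): x ends up on y."""
    if action[0] == "move":
        _, x, y = action
        return {("on", x, y, result(action, s))}
    return set()

move_a_onto_b = ("move", "a", "b")
S1 = result(move_a_onto_b, S0)
facts = theory | effect_axioms(move_a_onto_b, S0)

print(("on", "a", "b", S1) in facts)        # True: sanctioned by the effect axiom
print(("color", "a", "red", S1) in facts)   # False: the theory is silent on color

The point is not that the program is wrong but that it is silent: the
color facts simply do not carry over into the new situation unless we
write explicit axioms saying so.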

Explicitly stating all the effects, and non-effects, of an action is
in essence the 'frame axiom' solution to the problem. Morgenstern
then suggests this very solution, which Hudson rightly identifies as
unsuitable, for the reasons explained above.
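
To get a feel for why that list is unbounded in practice: a frame
axiom states, for one action and one fluent, that the fluent is
unaffected ('moving a block does not change its color', 'painting a
block does not change its position', and so on). A domain with m
actions and n fluents therefore needs on the order of m × n such
axioms, so even a toy world of, say, 50 actions and 100 fluents
already calls for roughly 50 × 100 = 5,000 statements of non-change,
and every action or property we add multiplies the burden.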

Hudson:
>Yes but then we are using frame axioms. Which is not good.

Hosier:
Morgenstern then comments on another area in which humans perform
particularly well compared with all artificially created systems.

MORGENSTERN:
>>Closely related to the problem of backward temporal projection is the
>>problem of explanation: if we predict that a certain fact will be true at
>>time t and we are wrong, can we explain what must have gone wrong previous
>>to time t?

Hosier:
In the world of logical inference, explaining what has happened is
simply a case of examining past assumptions and finding where the
system was incorrect. Hudson, however, introduces the element of
'inductive reasoning', which humans seem to use so well.

Hudson:
>If the incorrect prediction was made primarily on the basis of induction
>then we may not have a clue why our prediction failed. If the prediction
>was largely the product of deduction/inference then we have a basis to
>explain why we might have been wrong.

Hosier:
Having said this, what is induction? The dictionary defines it as
'general inference from particular instances'. Thus it would seem
that induction, or 'gut sense', is loosely based on inference anyway,
which makes sense: you can build general rules from empirical
evidence. So it would seem that what Morgenstern's idea is missing is
not 'induction' but a novel way of 'reasoning', of creating rules
from inference.

Morgenstern also comments on the 'Yale Shooting Problem', which seems
to be a case of rule precedence of the kind commonly met in Prolog,
where the ordering of axioms and rules can affect the outcome of a
piece of code. Again, the conclusion of the Yale Shooting Problem
seems to be that a Prolog-style language is not comparable to
human-like reasoning; a sketch of the problem is given below.
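
For readers unfamiliar with it, the Yale Shooting Problem runs as
follows: a gun is loaded, we wait, and the gun is fired at Fred.
Common sense says Fred dies, but in the standard circumscription-style
formulation a theory that merely minimizes 'abnormal' changes cannot
rule out a second model in which the gun silently becomes unloaded
during the wait. The sketch below is my own Python illustration of
that formulation, not code from the paper; it enumerates every history
consistent with the effect axioms and keeps those whose abnormality
set is minimal:

# A toy reconstruction of the Yale Shooting Problem (my own sketch).
# Fluents: alive, loaded.  Actions: load, wait, shoot.
from itertools import product

ACTIONS = ["load", "wait", "shoot"]
S0 = (True, False)       # initially Fred is alive and the gun is unloaded

def model_ok(hist):
    """Effect axioms only: load leaves the gun loaded; shooting a loaded
    gun leaves Fred dead.  Nothing is said about what does NOT change."""
    for i, act in enumerate(ACTIONS):
        (alive0, loaded0), (alive1, loaded1) = hist[i], hist[i + 1]
        if act == "load" and not loaded1:
            return False
        if act == "shoot" and loaded0 and alive1:
            return False
    return True

def ab_set(hist):
    """Abnormality atoms: every violation of persistence, plus the atom
    the shoot effect axiom asserts whenever the gun is loaded."""
    atoms = set()
    for i, act in enumerate(ACTIONS):
        (alive0, loaded0), (alive1, loaded1) = hist[i], hist[i + 1]
        if alive0 != alive1:
            atoms.add(("alive", act))
        if loaded0 != loaded1:
            atoms.add(("loaded", act))
        if act == "shoot" and loaded0:
            atoms.add(("alive", act))    # forced by the effect axiom
    return atoms

states = list(product([True, False], repeat=2))
histories = [(S0,) + rest for rest in product(states, repeat=3)
             if model_ok((S0,) + rest)]

# circumscription, crudely: keep a model only if no other model's
# abnormality set is a strict subset of its own
minimal = [h for h in histories
           if not any(ab_set(g) < ab_set(h) for g in histories)]
for h in minimal:
    print(h, sorted(ab_set(h)))
# Two models survive: the intended one (Fred dies) and the anomalous
# one (the gun silently unloads during the wait and Fred lives).

Both surviving models violate persistence exactly once, at different
points, so minimizing abnormality alone cannot choose between them;
that is the difficulty Morgenstern discusses.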

Hudson:
>Morgenstern wraps up by suggesting that future research focuses on building
>and improving on past research. If the examples Morgenstern gives of past
>attempts are representative of the best then I think we are better off
>starting afresh.

Hosier:
It should always be kept in mind that the work performed so far, as
summarized by Morgenstern, has laid the foundations for all the work
currently taking place. With the benefit of hindsight that work might
seem naive; however, perhaps this itself gives an insight into how
humans solve problems in general.

Hudson:
>I think a workable solution would be something more eclectic and algorithm
>centered than anything described in Morgenstern's paper. I also think it
>would be seamlessly integrated into the rest of the system, so it would be
>closely tied in with the learning mechanisms and motor-sensory sub-systems.

Hosier:
The addition of motor-sensory subsystems would seem to be a necessary
part of any 'real' system interacting with the 'real' world. This
suggests that the symbol-grounding problem may be very closely linked
with the frame problem: the solution to the frame problem is a system
which can intelligently 'reason', yet in order to 'reason' a system
seems to need to be embedded in the real world through the use of
real physical devices.

Hudson goes on to mention that any intelligent system would need to
have an overall objective. I agree with this idea in general: it
makes sense that a system needs a basic objective in every situation
in order to give appropriate responses based on that objective. Yet I
still wonder what 'my own' overall objective in life is, and I have
some intelligence nonetheless. Apart from the overall objective of my
life, however, I can see that in every other minor situation I do
seem to have an objective, and it is here that I believe Hudson has
hit on a point lacking in much AI research.


