Morgenstern: Frame Problem

From: Hudson Joe (jh798@ecs.soton.ac.uk)
Date: Sat Mar 03 2001 - 23:34:18 GMT


'The Problem with Solutions to the Frame Problem'
http://citeseer.nj.nec.com/morgenstern95problem.html

Hudson:
This paper by Leora Morgenstern examines various flawed solutions to
the frame problem and suggests how future research could be conducted
more successfully.

First of all, what is the frame problem?

Some descriptions:

1/ The problem of knowing which variables in a situation change and
   which don't from one moment in time to the next.

2/ As 1/ but without using an unbounded list of frame axioms (rules
   about particular causes and their effects).

3/ The problem of identifying the relevant or salient factors in, and
   the context of, a situation so a sensible decision about what action
   (if any) to take can be made.

4/ The problem of replicating 'common sense' reasoning.

MORGENSTERN:
>The frame problem, the problem of efficiently determining which things
>remain the same in a changing world

Hudson:
This is the first description of the frame problem in Morgenstern's
paper; it is about the same as 1/ above.

MORGENSTERN:
>In its most literal sense, the frame problem is the problem that
>arises from having a plethora of axioms in order to reason that most
>features about the world remain unchanged as actions occur. McCarthy
>and Hayes (1969), however, immediately identified the frame problem as
>the problem of predicting within the situation calculus and without
>using frame axioms that most things about the world remain the same as
>actions are performed. Very often, this problem is viewed as a
>representational one: How can we say in a concise manner, "Except for
>the features that are explicitly known to change, everything remains
>the same" ?

Hudson:
Here Morgenstern gives description 2/ but adds the 'situation
calculus'. Which is...

MORGENSTERN:
>For our purposes, the situation calculus, a discrete branching
>timeline of situations connected by actions, can be summarized as
>follows.
>
>We think of a situation as being a snapshot of the world at a
>particular instant of time. Consider, e.g., 1 p.m. EDT, July 17, 1992.
>In that situation, George Bush is president of the United States, Bill
>Clinton is the presidential nominee of the Democratic party, it is hot
>and muggy in New York City, and warm, dry, and sunny in Palo Alto. A
>state is defined as a collection of situations. For example, the state
>of Richard Nixon being president is the collection of situations
>starting on January 20, 1969, and ending on August 9, 1974. The state
>On(BlockA, BlockB) is the collection of all situations in which Block
>A is on top of Block B. A state is one type of fluent; intuitively,
>something whose value changes over time. States are Boolean fluents;
>other fluents are time-varying terms. For example, President(USA) is a
>fluent; it has the value George Bush in 1989, and has the value Bill
>Clinton starting in January, 1993. The predicates Holds or True relate
>fluents and situations. Thus we can say Holds(S77,On(BlockA,BlockB)).

Hudson:
Sounds like Prolog to me. But what is meant by a snapshot? Is a
snapshot a complete collection of all the distinguishable states in the
world or just some of them? The description so far seems to be of
nothing more interesting than a database with a few inference rules (as
in Prolog).

Where Morgenstern says, "the state of Richard Nixon being
president is the collection of situations starting on January 20, 1969,
and ending on August 9, 1974", does this mean that Mrs Jones making a
cup of coffee at 3:14 on 3/9/71 somewhere in Ohio is part of the
'state' of Richard Nixon being president? It would appear to make more
sense to say: in all the situations from January 20, 1969 to August 9,
1974 the state of Richard Nixon being president is constant. So instead
of saying: many states exist in a situation, Morgenstern seems to be
saying: many situations can be contained within one state.
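
To make this concrete for myself, here is a minimal sketch in Python of
how situations, states and fluents might be modelled. Everything here
is my own guess at an encoding, not anything from the paper:

    # A situation as a snapshot: simply the set of fluents that hold in it.
    S0 = frozenset({"Red(BlockA)", "Clear(BlockA)", "Clear(BlockB)"})

    def holds(situation, fluent):
        """Holds(s, f): fluent f is true in situation s."""
        return fluent in situation

    # A state is a collection of situations: e.g. the state
    # On(BlockA,BlockB) is every situation in which that fluent holds.
    def state(fluent, situations):
        return {s for s in situations if holds(s, fluent)}

    print(holds(S0, "Clear(BlockA)"))   # True

On this closed-world reading a snapshot settles every fluent (anything
not in the set is false), which is what makes me suspect it will not
scale beyond toy worlds.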

MORGENSTERN:
>Actions are defined as functions on states. For example, the action
>Puton(BlockA, BlockB) maps those states in which Block A and Block B
>are clear (with nothing on top of them) to those states in which Block
>A is on top of Block B. Result is a function mapping the situation
>before the action has occurred to the situation after the action has
>occurred. So, if we have the following statements in our theory:
>Holds(S0, Clear(BlockA)) Holds(S0, Clear(BlockB)) we can talk about
>Result(Puton(BlockA, BlockB), S0).

>The situation calculus gives us a way to talk about causation. To
>state that the Puton(BlockA, BlockB) action causes Block A to be on
>Block B, we say that in the situation resulting from the Puton action,
>Block A is on Block B. That is: Holds(Result(Puton(BlockA, BlockB),
>S0), On(BlockA, BlockB))

Hudson:
If the functions 'Clear' and 'Puton' return TRUE (i.e. it 'Holds') for
ALL situations where a block (which was the argument to the function)
has nothing on it, and where one block is somehow on top of another
respectively, then how are these functions evaluated? Their
implementation doesn't appear to be trivial.
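
Here is one possible (and certainly not the only) way 'Result' could be
evaluated over the toy situations sketched above; the names and the
precondition check are my own invention:

    def result(action, situation):
        """Result(a, s): the situation after performing action a in s.
        Only the explicitly listed effects are applied; everything else
        is copied across unchanged - which, as the paper goes on to
        show, is exactly the step a bare logical theory does not
        licence."""
        kind, a, b = action                  # e.g. ("Puton", "BlockA", "BlockB")
        if kind == "Puton":
            assert holds(situation, "Clear(%s)" % a)
            assert holds(situation, "Clear(%s)" % b)
            new = set(situation)
            new.add("On(%s,%s)" % (a, b))
            new.discard("Clear(%s)" % b)     # b now has something on top of it
            return frozenset(new)
        raise ValueError(kind)

    S1 = result(("Puton", "BlockA", "BlockB"), S0)
    print(holds(S1, "On(BlockA,BlockB)"))    # True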

MORGENSTERN:
>Thus far, everything is quite straightforward. But now, things become
>complicated. Suppose we start out with the following facts: Holds(S0,
>Red(BlockA)) Holds(S0, Clear(BlockA)) Holds(S0, Clear(BlockB)) Is it
>also true that Holds(Result(Puton(BlockA, BlockB), S0), Red(BlockA))?
>That is, if Block A is red at the start, is it still red after we have
>put Block A on top of Block B? This question certainly is not
>difficult for a human to answer: of course Block A is still red. The
>problem is that this inference is not sanctioned by the theory. The
>theory as it stands says nothing about how a block's color is - or is
>not - affected by the occurrence of actions.

Hudson:
So S0 is a (not 'the') situation where blockA is red, BloackA is clear
and BloackB is clear. Fine. Whether

Holds(Result(Puton(BlockA, BlockB), S0), Red(BlockA))=20

is true depends on how 'Result' and 'Red' are implemented. If 'Result'
returns the state which is the result of all the reactions/effects of
putting BlockA on BlockB (from S0), then how does 'Red' work? Is
'BlockA' some sort of address (for its data structure) into the current
state returned by 'Result'?=20

If Holds(Situation, Condition) can be read as 'the Condition is met in
the Situation', and 'Red' checks (or causes 'Holds' to check) the
actual colour of a block (presumably after all other actions/reactions
have been processed, i.e. in the current state) and checks if this
colour is red, then yes,

Holds(Result(Puton(BlockA, BlockB), S0), Red(BlockA))

would return True.

I don't understand what Morgenstern means by "The problem is that this
inference is not sanctioned by the theory". Why not? 'Result' should
be more clearly defined.

If Result (somehow) used inductive reasoning, combined with a bit of
statistics and inference, then the frame problem would be solved,
wouldn't it? And within the framework of the Situation Calculus (if
I've understood it correctly).

MORGENSTERN:
>Thus, we have a problem: the problem of predicting - within the
>situation calculus - how things stay the same as actions occur. Now,
>this is not the frame problem, although researchers in AI have often
>identified this as the frame problem (see section 3). McCarthy and
>Hayes (1969) never named this problem; they just offered a way of
>handling it: namely, writing down axioms that specify how fluents do
>not change as actions occur. Such axioms are called frame axioms. An
>example of a frame axiom is: Holds(s, Red(b1)) -> Holds(Result(Puton
>(b1,b2), s), Red(b1)) or more generally: color(s,b1) = color(Result
>(Puton(b1,b2),s), b1) Such an axiom would allow the desired inference:
>that Block A is still red after putting it on Block B.

Hudson:
Yes, but then we are using frame axioms, which is not good: we would
need one such axiom for every fluent/action pair, exactly the unbounded
list that description 2/ rules out.
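
To see why the inference is 'not sanctioned', it helps (me at least) to
treat the theory as a bare set of Holds facts plus effect axioms,
rather than the set-copying implementation I sketched earlier. My own
toy formulation:

    # Facts the effect axiom for Puton gives us, and nothing more.
    # S1 abbreviates Result(Puton(BlockA,BlockB), S0).
    facts = {("S0", "Red(BlockA)"), ("S0", "Clear(BlockA)"),
             ("S0", "Clear(BlockB)"), ("S1", "On(BlockA,BlockB)")}

    print(("S1", "Red(BlockA)") in facts)    # False: not sanctioned

    # A frame axiom - "Red persists through Puton" - licences it:
    if ("S0", "Red(BlockA)") in facts:
        facts.add(("S1", "Red(BlockA)"))

    print(("S1", "Red(BlockA)") in facts)    # True, but we need one such
                                             # axiom per fluent/action pair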

MORGENSTERN:
>Going upward in the generality scale, the frame problem has been
>understood as encompassing both the persistence problem and its dual -
>namely, the problem of determining how things change over time. These
>two problems together are known as the (forward) temporal projection
>problem (Morgenstern and Stein, 1988). ... This problem is known as
>the ramification problem (Finger, 1988).
>
>On a still more general level, the frame problem has been identified
>with the general problem of temporal reasoning. This includes forward
>temporal projection - reasoning about persistence, reasoning about
>change, and the ramification problem, as well as the backward temporal
>projection problem (Morgenstern and Stein, 1988): how can we reason
>about what happened at an earlier time point if we are told what is
>true at a later time?

Hudson:
How are these more general? The first description, "the problem of
efficiently determining which things remain the same in a changing
world", implied the need for what the above says explicitly, i.e.
temporal reasoning.

MORGENSTERN:
>Of course, one could explicitly write out all these derived and
>indirect causes. Such a strategy, however, is flawed for two reasons.
>First, it would be anti-intuitive: it is odd to say that the placing
>of the newspaper on the vent causes the room to be stuffy. Moreover,
>it would turn this from a computational problem to a representational
>problem, for we would then need a plethora of causal rules listing all
>the direct, indirect, and derived effects of an action. These causal
>rules would often be difficult to write down. For example: placing the
>newspaper on the air vent causes the room to be stuffy only if the air
>vent is the only source of air in the room.

Hudson:
This would obviously be the wrong tactic. If what we want is a system
that approaches our ability to minimise the frame problem then I think
it is clear that it would have to have a strong sense of context, i.e.
it would build up a tempory, limited and specific set of 'rules of
thumb', implications and speculations relating to its current
environment. This set would be modified according to how its
environment changes. There is a need for context because without one
the data storage, organisation and processing requirements would soon
get rediculous in an evironment as complex and variable as the one most
of us live in. Yes some people live dull and repetitive lives buy
comparison to others, but generally there is still lots of
unpredeterminable factors such as which way a conversation with a
recently met person will go.

MORGENSTERN:
>Closely related to the problem of backward temporal projection is the
>problem of explanation: if we predict that a certain fact will be true
>at time t and we are wrong, can we explain what must have gone wrong
>previous to time t? It also includes the qualification problem
>(McCarthy, 1986): roughly speaking, the problem of specifying what
>conditions must be true in the world for a given action to have its
>intended effect.

Hudson:
If the incorrect prediction was made primarily on the basis of
induction then we may not have a clue why our prediction failed. If the
prediction was largely the product of deduction/inference then we have
a basis to explain why we might have been wrong. We can examine at
which stage of our inferencing we started to go wrong according to the
observed results which we failed to predict, and then revise (or
recall) the assumptions made at that stage. Of course our 'explanation'
may only be (and often is) in terms of 'concepts' that are themselves
based on inductive reasoning ('it's been that way so far so I assume it
will continue to be that way, don't ask me why'). But this is straying
into the symbol grounding problem, which is not the subject of
Morgenstern's paper.

MORGENSTERN:
>The classic example here is the case of turning the ignition key. If
>you turn the ignition key in your car, you expect the car to start.
>Actually, though, many conditions have to be true in order for this
>statement to be true: The battery must be alive, the starter must
>work, there must be gas in the tank, the tailpipe must be clear of
>obstacles, and so on. As with the original frame problem, this is a
>representational problem with a computational sidekick. We certainly
>would not wish to write down causal rules with cumbersomely long
>antecedents. Moreover, even if we were to write down such causal
>rules, would we necessarily know enough to apply them? I reason that I
>will be able to drive my car to work tomorrow morning, even though I
>do not know for certain that the battery will be alive and that there
>will be no potato in the tailpipe; in fact, I never explicitly
>consider these car parts.

Hudson:
Morgenstern almost identifies the two components of reasoning in her
example: Induction and Deduction, (The car starts, therefore the=20
battery is charged, the starter plug has worked and the exaust is clear
of obstructions. If the car fails to start then there must have been a
problem with one of the above.).=20

MORGENSTERN:
>Finally, the problem includes general questions on the nature of
>causation: What is a causal rule? Do we even know if a causal rule is=20
>true? What is the connection between causation and material=20
>implication (the standard "if-then" connective of classical logic)?=20
>and so on (Shoham, 1988).

Hudson:
This is interesting. "Do we even know if a causal rule is true?". If
this means 'Do we know that a rule we have made relating certain
actions to certain reactions is true?' Then this imeadiately calls into
question truth itself. We are not objective creatures so we can't make
objective statements so we can't really know truth (leaving aside
religion) which by common definition is objective. But then I couldn't
even be sure of what I've just said which appears to be an objective
statement itself. As does the one I've just written disclaiming the
first... So it seems we have to take a good guess, make do, and not
worry to much, otherwise things get depressing.

"What is the connection between causation and material implication"?=20
There is no difference at-all that I can see.

MORGENSTERN:
>Lastly, some philosophers have interpreted the frame problem to be a
>general problem of reasoning. Fetzer (1991), for example, has argued
>that the frame problem is just an instance of the general principle of
>induction: we realize that Block A will be blue after the Puton action
>because that is the way it has been every other time we did the Puton
>action.

Hudson:
Broadly I agree: the frame problem is a problem of reasoning, because
it will continue to occur until we make our solution capable of
reasoning. But I think that that includes inference as well as
induction. That is not to say that we should necessarily attempt to
solve the frame problem just at this level. It would probably be
useful to work up from smaller pieces at the same time.

Morgenstern offers the following minimum requirements for a serious
solution to the frame problem:

MORGENSTERN:
>a. The solution must go beyond overly restrictive temporal languages
>such as the situation calculus. In particular, a solution should work
>for an expressive temporal reasoning system that allows for concurrent
>actions, the representation of "gaps" (i.e., periods where one does
>not have complete knowledge of all that is going on in the world) and
>partial specification of actions (i.e., describing an action such as
>going down to Hertz, renting a Ford, getting a map from the rental
>agency, and driving to Buffalo).
>
>b. The solution should be compatible with a general theory of temporal
>reasoning. This does not mean that the solution should necessarily
>solve the qualification or ramification problem, or that it should
>give a convincing account of causation. But it does mean that the
>solution should not preclude further work in causal reasoning. For
>example, as discussed in Section 5.3.3, the works of Kautz (1986),
>Lifschitz (1986), and Shoham (1988) are inconsistent with a theory
>that allows for backward temporal reasoning. These solutions are thus
>not satisfactory for our purposes.
>
>These constraints allow a wide range of solutions of varying degrees
>of generality. Within this acceptable range, however, the following
>dictum applies: all other things being equal, the more general the
>solution, the better.

Hudson:
Morgenstern then goes on to detail several flawed attempts at a
solution. These include flavours of procedural, monotonic-logic and
nonmonotonic-logic approaches. Each of them assumes a complete
knowledge of the world (the system's environment), can't deal with
concurrent events or requires silly numbers of frame axioms. So as
solutions to the frame problem they are all effectively useless.

Nonmonotonic logics are clearly more suitable than monotonic ones
because they are able to make statements about assumptions, exceptions,
or what usually happens, all of which are prevalent in common sense
reasoning. Procedural solutions are attractive because they are most
closely related to computation, which seems able to do almost anything.
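
As a crude illustration of the kind of default inference a nonmonotonic
system permits (my own toy, nothing from the paper): a conclusion can
be withdrawn when an exception is added, which no monotonic logic
allows:

    def holds_by_default(fluent, time, initially_true, exceptions):
        """Default persistence: a fluent that held initially still
        holds at `time` unless a recorded exception has defeated it."""
        if fluent not in initially_true:
            return False
        return not any(f == fluent and t <= time for (f, t) in exceptions)

    initially_true = {"Red(BlockA)"}
    print(holds_by_default("Red(BlockA)", 5, initially_true, set()))
    # True: concluded by default
    print(holds_by_default("Red(BlockA)", 5, initially_true,
                           {("Red(BlockA)", 3)}))
    # False: the same conclusion, defeated by new information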

Morgenstern introduces the 'Yale Shooting Problem', which seems to be
about knowing which effects of which actions persist into the future.

MORGENSTERN:
>The Yale Shooting Problem was discovered when Hanks and McDermott
>(1986) attempted to integrate temporal and nonmonotonic logics. It can
>be briefly described as follows: We are told that a gun is loaded at
>time 1, and that the gun is fired at Fred at time 5. Loading a gun
>causes the gun to be loaded, and firing a loaded gun at an individual
>causes the person to be dead. In addition, the fluents alive and
>loaded persist as long as possible; that is, these fluents stay true
>unless an action that is abnormal with respect to these fluents
>occurs. Thus, a person who is alive tends to remain alive, and a gun
>that is loaded tends to remain loaded. What can we conclude about
>Fred's status at time 6? If we work within the situation calculus and
>we assume that the starting situation is S0, we can phrase the
>following question: Is Holds(Alive(Fred), Result(Shoot, Result(Wait,
>Result(Wait, Result(Wait, Result(Load, S0)))))) true? Although
>common sense argues that Fred is dead at time 6, the facts support two
>models. In one model (the expected model), the fluent loaded persists
>as long as possible. Therefore, the gun remains loaded until it is
>fired at Fred, and Fred dies. In this model, at time 5, shooting is
>abnormal with respect to Fred's being alive. In the other, unexpected
>model, the fluent alive persists as long as possible (i.e., Fred is
>alive after the shooting). Therefore, the fluent loaded did not
>persist; somehow the gun must have become unloaded. That is, in some
>situation between 2 and 5, Wait was abnormal with respect to the gun
>being loaded.

Hudson:
I do not understand where the 'unexpected' model comes from, if "the
fluents alive and loaded persist as long as possible; that is, these
fluents stay true unless an action that is abnormal with respect to
these fluents occurs." Also, what is there in:

Holds(Alive(Fred), Result(Shoot, Result(Wait, Result(Wait, Result
(Wait, Result(Load, S0))))))

that says what or who the gun is pointed at? The questions raised with
the first example of the situation calculus apply again here.

MORGENSTERN:
>Although these solutions do work for the Yale Shooting Problem, it is
>clear that they do not solve the frame problem. First, the strong
>constraint on forward-in-time reasoning works well when one is
>considering problems of prediction (i.e., will Fred be dead after the
>gun is fired?), but does not work when one is considering problems of
>belief revision or explanation. In such cases, backward-in-time
>reasoning is necessary. For example (Kautz, 1986), suppose there is a
>default rule that states that a car will typically stay where it is
>parked. If I park my car in a lot at 9:00 a.m., I will predict that
>the car will be there when I return at 5:00 p.m. But if I return to
>find the car gone, forward-in-time reasoning will entail that the car
>disappeared from the lot at the last minute - just before 5:00 p.m.!
>This is clearly not a reasonable conclusion. Chronological approaches
>are thus inadequate for the general problem of temporal projection.
>Second, Lifschitz's and Kautz's solutions are firmly based on the
>situation calculus, and thus inherit all the problems that the
>situation calculus entailed for McCarthy's original solution.

Hudson:
Just because the reasoning is forward-in-time and we have a default
rule that states that a car will typically stay where it is parked,
why does that mean that the reasoning must conclude that the car
disappeared just before 5:00 p.m.? Whatever reasoning we use, we still
have a period from 9:00 a.m. to 5:00 p.m. where we don't know what
happened. By assuming that everything that has changed in this period
happened at one end of the period, we are doing more than reasoning in
one direction of time. We are assuming that because we don't know what
happened in that period then nothing happened. So we are failing to
cope with an incomplete knowledge of our world. This isn't really
anything to do with forward-in-time or backward-in-time reasoning.
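
Still, here is a toy illustration of why a strictly chronological
preference produces that odd conclusion (an invented example following
Kautz's car park):

    # The car is seen parked at 9:00 and found gone at 17:00. A change
    # must be postulated at some hour in between; all hours are equally
    # consistent with the evidence.
    possible_change_hours = list(range(9, 17))   # change between h and h+1

    # Chronological minimisation keeps things normal for as long as
    # possible, which amounts to choosing the latest hour for the change.
    h = max(possible_change_hours)
    print("preferred model: car vanished between %d:00 and %d:00" % (h, h + 1))
    # -> just before we returned, as Morgenstern says

The preference order, not the evidence, singles out that model; my
complaint above is that the evidence does not favour any hour at all.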

Morgenstern wraps up by suggesting that future research focus on
building and improving on past research. If the examples Morgenstern
gives of past attempts are representative of the best then I think we
are better off starting afresh.

I think a workable solution would be something more eclectic and
algorithm-centred than anything described in Morgenstern's paper. I
also think it would be seamlessly integrated into the rest of the
system, so it would be closely tied in with the learning mechanisms and
motor-sensory sub-systems. This would be necessary, I think, because
the frame problem can't be separated from the problem of reasoning, and
reasoning might well require intimate knowledge of the host system's
physical state (depending on the intended application) and will
certainly require the capacity to adapt from experience, i.e. learning,
if its environment is unpredictable.

The following general and vague characteristics summarise what I think
would be a potential frame problem solution.

1/ The system starts very simply, but with the ability to make a good
guess about when it needs to become more sophisticated to cope with its
environment. This would suggest it has a sense (or encoding) of
objective.

2/ When it makes this decision the added complexity is in the form of a
new classification/classifying category produced by combining old
categories as well as possible.

3/ There is the potential in the architecture to develop multiple
levels of classification for varying levels of abstraction, so elements
in a certain situation may be classified into multiple categories at
different levels.

4/ Categories are culled when their neighbouring categories are
consistently accessed more frequently (the deciding difference in
access frequency would be set by a lower-level system operating on a
longer time scale).

5/ When decisions relating to the system's external behaviour are made,
it is on the basis of a weighted majority vote from the category
recognisers (or classifiers); see the sketch after this list. The
weightings would be determined by how the particular categories relate
to the system's objective. These relations would again be decided by a
similar lower-level system as in 4/.

6/ There has to be an internal mechanism for continually assessing the
amount of success in achieving the system's objective in order to give
positive or negative feedback to the other sub-systems.

7/ Each of the functional sub-systems would be influenced (or trained)
by 'helper' mechanisms which would take the output of 6/ and consider
it alongside a memory of the previous states of the system to actually
produce the positive and negative feedback required for learning from
the environment.

8/ These 'helper' mechanisms are able to spawn short-term
sub-objectives. This might be necessary if our system was intended to
construct a complex machine; the sub-objectives would be to construct
the individual components of the machine.
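
As an illustration of the voting idea in 5/, a minimal sketch in
Python; the recogniser names, actions and weights are all invented
placeholders:

    # Each active category recogniser votes for an action; votes are
    # weighted by how relevant that category is to the current objective.
    recognisers = {
        "obstacle-ahead": ("turn", 0.9),
        "target-left":    ("turn", 0.6),
        "path-clear":     ("advance", 0.8),
    }

    def decide(active):
        """Tally weighted votes from the currently active recognisers."""
        tally = {}
        for name in active:
            action, weight = recognisers[name]
            tally[action] = tally.get(action, 0.0) + weight
        return max(tally, key=tally.get)

    print(decide({"obstacle-ahead", "path-clear"}))   # -> "turn"

In a real system the weights themselves would be set by the lower-level
mechanisms described in 4/ and 6/, not fixed by hand as here.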

Problems with this idea: the design of 1/ and the 'lower level'
sub-systems are not clear to me, but I think they would make use of
ideas from connectionism, symbolic AI and traditional software design.
Of course, if a system satisfying all the above points were built it
would probably do a lot more besides just solve the frame problem.

The important point is the need for an objective (or criteria for
response or action). How can something make decisions about how to
act or respond unless it's trying to achieve something? But then if our
frame problem solving system has an objective and is to operate in a
similar environment to ours then surely it would need a lot of the
capacity of 1/ to 8/ above (depending on the intended function of the
system)?

Finally, whatever method is adopted to solve the frame problem, the
issues of data representation and processing will be key. If the system
is given a strong learning ability then these issues will be less
critical at the design stage, but then we move the difficulty to making
sure the architecture of the learning mechanisms is powerful enough for
the system to develop effective ways to represent and process data
itself.


