Re: The Frame Problem

From: Harnad, Stevan (harnad@cogsci.soton.ac.uk)
Date: Sun Mar 09 1997 - 13:30:25 GMT


> From: Hurden, Jackie <jwh196@soton.ac.uk>
>
> As we grow and develop as children, I think we are being "programmed"
> with all the information we require to live in this complicated world.

That is a metaphorical use of the word "programme." Even if we are
explicitly taught some symbol manipulation rules, and then we use them
to solve problems, that does not mean we were "programmed." The
same is true if we learn features or rules implicitly (i.e., without
being aware of it); that's still not "programming."

Not even our genetic codes are programmes for cognition. They are
programmes for building tissues and organs with proteins and enzymes.
One of those organs is the brain, and it might have certain things
"hard-wired" at birth: nursing, crying in distress, eventually also
movement patterns and vision are partly hard-wired, but that's still not
a programme in the sense of a computer programme, which is a set of
symbols and rules that can be implemented on a programmable computer.

We shouldn't confuse the effects of "input data" (what we see, hear,
etc.) on our later behaviour with "programming," which has a specific
technical meaning. Many systems other than computers can change as a
result of input: A stalled car starts up again when it is jumped with
another car's battery; a thermostat turns on the heat when its
thermometer drops below a set point. No programming is involved.

Hypnosis and "mind-control" are not "programming" either, despite the
fact that they are called that on pop TV shows.

> If we had a lifetime in which to programme a computer to deal with all
> these variables, surely it would be possible, since we have been able
> to do the same. If we can take in the information at the rate of
> thousands of pieces per day, it would be possible to give this
> information to a computer to use to make its decisions.

This issue is controversial and complicated. Let's see how well I can
explain it kid-sib-style:

Yes, it is true that everything (except what the sensory surfaces that
receive the shadows of distal objects do, and what our motor endorgans
do) can be done computationally (through symbol-manipulation rules).

The thesis that this is true is called the "Church-Turing Thesis":
everything that can be "done" at all can also be done by computation.
But the fact that it CAN be done computationally in principle does not
mean that doing it computationally is best, or even feasible, in
practice.

First there is the problem of time and capacity: The number of games
that can be played on a chessboard is finite, although there is an
ENORMOUS number of them. The number of sentences of 20 words or less
that can be said in a language (without adding new vocabulary words)
is also finite, but again monstrously big: People rarely say sentences
that have more than 20 words, but they rarely say the same sentence
twice. All of this and much more would have to be encoded symbolically,
along with the rules for generating them, if someone were to attempt
what you suggest. (The fact that the Frame Problem keeps on haunting
symbolic "knowledge" systems is a symptom that something's not quite
right with the enterprise.)

The enterprise requires so much computation that it is effectively what
mathematical complexity theorists call "NP-Complete," which means,
roughly, that it would require as much time as it would take for
chimpanzees randomly tapping typewriter keys to happen to type out
one of the plays of William Shakespeare (after all, that's just one
of many, but not infinitely many, finite strings of symbols!)
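
(If you want to see just how hopeless the brute-force route is, here is
a little Python arithmetic; the 27-key typewriter and the
130,000-character play are my own illustrative assumptions, not figures
from complexity theory:

    # A chimpanzee hitting one of 27 keys at random (26 letters plus
    # the space bar) has a 1-in-27**n chance of typing a given text of
    # length n, so the expected number of length-n attempts is 27**n.
    ALPHABET = 27          # assumed keyboard size
    PLAY_LENGTH = 130_000  # rough character count of a Shakespeare play

    trials = ALPHABET ** PLAY_LENGTH   # Python computes this exactly
    print(len(str(trials)))            # about 186,000 decimal digits

That is a number with about 186,000 digits, against roughly 80 digits
for the number of atoms in the observable universe.)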

(You get a sense of the overwhelmingness of combinatorics when you ask
yourself why you don't always bet on the numbers "1, 2, 3, 4, 5, 6" in
the Lottery: After all, all the numbers have an equal chance of
winning, so this sequence is as good (or bad) as any! But the
obvious improbability of that sequence would probably discourage us
from playing the Lottery at all; so we don't pick that sequence, but
other sequences which "feel" less unlikely...)
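
(The arithmetic, for the curious, assuming the familiar
pick-6-from-49 format of the UK National Lottery:

    from math import comb

    # Number of distinct tickets when you choose 6 numbers out of 49;
    # every ticket, including 1-2-3-4-5-6, wins the jackpot with
    # probability 1 over this number.
    print(comb(49, 6))   # 13983816

Any single ticket, however random it "feels," is a 1-in-13,983,816
shot.)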

NP-Completeness is related to what you might have heard described as
"combinatory explosion": If you look at all possible combinations of a
set of objects, the number of possibilities grows much more quickly than
the number of objects: Even a table of every possible 4-word sentence
in English would be monumentally big.
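
How big? Here is a back-of-the-envelope Python calculation, assuming a
(made-up) working vocabulary of 10,000 words:

    VOCABULARY = 10_000   # assumed vocabulary size

    # Treat every ordered string of 4 words as a row in the table
    # (an overcount, since most strings aren't grammatical sentences,
    # but it shows the scale).
    rows = VOCABULARY ** 4
    print(f"{rows:,}")    # 10,000,000,000,000,000 rows

Ten quadrillion rows; and doubling the vocabulary multiplies the table
by sixteen. That is the "explosion."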

Now encoding everything that we know and can do in symbols would
involve colossally huge numbers of symbols and rules, numbers
exceeding the number of atoms in the universe and the age of the
universe in seconds.

We shouldn't pat ourselves on the back, though, and brag about how much
more complex we are than computers and the universe: This is just a
constraint on encoding all the possible things we can do under all
possible circumstances COMPUTATIONALLY. The same thing applies to all
the possible paths an airplane could use to get from London to
Birmingham (and planes are not that smart).

There are other ways to do things that don't lead to combinatory
explosion. (Combinatory explosion occurs when you want to make every
possible combination of a finite number of things explicit.) Some of
these alternative ways can be computational: A relatively short
algorithm, encoding the size of the plane, the distance involved, and
the size of the earth and its atmosphere, could be used to generate any
of the possible flights without generating ALL of them. That's the
power of an algorithm when there is one.
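
Here is a toy version of that idea in Python. The route and the
climb-cruise-descend profile are my own illustrative assumptions, not a
real flight model; the point is only that a short generator can produce
ANY requested flight without ever tabulating ALL of them:

    def flight_path(distance_km, cruise_alt_m, n_waypoints):
        """Generate one flight profile on demand: climb, cruise, descend."""
        for i in range(n_waypoints):
            frac = i / (n_waypoints - 1)  # 0.0 at departure, 1.0 at arrival
            # Trapezoidal altitude profile: up, level, down.
            alt = cruise_alt_m * min(frac, 1 - frac, 0.25) / 0.25
            yield (frac * distance_km, alt)

    # One particular London-to-Birmingham flight (roughly 163 km),
    # produced the moment it is asked for:
    for km, alt in flight_path(distance_km=163, cruise_alt_m=6000,
                               n_waypoints=9):
        print(f"{km:6.1f} km   {alt:6.0f} m")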

But no one has come up with an algorithm for doing everything our minds
can do. This is partly because some of the things our minds can do
are "modular" (functionally independent of one another), so no single
algorithm could cover them all. But it is
also because many of our capacities (e.g., object constancy and
phoneme perception) are better captured by analog processes (e.g.,
mental rotation) or neural nets (e.g., geon detection) than by
symbol systems, even though they COULD be done by symbol systems,
just not economically or efficiently.

> The part I have trouble with is whether a computer would be able to use
> this information in the way we do as humans. Would it be able to
> make judgements, avoid hurting someone's feelings, or decide which
> colour it prefers?

If you accept the implication of Searle's Chinese Room Argument and my
Symbol Grounding Problem (and the Frame Problem), you needn't trouble
yourself about this, because a computer can never have/be a mind. (On
the other hand, if you are thinking not of a computer -- i.e., not just
a symbol system, but a robot, and a robot that can do everything we can
do, and do it so well that we can't tell it apart from one of us -- then
the questions you raise do come up.)

> All I can say is its a good thing that we are capable of continuing to
> learn - otherwise I would be in serious trouble with this course!!

What sets our species apart from others is, among other things, a very
long, stretched-out period of the kind of curiosity and "information
uptake" capacity that normally only the very young of a species have.
We both look and act, throughout our lifetimes, more like the YOUNG of
our primate cousins than like their adults. This process is called
"neoteny" in evolutionary biology: one of the richest sources of
adaptive variation comes not from cosmic rays causing mutations but
from variations in the length and timing of developmental processes. In
neoteny, parts of the developmental process are stretched out, so that
in some of its traits the successor species looks more like the young
of the ancestral species than like its adults.


