Re: Bottom up vs. Top down

From: Harnad, Stevan (harnad@cogsci.soton.ac.uk)
Date: Sat Feb 22 1997 - 14:53:27 GMT


> From: Ryan, NM <nmr196@soton.ac.uk>
>
> Is bottom up another way of saying empirical? Is top down another way
> of saying rational? Why make language more complex with new terms?
> Niel Ryan

"Bottom up" and "Top down" have unfortunately been used to mean several
different things. They do not mean "empirical" (i.e., "facts learned through
[sensory] experience") vs. "rational" ("facts learned through
reasoning"), but both dichotomies (B/T, E/R) can be aligned that way;
they do have some things in common.

But the Empirical/Rational distinction is not particularly about mental
processes or about models for mental processes. It is about the sources
of facts (or what we think are facts): Some of our facts are based on
experience (e.g., that it has been raining most of this past week; or,
grander, that F = ma in physics); and there are facts that are based
on reasoning (e.g., that if it has been raining, then it has been wet;
or if you have 2 things, and another 2 things, then you have 4 things in
all).

That's philosophy. Now here's Cog Sci: There are two ways to try to
explain cognition: One is to start by modeling the sensory processes
(hearing, seeing, touch, etc.) and the motor processes (eating,
walking, pronunciation) and to move upward to the intellect from there.
The other is to start by modeling the intellectual processes, such as
reasoning, problem-solving, and language, and to connect them with the
bottom-up sensorimotor processes later.

Notice that both bottom-up and top-down modeling make an
ASSUMPTION about modularity: Modules are defined as being independent
of one another, so you can understand and explain one of them without
needing to worry about the other modules. But the assumption that
bottom-up processes and top-down processes can be understood
independently could be wrong.

During the heyday of computational modeling of the mind (i.e., for the
last 25 years or so), it was assumed that most of the explanation of
the mind would be found at the computational level, and that it was
safe to ignore bottom-up questions until that hard work had been done.

These days, however, it is thought that there is only one way to
get "up" there to the lofty intellectual activities of reasoning,
problem-solving, etc., and that one way is from the bottom up.

Symptoms that this may be so came from the failure of
computational models to "scale up" to life-size intellectual
capacities. The computational models were all little "toy"
models that could only do a restricted number of things (e.g., play
chess, do calculation, prove theorems, describe scenes, answer
questions). It was not plausible that all these toy models were really
independent modules, and that our minds could be explained by putting
countless toys together.

The Frame Problem, which kept arising in all top-down models of the
mind, was a symptom that something was wrong: If the computational
model left out any possibility from the body of symbolic "knowledge"
out of which the model was built, then the model would do well under
some conditions, but eventually it would always run into a condition it
could not handle, at which point its performance was SO bad that it
called into question the assumption that it was the right model even
for the things it COULD do before it ran into a Frame Problem.

The "Frame Problem," for those who need a kid-sib reminder, is this: A
computational model is a symbol system, with symbols and rules
("algorithms") for manipulating the symbols. You can think of it as a
recipe for making a cake.

So you have a cake-making computer. It gets ingredients as input, and
it must produce a cake as output. It is so good at making cakes that
you think it's a good model for what goes on in the head of a real cook.
So one day, the cake-baking computer is adding three eggs to the batter
and one of the eggs falls and cracks. What does the computer do?

That depends on how its programme was written, but suppose the rule
was:

(1) Make N = 0.
(2) Take an egg, add 1 to N, and check whether (3 - N) is greater than
or equal to 0.
(3) If (3 - N) is less than 0, put the egg back and move on to step (4)
of the recipe. If (3 - N) is greater than or equal to 0, break the egg
into the batter, throw away the shell, and return to step (2).
(4) ...
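
Purely for illustration, here is that rule written out as a small
runnable program (the names add_eggs, eggs and batter are just invented
placeholders, not part of the recipe):

# The egg-counting rule as literal symbol manipulation (illustrative only).
def add_eggs(eggs, batter, how_many=3):
    n = 0                                    # (1) Make N = 0.
    while True:
        egg = eggs.pop(0)                    # (2) Take an egg, add 1 to N.
        n = n + 1
        if how_many - n < 0:                 # (3) If (3 - N) < 0, put the egg
            eggs.insert(0, egg)              #     back and move on to step (4).
            return batter
        batter.append("contents of " + egg)  #     Otherwise break it into the
                                             #     batter and return to step (2).

print(add_eggs(["egg-1", "egg-2", "egg-3", "egg-4"], []))
# -> ['contents of egg-1', 'contents of egg-2', 'contents of egg-3']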

This programme makes no provision for a dropped egg, so what happens
if the computer drops one? Suppose it happens in the middle of
step (3). The computer continues to go through the motions, even
though there is no egg left to break, and ends up with a cake that has
one egg fewer than it is supposed to have.
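
Again purely for illustration, the mishap can be simulated by letting
one egg slip out of the program's "hand" after it has already been
counted; the program goes through the breaking motion anyway, and the
batter comes up one egg short:

# Illustrative only: the same rule, but one counted egg slips and falls.
def add_eggs_with_accident(eggs, batter, how_many=3, drop_on=2):
    n = 0
    while True:
        egg = eggs.pop(0)
        n = n + 1
        if how_many - n < 0:
            eggs.insert(0, egg)
            return batter
        if n == drop_on:
            egg = None      # the egg slips, falls, and cracks on the floor
        if egg is not None:
            batter.append("contents of " + egg)
        # when the egg is gone, the "break egg" motion adds nothing to the batter

print(add_eggs_with_accident(["egg-1", "egg-2", "egg-3", "egg-4"], []))
# -> ['contents of egg-1', 'contents of egg-3']  (one egg short)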

Now this oversight can easily be remedied: The programme can include an
instruction for what to do if an egg falls, but you can be sure that
another frame problem will arise somewhere else. The reason is that
cognition is not just a body of symbols and symbol manipulation rules.
"Egg" means nothing to the computer; hence if its rule book has not
said exactly what to do under every possible condition (and that's
impossible to do, because the possibilities are infinite), then the
computer will always run into the Frame Problem eventually.
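
To put the same point in programming terms (again just an invented
sketch): each patch handles one foreseen mishap, but the list of
mishaps that would have to be foreseen is open-ended, so some condition
is always missing.

# Illustrative only: a rule book of patches for foreseen mishaps.
PATCHES = {
    "egg dropped": "take another egg and repeat step (2)",
    "egg rotten":  "discard it and take another egg",
    # "carton empty"?  "egg already hard-boiled"?  "oven on fire"?  ...
}

def handle(mishap):
    if mishap in PATCHES:
        return PATCHES[mishap]
    raise RuntimeError("Frame Problem: no rule for " + repr(mishap))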

I'm sure you want to reply: "Hang on a minute! It seems to me that
people eventually run into the Frame Problem too, because they don't
know what to do under every possible condition either."

That's true, but we don't know what else is going on in our heads
besides symbols and rule-based symbol manipulations. The Frame Problem
arises for any symbol system because the system works only on the basis
of symbols and rules, and not on the basis of their meaning.

It's not that our knowledge is infinite either. It's that the
limitations of our knowledge are not the limitations of a symbol
system. The reason we know what to do if the egg falls and breaks is
not because we have one more rule in our heads that the computer
doesn't have. It is that we know what a cake is, and how and why we bake
it. In what does this knowledge consist if it is not just computational?

That's where bottom-up processing comes in: Could it be that our
symbols are not just arbitrary squiggles and squoggles, but the names
of things that we have learned to identify through neural nets that
detect the features in the analog sensory projection? If so, then "egg"
is not just a squiggle for us, which we must squoggle by "breaking it
into batter." Rather, might our knowledge and understanding be
implemented in our brains in the form of analog processes and neural
net activity rather than just symbol manipulation?
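
As a toy sketch of what "not just a squiggle" might mean (all the
feature values below are invented for illustration), the name "egg"
would be attached to a simple detector trained on analog sensory
features, rather than being defined only by the other symbols it gets
manipulated with:

# Illustrative only: grounding the name "egg" in analog features.
# Each object is a triple of made-up analog measurements
# (ovoid-ness, shell hardness, size); a tiny perceptron learns to
# attach the label 1 ("egg") or 0 ("not an egg") to such patterns.
def train_detector(samples, labels, epochs=50, lr=0.1):
    w, b = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - out
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b = b + lr * err
    return w, b

eggs     = [(0.9, 0.8, 0.3), (0.8, 0.7, 0.25)]   # sensory projections of eggs
non_eggs = [(0.1, 0.2, 0.6), (0.2, 0.1, 0.9)]    # projections of other things
w, b = train_detector(eggs + non_eggs, [1, 1, 0, 0])

def is_egg(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0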

There's reason to think it's so, not just because of the limitations of
purely top-down models, but also because in so little of our mental
life do we know what rule we're following. The fact that rules are
implicit rather than explicit doesn't guarantee that they are not
symbolic, of course. But the extreme difficulty that symbol systems
have with things that can be easily accomplished by analog processes
and neural nets suggests not only that bottom-up processes need
to be understood before top-down ones, but that top-down processes
must be "connected" to those bottom-up processes, otherwise top-down
processes are hanging from a skyhook (like the Cheshire cat's
smile).


