> From: Dunsdon, Graham <firstname.lastname@example.org>
> Can someone help me with a definition for 'functionalism' which
> distinguishes it also from connectionism please?
> At present, to me functionalism is a branch of computational modeling
> which categorizes and explicitly processes objects according to
> similarities in symbols eg., a sundial, a face, a circle.
Before I define functionalism we have to get (arbitrary-shaped)
"symbols" and (analog) "images" sorted out:
A sundial is an analog representation, not a symbolic one. The passage
of the shadow around the sundial is an analog of the motion of the sun.
The only part of it that is digital is the numbers we attach to the
positions of the tip of the shadow of the sun on the dial.
"Not-analog" does not = symbolic, by the way, because analog
representations are continuous, like the movement of the shadow of the
sun around the sundial. The ticks of a clock are digital, but they
aren't arbitrary; the pendulum is built so it swings exactly 60 times
a minute, in step with the actual passage of time.
The only symbolic representation of the time is a digital watch that
simply changes the numerals on the clockface: "12:01," "12:02" etc.
Unlike the analog sundial's image, which DOES resemble the sun's
position in the sky, "12:01" does not, so it is a symbolic
representation of the time.
But even inside that digital clock is a mechanism that matches the
passage of time by one analog means or another. Only its DISPLAY is
symbolic. [The clock inside a computer is not symbolic either; it's
really a peripheral device and not part of the computer itself.]
As to faces: a face is just a face, so it is not a representation at
all. Perhaps you mean a drawing or a photo of a face. Well, that is of
course an analog of the face it is the photo or drawing of, because it
resembles it (just as the "shadows" cast on our sense organs resemble
the distal objects of which they are the "shadows").
A circle is just an object. If it is meant to be the representation of
something else that is round, then again it is an analog representation,
not a symbol.
Here's the simple rule: if a representation resembles the thing it
represents, then it is an analog image, not a symbol. A symbol's shape is
arbitrary; it does not resemble the thing it represents (and if it does,
the resemblance is irrelevant, as is the resemblance of the word
"chatter" to the sound of chattering; the word for chattering could
just as well have been "squiggle," or "00110110").
What is physical resemblance? That's tougher to make precise, but here
is a good rule of thumb: In the "topography" (geography, shape) of the
analog image, nearby points in the image will also be nearby points in
the object of which it is the image. That's why it is not just the
image on your retina that is an analog of the distal object that cast
that image. Nearby points stay nearby as that retinal projection
projects to higher-level analog images in the rest of your brain (about
15 of them at last count).
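The "nearby points stay nearby" rule of thumb can be sketched in a few lines of code (this is my illustration, not anything from the text): a scaled-down copy of a set of points preserves the distance ordering, so it passes as an analog image; an arbitrary reassignment of points does not.

```python
def distance(p, q):
    """Euclidean distance between two 2-D points."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def is_topographic(objects, images, tolerance=1e-9):
    """Check that whenever one pair of points is closer than another in the
    object, the corresponding pair is also closer in the image
    (i.e., the distance ordering is preserved)."""
    n = len(objects)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    for (a, b) in pairs:
        for (c, d) in pairs:
            closer_in_object = distance(objects[a], objects[b]) < distance(objects[c], objects[d]) - tolerance
            closer_in_image = distance(images[a], images[b]) < distance(images[c], images[d]) - tolerance
            if closer_in_object and not closer_in_image:
                return False
    return True

object_points = [(0, 0), (1, 0), (5, 5)]
analog_image = [(x * 0.5, y * 0.5) for (x, y) in object_points]  # shrunk copy: analog
scrambled = [(5, 5), (0, 0), (1, 0)]  # arbitrary reassignment: not analog

print(is_topographic(object_points, analog_image))  # True
print(is_topographic(object_points, scrambled))     # False
```

The scrambled assignment still pairs every object point with an image point, but nearby points no longer stay nearby, which is exactly what disqualifies it as an analog.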
Nor do the object and its analog have to be in the same medium:
Whether a sound is high or low is "transduced" through your eardrum
onto different points along your cochlea: Higher sounds on the "high"
end of the cochlea, lower sounds on the "low" end, and in-between
sounds in between. Nearby pitches are still nearby on the cochlea.
For other senses, the analog of the intensity of the stimulation of
your sense organs is transduced into the frequency of nerve firing
(stronger stimulation turns into faster neural firing). Again, like the
swinging of the clock's pendulum, neural firing is "digital" (i.e., it
is broken into units instead of being continuous right down to the
biophysical level), but high intensity still corresponds to high
frequency, low to low, and in between to in between.
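A minimal sketch of that rate code (my own toy numbers, not physiology): the output is discrete spike counts, so it is "digital" in that sense, but the mapping is monotone, so high still corresponds to high, low to low, and in-between to in-between.

```python
def firing_rate(intensity, max_rate=200):
    """Map a stimulus intensity in [0, 1] to a discrete spikes-per-second
    count. Discrete ("digital"), but order-preserving: stronger -> faster."""
    return round(max_rate * intensity)

intensities = [0.1, 0.5, 0.9]
rates = [firing_rate(i) for i in intensities]
print(rates)  # [20, 100, 180]

# In-between intensities stay in between: the ordering is preserved.
assert rates == sorted(rates)
```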
To be a symbol, a representation must not have any of these properties,
or if it does, then they must not be USED in the symbol manipulation.
For symbol manipulation, everything could just as well be a string
of 0's and 1's. The symbol-manipulation rules just tell you
what to do with the 0's and 1's. That a 0 is circular is irrelevant.
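Here is a toy illustration (mine, not from the text) of a shape-blind symbol-manipulation rule: flip every symbol in a string. Renaming the symbols ('0' to 'a', '1' to 'b') changes nothing about the manipulation itself, which is the sense in which symbol shapes are arbitrary.

```python
def flip(string, zero="0", one="1"):
    """Replace every `zero` symbol with `one` and vice versa.
    The rule cares only about which symbol is which, not what it looks like."""
    return "".join(one if s == zero else zero for s in string)

print(flip("00110110"))                     # "11001001"
print(flip("aabbabba", zero="a", one="b"))  # "bbaabaab" -- same rule, different shapes
```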
Now I can tell you what functionalism is: Functionalists think that
things with a certain function can be built in more than one way, and
the function they share can be understood in an abstract way.
An example is vision: Among all the creatures that can see, sight has
evolved independently in two different ways: In mammals, with some
simplification, we can say that seeing is accomplished by retinal cells
that transduce photons (light waves) into firing frequency (how fast
the cells fire). In certain invertebrate creatures such as the
horseshoe crab, seeing is accomplished by a compound eye with cells
called ommatidia, that likewise transduce light, but in a different
way. This is an example of "convergent evolution," where a function
(seeing) evolved in two different ways, but with (roughly) the same
functional outcome: an organ that can transduce light.
So one sense of functionalism is that many different mechanisms may be
performing the same function, and that that function can be understood by
looking at what all those different ways of implementing it have in COMMON,
and ignoring the differences.
This is biological functionalism, but it also includes robotic
functionalism, because all those different ways of implementing the same
function (e.g., transducing light) include natural as well as man-made
ways of doing it. In artificial vision and robotics, optical transducers
have been designed that do many of the things that natural eyes do, and
the things they all have in common, ignoring the differences, are helpful
in understanding vision in a way that would be much harder if there
were one and only one way to implement vision.
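The functionalist point can be put in code (the class names and numbers are my own invention, purely illustrative): two differently built "transducers" implement the same function, light in, signal out, and what they share is the abstract input/output relation, not the mechanism.

```python
class RetinalCell:
    """Mammal-style transducer: light intensity into a firing rate."""
    def transduce(self, photons):
        return min(200, photons // 5)  # arbitrary rate code, capped at 200

class Ommatidium:
    """Horseshoe-crab-style: a different mechanism, the same function."""
    def transduce(self, photons):
        rate = 0
        for _ in range(photons // 5):  # computed by a different route...
            rate += 1
        return min(200, rate)          # ...but the same input/output relation

# Functionally equivalent despite the different implementations:
for photons in (0, 50, 5000):
    assert RetinalCell().transduce(photons) == Ommatidium().transduce(photons)
```

What biorobotic functionalism adds is that the shared function here is transduction itself, something a purely symbolic description would abstract away.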
So that's biological/robotic functionalism. There are two more senses of
functionalism, but you only need to know one of them; in fact, you know
it already: Computationalism -- the hypothesis that cognition is just
some form of computation -- is also a kind of functionalism. You could
call it symbolic functionalism to distinguish it from the biorobotic
kind of functionalism: The BIG difference between these two kinds of
functionalism is that biorobotic functionalism is "bottom-up"
functionalism, concentrating on sensory input, transduction, analog
representations, neural nets, and sensorimotor interactions with the
world, whereas symbolic functionalism is "top-down," focused on
symbolic representations and algorithms only.
Both functionalisms abstract from the details of the way the function is
implemented physically; both agree that not all the details of the
physical implementation matter. But only symbolic functionalism
abstracts so much that there's nothing left but a computer programme,
which could be implemented just as well in real organisms or a digital
computer. In contrast, biorobotic functionalists agree that there is
more than one way to implement certain functions, but that
noncomputational properties of the implementations (e.g., transduction,
analog representation, feature-detecting neural nets) are important
too, not just symbols and symbol manipulation rules.
The last kind of functionalism, about which you need not bother your
heads unless you want to, is a philosophical kind of functionalism, and
it too consists of two kinds: "narrow" and "wide" pfunctionalism.
(I'm using the "pf" to cue you that it's philosophical pfunctionalism
we're talking about.)
Pfunctionalism is a theory of meaning in philosophy: We all know that
our words and our thoughts mean something. When I say "cat," I mean
those little furry things. Now according to "narrow" pfunctionalists,
the meaning of cat, whatever it turns out to be, will be a functional
state of your body, in which some parts of that internal state are
connected in certain ways to other parts of that internal state, and
when they are connected in the right way, the meaning is in there,
somewhere, in the form of a pfunctional state. Meaning, in other words,
is from the sensory surfaces inward for narrow pfunctionalists.
For "wide" pfunctionalists, meaning is bigger than our heads or even our
bodies. If I think "cat," the functional state that corresponds to that
thought is not just inside my head, it includes the distal object that
my thought is about (the cat).
Both versions of pfunctionalism are indeed forms of functionalism, in
that they both abstract from the details of the physical
implementation, believing that meaning is some form of functional
state, but narrow pfunctionalists think the functional state is in the
head, and consists in the functional relations between structures and
states in the head, and wide pfunctionalists think it's wider than the
head, and consists in the functional relations not only within the
head, but also between the head and the distal objects that thoughts are
about.
> Connectionism, in contrast, uses the interaction of numerous little
> features to make the connections which categorise the object and then
> enable the identification and recognition of its unique properties.
Connectionism is functionalism too: biorobotic functionalism. The
mechanism that learns patterns in real creatures and in robots might be,
or might be in part, neural nets.
> But I'm then stuck with the problem of understanding how functionalism
> (which as a method of processing uses 'empty boxes') is able to
> represent the diversity of objects with similar basic shapes but very
> different insides!
> I seem to have returned to the symbolic-connectionist debate. Help!
All functionalists agree that the same "function" can be accomplished in
many different ways, so the physical details of any specific way of
doing it are irrelevant. Symbolic functionalists, however, think
that all you need to know is the computer programme that all the
different implementations are implementing, whereas the biorobotic
functionalists (who include connectionists) think that important
parts of cognition are not computational, including sensory
transduction, analog representations and processing, and neural nets.
You may wonder who is NOT a functionalist at all: those who think that
the only way to understand cognition is to learn all the details about
the brain. They hold (and they COULD be right) that it is exactly
the implementational details that all the other functionalists are
abstracting away from that will explain the mind.
This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:23:50 GMT