Re: Harnad1 Word Origins

From: Stevan Harnad (harnad@coglit.ecs.soton.ac.uk)
Date: Fri Dec 03 1999 - 18:56:37 GMT


On Fri, 3 Dec 1999, Stacho Laszlo Pal wrote:

sh> "words originated as the names of perceptual categories and that two
sh> forms of representation underlying perceptual categorization - iconic
sh> and categorical representations - served to ground a third, symbolic
sh> form of representation".
>
> My question is about the nature of these representational levels. What
> is the meaning of "served to ground" or "to ground something"?

This refers to the symbol grounding problem:

    Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42:
    335-346. [Reprinted in Hungarian Translation as "A
    Szimbolum-Lehorgonyzas Problemaja." Magyar Pszichologiai Szemle
    XLVIII-XLIX (32-33) 5-6: 365-383.]
http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad90.sgproblem.html

The idea is that symbols are the names of categories. We can learn the
meaning of new symbols from propositions composed out of combinations of
old symbols, but it cannot continue like that indefinitely. At some
point the meaning has to be grounded in something other than just more
strings of symbols.
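
To make the regress concrete, here is a toy Python sketch (the miniature
dictionary and its entries are invented for illustration): if every word is
defined only in terms of other words, tracing any definition just leads to
more symbols, and eventually to cycles or undefined leaves, never to meaning.

    # Toy dictionary in which every word is "defined" only by other words.
    toy_dictionary = {
        "zebra": ["horse", "stripes"],
        "horse": ["animal", "rideable"],
        "stripes": ["pattern", "lines"],
        "animal": ["living", "thing"],
        # ... each defining word would itself need a definition, and so on.
    }

    def trace(word, seen=None):
        """Follow definitions until we hit a cycle or an undefined word."""
        seen = set() if seen is None else seen
        if word in seen:
            return                  # circular definition: the regress closes on itself
        seen.add(word)
        for defining_word in toy_dictionary.get(word, []):
            print(word, "->", defining_word)
            trace(defining_word, seen)

    trace("zebra")
    # Every path ends in a cycle or in a word with no entry at all:
    # somewhere, meaning has to be grounded in something other than more symbols.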

My candidate for that grounding is sensorimotor representations. By
"representation" here I just mean some internal structural/functional
change, or "engram."

We learn directly, through sensorimotor interactions with objects,
what is what (we eat mushrooms, we sit on chairs, we ride horses, we
draw stripes -- and we name the categories of entities that "afford"
such sensorimotor interactions "mushrooms," "chairs," "horses" and
"stripes").

For certain kinds of sensorimotor interactions with objects it is useful
to have an analog (or "iconic," or "echoic" or "kinetic") internal
representation. For example, suppose life decides to assign the Shepard task
to us ("Getting your daily bread depends on your being able to judge
correctly whether this random, three-dimensional shape is different from
that one, or the same as that one, but in a different orientation").
Internal analogs of the shapes, and the ability to mentally "rotate"
them would be a useful capacity to have.

    Shepard, Roger N.; Metzler, Jacqueline.
    Mental Rotation of Three-Dimensional Objects.
    Science, 1971 Feb, 171 (3972): 701-703.
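
A minimal Python sketch of the same/different judgment (my own toy example,
not Shepard and Metzler's procedure; the block coordinates are invented, and
only rotations about one axis are searched): the candidate shape is "rotated"
in small steps and compared against the target, which is roughly what an
analog, rotatable internal representation would buy you.

    import numpy as np

    def rotate_z(points, theta):
        """Rotate a set of 3-D points about the z-axis by theta radians."""
        c, s = np.cos(theta), np.sin(theta)
        return points @ np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]]).T

    def same_shape(a, b, steps=360, tol=1e-6):
        """'Mentally rotate' b in small steps; report whether it ever lines up with a."""
        return any(np.allclose(a, rotate_z(b, 2 * np.pi * k / steps), atol=tol)
                   for k in range(steps))

    # A hypothetical Shepard-style block figure, as a list of cube coordinates.
    shape = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [2, 1, 0], [2, 1, 1]], float)
    rotated = rotate_z(shape, np.pi / 2)           # same shape, different orientation
    mirrored = shape * np.array([-1, 1, 1])        # a different (mirror-image) shape

    print(same_shape(shape, rotated))    # True:  alignable by rotation
    print(same_shape(shape, mirrored))   # False: no z-rotation aligns them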

But in addition to these analog sensorimotor representations that are
partly isomorphic with the "shadows" that the objects cast on our
sensorimotor surfaces, it is also necessary to abstract the invariant
features that reliably allow us to identify and name objects as being
of one KIND or another. (Some mushrooms are EDIBLE, some are TOXIC.)

The categorical representation is what is left when one filters out what
is NOT invariant among the members of a sensorimotor category. To do
that we need an internal mechanism that can learn to detect and abstract
these invariants. Neural nets are one possible candidate for doing that.
The result would be a categorical representation.
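
As a stand-in for such a mechanism, here is a minimal Python sketch (the
mushroom features, the "spotted" invariant and the single delta-rule unit are
all my own illustrative assumptions, not a claim about how the brain or any
particular net does it): trained on labelled exemplars, the unit ends up
weighting the one feature that reliably predicts the category and effectively
ignoring the features that vary freely.

    import random

    random.seed(0)

    # Hypothetical mushroom exemplars: [cap_size, stalk_length, spotted] -> edible?
    # By construction, "spotted" is the invariant feature; the others vary freely.
    def make_mushroom():
        spotted = random.choice([0.0, 1.0])
        features = [random.random(), random.random(), spotted]
        return features, (1.0 if spotted == 0.0 else 0.0)   # toxic iff spotted

    data = [make_mushroom() for _ in range(200)]

    # A single threshold unit trained by the delta rule.
    weights, bias, rate = [0.0, 0.0, 0.0], 0.0, 0.1
    for _ in range(50):
        for features, target in data:
            output = 1.0 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0.0
            error = target - output
            weights = [w + rate * error * x for w, x in zip(weights, features)]
            bias += rate * error

    print(weights)   # the weight on "spotted" ends up carrying the decision:
                     # the non-invariant features are, in effect, filtered out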

> In the
> light of connectionist alternatives I suggest handling the upper
> levels (the categorical and the symbolical ones) as emergent structures
> from the lower levels.

Beware of the weasel-word "emergent." It has no meaning if you don't
know the actual causal mechanism, and it has no usefulness if you do.

> What are in fact the neurobiological grounds of Harnad's hypothesis
> that could explain the qualitative differences between the
> representational levels?

It's somewhat premature to look for neurobiology here. What we are doing
first is cognitive modeling. This means "reverse-engineering" the
cognitive capacities of the brain by creating models that can DO what
the brain can do. Then those are our candidate causal mechanisms, and if
they are sufficiently powerful and general in their cognitive
capacities, then it might be a good time to look for their
implementation in the brain.

But if you go straight to the brain, you will find no causal mechanisms
explaining themselves. You'll just find anatomy, and physiology, and
pharmacology, with cognitive capacity "emerging" from it.

And if you look at brain-imaging, you will see pretty, colored
"correlates" of this emerging cognitive function, located in various
regions of the brain, but its causal mechanism will still not be
visible.

http://www.soton.ac.uk/~mfeb/scientific.html#TITLE

> 2. If we accept the connectionist view, an interesting question can be
> raised: why don't these emergent structures (or at least the symbolical
> one) develop in animals? Is there a special mental organ which enables
> the emergence of symbolical representations?

I don't yet know what the connectionist "view" is: I just use neural
nets as possible functional components in a mechanism that learns to
categorize sensory input.

I don't think symbolic functions are "emergent" (in fact, I don't think
anything that is clearly understood is "emergent" in any interesting
way: "emergent" usually means it's still mysterious and we have not yet
successfully reverse-engineered how it works).

Do animals have symbolic representations? First, do they have symbolic
capacities? Perhaps very rudimentary ones, as some of the ape and
dolphin experiments seem to show. But symbols are only symbols if they
are part of a formal symbol system (such as logic, mathematics or
language), and animals don't seem to have those.

So do we have a special symbolic "organ"? That depends what you mean by
"organ." We certainly have capacities that animals do not have (although
the truth may be that they are capacities animals DO have but they are
not capacities that animals have evolved to be motivated to USE).
Whatever causal mechanism underlies those capacities is the symbolic
"organ." Only successfully reverse-engineering can tell us what it is,
and how it works.

> 3. I see it as a reading of the Whorf hypothesis that the language (of
> thought, in the Fodor 1983 sense) = the categorizational possibilities
> influence thought.

My understanding of Fodor's Language-of-Thought (LoT) hypothesis is that
there is an internal ("mental") code, "mentalese," in which thought
takes place, and that mentalese is very like language; in any case it is
a symbolic code.

I am not sure what you mean by "categorizational possibilities" or their
relation to the Whorf Hypothesis.

Perhaps you mean that the categories named in our language influence the
way we think and see the world. They certainly do, but what is the
causality? Fodor thinks that most of those categories are inborn (and
Whorf does not).

> This could be true especially considering that
> categorization (the recognition/extraction of invariant features) is
> perhaps not an obvious procedure that can be modeled by some learning
> algorithms.

It is certainly not obvious; otherwise we would already have
successfully reverse-engineered it by now. But not obvious does not mean
that it cannot be modeled (I hope it can!). In any case, categorization,
and nontrivial category learning ability (in which Fodor does not
believe) certainly have a causal basis in the brain, and, whether
obvious or not, we have to reverse-engineer it if we are to understand
it.

Neural nets do quite well on some category learning tasks, as a first
approximation. Whether their capacity scales up to full-size human
capabilities is still an empirical question.

> I suspect that in a considerable part of our categorization
> tasks, sets of invariant features other than in reality may serve to
> ground the forming of categories (possibly in some cases of musical
> performance perception, in recognition of handwriting, etc., for
> instance).

If I understand correctly what you mean by "other than in reality," I am
not sure why you suspect that this is so:

Do you mean that in recognizing handwriting (say, recognizing what a
hand-written word is, or whose handwriting it is) there is not IN
REALITY (that means, in the specimen of handwriting we are trying to
categorize) the invariant basis for correctly categorizing it? For if
there is not, then we are doing it by magic (if we are doing it
correctly).

I don't even understand what it would mean to say that the invariant
feature on the basis of which I recognize that this handwriting is yours
and that is Orsolya's is "in my head" rather than "in reality". Surely
what has to be in my head is the MEANS to correctly recognize this
handwriting as yours and that handwriting as hers. The means is in my
head, but the data on which it operates is the handwriting sample. And
what the MEANS does is find the invariant in the sample: The invariant
may require a lot of constructive activity in my head (I may have to
find the 2nd derivative of a curve, and apply a boolean algorithm to the
outcome), but the invariant is still in the members of the category and
not in my head.
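
For concreteness, here is a toy Python rendering of that point (the stroke
samples and the "sharp bend" criterion are invented for illustration, not a
real handwriting-recognition method): the MEANS of detection, the
second-derivative computation and the boolean test, lives in the recognizer,
but whether the test comes out true depends entirely on the sample it is
applied to.

    def second_derivative(ys):
        """Discrete 2nd derivative of a sampled pen-stroke profile."""
        return [ys[i - 1] - 2 * ys[i] + ys[i + 1] for i in range(1, len(ys) - 1)]

    def has_sharp_bend(stroke, threshold=0.5):
        """Boolean test applied to the outcome: is there at least one sharp bend?"""
        return any(abs(d) > threshold for d in second_derivative(stroke))

    angular_stroke = [0.0, 0.2, 0.4, 1.2, 0.4, 0.2, 0.0]   # sharp bend in the middle
    rounded_stroke = [0.0, 0.2, 0.4, 0.5, 0.4, 0.2, 0.0]   # gentle curve throughout

    print(has_sharp_bend(angular_stroke))   # True:  the invariant is in this sample
    print(has_sharp_bend(rounded_stroke))   # False: and absent from this one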

Otherwise there would be no right or wrong about categorization (which
is the correct part of Wittgenstein's otherwise incorrect
argument that a person cannot invent a private language because then
there would be no wrong/right: the world can supply the error-correcting
feedback even if it is Robinson Crusoe who is classifying the mushrooms
on his island as "edible" or "toxic.").

> According to a Whorfian-like view, categorization schemas
> are primary and innate.

No, I think it's Fodorians who think categories are innate; Whorfians
think they are shaped by social interactions, language, culture.

> It reminds me also of an extreme Fodor view, a sort
> of Platonic idealism, according to which all concepts we humans CAN
> have are innate and in the head, and the role of perception is
> constrained only to recognition of the categories from the
> physical stimuli. This hypothesis is contrary to Harnad's view of
> categorization and symbolization.

It is certainly contrary to my view, but it is also contrary to the
"Whorfian" view that language shapes our view of reality -- but the
social world, and not our inborn structure, shapes our language!

> 4. I think we lack arguments that prove the truth of the
> intertranslatability criterion (for the existence of such a criterion
> between languages).

It is not something that can be proved (any more than the Church/Turing
Thesis can be proved). It can only be supported by positive examples
of intertranslatability, or challenged by alleged examples of
untranslatability. So far I have not seen a successful one of the
latter, and I challenge anyone to produce one! If I understand it, I
will translate it into any language I know. (IF it is in a language I do
not understand, then the explanation to me, in English, of exactly what
cannot be translated into what, will usually already be the clue to the
translatability).

I challenge the class to give examples -- from any language, into any
language -- of something that cannot be translated.

Note, though, that it is trivial to show that it cannot be translated
word-for-word (why should all languages lexicalize exactly the same
things?); but a string of words is as good as a word for translating
the content. The form is as irrelevant as it is in the case of
onomatopoeia, alliteration or rhyme: We are talking about the
intertranslatability of propositional content (prose), not the sensory
and other evocative properties of poetry.

> 5. Why can't the "language" of music for instance - to which
> psychoanalists attribute such an immense symbolical force of expression
> - give us a world description similar to that of the language? Can a
> musical system intertranslatable/equivalent to natural language be the
> tool of understanding?

Please do me a favor. Translate the following sentence into music:

"The cat is on the mat and the cube root of 27 is 3."

Give up. Why? Because music is not a language, if you use the word
"language" literally (and not metaphorically).

Psychoanalysts specialize in hermeneutics -- in interpretation of
experience and texts. We are talking here about scientific explanation
(reverse-engineering, to be exact), not interpretation. Interpretation
is more at home in literary criticism or theological analysis.

The natural languages are all intertranslatable, but not translatable
into music, because music is not an all-purpose symbolic code that can
express propositions. The tonal system is an acoustic system that brings
pleasure to the ear, and musical notation is a way of encoding its
acoustical parameters in frequency and time.

> I doubt it. This suggests that innate mechanisms focus on speech: we
> are predisposed since early infancy to pay attention to speech
> rather than to the music and melodies so often heard, for instance.

We are certainly specially adapted to pay attention to speech. But our
early affinity to music is quite remarkable too. We have special
speech-detectors, but our pitch, rhythm, melody, and harmony perception
seems to be "prepared" too, and not just arbitrary or random.

> Regarding the "choosen" nature of speech we can easily suppose that there
> are several other aspects choosen, too (related to the understanding of
> speech for example). And why is it just the speech concerned?

I'm not sure what you asked here. Do you mean that there could have been
other sensorimotor modalities for language than speech? I agree; in
fact, I think gesture came first. But speech definitely became the
"chosen" one for our species, and there are innate adaptations specific
to it (among them, Wernicke's and Broca's Areas).

> There certainly are innate elements in language. How is a baby able to
> choose the relevant structures so precisely from so few facts/data
> (heard, perceived etc.)? Categorization cues must be genetically
> transmissible.

Some of the baby's sensorimotor capacities are no doubt innate,
including, no doubt, many evolutionarily "prepared" categories. But a
lot of them are learned too. (And even the prepared ones had to be
"learned" -- by evolution. Only Fodor believes that categories can
originate in the "Big Bang".)

> What's more, with our "present" mind it is
> difficult to give a plausible explanation (as such explanations only seem
> plausible to ourselves) for why categorization serves as the ground for
> language use. For our ancestors to learn what to categorize, a
> "pre-categorization" (a categorization beforehand) was also needed.
> Categorization is not at all such an obvious category.

Distinguish inborn categorization from category learning. I focused on
the latter, but the model applies equally to both. It is just that in
inborn categorization the invariance-detectors were "learned" by
evolution.

--------------------------------------------------------------------
Stevan Harnad                        harnad@cogsci.soton.ac.uk
Professor of Cognitive Science       harnad@princeton.edu
Department of Electronics and        phone: +44 23-80 592-582
  Computer Science                   fax:   +44 23-80 592-865
University of Southampton            http://www.cogsci.soton.ac.uk/~harnad/
Highfield, Southampton               http://www.princeton.edu/~harnad/
SO17 1BJ UNITED KINGDOM


