Categorical Perception: Bibliography
This is the Categorical Perception bibliography (in four parts -- this
is the first part). Don't worry! You are
not expected to read all or even most of it, just what your group needs
to prepare the subtopic you have chosen, plus a little background
reading so you sample the other three subtopics too.
The full articles behind each abstract in the file you are in right now are
all retrievable by clicking on the underlined portion (these are
articles by me).
This includes the first and last chapter of the CP book, which you
should all read (whether or not you are doing CP as your project).
For all of the abstracts above you will
unfortunately have to retrieve the full article the old way (by going to
the library). (Imagine if you could get it all on the Web! Soon this
will be possible; agitate for it!) For the abstracts below, however, you
can get the whole article with just one more click.
See also the companion www files for (1) a summary of the Categorical
Perception segment of the Advanced Topics course and (2) a version of
the bibliography coded by subject (visual, auditory, neural, etc.).
Harnad, S. (1987) Psychophysical and cognitive aspects of categorical
perception: A critical overview. Chapter 1 of: Harnad, S. (ed.) (1987)
Categorical Perception: The Groundwork of Cognition. New York:
Cambridge University Press.
ABSTRACT: Categorization is a very basic cognitive activity. It is
involved in any task that calls for differential responding, from
operant discrimination to pattern recognition to naming and describing
objects and states-of-affairs. Explanations of categorization range
from nativist theories denying that any nontrivial categories are
acquired by learning to inductivist theories claiming that most
categories are learned. "Categorical perception" (CP) is the name given
to a suggestive perceptual phenomenon that may serve as a useful model
for categorization in general: For certain perceptual categories,
within-category differences look much smaller than between-category
differences even when they are of the same size physically. For
example, in color perception, differences between reds and differences
between yellows look much smaller than equal-sized differences that
cross the red/yellow boundary; the same is true of the phoneme
categories /ba/ and /da/. Indeed, the effect of the category boundary
is not merely quantitative, but qualitative.
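The quantitative claim above can be made concrete with a toy calculation. The following sketch (illustrative only, not from the chapter) assumes that perceived values are a logistic function of an arbitrary physical continuum with a category boundary at 0.5; equal-sized physical differences then look far smaller within a category than across the boundary:

    # Toy illustration in Python (assumptions: logistic "perceived" mapping,
    # arbitrary category boundary at 0.5).
    import numpy as np

    physical = np.linspace(0.0, 1.0, 11)                   # equally spaced stimuli
    perceived = 1 / (1 + np.exp(-12 * (physical - 0.5)))   # warped near the boundary

    within = perceived[2] - perceived[0]   # 0.0 -> 0.2, stays inside a category
    across = perceived[6] - perceived[4]   # 0.4 -> 0.6, crosses the 0.5 boundary
    print(round(within, 3), round(across, 3))               # ~0.024 vs ~0.538

The same physical difference (0.2 units) thus looks roughly twenty times larger when it crosses the assumed boundary than when it falls inside a category.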
Harnad, S. (1987) The induction and representation of categories.
In: Harnad, S. (ed.) (1987) Categorical Perception: The Groundwork of
Cognition. New York:
Cambridge University Press.
ABSTRACT: A provisional model is presented in which categorical
perception (CP) provides our basic or elementary categories. In
acquiring a category we learn to label or identify positive and
negative instances from a sample of confusable alternatives. Two kinds
of internal representation are built up in this learning by
"acquaintance": (1) an ICONIC representation that subserves our
similarity judgments and (2) an analog/digital feature-filter that
picks out the invariant information allowing us to categorize the
instances correctly. This second, CATEGORICAL representation is
associated with the category name. Category names then serve as the
atomic symbols for a third representational system, the (3) SYMBOLIC
representations that underlie language and that make it possible for us
to learn by "description." Connectionism is one possible mechanism for
learning the sensory invariants underlying categorization and naming.
Among the implications of the model are (a) the "cognitive identity of
(current) indiscriminables": Categories and their representations can
only be provisional and approximate, relative to the alternatives
encountered to date, rather than "exact." There is also (b) no such
thing as an absolute "feature," only those features that are invariant
within a particular context of confusable alternatives. Contrary to
prevailing "prototype" views, however, (c) such provisionally invariant
features MUST underlie successful categorization, and must be
"sufficient" (at least in the "satisficing" sense) to subserve reliable
performance with all-or-none, bounded categories, as in CP. Finally,
the model brings out some basic limitations of the "symbol-manipulative"
approach to modeling cognition, showing how (d) symbol meanings must be
functionally grounded in nonsymbolic, "shape-preserving"
representations -- iconic and categorical ones. Otherwise, all symbol
interpretations are ungrounded and indeterminate.
This amounts to a principled call for a psychophysical (rather than a
neural) "bottom-up" approach to cognition.
Harnad, S. (1990) The Symbol Grounding Problem.
Physica D 42: 335-346.
ABSTRACT: There has been much discussion recently about the scope and
limits of purely symbolic models of the mind and about the proper role
of connectionism in cognitive modeling. This paper describes the
"symbol grounding problem": How can the semantic interpretation of a
formal symbol system be made INTRINSIC to the system, rather than just
parasitic on the meanings in our heads? How can the meanings of the
meaningless symbol tokens, manipulated solely on the basis of their
(arbitrary) shapes, be grounded in anything but other meaningless
symbols? The problem is analogous to trying to learn Chinese from a
Chinese/Chinese dictionary alone. A candidate solution is sketched:
Symbolic representations must be grounded bottom-up in nonsymbolic
representations of two kinds: (1) "iconic representations," which are
analogs of the proximal sensory projections of distal objects and
events, and (2) "categorical representations," which are learned and
innate feature-detectors that pick out the invariant features of object
and event categories from their sensory projections. Elementary symbols
are the names of these object and event categories, assigned on the
basis of their (nonsymbolic) categorical representations. Higher-order
(3) "symbolic representations," grounded in these elementary symbols,
consist of symbol strings describing category membership relations
(e.g., "An X is a Y that is Z"). Connectionism is one natural candidate
for the mechanism that learns the invariant features underlying
categorical representations, thereby connecting names to the proximal
projections of the distal objects they stand for. In this way
connectionism can be seen as a complementary component in a hybrid
nonsymbolic/symbolic model of the mind, rather than a rival to purely
symbolic modeling. Such a hybrid model would not have an autonomous
symbolic "module," however; the symbolic functions would emerge as an
intrinsically "dedicated" symbol system as a consequence of the
bottom-up grounding of categories' names in their sensory
representations. Symbol manipulation would be governed not just by the
arbitrary shapes of the symbol tokens, but by the nonarbitrary shapes
of the icons and category invariants in which they are grounded.
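As a purely illustrative sketch of the "An X is a Y that is Z" idea (not code from the paper, with hand-coded detectors standing in for the learned categorical representations the abstract describes), elementary symbols can be treated as names attached to detectors over sensory projections, and a symbol defined by a description then inherits its grounding through them:

    # Toy Python sketch: a "sensory projection" is reduced here to a set of
    # detectable features, purely to keep the example short; in the model the
    # detectors are learned categorical representations, not hand-coded tests.
    grounded = {
        "Y": lambda projection: "y_feature" in projection,
        "Z": lambda projection: "z_feature" in projection,
    }

    def define_by_description(name, is_a, that_is):
        """Ground a new symbol via the string 'An <name> is a <is_a> that is <that_is>'."""
        grounded[name] = lambda projection: (grounded[is_a](projection)
                                             and grounded[that_is](projection))

    define_by_description("X", "Y", "Z")
    print(grounded["X"]({"y_feature", "z_feature"}))   # True: X inherits grounding from Y and Z
    print(grounded["X"]({"y_feature"}))                # False

The only point of the sketch is that the new symbol's connection to what it picks out is inherited from already-grounded symbols rather than stipulated by an outside interpreter.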
Harnad, S., Hanson, S.J. & Lubin, J. (1991) Categorical Perception and
the Evolution of Supervised Learning in Neural Nets. In: Working
Papers of the AAAI Spring Symposium on Machine Learning of Natural
Language and Ontology (DW Powers & L Reeker, Eds.) pp. 65-74. Presented
at Symposium on Symbol Grounding: Problems and Practice, Stanford
University, March 1991; also reprinted as Document D91-09, Deutsches
Forschungszentrum für Künstliche Intelligenz GmbH, Kaiserslautern, FRG.
ABSTRACT: Some of the features of animal and human categorical
perception (CP) for color, pitch and speech are exhibited by neural
net simulations of CP with one-dimensional inputs: When a backprop net
is trained to discriminate and then categorize a set of stimuli, the
second task is accomplished by "warping" the similarity space
(compressing within-category distances and expanding between-category
distances). This natural side-effect also occurs in humans and
animals. Such CP categories, consisting of named, bounded regions of
similarity space, may be the ground level out of which higher-order
categories are constructed; nets are one possible candidate for the
mechanism that learns the sensorimotor invariants that connect
arbitrary names (elementary symbols?) to the nonarbitrary shapes of
objects. This paper examines how and why such compression/expansion
effects occur in neural nets.
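One simple way to make the compression/expansion effect concrete (an assumed measure, sketched here in Python; it is not the authors' own code or statistic) is to compare mean pairwise distances between the hidden-unit representations of same-category and different-category stimuli:

    import numpy as np

    def warp_ratio(hidden_reps, labels):
        """Mean between-category distance / mean within-category distance.

        hidden_reps : (n_stimuli, n_hidden) array of hidden-unit activations
        labels      : (n_stimuli,) array of category labels
        An increase in this ratio after category training reflects CP-like
        warping: within-category compression and between-category expansion.
        """
        hidden_reps = np.asarray(hidden_reps, dtype=float)
        labels = np.asarray(labels)
        d = np.linalg.norm(hidden_reps[:, None, :] - hidden_reps[None, :, :], axis=-1)
        same = labels[:, None] == labels[None, :]
        off_diag = ~np.eye(len(labels), dtype=bool)
        return d[~same].mean() / d[same & off_diag].mean()

Computing such a ratio from the hidden layer before and after the categorization phase of training is one way to quantify the "warping" of similarity space.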
Harnad, S. (1992) Connecting Object to Symbol in Modeling
Cognition. In: A. Clarke and R. Lutz (Eds) Connectionism in Context
ABSTRACT: Connectionism and computationalism are currently vying for
hegemony in cognitive modeling. At first glance the opposition seems
incoherent, because connectionism is itself computational, but the form
of computationalism that has been the prime candidate for encoding the
"language of thought" has been SYMBOLIC computationalism,
whereas connectionism is nonsymbolic (or, as some have hopefully
dubbed it, "subsymbolic"). This paper will examine what is and is not a
symbol system. A hybrid nonsymbolic/symbolic system will be sketched in
which the meanings of the symbols are grounded bottom-up in the
system's capacity to discriminate and identify the objects they refer
to. Neural nets are one possible mechanism for learning the invariants
in the analog sensory projection on which successful categorization is
based. "Categorical perception," in which similarity space is "warped"
in the service of categorization, turns out to be exhibited by both
people and nets, and may mediate the constraints exerted by the analog
world of objects on the formal world of symbols.
Harnad, S. (1996) The Origin of Words: A Psychophysical Hypothesis.
Presented at ZiF Conference on Biological and Cultural Aspects of
Language Development. January 20 - 22, 1992, University of Bielefeld; to
appear in Durham, W. & Velichkovsky, B. (Eds.) "Naturally Human: Origins
and Destiny of Language." Münster: Nodus Pub.
ABSTRACT: It is hypothesized that words originated as the names of
perceptual categories and that two forms of representation underlying
perceptual categorization, iconic and categorical representations,
served to GROUND a third, symbolic, form of representation. The third
form of representation made it possible to name and describe our
environment, chiefly in terms of categories, their memberships, and
their invariant features. Symbolic representations can be shared
because they are intertranslatable. Both categorization and translation
are approximate rather than exact, but the approximation can be made as
close as we wish. This is the central property of that universal
mechanism for sharing descriptions that we call natural language.
Harnad, S. (1993) Grounding Symbolic Capacity in Robotic Capacity.
In: Steels, L. and R. Brooks (eds.) The "artificial life" route to
"artificial intelligence." Building Situated Embodied Agents. New
Haven: Lawrence Erlbaum
Harnad, S., Hanson, S.J. & Lubin, J. (1994) Learned Categorical
Perception in Neural Nets: Implications for Symbol Grounding.
In: V. Honavar & L. Uhr (eds) Symbol Processors and Connectionist
Network Models in Artificial Intelligence and Cognitive Modelling:
Steps Toward Principled Integration. Academic Press.
ABSTRACT: After people learn to sort objects into categories they see
them differently. Members of the same category look more alike and
members of different categories look more different. This phenomenon of
within-category compression and between-category separation in
similarity space is called categorical perception (CP). It is exhibited
by human subjects, animals and neural net models. In backpropagation
nets trained first to auto-associate 12 stimuli varying along a
one-dimensional continuum and then to sort them into 3 categories, CP
arises as a natural side-effect because of four factors: (1) Maximal
interstimulus separation in hidden-unit space during auto-association
learning, (2) movement toward linear separability during categorization
learning, (3) inverse-distance repulsive force exerted by the
between-category boundary, and (4) the modulating effects of input
iconicity, especially in interpolating CP to untrained regions of the
continuum. Once similarity space has been "warped" in this way, the
compressed and separated "chunks" have symbolic labels which could then
be combined into symbol strings that constitute propositions about
objects. The meanings of such symbolic representations would be
"grounded" in the system's capacity to pick out from their sensory
projections the object categories that the propositions were about.
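A minimal end-to-end sketch of the regimen this abstract describes (a small backprop net, 12 one-dimensional stimuli, auto-association followed by sorting into 3 categories) is given below. It is an illustration under stated assumptions, not the authors' simulation: the 12-unit place coding of the stimuli, the layer sizes, the learning rate, and the use of a separate category output layer in the second phase are all choices made only to keep the example short. The within/between distance ratio sketched after the 1991 abstract above is redefined here so the snippet is self-contained.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # 12 stimuli on a one-dimensional continuum, coded as 12-unit "place"
    # vectors (an illustrative assumption, not the paper's input coding).
    X = np.eye(12)
    labels = np.repeat(np.arange(3), 4)          # 3 categories of 4 stimuli each
    Y = np.eye(3)[labels]                        # one-hot category targets

    n_hid = 4
    W_in = rng.normal(0.0, 0.5, (12, n_hid))     # input  -> hidden
    W_auto = rng.normal(0.0, 0.5, (n_hid, 12))   # hidden -> auto-association output
    W_cat = rng.normal(0.0, 0.5, (n_hid, 3))     # hidden -> category output

    def hidden():
        return sigmoid(X @ W_in)

    def warp_ratio(H):
        """Mean between-category / mean within-category hidden-unit distance."""
        d = np.linalg.norm(H[:, None, :] - H[None, :, :], axis=-1)
        same = labels[:, None] == labels[None, :]
        off_diag = ~np.eye(12, dtype=bool)
        return d[~same].mean() / d[same & off_diag].mean()

    def train(phase, epochs=5000, lr=0.1):
        """Plain full-batch backprop with sigmoid units and squared error."""
        global W_in
        for _ in range(epochs):
            H = hidden()
            W_out, T = (W_auto, X) if phase == "auto" else (W_cat, Y)
            O = sigmoid(H @ W_out)
            delta_out = (O - T) * O * (1 - O)
            delta_hid = (delta_out @ W_out.T) * H * (1 - H)
            W_out -= lr * (H.T @ delta_out)      # mutates W_auto or W_cat in place
            W_in -= lr * (X.T @ delta_hid)

    train("auto")                                # phase 1: auto-association
    print("warp ratio after auto-association:", round(warp_ratio(hidden()), 2))
    train("cat")                                 # phase 2: categorization
    print("warp ratio after categorization:  ", round(warp_ratio(hidden()), 2))

With these arbitrary settings the ratio typically increases after the categorization phase, which is the within-category compression and between-category separation the abstract describes; none of the specific numbers should be read as the paper's results.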