> From: SCHYNS, Philippe <firstname.lastname@example.org>
> "The main point to realize is that all knowledge presents itself within
> a conceptual framework adapted to account for previous experience and
> that any such frame may prove too narrow to comprehend new
> experiences." (Niels Bohr, 1958).
Yes, but what are "concepts" and "conceptual frameworks"? We are
speaking about percepts: concrete perceptual categories. Things you can
see with your eyes. No doubt prior experience -- e.g., passive exposure
history, as in your experiments -- influences how I see something now.
But where do "concepts" come into this, and what are they?
Be careful about making them into the sorts of things we describe in
words. They may well be, but then you must first explain what the
meanings of those words are, and where they come from! For they
certainly could not have PRECEDED percepts.
> Then one note to add to this excellent discussion: Features can be
> flexible in at least two ways to accommodate new experience, both of
> which have been neglected in fixed feature models.
> The first way is easy to accept: Assume that we have innate fixed
> features, but that these features can tune to the objects they detect.
> Then, one can imagine that, say, a detector of ellipses could evolve
> into a detector of circles in a "circle" world. Note that the opposite
> would be harder, because a hardwired circle detector might have only
> one, not two focal points. This conservative view of flexibility
> already stresses the important point that even hardwired features
> should continuously evolve in response to environmental contingencies.
Tuning feature-detectors is a fine potential mechanism, as long as
the metaphoric process of "tuning" can be specified explicitly.
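For concreteness, here is one minimal way the "tuning" metaphor could be cashed out (a toy sketch of my own, not Schyns's proposal): a detector parameterized by a preferred aspect ratio that nudges its parameter toward each stimulus it encounters. In a "circle world," the ellipse detector drifts toward being a circle detector, as the quoted passage imagines.

```python
# Toy sketch (my own assumption, not Schyns's model): a detector
# parameterized by its preferred aspect ratio (minor/major axis;
# 1.0 = circle). "Tuning" is cashed out as nudging that parameter
# toward each stimulus.
class EllipseDetector:
    def __init__(self, preferred_ratio=0.5, rate=0.1):
        self.preferred_ratio = preferred_ratio
        self.rate = rate

    def respond(self, stimulus_ratio):
        # Response falls off linearly with distance from the preference.
        return max(0.0, 1.0 - abs(stimulus_ratio - self.preferred_ratio))

    def tune(self, stimulus_ratio):
        # Step the preference a fraction of the way toward the stimulus.
        self.preferred_ratio += self.rate * (stimulus_ratio - self.preferred_ratio)

detector = EllipseDetector(preferred_ratio=0.5)
for _ in range(100):          # a "circle world": every stimulus has ratio 1.0
    detector.tune(1.0)
print(round(detector.preferred_ratio, 3))   # drifts to (nearly) 1.0: a circle detector
```

Note that even this trivial sketch makes the commitment explicit: "tuning" here is parameter adjustment within a fixed parameterization, which is exactly the conservative sense of flexibility the quoted passage describes.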
I do, however, notice a certain equivocation in your papers between
"feature-detectors" and "category[-member]-detectors" (whether
individual or kind). Normally we think of features as being (implicitly)
detected and USED in order to (explicitly) detect categories.
Of course, a feature (round), like anything, can also become a category
("round"), but then THAT must in turn be based on other features, the
ones that pick out this feature or kind of feature rather than that
(e.g., points equidistant from a single point).
The other equivocation is between (1) features of distal objects or
their proximal projections, on the one hand, and (2) features of the
internal representations of those objects, or even (3) internal
representations of their features. These are all different things.
One can speak most straightforwardly about features without referring
to internal representations at all: A feature is a property of the
proximal stimulus -- the distal object's shadow on the sensory surface
-- and a feature-detector detects that property (e.g., colour, shape)!
It may of course require considerable processing of the proximal shadow
to extract the feature, and that all takes place internally, but it is
still best to see the process as being one of detecting features in the
input, rather than in the internal representation of the input.
To put it another way: the "internal representation" of the input is
part of the processing of the input, rather than being the "object"
whose features are detected, or worse, "represented." Otherwise we get
into layers and layers of internal objects and features and
representations of features, without end.
> Our proposal embraces a second, more adventurous view (see "Feature
> creation: Form-from-medium rather than form-from-form" in the reply to
> Lifted from the reply:
> We want to distinguish the form of feature newness involved in chunking
> (which we call form-from-form) from another kind (form-from-medium)
> which directly produces features from a medium, not from other
> features. By analogy, imagine a Martian whose visual medium (the output
> of its transducers) is very much like dough. On the first day of its
> existence, the outside world imprints a teddy bear into the dough.
Here's the first step in the equivocation: You see a teddy bear. The
teddy bear "imprints" its "shadow" on your sensory surface. So far so good.
By the way, the visual medium is the INPUT to the transducers, not the
output, and it consists of photons, both here and on Mars. The only
thing that can vary on Mars is the nature of the transducer surface,
and whatever its output might be. We will now assume that photons have
an effect on it that is very much like imprinting a shape on dough.
> Of course, this object and its parts are unknown to the Martian.
Which object? the teddy bear? The teddy bear's imprint on the Martian's
senses? Or something else?
> He or she
> cannot compose a new representation of the modelled teddy bear from an
> already-existing representation of its component parts.
Who needs to compose a new representation? Presumably, like us, the
Martian needs only to identify objects, in this case, teddy bears.
What "representation" are you speaking of here? The shape in the dough?
That's just transducer activity. Something else?
> creation is a process which can directly imprint on the visual medium:
> It can "cut around" the teddy-bear's silhouette and represent the
> entire object as a new holistic feature.
Here you seem to be talking about identifying teddy bears, not features.
I realise that you are thinking of feature formation as something like
this imprinting process. And of course along with that you must think of
feature detection as matching the imprint of the present object to past
imprints; this is a rather specific template-matching view of
feature-detection that may or may not have the power and generality we
need. But even metaphorically, it is better suited to shape detection than
anything else; and as such, it is better suited to part-features than
to any other kind, because parts are merely sub-shapes. That follows,
tautologically, from the metaphor you have chosen.
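To make the worry concrete, here is the sort of template-matching scheme the imprint metaphor suggests (a toy sketch under my own assumptions, not the authors' model): the stored "imprint" is a binary silhouette, and detection is simply the pixelwise agreement between the stored imprint and the present input.

```python
# Toy sketch of the template-matching reading of the imprint metaphor
# (my assumption about its cash value, not the authors' model): the
# stored "imprint" is a binary silhouette grid, and detection is the
# fraction of pixels on which imprint and input agree.
def overlap(template, image):
    matches = sum(t == p
                  for row_t, row_i in zip(template, image)
                  for t, p in zip(row_t, row_i))
    return matches / (len(template) * len(template[0]))

imprint = [[0, 1, 1, 0],     # a stored 4x4 "teddy bear" silhouette
           [1, 1, 1, 1],
           [0, 1, 1, 0],
           [1, 0, 0, 1]]
same = [row[:] for row in imprint]
other = [[1, 0, 0, 1],       # its complement: maximally different
         [0, 0, 0, 0],
         [1, 0, 0, 1],
         [0, 1, 1, 0]]
print(overlap(imprint, same))    # 1.0
print(overlap(imprint, other))   # 0.0
```

The shape-boundedness is evident: nothing but the silhouette enters into the match, which is why the metaphor favours part-features (sub-shapes) over every other kind.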
> Note that this cutting process
> only separates the entire bear from the medium. At this stage of
> conceptual development, our Martian represents the bear as a unitary,
> holistic feature and its decomposition into subcomponents is
Do you really think we could learn to identify a nontrivial object,
in a context of other, interconfusable, nontrivial objects, using
one feature, namely, the shape of the object as a whole? That is what
this imprint metaphor implies.
> The dough analogy illustrates that forms (i.e., features) can arise
> from the absence of form (i.e., a medium) with proper "cutting
> principles" (i.e., generic perceptual constraints).
I don't see that. The only "cutting principle" you have provided is the
ASSUMPTION that it is the teddy bear, not the teddy bear plus the
background on which it occurs, that is imprinted on the dough!
Otherwise you would have had to add a figure/ground detecting metaphor to
your imprint metaphor.
But let us pass over the figure/ground problem (although it too is a
categorisation problem, and if you are providing general principles,
they ought to apply to that problem as well; most people assume that it
is features that are used to distinguish figure from ground, but your
metaphor is not recursive in this way).
Is it really true that shape-categories (the only ones to which this
metaphor seems to apply), once isolated from their backgrounds, are
best detected, to a first approximation, by matching their outlines to
stored templates of outlines? What about outline features such as
curvature, junctures, zero-crossings, etc.? Is there any reason to
think that in a suitably challenging context (i.e., one in which the
alternatives are not specifically designed so as to be easily sortable
on the basis of outline alone), other surface features might not prove
more critical than the outline template -- and perhaps not even
part-features, but analytic operations, such as the 2nd derivative of
a curve or the cotangent of an angle?
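As a hedged illustration of what such an "analytic operation" might look like (the choice of discrete turning-angle curvature is mine, purely for concreteness):

```python
import math

# My own illustration of an "analytic" outline feature: discrete
# curvature along a closed contour, i.e., the signed turning angle
# at each vertex divided by the local step length.
def curvatures(points):
    n = len(points)
    result = []
    for i in range(n):
        (x0, y0), (x1, y1), (x2, y2) = points[i - 1], points[i], points[(i + 1) % n]
        a_in = math.atan2(y1 - y0, x1 - x0)     # incoming edge direction
        a_out = math.atan2(y2 - y1, x2 - x1)    # outgoing edge direction
        turn = (a_out - a_in + math.pi) % (2 * math.pi) - math.pi
        step = math.hypot(x2 - x1, y2 - y1)
        result.append(turn / step)
    return result

# A regular octagon: curvature is the same at every vertex, a property
# that distinguishes circle-like outlines from, say, a square's.
octagon = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
           for k in range(8)]
ks = curvatures(octagon)
print(all(abs(k - ks[0]) < 1e-9 for k in ks))   # True: constant curvature
```

Unlike an outline template, such a measure is not a stored shape at all; it is an operation applied to any shape.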
And, though it's a hard problem, does it not matter whether the
categorisation task is one of identifying individuals -- which are, by
definition, one of a kind, varying only in position and orientation --
versus one of identifying kinds? Outlines look marginally more
promising for individuals than for kinds, although variations in
position and orientation make it seem more profitable to look for
affine-geometric invariants rather than outlines even for individuals.
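For instance (my own illustration, not a proposal from the target article): ratios of signed triangle areas over landmark points are unchanged by any invertible affine map, because such a map scales every signed area by the same determinant -- whereas the raw outline itself is distorted by the map.

```python
# My own illustration of an affine-geometric invariant: an affine map
# scales every signed triangle area by its determinant, so RATIOS of
# areas over landmark points survive the transformation intact.
def area(p, q, r):
    # Signed area of triangle pqr.
    return ((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])) / 2.0

def affine(p, a, b, c, d, e, f):
    # Apply the affine map (x, y) -> (ax + by + e, cx + dy + f).
    return (a * p[0] + b * p[1] + e, c * p[0] + d * p[1] + f)

pts = [(0, 0), (2, 0), (1, 3), (4, 1)]
ratio = area(pts[0], pts[1], pts[2]) / area(pts[0], pts[1], pts[3])
mapped = [affine(p, 2, 1, 0, 3, 5, -2) for p in pts]   # det = 6, plus a shift
ratio2 = area(mapped[0], mapped[1], mapped[2]) / area(mapped[0], mapped[1], mapped[3])
print(abs(ratio - ratio2) < 1e-9)   # True: the ratio is invariant
```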
> form-from-form dominates compositional approaches (see Dominey and
> Tijsseling), we believe that form can also arise from a
> high-dimensional medium and adequate perceptual constraints.
What is a high-dimensional medium, in cookie-dough terms?
> We must
> start with a rejection of the idea that pixels, or retinal outputs are
> already fixed features (as suggested in Dorffner and Dominey). We
> believe that the proper relationship between pixels and features should
> mirror the relationship between dough and form: The former is the
> medium for the expression of the latter. In other words, individual
> pixels do not represent forms, but together, millions of pixels serve
> as a high-dimensional medium for multiple expressions of form.
There is no problem, and considerable generality, in seeing features as
combinations of pixels rather than individual pixels. But how does the
dimensionality enter into the cookie-dough metaphor?
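One concrete reading of "features as combinations of pixels" (an assumption for illustration, not the authors' formalism): the pixel array is a point in a high-dimensional space, and a feature is a weighted sum over pixels -- a linear functional on the medium -- rather than any single pixel.

```python
# An assumption for illustration, not the authors' formalism: the
# pixel array is a vector in a high-dimensional space, and a feature
# is a weighted combination of pixels -- a linear functional on the
# medium -- rather than any single pixel.
def feature(pixels, weights):
    # Inner product of the pixel vector with a feature direction.
    return sum(p * w for p, w in zip(pixels, weights))

pixels = [0.0, 1.0, 1.0, 0.0]            # a tiny 4-pixel "medium"
bar_detector = [-1.0, 1.0, 1.0, -1.0]    # hypothetical feature direction
print(feature(pixels, bar_detector))     # 2.0
```

On this reading, the "dimensionality" is just the number of pixels: the medium is the whole vector space, and each feature is one direction in it.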
And how are we to determine whether the repertoire of
pixel-combinations is fixed or flexible?
> Saying this does not solve the problem of feature creation, of course,
> but it points to another way of thinking about the problem:
> features-from-medium rather than
> complex_features-from-simpler_features, which is a potential difficulty
> of componential approaches. Feature creation, together with feature
> decomposition, expand componential approaches to knowledge
The dough-imprint metaphor does not do quite that much for me; and I have
no idea how it gets us to "knowledge representations" (and what they
might be).
> One form of newness is chunking. Chunking (or unitization) requires the
> stimuli to be discretized before being unitized. One implication is
> therefore that people perceive the discrete units before chunking
> them. Following chunking, the holistic unit becomes an isolatable and
> independent information packet. I do not believe that chunking applies
> to discrete pixels. Pixels are a medium, not forms.
Agreed that it is unlikely that chunking operates on pixels. It's that
next step (part-features, or more analytic ones? fixed or flexible?)
that seems less clear.
HARNAD Stevan email@example.com
Professor of Psychology firstname.lastname@example.org
Director, phone: +44 1703 592582
Cognitive Sciences Centre fax: +44 1703 594597
Department of Psychology http://www.cogsci.soton.ac.uk/~harnad/
University of Southampton http://www.princeton.edu/~harnad/
Highfield, Southampton ftp://ftp.princeton.edu/pub/harnad/
SO17 1BJ UNITED KINGDOM ftp://cogsci.soton.ac.uk/pub/harnad/
This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:23:21 GMT