cw> First, I don't really understand how pixels would represent
cw> properties of an object but if they can why can't they represent
cw> certain object regularities such as symmetry. How would Schyns's
cw> flexible features represent these particular object regularities any
cw> better?
First, a favourite quote:
"The main point to realize is that all knowledge presents itself within
a conceptual framework adapted to account for previous experience and
that any such frame may prove too narrow to comprehend new
experiences." (Niels Bohr, 1958).
Then one note to add to this excellent discussion: Features can be
flexible in at least two ways to accommodate new experience, both of
which have been neglected in fixed feature models.
The first way is easy to accept: Assume that we have innate fixed
features, but that these features can tune to the objects they detect.
Then, one can imagine that, say, a detector of ellipses could evolve
into a detector of circles in a "circle" world. Note that the opposite
would be harder, because a hardwired circle detector might have only
one, not two focal points. This conservative view of flexibility
already stresses the important point that even hardwired features
should continuously evolve in response to environmental contingencies.
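This conservative tuning story is easy to simulate. Below is a toy
sketch (my own illustration with made-up numbers, not a model from the
target article): a detector parameterized by two semi-axes drifts toward
the stimuli it detects, so in a world of circles an ellipse detector
gradually becomes a circle detector.

```python
# Illustrative sketch: a parametric ellipse detector whose semi-axes
# (a, b) tune toward the shapes it detects. In a "circle world" every
# stimulus has equal axes, so the detector drifts into a circle
# detector. All parameters here are hypothetical.

def tune(detector, stimulus, rate=0.1):
    """Nudge the detector's semi-axes toward the stimulus axes."""
    a, b = detector
    sa, sb = stimulus
    return (a + rate * (sa - a), b + rate * (sb - b))

detector = (2.0, 1.0)           # starts as an elongated ellipse
for _ in range(100):            # a "circle world": all stimuli circular
    detector = tune(detector, (1.5, 1.5))

a, b = detector
assert abs(a - b) < 1e-3        # the axes converged: a circle detector
```

The opposite drift is what the hardwired-circle intuition rules out:
a detector with only one "focal point" parameter has no second axis to
tune apart.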
Our proposal embraces a second, more adventurous view (see "Feature
creation: Form-from-medium rather than form-from-form" in the reply).
Lifted from the reply:
"We want to distinguish the form of feature newness involved in chunking
(which we call form-from-form) from another kind (form-from-medium)
which directly produces features from a medium, not from other
features. By analogy, imagine a Martian whose visual medium (the output
of its transducers) is very much like dough. On the first day of its
existence, the outside world imprints a teddy bear into the dough. Of
course, this object and its parts are unknown to the Martian. He or she
cannot compose a new representation of the modelled teddy bear from an
already-existing representation of its component parts. Feature
creation is a process which can directly imprint on the visual medium:
It can "cut around" the teddy-bear's silhouette and represent the
entire object as a new holistic feature. Note that this cutting process
only separates the entire bear from the medium. At this stage of
conceptual development, our Martian represents the bear as a unitary,
holistic feature, and its decomposition into subcomponents is not yet
available."
The dough analogy illustrates that forms (i.e., features) can arise
from the absence of form (i.e., a medium) with proper "cutting
principles" (i.e., generic perceptual constraints). Although
form-from-form dominates compositional approaches (see Dominey and
Tijsseling), we believe that form can also arise from a
high-dimensional medium and adequate perceptual constraints. We must
start with a rejection of the idea that pixels, or retinal outputs, are
already fixed features (as suggested in Dorffner and Dominey). We
believe that the proper relationship between pixels and features should
mirror the relationship between dough and form: The former is the
medium for the expression of the latter. In other words, individual
pixels do not represent forms, but together, millions of pixels serve
as a high-dimensional medium for multiple expressions of form.
Saying this does not solve the problem of feature creation, of course,
but it points to another way of thinking about the problem:
features-from-medium rather than
complex_features-from-simpler_features, which is a potential difficulty
of componential approaches. Feature creation, together with feature
decomposition, expands componential approaches to knowledge
representation.
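The "cutting" in the dough analogy can be given a minimal computational
reading. The sketch below is purely illustrative (a hypothetical flood
fill, not the mechanism we propose): it separates one connected
silhouette from a pixel medium and returns it as a single holistic
feature, with no internal part structure.

```python
# A minimal sketch of "cutting" a holistic feature out of a pixel
# medium. A flood fill separates one connected foreground blob from
# the background "dough"; the result is a unitary feature -- a set of
# pixel coordinates -- with no decomposition into parts.

def cut_silhouette(medium, seed):
    """Return the connected foreground region containing `seed`."""
    rows, cols = len(medium), len(medium[0])
    feature, frontier = set(), [seed]
    while frontier:
        r, c = frontier.pop()
        if (r, c) in feature or not (0 <= r < rows and 0 <= c < cols):
            continue
        if medium[r][c] == 0:      # background: still just medium
            continue
        feature.add((r, c))
        frontier += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    return frozenset(feature)

# A crude "teddy bear" blob imprinted on a 5x5 medium:
medium = [[0, 1, 0, 1, 0],
          [0, 1, 1, 1, 0],
          [0, 1, 1, 1, 0],
          [0, 0, 1, 0, 0],
          [0, 0, 0, 0, 0]]
bear = cut_silhouette(medium, (1, 1))
assert len(bear) == 9   # one holistic feature, not a list of parts
```

The individual pixels carry no form on their own; only the cutting
principle (here, connectivity) turns a region of the medium into a
feature.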
>> experiment of Schyns and Rodet (in press) described earlier. This
The experiments have now been published. The exact reference is:
Schyns, P. G., & Rodet, L. (1997). Categorization creates functional
features. Journal of Experimental Psychology: Learning, Memory, and
Cognition, 23, 681-696.
sh> To put it even more simply, the first group would form an X-detector,
sh> then a Y-detector, and last, an X-plus-Y detector that detects XYs. The
sh> other group would first form a composite XY detector (not X plus Y,
sh> because they don't know about X's and Y's yet), then a separate X
sh> detector (not XY minus Y, but just X) and finally a Y detector (not XY
sh> minus X, but just Y). Their recognition performance and the way they
sh> analysed things into parts suggested that the first group had a slightly
sh> different set of constructed detectors from the second. For the first
sh> group, XY was a combination of X and Y, whereas for the second group, XY
sh> was seen "as a whole."
Very clear! Perceptually speaking, XY is in fact a third detector, Z,
independent of X and Y.
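One way to make the two learning orders concrete (a toy illustration of
mine, not the Schyns & Rodet procedure): let stimuli be pixel sets, and
let a learner store a stimulus as one new holistic detector only when
it cannot already be expressed as a combination of detectors it knows.

```python
# Toy sketch of order-dependent detector creation (hypothetical code).
# Group 1 sees X, then Y, then XY; group 2 sees XY first.

def covered(detectors, stimulus):
    """True if the stimulus is exactly a union of known detectors."""
    parts = [d for d in detectors if d <= stimulus]
    return bool(parts) and set().union(*parts) == stimulus

def learn(detectors, stimulus):
    """Store the stimulus as one new holistic detector unless it is
    already expressible from existing detectors."""
    if not covered(detectors, stimulus):
        detectors.append(stimulus)

X, Y = {1, 2, 3}, {4, 5, 6}   # two part-features, as pixel sets
XY = X | Y                    # the composite stimulus

group1 = []
for s in (X, Y, XY):          # parts first, composite last
    learn(group1, s)

group2 = []
for s in (XY, X, Y):          # composite first
    learn(group2, s)

assert group1 == [X, Y]       # XY needs no new detector: it is X plus Y
assert group2 == [XY, X, Y]   # XY was stored whole, as its own "Z"
```

The two groups end with different detector sets from identical stimuli,
which is the order effect the experiment turns on.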
sh> Now, think about whether and how this would work for (1) non-part features,
sh> and for (2) features learned from trial and error with feedback, rather
sh> than mere repeated exposure. Think also about how this might apply to
sh> (3) learning INDIVIDUALS versus learning KINDS.
Very hard problems!
One aspect of the problem of learning kinds is briefly discussed in the
reply (see section "Feature transformations in the context of a
category.") We discuss the difficult problem of encompassing the
variations of a feature within a category. We limit the discussion (it
is hard enough as it is) to variations of parts.
sh> None of these issues is straightforward, and no one has all the answers
I wish I did ;)
ps>> Our proposal for functional feature creation concerns the extraction
ps>> of new structures from perceptual data. How could has_feathers be
ps>> discovered from a training set of pixel arrays, or similarly
ps>> unstructured representations?
>cw> Couldn't the more structured, fixed features be used to represent
>cw> something like has-feathers? If not, how is it done by created
>cw> features?
sh> On the one hand, as discussed in the first half of this message, in SOME
sh> way every feature needs to be a combination of the "pixels" that
sh> constitute the nerve endings. So "has-feathers" would be a bit like
sh> "symmetric" in this regard. But Schyns et al. want to say that
sh> nevertheless certain pixel combinations will become automatised, the way
sh> the composite XY was in their experiment.
In the reply (sorry to cite it all the time, but I prefer the reply to
the target), section WHAT DOES "NEW FEATURE" MEAN?, we review different
conceptions of newness. (This short comment is prompted by the word
"automatised" in Stevan's comment.)
One form of newness is chunking. Chunking (or unitization) requires the
stimuli to be discretized before being unitized. One implication is
therefore that people perceive the discrete units before chunking
them. Following chunking, the holistic unit becomes an isolatable and
independent information packet. I do not believe that chunking applies
to discrete pixels. Pixels are a medium, not forms.
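To make the contrast concrete, here is a toy chunking sketch
(hypothetical code, illustrating the form-from-form route): it unitizes
adjacent discrete symbols that co-occur often. The precondition the
text stresses is visible in the input type: the routine needs discrete
units, and has nothing to grip on an undifferentiated pixel medium.

```python
# Toy chunking (unitization) over already-discrete units: adjacent
# symbol pairs that recur often enough become a single new unit.
from collections import Counter

def chunk(sequences, threshold=3):
    """Unitize adjacent pairs co-occurring at least `threshold` times."""
    pairs = Counter()
    for seq in sequences:
        pairs.update(zip(seq, seq[1:]))
    return {a + b for (a, b), n in pairs.items() if n >= threshold}

data = ["XY", "XYA", "BXY", "XA"]
assert chunk(data) == {"XY"}   # "XY" recurred enough to become one unit
```

Note that the symbols X, Y, A, B must be perceived as discrete units
before any pair can be chunked; the procedure is undefined over a
continuous medium, which is the point of the pixels-as-dough view.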
This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:23:21 GMT