Your questions were good enough that I decided to branch them to
Schyns, Goldstone and Thibaut & Rodet (who are invited to join in:
Allez-y les gars! [Go ahead, guys!]).
> From: Whitehouse Chantal <firstname.lastname@example.org>
> Schyns says that fixed features can be seen in two ways: (1) as
> fine-grained and relatively unstructured or (2) as already
> representing complex structures of the environment. He then points
> out the problems with both views.
>> Functionally important object regularities (e.g. symmetry, serif,
>> beauty, and so forth) are often not captured by simple pixel-based
>> features. Moreover it is not practically feasible (although
>> logically possible) to extract relevant categorisation features from
>> pixel based representations of the input.
> First, I don't really understand how pixels would represent
> properties of an object but if they can why can't they represent
> certain object regularities such as symmetry? How would Schyns's
> flexible features represent these particular object regularities any
> better?
You're right to have doubts about pixel-based features; for one, how do
they apply to senses other than vision: hearing, touch, taste, smell?
But let's suppose that by "pixels" we mean the nerve endings, whatever
they are, on the sensory surface of the sense in question. So for vision
this is the rods and cones on which our lens focuses the "shadow"
(image) of what we see. For hearing, it is the cochlea, which is
sensitive to vibrations of different frequency along its surface,
vibrations caused by distal objects and transmitted by the ear-drum, as
the lens transmits the photons bouncing off distal objects. For touch
the "pixels" would be the pressure/heat/pain-sensitive nerve-endings
on the skin, for taste those on the tongue, and smell those in the nose.
Now in the end, ALL sensory information we get has to come to us by the
jangling of those "pixels" in each sensory modality. We may or may not
have further "copies" of this pixel shadow at higher levels of our
nervous systems, but to pick up colour or symmetry or smell, they must
somehow be contained in the pixel pattern to begin with. Assuming they're
in there, either as simple on-off pixels (as it would be if a pin-prick
on the skin activated one and only one pixel) or as combinations of
pixels (as in the edge and moving-edge detectors in vision), the
question becomes whether just being contained in them is enough to make
something a feature, and the system a detector of that feature, or
something more is needed (except when the combination really is a
dedicated inborn detector, like the frog's bug-detector).
You are right to say sensory pixels don't "represent" anything (even
without asking the kid-sib question: "what does 'represent' mean,
anyway?"). They're simply activated by the proximal shadows of distal
objects. Let's say the shadow is transmitted to a higher-level (analog)
copy of the proximal pattern. Symmetry would be there when (say) the
left and right sides of the pattern were the same shape. It's not one
pixel that "detects" that; it would have to be a combination of pixels,
and perhaps also something that detects when that combination of pixels
is active (as with the real edge-detectors in the visual cortex).
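To make this concrete, here is a toy sketch (my illustration, not anything from Schyns et al.): a "symmetry detector" that fires on a combination of pixels rather than on any single pixel.

```python
def symmetry_detector(pixels):
    """Fire (return True) when every row of a binary pixel grid is
    mirror-symmetric. No single pixel carries the symmetry; only the
    combination of left-half and right-half activations does."""
    return all(row == row[::-1] for row in pixels)

symmetric = [[0, 1, 1, 0],
             [1, 0, 0, 1]]
asymmetric = [[0, 1, 0, 0],
              [1, 0, 0, 1]]
print(symmetry_detector(symmetric))   # True
print(symmetry_detector(asymmetric))  # False
```

The detector is a function of the whole pixel pattern; the open question in the text is whether merely being computable from the pixels is enough to make it a feature, or whether something must actually compute it.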
You ask how Schyns's "flexible" features would do this job better than
fixed ones: Schyns et al. are arguing (or can be interpreted as arguing)
that the higher-order feature-detectors -- the ones that fire when a
certain pattern, like symmetry, occurs in the lower-order ones -- could
be inborn, in which case we would have symmetry-detectors at birth. Or
they could be "constructed," if it should happen, in our lifetime
category learning experience, that we "need" them. Inborn ones would be
fixed. Constructible ones would be flexible. Then the question is: Are
the constructible ones really acting as a unitary feature-detector,
like the inborn ones? Or are they just and/or/if/not combinations of
fixed inborn ones?
If they are just on-the-fly combinations of simple inborn ones, then the
"flexible" feature detectors are not really feature detectors at all,
but just rules applied to the real feature detectors, the fixed ones.
If, on the other hand, it can be shown that category learning can make
what is at first a slow, sequential, controlled, rule-following
procedure turn into a fast, parallel, automatic, instantaneous detection
of the complex feature, working similarly to the way inborn complex
detectors work, and if this flexible process really is flexible,
allowing us to learn features that we could not possibly have evolved
specific dedicated detectors to detect, then that would favour Schyns
et al.'s view.
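The contrast can be sketched in code (the names and "compilation" scheme are my own framing, not a model from the target article): a learned complex feature starts life as a rule chaining fixed primitive detectors, and "automatisation" turns it into a single-step lookup that then behaves like a unitary detector.

```python
# Fixed, "inborn" primitive detectors (hypothetical names):
def has_line(stim):  return "line" in stim
def has_curve(stim): return "curve" in stim

def complex_by_rule(stim):
    """Slow, sequential, rule-following: consult each primitive in turn."""
    return has_line(stim) and has_curve(stim)

def automatise(rule, known_stimuli):
    """'Compile' a rule by tabulating it over the stimuli met so far:
    detection becomes one lookup, like a dedicated complex detector."""
    table = {stim: rule(stim) for stim in known_stimuli}
    return table.__getitem__

stimuli = [frozenset(s) for s in ({"line"}, {"curve"}, {"line", "curve"})]
fast_detector = automatise(complex_by_rule, stimuli)
print(fast_detector(frozenset({"line", "curve"})))  # True
print(fast_detector(frozenset({"curve"})))          # False
```

Both detectors give the same verdicts; the behavioural question in the text is whether learned detection really comes to work in the fast, parallel, automatic way the compiled version caricatures.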
>> Any large-scale, highly structured set of primitives is bound to be
>> too coarse to detect all of the distinctions that might be required
>> by different categories of objects.
> Why should it be? If features for lines, curves, sizes, brightness
> etc. exist why can they not be combined in different ways with new
> rules to detect the distinctions necessary to categorise objects?
You're right, and some of the commentators (especially Tijsseling) say
so. Schyns et al.'s point is that the kinds of patterns we are able to
learn are so many and varied (infinite, actually) that it is hard to
see how they could have been specifically anticipated by our repertoire
of primitive features. Of course they will be combinations of them, but
will those combinations function more the way the primitive and
prepared ones do -- automatically and instantly -- once they have been
learned? If so, then they may start out as a rule-based combination of
simpler features, but in the end become an independent complex
feature-detector in their own right. And those people whose brains have
constructed them will see things in a way that those who have not cannot.
Features and patterns will "leap out" at them that others may not notice,
or could only see with time-consuming deliberate effort.
Maybe that does not sound as radical as the "strong" Whorfian view that
our view of reality is "constructed" by our language and experience, but
it is certainly a weak form of Whorfian constructivism.
It is also important to bear in mind that Schyns et al.'s data are based
on (1) passive exposure to (2) individual objects, rather than kinds,
based on (3) positive evidence only (no "right" and "wrong" with
feedback, just the same individual over and over) consisting of (4)
part-features only (so any feature could also be an object). You will
have to judge for yourself how well their findings are likely to
generalise beyond these special cases.
> I had trouble understanding the following paragraph. Any chance of a
> quick explanation?
>> One way to provide evidence for feature creation would be to show
>> that category learning changes features that participate in the
>> perceptual analysis of identical stimuli. This was the goal of the
>> experiment of Schyns and Rodet (in press) described earlier. This
>> experiment was controlled so that the features x and y were each
>> diagnostic of one category in the two categorisation conditions
>> (X->Y->XY and XY->X->Y). Hence, they should in principle elicit
>> identical featural analysis and identical perceptions of the same
>> category exemplars--i.e., subjects in the two categorisation
>> conditions should equally see XY exemplars as feature conjunctions.
>> However, the outcome was mutually exclusive perceptions of XY
>> stimuli (a conjunctive and a configural perception), making a
>> feature weighting interpretation of this data difficult to justify.
>> Feature creation as opposed to feature weighting is preferable if
>> category learning induces mutually exclusive perceptual analysis of
>> an objectively identical object property, when the experimental
>> design would predict identical perceptual analysis if the subjects
>> used fixed features.
Yes, that doesn't sound very kid-sibly, does it? It helps to know what
Schyns & Rodet actually did (and I happen to know, because I was one of
the externals for Rodet's thesis!).
They compared subjects with different (passive) exposure histories to
the same stimuli, presented in different orders of acquisition. (Note
that they
were not learning the categories by trial and error with corrective
feedback; they were simply shown things over and over, in different
combinations and orders.)
One important point first, because it is critical: ALL the experiments
Schyns & Rodet talk about are based on part-features; the parts look like
blobs, and the objects are made up of various combinations of those
blobs, in various spatial patterns. It is an open question whether their
conclusions on the basis of part features would generalise to features
in general. (Greenness is not a part; neither is loudness or
straightness.) Never mind; let's pretend that all features were
part-features.
You have two blobs X and Y, and the objects will be combinations of
these. So you have two groups of subjects. (1) One group first sees an X
over and over (so the first object consists of just the X blob), then a
Y (over and over) and then an XY (the object is a combination of the
two blobs).
(2) The other group sees first the XY (over and over) then the X (over and
over) and last the Y. They see the same things, in different order.
Schyns's prediction was that the X-Y-XY and the XY-X-Y group would see
the three objects differently, priming would affect them differently,
and they would analyse the patterns into parts differently. The X-Y-XY
group would form a "conjunctive" representation of (= detector for) the
XY, based on first having seen the parts, X and Y, as separate objects, and
last the combination, XY, as a combination of them. The XY-X-Y group,
in contrast, would form a "composite" (unitary) representation of (=
detector for) the XY (not first learning the X and Y as separate
objects) making them less likely to see the X and Y as parts of the XY.
(They were tested with more complex patterns, including Xs, Ys and XYs,
to see how they were affected by the priming of parts, and how they would
analyse them into parts.)
To put it even more simply, the first group would form an X-detector,
then a Y-detector, and last, an X-plus-Y detector that detects XYs. The
other group would first form a composite XY detector (not X plus Y,
because they don't know about X's and Y's yet), then a separate X
detector (not XY minus Y, but just X) and finally a Y detector (not XY
minus X, but just Y). Their recognition performance and the way they
analysed things into parts suggested that the first group had a slightly
different set of constructed detectors from the second. For the first
group, XY was a combination of X and Y, whereas for the second group, XY
was seen "as a whole."
Now, insofar as "primitive" or "fixed" features are concerned, all
subjects have the same brains and the same fixed features. But for group
2, XY has a unitary composite detector, whereas for group 1 it is
detected by a separable combination of X and Y detectors.
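One way to render the two groups' detectors in code (a toy sketch under my own assumptions, not Schyns & Rodet's stimuli or model): both fire on a plain XY object, but they come apart on a stimulus containing extra material, because only the conjunctive detector treats XY as separable parts.

```python
def conjunctive_xy(obj):
    """Group 1 (X -> Y -> XY): XY = X-detector AND Y-detector."""
    return "x" in obj and "y" in obj

def configural_xy(obj):
    """Group 2 (XY -> X -> Y): one unitary detector for the whole pattern."""
    return set(obj) == {"x", "y"}

print(conjunctive_xy({"x", "y"}), configural_xy({"x", "y"}))  # True True
# On an XY-plus-extra stimulus the two representations diverge:
print(conjunctive_xy({"x", "y", "z"}))  # True
print(configural_xy({"x", "y", "z"}))   # False
```

This is only the logical skeleton; the actual evidence was recognition performance, priming, and how subjects analysed patterns into parts.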
Now, think about whether and how this would work for (1) non-part features,
and for (2) features learned from trial and error with feedback, rather
than mere repeated exposure. Think also about how this might apply to
(3) learning INDIVIDUALS versus learning KINDS.
None of these issues is straightforward, and no one has all the answers.
> Schyns then explains about how to test the problem of whether a
> "created" feature is just a combination of pre-existing features.
>> In principle, if a functional feature is the
>> combination of two or more other features, these other features
>> would become active each time the new feature was presented.
>> However, priming tests on these subfeatures would indicate whether
>> or not they participated in the perceptual encoding of the new
>> feature.
> Has this been tested? It seems to be such a fundamental point in
> determining whether the created features are actually "new"- why did
> they not test this?
Yes, it has been tested for this particular set of blobs. How much one
can generalise from the findings is another question.
> Finally, in the section "formal models of feature extraction," they
> ask:
>> Our proposal for functional feature creation concerns the extraction
>> of new structures from perceptual data. How could has_feathers be
>> discovered from a training set of pixel arrays, or similarly
>> unstructured representations?
> Couldn't the more structured, fixed features be used to represent
> something like has-feathers? If not, how is it done by created
> features?
On the one hand, as discussed in the first half of this message, in SOME
way every feature needs to be a combination of the "pixels" that
constitute the nerve endings. So "has-feathers" would be a bit like
"symmetric" in this regard. But Schyns et al. want to say that
nevertheless certain pixel combinations will become automatised, the way
the composite XY was in their experiment.
Note, though, that "feathers" are kinds rather than individuals; they
may have many shapes. So it is not clear whether the blob-story will
carry over to them. Moreover, kinds, even more than individuals, cannot
be learned from mere passive exposure to positive instances. Implicit in
learning what KIND of thing something is, is learning what kind of thing
it ISN'T. (You have to learn to sort out kinds of things that can be
confused with one another. Mere passive exposure to them all will not
necessarily allow you to do this.) The way we sort the things correctly
by kind is through corrective feedback somehow guiding our perceptual
system to find the RIGHT features, the ones that will reliably sort
the things into their proper categories. Those features need not
be part-features either; they could be features like green or round.
So it is not clear how well the blob-part story will generalise, but
it's a start.
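The point about corrective feedback can be sketched with a toy supervised learner (entirely my own illustration; the features, data and perceptron-style update rule are assumptions, not anything from the article): with feedback, weight accumulates on the diagnostic feature; under passive exposure there is no error signal, so nothing would move.

```python
def learn_feature_weights(labelled_stimuli, features, epochs=20, lr=0.5):
    """Perceptron-style update: upweight whichever features reliably
    sort stimuli into the right kind under corrective feedback."""
    w = {f: 0.0 for f in features}
    bias = 0.0
    for _ in range(epochs):
        for stim, label in labelled_stimuli:   # label: +1 or -1
            score = bias + sum(w[f] for f in stim)
            pred = 1 if score > 0 else -1
            if pred != label:                  # the corrective feedback
                for f in stim:
                    w[f] += lr * label
                bias += lr * label
    return w

# Hypothetical kind: "green" is diagnostic, "round" is not.
data = [({"green", "round"}, +1), ({"green"}, +1),
        ({"round"}, -1), (set(), -1)]
w = learn_feature_weights(data, {"green", "round"})
print(w["green"] > w["round"])  # True
```

The same stimuli shown passively, with no labels, give the learner nothing to correct, which is the sense in which kinds (unlike individuals) seem to demand feedback.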
This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:23:21 GMT