I'm going through the Schyns article again (and it's making a lot
more sense this time), but there are still a few bits I'm confused
about. Can you help?
Schyns says that fixed features can be seen in two ways: (1) as
fine-grained and relatively unstructured or (2) as already
representing complex structures of the environment. He then points
out the problems with both views.
> Functionally important object regularities (e.g. symmetry, serif,
> beauty, and so forth) are often not captured by simple pixel-based
> features. Moreover it is not practically feasible (although
> logically possible) to extract relevant categorization features from
> pixel based representations of the input.
First, I don't really understand how pixels would represent
properties of an object, but if they can, why can't they represent
certain object regularities such as symmetry? How would Schyns's
flexible features represent these particular object regularities any
better?
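(To show what I mean about it being "logically possible": naively, something like mirror symmetry seems computable straight from a pixel array. This is just my own toy sketch — the grid and function name are made up, not from the article.)

```python
# Toy "pixel array": 1 = figure pixel, 0 = background (my own example).
img = [
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
]

def is_mirror_symmetric(pixels):
    """True if every row reads the same left-to-right and right-to-left,
    i.e. the image mirrors about its vertical axis."""
    return all(row == row[::-1] for row in pixels)

print(is_mirror_symmetric(img))  # True
```

So if even a crude check like this works on raw pixels, is the article's point only that it doesn't scale practically?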
> Any large-scale, highly structured set of primitives is bound to be
> too coarse to detect all of the distinctions that might be required
> by different categories of objects.
Why should it be? If features for lines, curves, sizes, brightness,
etc. exist, why can't they be combined in different ways, with new
rules, to detect the distinctions necessary to categorize objects?
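Here's the kind of combination I have in mind (a toy sketch — the primitives, thresholds, and category rule are all made up by me, not taken from the article):

```python
# Hypothetical "fixed" primitive detectors over a stimulus description.
def has_long_line(stim):
    return stim.get("line_length", 0) > 5

def has_curve(stim):
    return stim.get("curvature", 0.0) > 0.5

def is_bright(stim):
    return stim.get("brightness", 0.0) > 0.7

# A "new rule" built purely by combining the fixed primitives:
def category_A(stim):
    return has_long_line(stim) and has_curve(stim) and not is_bright(stim)

print(category_A({"line_length": 8, "curvature": 0.9, "brightness": 0.2}))  # True
```

If fixed primitives can be conjoined like this, why would they be "too coarse"? Is the claim that for some categories the right primitives simply won't be in the fixed set at all?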
I had trouble understanding the following paragraph. Any chance of a
simpler explanation?
> One way to provide evidence for feature creation would be to show
> that category learning changes features that participate in the
> perceptual analysis of identical stimuli. This was the goal of the
> experiment of Schyns and Rodet (in press) described earlier. This
> experiment was controlled so that the features x and y were each
> diagnostic of one category in the two categorization conditions
> (X->Y->XY and XY->X->Y). Hence, they should in principle elicit
> identical featural analysis and identical perceptions of the same
> category exemplars--i.e., subjects in the two categorization
> conditions should equally see XY exemplars as feature conjunctions.
> However, the outcome was mutually exclusive perceptions of XY
> stimuli (a conjunctive and a configural perception), making a
> feature weighting interpretation of this data difficult to justify.
> Feature creation as opposed to feature weighting is preferable if
> category learning induces mutually exclusive perceptual analysis of
> an objectively identical object property, when the experimental
> design would predict identical perceptual analysis if the subjects
> used fixed features.
Schyns then explains how to test whether a "created" feature is just
a combination of pre-existing features.
> In principle, if a functional feature is the
> combination of two or more other features, these other features
> would become active each time the new feature was presented.
> However, priming tests on these subfeatures would indicate whether
> or not they participated in the perceptual encoding of the new
> feature.
Has this been tested? It seems to be such a fundamental point in
determining whether the created features are actually "new", so why
did they not test this?
Finally, in the section "Formal models of feature extraction", they
write:
> Our proposal for functional feature creation concerns the extraction
> of new structures from perceptual data. How could has_feathers be
> discovered from a training set of pixel arrays, or similarly
> unstructured representations?
Couldn't the more structured, fixed features be used to represent
something like has_feathers? If not, how is it done by created
features?
Hope you can help,
Thanks a lot, Chantal.
This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:23:20 GMT