On Sat, 28 Feb 1998, Stent, Hannah [h] wrote about Schyns sect 2.6 - 3
sch> "Unfortunately, a nonexistent feature is behaviorally equivalent to
sch> an existing feature with an "attentional weight" of 0. This makes it
sch> difficult to tease apart feature weighting from feature creation
sch> based on simple, direct tests of the existence of a feature in
This is also a hint of Watanabe's "Ugly Duckling Theorem," according
to which everything you could possibly think of is a feature, positive
or negative, of everything.
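The theorem's point can be made concrete in a few lines. In this sketch (my own construction, not Watanabe's notation), a "feature" is any possible subset of the objects; the count then shows that every pair of objects shares exactly the same number of features, so without weighting no two things are objectively more similar than any other two:

```python
from itertools import combinations

# Treat a "feature" as any subset of the objects (any possible predicate).
# Watanabe's point: every pair of objects falls under exactly the same
# number of such predicates, so without a weighting of features no two
# objects are more "similar" than any other two.
objects = ["duckling", "swan", "raven", "teapot"]

all_features = []
for r in range(len(objects) + 1):
    all_features.extend(combinations(objects, r))

def shared_features(a, b):
    """Count the predicates (subsets) that hold of both a and b."""
    return sum(1 for f in all_features if a in f and b in f)

counts = {pair: shared_features(*pair) for pair in combinations(objects, 2)}
print(counts)  # every pair shares 2**(n-2) = 4 features
```

The duckling shares as many features with the teapot as with the swan; only a weighting of features breaks the tie.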
h > The feature weighting theory is said to be particularly hard to
h > disprove because:
sch> "it is used a posteriori to interpret patterns of data. Feature
sch> weighting is a form of curve fitting with free parameters (the
sch> weights assigned to features). Feature weighting therefore covers not
sch> one, but a potential infinity of models of categorization, and can
sch> potentially accommodate any pattern of experimental data if its
sch> features are not pre-specified."
Just as Watanabe's Ugly Duckling Theorem implies.
sch> "Unlike purely formal models of similarity and categorization,
sch> our approach places constraints on what can count as features:
sch> Features will be incorporated into a system to the extent that they
sch> distinguish between object categories; features should not be limited
sch> to the finite set of a priori features designed by a particular
sch> researcher for a particular domain."
In other words, "created" features are those of the infinitely many
possible "Watanabe" features that happen to turn out to be useful in
categorising things the way we NEED to categorise them (either to
tell apart poisonous and edible food, or to follow the dictates of
some dangerous dictator who has decreed that we must call things "X"
only if they are seen on a Tuesday or do not resemble his mother,
except if that day of the year is a prime number)...
h > Schyns suggests that the fixed feature approach has numerous useless
h > features:
sch> "To the extent that each new feature accommodates at least the
sch> categorization for which the feature was created, the repertoire
sch> should be free of useless features. A fixed feature approach is
sch> necessarily much less parsimonious: Many spurious features must exist
sch> in the feature repertoire to foresee new categorizations. Moreover,
sch> most features of the fixed set would never be used--they would keep
sch> waiting for their "Godot category." Fixed features necessarily have
sch> suboptimal fit outside the scope of the stimuli they were designed to
sch> represent. A flexible set of features tuned to specific
sch> categorizations reduces the necessity of complex categorization
sch> rules."
Sounds good; but can a "created" feature be brand new? Or is it always
just made up of parts you already have? Suppose we were on a planet where
there were one-headed and two-headed people, and the one-headed ones
were harmless but the two-headed ones could zap you with a ray gun if
they got close enough to you. So it would be very important to detect
from as great a distance as possible whether people were one-headed or
two-headed. Our brains might even learn to detect two-headedness as
quickly and directly and reliably as they detect redness; no counting
would be necessary. But would the two-head detector, "created" for the
occasion, not be just a combination of two one-head detectors we
already have? [I'm not saying it's so; just introducing the problem so
you think about it.]
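The worry can be put in code. In this hypothetical sketch (names and representation are mine, purely for illustration), the "new" two-head detector is literally nothing but two applications of a one-head detector plus an AND:

```python
# Hypothetical sketch: is a "created" two-head detector anything more
# than a combination of detectors we already have? Here it is literally
# just the same one-head detector applied to two image regions.

def one_head_detector(region):
    # Stand-in for an evolved, primitive detector: does this region
    # contain a head? (Represented here as a simple flag.)
    return region.get("head", False)

def two_head_detector(left_region, right_region):
    # The "new" feature is only a conjunction of the old primitive --
    # no genuinely novel detector is required.
    return one_head_detector(left_region) and one_head_detector(right_region)

zapper = ({"head": True}, {"head": True})
harmless = ({"head": True}, {"head": False})
print(two_head_detector(*zapper))    # True  -> keep your distance
print(two_head_detector(*harmless))  # False -> safe to approach
```

If all "created" features decompose this way, creation collapses into combination; the open question is whether any do not.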
h > I dont think that Schyns can really state that the fixed theory
h > features are useless. Although it is unlikely, all the features the
h > fixed theory has suggested may be useful sometime in the future as
h > new categorisations are discovered. It cannot be proved that the
h > fixed feature theorists are not actually showing great foresight.
If they were right, it would of course be evolution that had shown the
"foresight", not the fixed-feature theorists: The theorists would just
be correctly reporting what the "Blind Watchmaker" (Darwinian
Evolution) had "created" for us in advance, having selected for those
creatures that could detect those features and against those who could
not.
(This can of course be translated into the following: Those who could
detect the necessary features must have had an advantage in survival
and reproduction over those who could not -- maybe they could find food
or mates more quickly or more often, or they could escape predators
more quickly. If in that original environment (the Environment of
Evolutionary Adaptedness or "EEA") in which our brains evolved to
detect all the features they would ever need there was enough selective
advantage to those who evolved the full set of feature detectors -- and
those features really turned out to be all we needed, even in our
present-day environment, for any categorisation problem we would ever
encounter -- then the fixed-feature theorists would be right.)
Another possibility is that we evolved all of our simple, direct feature
detectors (round, straight, green, parallel, smooth, loud, etc.), but
the later more complicated categorisation problems we were faced with
would require only new COMBINATIONS of those simple features: The
combination might be a series of "ANDs" "ORs" "IF/THENs" and
"NOTs" based on the simple features ("green and not round, or red if
then not small..."), something you could either learn to detect quickly
and directly, like the two-headed creatures, or something that you
detected by simply remembering the rule in the form of a sentence
("green and not round, or if red and then not small...").
h > A flexible set of features is of course the most attractive option.
h > However, a lack of complexity should not always make a theory more
h > plausible.
h > Schyns also considers what is the most natural process for a person:
sch> "Concept learning theories have frequently stressed the importance of
sch> learning categories by discovering complex rules that integrate
sch> several distinct stimulus features. Concept learning certainly does
sch> sometimes require such integration. However, these problems have
sch> effortful, strategic solutions. They are rather unnatural; people are
sch> not particularly adept at explicitly combining psychologically
sch> separated sources of information."
At first. But as the experimental and theoretical work on the transition
from "controlled" to "automatic" processing shows, slow, conscious
rule-following can eventually be learned so well that it becomes
automatic and extremely fast.
h > Surely if this was the way that categorisation was done then people
h > would be adept at it through constant practice?
h > The flexible approach to learning categories does however sound more
h > logical and it can explain certain developmental phenomena such as
h > the narrowing of lexical categories throughout childhood:
sch> "Our alternative is that new categorizations can be based on
sch> relatively few, specially tailored features. In the flexible feature
sch> approach, categorizations can induce a decomposition of features into
sch> subfeatures. Consider the contrast between glasses and cans. Early in
sch> conceptual development, these objects may be indistinguishable
sch> because their memory representations correspond to a single,
sch> undifferentiated feature. Now assume that the organism needs to
sch> distinguish between these objects. This can be achieved by
sch> decomposing the undifferentiated feature into two specific features
sch> tailored to glasses and cans."
This adds the interesting possibility that new features are not always
combinations of simpler old ones: sometimes new features arise from
breaking up old features into parts.
This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:23:20 GMT