Schyns Comms 01-03 abdi benson braisby

From: Whitehouse Chantal (
Date: Wed Mar 11 1998 - 19:23:36 GMT

Commentary 1- Abdi, Valentin, and Edelman

This commentary doesn't really comment on the target article. The
authors take the opportunity to talk about their own research with
"eigenfeatures" which they propose act like the flexible features
that Schyns et al talk about.

They say that eigenfeatures are created by,

> the principal component approach used on objects described by a low level code (i.e.,
> pixels, Gabor jets).

but I haven't a clue what that is. They later expand on this
"principal component approach" saying,

> The pca approach represents faces by their projections on a set of
> orthogonal features (principal components, eigenvectors, "eigenfaces")
> epitomizing the statistical structure of the set of faces from which
> they are extracted. These orthogonal features are ordered according to
> the amount of variance (or eigenvalue) they explain, and are often
> referred to as "macro-features" (Anderson & Mozer, 1981) or
> eigenfeatures by opposition with the high level features traditionally
> used to describe a face (e.g., nose, eyes, mouth).

Unfortunately this doesn't help me much. How are these eigenfeatures
represented? Are they a set of codes, written descriptions of
features, or an actual visual display of facial features?
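To answer my own question as best I can: in the PCA approach each eigenfeature is itself an image-sized vector, so it can be displayed as a ghostly face-like picture, and each face is then stored as a short list of projection coefficients. Here is a minimal sketch of the idea (the toy data and all variable names are my own invention, not from the commentary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "faces": 20 images of 8x8 = 64 pixels each (hypothetical data
# standing in for real face photographs).
faces = rng.random((20, 64))

# Centre the data: PCA works on deviations from the mean face.
mean_face = faces.mean(axis=0)
centred = faces - mean_face

# SVD gives the principal components (rows of vt), ordered by the
# amount of variance they explain (proportional to s**2).
u, s, vt = np.linalg.svd(centred, full_matrices=False)
eigenfeatures = vt          # each row has 64 entries: an 8x8 "eigenface"

# A face is represented by its projections onto these components.
codes = centred @ eigenfeatures.T    # 20 faces x 20 coefficients

# Keeping only the first few components is the dimensionality reduction.
k = 5
compressed = codes[:, :k]
print(eigenfeatures.shape, compressed.shape)
```

So the eigenfeatures are not written descriptions; they are visual patterns extracted from the statistics of the training faces, and a face is coded by how strongly each pattern is present in it.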

> Because they are optimal for the set of faces from which they are
> extracted, eigenfeatures are less efficient for representing faces
> from a different population and thus generate class-specific effects
> such as the other race effect

This appears to be saying that if we see lots of Caucasian
faces and not many Japanese faces, then we won't be able to
represent Japanese faces efficiently and so will label them as "other
race" faces. However, earlier in the commentary they said that,

> Eigenfeatures are flexible in that they evolve with the faces encountered (Valentin,
> Abdi, & Edelman, 1996).

This seems to imply that if a person sees more Japanese faces over
time then they will be able to represent them more efficiently and be
less likely to label them as "other race" faces. This doesn't seem to
make sense logically. If we see lots of Japanese faces we don't start
to think of them as more Caucasian.

Commentary 2- Benson

> Assuming primary visual cortex (V1) is necessary for object
> recognition strongly suggests the geniculostriate pathway is
> fundamental in bootstrapping the dimensionality reduction process.

The dimensionality reduction process is the idea that our environment
is made up of hundreds of dimensions that we need to condense in
some way in order to make sense of the "blooming, buzzing
confusion". Benson is saying that this condensation is done in part
by the visual process itself, i.e., as information travels from the
retina to the visual cortex, some sort of coding is occurring which
allows the information to be condensed.
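As I understand it, the claim is only that an early coding stage maps a very high-dimensional input onto far fewer channels before further processing. A schematic illustration (entirely my own, not Benson's model; the sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# 1000 hypothetical "retinal" receptor values.
retinal_input = rng.random(1000)

# A fixed linear coding stage, standing in for the pathway from
# retina to visual cortex: it projects onto just 50 channels.
coding_stage = rng.standard_normal((50, 1000)) / np.sqrt(1000)

# The condensed code: 50 numbers now carry the signal forward.
condensed = coding_stage @ retinal_input
print(retinal_input.size, "->", condensed.size)
```

The point is just the shape of the computation: whatever the real coding is, far fewer numbers come out than went in.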

> For every relevant (detected) feature of a homogeneous class,
> experience dictates either continuous or discrete measurement. In the
> former, this leads naturally to a feature vector which includes
> population sample variance information (variance may be asymmetric
> about the mean). Identification of a discrete feature immediately
> enhances categorisability.

A feature can be given either a continuous or a discrete measurement,
e.g. either a value on a continuous scale from 1 to 100, or an
either/or value such as 0 or 1. Benson is saying that a discrete
value for a feature helps categorization of an object made from many
such features because it already forms a discrete category itself.
But maybe the degree of a feature is important for categorization.
Imagine someone was describing two different animals to you in terms
of features such as whether they had fur or not. One animal is very
furry and the other has little fur. If you are giving features
discrete measurements (with 1 being "fur" and 0 being "no fur") then
both animals would be given 1 for the fur feature. If you were using
continuous values, the very furry animal could be given 80, and the
animal with little fur could be given the value 10 for the fur
feature. The second case would help you to categorise the animals
more easily.
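The fur example can be put in code (the numbers and the `has_fur` helper are my own hypothetical choices): under discrete coding both animals collapse to the same feature value, while continuous coding keeps the difference in degree that would help a categoriser tell them apart.

```python
def has_fur(amount):
    """Discrete measurement: 1 if the animal has any fur, else 0."""
    return 1 if amount > 0 else 0

# Continuous measurements on a 0-100 scale (hypothetical values).
furry_animal = 80     # very furry
sparse_animal = 10    # little fur

# Discrete coding: both animals get the same value for "fur".
discrete = [has_fur(furry_animal), has_fur(sparse_animal)]

# Continuous coding: the difference in degree survives.
continuous = [furry_animal, sparse_animal]

print(discrete, continuous)   # -> [1, 1] [80, 10]
```

The discrete code loses exactly the information that separates the two animals, which is the point made above.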

Commentary 3- Braisby and Bradley

> Abstract
> Schyns et al. argue that flexibility in categorisation implies
> 'feature creation'. We argue that this notion is flawed, that
> flexibility can be explained by combinations over fixed feature sets,
> and that 'feature creation' would anyway fail to explain
> categorisation. We suggest that flexibility in categorisation is due
> to pragmatic factors influencing feature combination, rendering
> 'feature creation' unnecessary.

> Schyns et al. argue that fixed feature sets limit the representational
> (and classificatory) capacity of a conceptual system. However, they
> incorrectly claim that "Any functionally important difference between
> objects must be representable as differences in their building blocks"
> (Section 1.1, paragraph 3). However, this ignores the modes of
> combination of those building blocks

True. As we know, we are born with the ability to identify and make
sense of certain features, such as those that make up the human face.
It seems to make more sense that we are born with a fixed set of
features which we learn to combine in different ways to make sense of
new things, rather than somehow actually learning new features. Why
should we not come equipped with all the necessary building blocks?

> Fodor argues that systems cannot increase
> their logical power (acquire wholly new features) via learning: the
> system's vocabulary and mechanisms must already be able to express the
> 'new' feature, and so that feature has not been 'created'.

The whole idea of creating new features poses quite a puzzle. It
appears much simpler for the system to come ready-equipped with the
necessary features and a flexible set of rules for combining them.
It would be easier, and would make more sense, for the rules to be
developed rather than the features themselves.

> Despite this being a critical problem, Schyns et al. fail to address
> it properly. They state that "...categorisations, rather than being
> based on existing perceptual features, also determine the features
> that enter the representation of objects" (Section 1.2.4, paragraph
> 1). Their position appears circular, since they employ 'feature
> creation' to explain categorisation, but claim that categorisation
> itself determines 'feature creation'.

This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:23:20 GMT