> From: Whitehouse Chantal <firstname.lastname@example.org>
> In their commentary Hummel and Kellman explore what is meant by
> nonderivable. They say there are 3 possibilities:
H & K's possibilities are not described very rigorously, but here is how
I would interpret them:
> (1) "the new feature is not derivable (computable) at all from the
> existing features"
If existing features are (better: existing feature-detectors pick out):
Green/Red, Circular/Rectangular, Big/Small
then they canNOT handle Yellow, Elliptical or even Medium-sized.
Or, in terms of pixels, think of a digital clock matrix display that
has too few cells to distinguish a 2 and a Z: they both look the same.
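Here is a toy sketch of that clock-display case in Python (my illustration, not H&K's; the segment encoding is the usual seven-segment convention):

```python
# A seven-segment display has too few cells to distinguish some
# characters. Segments: a (top), b (top-right), c (bottom-right),
# d (bottom), e (bottom-left), f (top-left), g (middle).
SEGMENTS = {
    "2": frozenset("abged"),  # top, top-right, middle, bottom-left, bottom
    "Z": frozenset("abged"),  # the usual rendering of Z lights the SAME cells
    "3": frozenset("abgcd"),
}

def distinguishable(x, y):
    """Two characters differ for this detector only if their lit cells differ."""
    return SEGMENTS[x] != SEGMENTS[y]

print(distinguishable("2", "Z"))  # False: below the display's resolution
print(distinguishable("2", "3"))  # True
```

No computation on the detector's output, however clever, can recover a difference that the cells never registered in the first place.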
H&K reject this strongest kind of nonderivability, because it means our
senses could not detect the differences involved.
So if option 1 is "not detectable at all, either directly, or by any
computation on what is detected directly," then option 2 is that it is
not detectable by a simple sum of feature weights:
> (2) "new feature is not a simple weighted sum (i.e. linear
> combination) of existing features. Category learning seems to guide
> feature learning"
A weighted ("linear") combination of feature detectors Green, Circular,
Big would be, say,
(1.0 x G) + (0.0 x C) + (-3.5 x B)
Such a linear detector could detect all the numerical combinations of
the feature weights.
But if instead of multiplying feature "weights" by a number and adding
them, as above, you have to do something more complicated with them,
such as multiplying a feature weight by another feature's weight
(G x C), then this combination is "nonlinear."
It's more complicated, but it is still derivable from the features.
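The difference can be made concrete with the classic exclusive-or case. In this Python sketch (mine, purely illustrative), a brute-force search finds no weighted sum of two binary features G and C that picks out the category "exactly one of G, C"; but once the nonlinear product feature G x C is added as a third input, a weighted sum succeeds:

```python
import itertools

# Category to detect: "exactly one of G, C" (exclusive-or).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
grid = [x / 2 for x in range(-8, 9)]  # candidate weights -4.0 .. 4.0

def linearly_separable(points, weights):
    """Is there any w1, w2, bias with (w1*G + w2*C + bias > 0) iff in-category?"""
    for w1, w2, b in itertools.product(weights, repeat=3):
        if all((w1 * g + w2 * c + b > 0) == bool(label)
               for (g, c), label in points):
            return True
    return False

print(linearly_separable(data, grid))  # False: no linear detector works

# Add the nonlinear product feature G*C and repeat the same search:
data3 = [((g, c, g * c), label) for (g, c), label in data]

def separable_with_product(points, weights):
    for w1, w2, w3, b in itertools.product(weights, repeat=4):
        if all((w1 * g + w2 * c + w3 * gc + b > 0) == bool(label)
               for (g, c, gc), label in points):
            return True
    return False

print(separable_with_product(data3, grid))  # True: e.g. G + C - 2*(G*C) - 0.5
```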
But sometimes even this will not be enough to pick out some things, or
to distinguish them from others:
> (3) "new feature is an abstract invariant, which although computable
> from, is not truly definable in the vocabulary of existing features"
The idea here is that the way you would have to combine and weight
the features is not some simple linear (additive) or even nonlinear
(multiplicative) rule, but requires applying a more "constructive"
analytic operation [such as finding what is "invariant" in all possible
Symmetric shapes: checking that each side has a mirror-image side
facing it, etc.].
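A minimal sketch of such a constructive operation (again my own illustration: a "shape" is idealised as a set of points, and "symmetric" means mirror-symmetric about the vertical axis):

```python
def mirror_symmetric(points):
    """True iff every point (x, y) has a mirror partner (-x, y).
    This is a checking procedure run over the whole shape,
    not a weighted sum of the point features."""
    pts = set(points)
    return all((-x, y) in pts for (x, y) in pts)

arrowhead = {(-1, 0), (1, 0), (0, 1), (0, 2)}  # symmetric
flag = {(0, 0), (0, 1), (1, 1)}                # not symmetric

print(mirror_symmetric(arrowhead))  # True
print(mirror_symmetric(flag))       # False
```

The invariant ("symmetric") is computable from the point coordinates, but no fixed vocabulary of weighted coordinate-features defines it: you have to run the procedure.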
If the invariant is a dynamic Gibsonian one, the detector may even have
to "do" something to the shape, such as walking closer or further to
it, and computing what effect that has, to decide whether it's near or
far, and what its real size and shape are.
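Here is a toy version of that dynamic case (my sketch, assuming an idealised pinhole model in which retinal size = true size / distance):

```python
def true_size_and_distance(s1, s2, step):
    """The detector 'does' something: it steps `step` units closer,
    sees the retinal size change from s1 to s2, and solves the two
    equations S/d = s1 and S/(d - step) = s2 for the invariants."""
    d = s2 * step / (s2 - s1)  # original distance
    return s1 * d, d           # (true size S, distance d)

# An object of true size 2.5 at distance 10 projects retinal size 0.25;
# after stepping 5 units closer it projects 0.5:
size, dist = true_size_and_distance(0.25, 0.5, 5.0)
print(size, dist)  # 2.5 10.0
```

The retinal size varies as you move; the recovered S does not, and that is exactly what makes it the invariant worth detecting.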
> I'm a little unsure of what they are saying. Is possibility "2" the
> case where a new feature is initially created from a combination of
> existing fixed features but through practice becomes a unitary feature
> in its own right (and so new in this sense)?
Actually, both linear and nonlinear feature weightings could become
automatised. Even the "operation" in 3 could become very fast. So it is
not the possibility of becoming an automatic, purpose-learnt detector
that is directly at issue here. The point is just about how much it
takes to "extract" the features from the object so as to be able to
categorise it correctly every time.
> Is possibility "3" or
> "1" (or neither) talking about the "teddy-bear - cookie dough"
> scenario that Schyns spoke about in his reply?
The teddy-bear scenario only applies to part-features learnt through
repeated passive exposure. You might think of that as analogous to 3,
because it does involve an operation, except that all you get is static
"positive instances" from repeated exposure: you see the same thing
over and over again. Whereas to get invariant features, you need to
apply an operation, like moving toward or away from something, and
compute which features vary when you do that, and, even more important,
which ones do NOT vary: which ones movement leaves unchanged
("invariant"), because they are the ones you will need to use to
detect the shape.
The Held & Hein passive/active kitten experiment I started PY104 with
shows that mere passive exposure is not enough. The movement has to be
voluntary and active.
So the passive teddy-bear imprinting is not an example of 3. It works
only for the passive blob world in the X-Y-XY experiments.
> Macdorman's commentary says that flexible features may contribute to
> a solution to the symbol grounding problem. I don't really understand
> how it can solve the problem.
First, the symbol grounding problem is the problem of grounding the
meaning of symbols (like words) in something other than just
definitions, because definitions also depend on the meaning of symbols.
It's the problem of how to get a symbol system started in the first place.
Some symbols, at least, have to be grounded directly in the capacity to
pick out the objects they refer to. For that you need detectors.
Fixed detectors would presumably limit the number of features, hence
objects you could detect. Flexible ones, that could be "assembled" on
the basis of training in how to categorise things, might be more useful
in grounding symbols, because they would be more general and powerful.
But the symbol grounding problem is indifferent to whether the
detectors are inborn or learned (fixed or flexible); it's just that the
learned kind sounds more promising.
> Finally, in response to Braisby & Franks argument that combinations
> of fixed blocks could represent anything, they say:
> If the representational granularity of fixed blocks is above the
> granularity of the pixel combinations of these blocks, they could
> not represent an object difference that the representational
> resolution of the fixed blocks does not capture.
> I don't understand what they're saying. What's the "representational
> granularity"?
I think I mentioned "granularity" in connection with pixels before.
Think of the magnified picture that eventually turns out to be made of
tiny coloured dots. The differences that the picture can represent have
to be bigger than its smallest dots, because if they are smaller, they
will not be detected. That's what's meant by being above or below the
"grain," "granularity" or "resolving power" of a detector or a
HARNAD Stevan email@example.com
Professor of Psychology firstname.lastname@example.org
Director, phone: +44 1703 592582
Cognitive Sciences Centre fax: +44 1703 594597
Department of Psychology http://www.cogsci.soton.ac.uk/~harnad/
University of Southampton http://www.princeton.edu/~harnad/
Highfield, Southampton ftp://ftp.princeton.edu/pub/harnad/
SO17 1BJ UNITED KINGDOM ftp://cogsci.soton.ac.uk/pub/harnad/
This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:23:22 GMT