These are my very last Schyns questions, I promise.
In their commentary Hummel and Kellman explore what is meant by
"nonderivable". They say there are three possibilities:
(1) "the new feature is not derivable (computable) at all from the [...]"
(2) "new feature is not a simple weighted sum (i.e. linear
combination) of existing features. Category learning seems to guide [...]"
(3) "new feature is an abstract invariant, which although computable
from, is not truly definable in the vocabulary of existing features"
I'm a little unsure of what they are saying. Is possibility (2) the
case where a new feature is initially created from a combination of
existing fixed features but, through practice, becomes a unitary feature
in its own right (and so is "new" in this sense)? Is possibility (3) or
(1) (or neither) talking about the "teddy-bear / cookie-dough"
scenario that Schyns spoke about in his reply?
MacDorman's commentary says that flexible features may contribute to
a solution to the symbol grounding problem. I don't really understand
how they could solve that problem.
Finally, in response to Braisby & Franks's argument that combinations
of fixed blocks could represent anything, they say:
> If the representational granularity of fixed blocks is above the granularity
> of the pixel combinations of these blocks, they could not represent an
> object difference that the representational resolution of the fixed blocks
> does not capture.
I don't understand what they're saying. What is the "representational
granularity" they are referring to here?
This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:23:22 GMT