Re: Over and Underextension

From: HARNAD Stevan (harnad@cogsci.soton.ac.uk)
Date: Thu May 30 1996 - 10:23:05 BST


> From: "Gooding Hilary" <hkg195@soton.ac.uk>
> Date: Tue, 21 May 1996 16:21:18 +0100 (BST)
>
> Overextension in word learning involves extending the use of a word
> learnt to cover a wider set of circumstances, events or objects than
> it should be used for. This is very common in young children. An
> example of overextension is when a child uses the word "doggie" to
> label horses, cows, and other four-legged animals. Some
> overextensions can be understood in terms of perceptual similarities
> between different objects; for example, all round objects, including
> cakes and the sun, are called "ball".

Yes, and overextension occurs because you have chosen the wrong
features, usually too few, to be able to pick out what really
is and is not in the category: To "fix" the features, you need negative
instances, things you call "doggie" only to learn that they're not dogs.
Then the feature detector has to be adjusted. (The feature detection, by
the way, can sometimes be conscious and explicit, but often it is
unconscious and implicit; and even when it is conscious and explicit, an
explanation is needed of how you decide which features to try out.)
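
To make this concrete, here is a minimal sketch (mine, not something
from the original posting) of a perceptron-style feature detector for
DOG, written in Python with made-up binary features. The point is only
that the weights change just when a mistake is made, and that it is
the negative instances (horse, cow) that push down the weight of a
misleading feature such as "has four legs":

# hypothetical features: [has_four_legs, has_fur, barks, is_large]
examples = [
    ([1, 1, 1, 0], 1),  # dog         -> positive instance
    ([1, 1, 0, 1], 0),  # horse       -> negative instance ("not a doggie")
    ([1, 1, 0, 1], 0),  # cow         -> negative instance
    ([1, 1, 1, 0], 1),  # another dog -> positive instance
]

weights = [0.0, 0.0, 0.0, 0.0]
bias = 0.0
rate = 0.5

for _ in range(10):                      # a few passes over the examples
    for features, label in examples:
        activation = sum(w * f for w, f in zip(weights, features)) + bias
        guess = 1 if activation > 0 else 0
        error = label - guess            # nonzero only on a mistake
        # a false positive (overextension) gives error = -1 and lowers
        # the weights of the features that misled the detector; a miss
        # raises the weights of the genuinely diagnostic ones
        weights = [w + rate * error * f for w, f in zip(weights, features)]
        bias += rate * error

print(weights, bias)   # "barks" ends up positive, "is_large" negative

Whether anything like this is what children actually do is, of course,
exactly the reverse-engineering question raised below.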

> The opposite of this is underextension. This is when a word is
> used for a smaller, more select and specialised category than it
> should be. This is also common in children. An example is when a
> child calls the family's cat "kitty", but no other cat is labelled
> by this name.

This time there are usually too many features. Some have to be dropped,
so you learn what a cat is in general, and not just your particular cat.

> This process only applies to the learning of words and not grammar.
> The processes of over- and underextension of words are methods of
> categorisation. In overextension too many things are categorised
> together, whereas in underextension some members of the category are
> left out. The mistakes made by getting the features wrong help us
> learn what should be in each category.

Right, but we still need a mechanism that can successfully DO this with
the same inputs and outputs that we can categorise: We need to reverse
engineer this capacity. Neural nets are candidates.

The reason the case is different with (universal) grammar is that we
never get the negative instances that would be needed to find the right
features and rules: We could not correct over- and underextension in our
grammatical rules, because the mistakes never get made, so never get
corrected. The only explanation seems to be that the rules are already
built in, so we never have to learn them at all (except for some
minor parameter settings).

For an A, the over-/underextension issue in word learning should be
related to the superset/subset issue in grammar learning, as discussed
by Pinker, as well as to neural nets and pattern learning (AND, OR,
XOR, etc.).
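
As a rough pointer (again my own illustration, not an example Pinker
gives): a single-layer perceptron trained on positive and negative
instances can learn the linearly separable patterns AND and OR, but
not XOR, which needs hidden units, that is, an internal
re-representation of the input features:

def train_perceptron(cases, epochs=50, rate=0.5):
    # one unit: two input weights and a bias, trained by the classic
    # perceptron rule (weights change only on mistakes)
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for (x0, x1), label in cases:
            guess = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            error = label - guess
            w0 += rate * error * x0
            w1 += rate * error * x1
            b += rate * error
    # how many of the four input patterns the trained unit gets right
    return sum((1 if w0 * x0 + w1 * x1 + b > 0 else 0) == label
               for (x0, x1), label in cases)

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
AND = [(p, int(p[0] and p[1])) for p in inputs]
OR  = [(p, int(p[0] or p[1])) for p in inputs]
XOR = [(p, int(p[0] != p[1])) for p in inputs]

for name, cases in [("AND", AND), ("OR", OR), ("XOR", XOR)]:
    print(name, train_perceptron(cases), "of 4 correct")
    # AND and OR reach 4 of 4; XOR never does with a single layer

With hidden units and backpropagation a net can learn XOR too, but
only because it is given both the positive and the negative instances;
that is the contrast with grammar, where the negative instances never
come.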



