Re: Pylyshyn's Critique of Neural Nets

From: HARNAD Stevan (harnad@cogsci.soton.ac.uk)
Date: Fri Jun 07 1996 - 21:49:16 BST


> Date: Mon, 27 May 1996 15:53:44 GMT
> From: "Timmins, Marc" <mdt295@soton.ac.uk>
>
> Firstly let me describe the basic qualities and
> abilities of neural nets. A neural net was an attempt to
> take artificial intelligence a stage further and the
> approach taken was to try and resemble nature more closely
> than before. A basic net or Perceptron consists of two
> layers of neurodes, the input layer and the output layer,
> with every neurode in one layer connected to every neurode
> in the other. However, these basic nets ran aground
> immediately on problems they could not solve, such as
> "exclusive or" (XOR). When one or more extra layers were
> added between the input and output layers, these problems
> were quickly solved.
> With neurodes representing neurons and wires being
> analogous to the connecting material in nature, it seemed as
> though neural nets were destined to be a success in
> cognitive psychology.
> The main ability that nets have is to separate and find
> patterns in information and, if you like, to answer questions
> on that data.

What they do is best described as categorisation.
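
To make the "exclusive or" point above concrete, here is a toy
Python sketch (an illustration only, not anyone's actual model) of a
single-layer perceptron trained with the classic perceptron learning
rule. It categorises AND correctly but never settles on XOR, which is
what grounded the two-layer nets:

    # Single-layer perceptron: one weight per input neurode, plus a
    # threshold (bias) term, trained with the perceptron learning rule.
    def train_perceptron(examples, epochs=100, lr=0.1):
        w = [0.0, 0.0]
        b = 0.0
        for _ in range(epochs):
            for (x1, x2), target in examples:
                out = 1 if (w[0]*x1 + w[1]*x2 + b) > 0 else 0
                err = target - out
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b    += lr * err
        return w, b

    AND = [((0,0),0), ((0,1),0), ((1,0),0), ((1,1),1)]
    XOR = [((0,0),0), ((0,1),1), ((1,0),1), ((1,1),0)]

    for name, task in [("AND", AND), ("XOR", XOR)]:
        w, b = train_perceptron(task)
        preds = [1 if (w[0]*x1 + w[1]*x2 + b) > 0 else 0
                 for (x1, x2), _ in task]
        print(name, "learned:", preds == [t for _, t in task])

    # prints: AND learned: True, then XOR learned: False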

> There are two types of net: the standard net,
> which does the basic tasks outlined above very well, and
> nets that have either an internal or an external source of
> feedback, which enables the net to alter the weights of its
> connections via back-propagation. This results in a net
> whose probability of reaching the desired answer increases
> as it gains more "experience". Eventually, given finite
> information and long enough, the net should reach the
> correct response every time.

For some tasks; not necessarily all.
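
For illustration, here is a toy Python sketch (again only an
illustration) of back-propagation on XOR: with a hidden layer between
input and output, the error fed back after each pattern adjusts the
connection weights, and on most random initialisations the net ends
up producing the correct response for every pattern:

    import math, random

    random.seed(1)
    sig = lambda x: 1.0 / (1.0 + math.exp(-x))

    NH = 3                                    # hidden neurodes
    W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(NH)]
    b1 = [random.uniform(-1, 1) for _ in range(NH)]
    W2 = [random.uniform(-1, 1) for _ in range(NH)]
    b2 = 0.0
    lr = 0.5

    XOR = [((0,0),0), ((0,1),1), ((1,0),1), ((1,1),0)]

    def forward(x1, x2):
        h = [sig(W1[j][0]*x1 + W1[j][1]*x2 + b1[j]) for j in range(NH)]
        o = sig(sum(W2[j]*h[j] for j in range(NH)) + b2)
        return h, o

    for epoch in range(20000):
        for (x1, x2), t in XOR:
            h, o = forward(x1, x2)
            d_o = (o - t) * o * (1 - o)                # error at the output
            for j in range(NH):
                d_h = d_o * W2[j] * h[j] * (1 - h[j])  # error propagated back
                W2[j]    -= lr * d_o * h[j]
                W1[j][0] -= lr * d_h * x1
                W1[j][1] -= lr * d_h * x2
                b1[j]    -= lr * d_h
            b2 -= lr * d_o

    print([round(forward(x1, x2)[1]) for (x1, x2), _ in XOR])
    # on most runs: [0, 1, 1, 0]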

> This is where I believe that Pylyshyn's critique really
> comes in. Pylyshyn's angle on neural nets is not one of
> caring at all about the physical make-up of the net, but of
> how the net arrives at the right answer.

Pylyshyn is not a neural net modeler; he is a computationalist.
He is criticising nets for not being able to do what computation (symbol
systems) can do.

> The back
> propagating nets for example can be seen almost as symbol
> systems.

Why? How?

> The input makes up an arbitrary on/off series along
> the input layer and is associated with an output that the
> net has been taught.

Yes, but where are the symbols, and the symbol manipulations?

> This, I believe, is the critique: if the
> input has no meaning it must be arbitrary; therefore there
> is no meaning held in the net, and as it "lights up" the
> correct output, Pylyshyn's argument that a neural net is
> just another irrelevant piece of hardware for running a
> symbol system on is correct.

Pylyshyn does say that if a net is just hardware for running a symbol
system, then it's of no interest; but his main critique is about the fact
that, unlike symbols, the activation states in nets cannot be taken apart
and recombined in all the systematic and semantically interpretable ways
that symbol combinations can be. Symbol systems give you these
systematic combinatory possibilities for free; with nets, every
combination would have to be separately trained up (rather like trying
to train a parrot to have a conversation rather than just produce fixed
phrases). It seems to make more sense to go straight to a symbol system
rather than trying to train a net up to become one, painfully, case by
case.
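
As a caricature of that contrast, in Python: the symbolic side gets
new, semantically interpretable combinations for free by recombining
its parts, while the net-like side (here reduced to a trained lookup)
only covers the specific associations it has been trained up on:

    # Symbolic side: "loves(John, Mary)" can be recombined systematically.
    def swap_arguments(proposition):
        predicate, (a, b) = proposition
        return (predicate, (b, a))           # loves(Mary, John), for free

    print(swap_arguments(("loves", ("John", "Mary"))))

    # Net-like side (caricature): associations trained up case by case.
    trained_associations = {
        ("loves", "John", "Mary"): "output pattern 17",
    }
    query = ("loves", "Mary", "John")
    print(trained_associations.get(query, "untrained: no answer"))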

> For without meaning,
> the nature of the input is arbitrary and it is just the
> manipulation or association that is required to get the
> right answer.

This makes no sense to kid-sib.

> This is in essence computation, and as such
> it supports Pylyshyn's belief that neural nets are irrelevant
> to how we actually do the task, and that their only relevance
> is in what we use in the process of carrying out that task.

You have only described half of Pylyshyn's critique.


