Re: Backpropagation

From: HARNAD Stevan (harnad@cogsci.soton.ac.uk)
Date: Mon Jun 03 1996 - 15:28:29 BST


> Date: Wed, 22 May 1996 16:21:23 +0100 (BST)
> From: "McKay M.O." <mom195@soton.ac.uk>
>
> BACK PROPAGATION of errors is a learning rule which is used in
> connectionist network processes.

Backprop is not a learning rule. The learning rule is the rule for
changing the connectivity from one node to another; an example is the
delta rule (see the chapter from Best). Backpropagation is a form of
supervised learning.
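
For concreteness, here is a minimal sketch of a delta-rule weight update
in Python (the function name, the learning rate, and the single linear
unit are illustrative assumptions, not Best's exact formulation):

    # Delta rule: change each weight in proportion to the output error
    # and to the input that arrived over that connection.
    def delta_rule_update(weights, inputs, target, learning_rate=0.1):
        """One delta-rule step for a single linear unit."""
        output = sum(w * x for w, x in zip(weights, inputs))
        error = target - output
        return [w + learning_rate * error * x
                for w, x in zip(weights, inputs)]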

> Connectionist networks are a
> relatively new phenomenon which demonstrate a program's learning
> ability through the production of specific outputs in response to
> certain inputs.

This would have been just as true of any computer programme with inputs
and outputs. What is particular about neural nets is that they learn by
changing the interconnectivity of neuron-like units.

> These networks are made up of elementary units
> which are connected together in order for the units to link to one
> another. These units act on one another by sending either excitatory or
> inhibitory signals and this is their means of communication.

Most artificial nets that do behavioural tasks actually have only one
kind of connection, a positive one varying from 0% to 100%; the ones
with inhibitory connections tend to be the ones that model internal
brain function rather than behaviour.

> These
> networks do not have to follow explicit rules;

What does that mean (kid-sib-style)?

> in fact they can model
> cognitive behaviour.

What is "cognitive" behaviour, and what behaviour do they model?
(Categorisation is a prominent one.)

> Patterns of activation can be stored in the network
> and these associate various inputs with certain outputs.

It's not patterns of activation that are stored so much as connection
strengths, so that a given input will produce a certain pattern of
activation.
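
A sketch of that point in Python: what persists in the net between
trials is the weight matrix; the activation pattern is recomputed from
it each time an input is presented (the names and the sigmoid squashing
function below are illustrative assumptions):

    import math

    # The stored knowledge is the weights; activations are transient.
    def forward(weights, inputs):
        """Compute the output activation pattern for one input pattern."""
        activations = []
        for unit_weights in weights:   # one row of weights per output unit
            net = sum(w * x for w, x in zip(unit_weights, inputs))
            activations.append(1 / (1 + math.exp(-net)))  # sigmoid
        return activations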

> Models include
> many layers which deal with complex behaviour. One layer is made up of
> input units which encode a stimulus for pattern activation and the
> other layer produces a response to the stimulus. The networks produce
> specific outputs in response to certain stimuli and because of this, it
> seems that a certain behaviour is produced which demonstrates the
> network's ability to follow learned rules.

What does the last sentence mean?

> Networks are able to learn
> associations between different inputs and outputs by adjusting weights
> on links between the units.

Getting repetitious.

> Many rules can modify these weights and
> this is where back propagation plays its part.

Inputs modify the weights on the basis of supervision.

> At the very beginning
> of the "learning" process, random weights are introduced on links
> between units and often the response produced is not correct. Back
> propagation compares the incorrect pattern with the required output
> response. The errors that occur are recorded and then B.P. influences
> the network so weights are modified in order for the required output to
> be produced; this supervised learning technique strengthens connections
> when they are right and weakens them when they are wrong; so after a
> certain learning period, the required response is performed due to the
> influence of B.P.

Kid sib wouldn't be able to figure out from this that after the net
produces an output, the error (the mismatch between the actual and the
required output) is propagated backwards toward the input, strengthening
the connections that led to a correct output and weakening those that
led to a wrong one.
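
To make the mechanics concrete, here is a minimal backpropagation sketch
in Python for a net with one hidden layer (the names, layer sizes,
learning rate, and the sigmoid/error choices are illustrative
assumptions, not the exact algorithm from any chapter):

    import math, random

    def sigmoid(x):
        return 1 / (1 + math.exp(-x))

    def train_step(W1, W2, x, target, lr=0.5):
        """One supervised step: forward pass, then pass the error back."""
        # Forward pass: input -> hidden -> output activations.
        h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
        y = [sigmoid(sum(w * hi for w, hi in zip(row, h))) for row in W2]

        # Output error terms: how far each output is from the required one.
        d_out = [(t - yi) * yi * (1 - yi) for t, yi in zip(target, y)]

        # Hidden error terms: each hidden unit's share of the output
        # error, passed backwards through the hidden-to-output weights.
        d_hid = [hi * (1 - hi) *
                 sum(d * W2[k][j] for k, d in enumerate(d_out))
                 for j, hi in enumerate(h)]

        # Weight changes: connections that contributed to a correct
        # output are strengthened, those that led to error are weakened.
        for k, d in enumerate(d_out):
            W2[k] = [w + lr * d * hi for w, hi in zip(W2[k], h)]
        for j, d in enumerate(d_hid):
            W1[j] = [w + lr * d * xi for w, xi in zip(W1[j], x)]

    # Starting from random weights (as in the quoted passage), repeated
    # supervised steps pull the outputs toward the required responses.
    random.seed(0)
    W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
    W2 = [[random.uniform(-1, 1) for _ in range(3)]]
    for _ in range(2000):
        train_step(W1, W2, [1.0, 0.0], [1.0])
        train_step(W1, W2, [0.0, 1.0], [0.0])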

For a better mark, explain more clearly and fully, and link to
categorisation or supervision, or symbols, or unsupervised learning,
etc.


