Neural networks
• Neural networks are made up of many artificial neurons.
• Each input into the neuron has its own associated weight (illustrated in the original slide's figure by a red circle).
• A weight is simply a floating point number and it's these we
adjust when we eventually come to train the network.
Neural networks
• A neuron can have any number of inputs, from one to n, where n is the total number of inputs.
• The inputs may therefore be represented as x1, x2, x3 … xn.
• And the corresponding weights for the inputs as w1,
w2, w3… wn.
• Output: a = x1·w1 + x2·w2 + x3·w3 + … + xn·wn
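As a concrete illustration, here is a minimal Python sketch of this weighted sum; the function name neuron_output and the example numbers are illustrative choices, not from the slides.

def neuron_output(inputs, weights):
    """Compute a = x1*w1 + x2*w2 + ... + xn*wn."""
    return sum(x * w for x, w in zip(inputs, weights))

# Example: three inputs with their corresponding weights.
a = neuron_output([1.0, 0.5, -1.0], [0.2, -0.4, 0.1])
print(a)  # 1.0*0.2 + 0.5*(-0.4) + (-1.0)*0.1 = -0.1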
How do we actually use an artificial neuron?
• Feedforward network: the neurons in each layer feed their output forward to the next layer, until we get the final output from the neural network (a sketch follows this list).
• There can be any number of hidden layers within a
feedforward network.
• The number of neurons in each layer can be chosen freely.
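A minimal sketch of such a feedforward pass, reusing neuron_output from the sketch above; representing each layer as a list of per-neuron weight lists is an illustrative assumption.

def layer_output(inputs, layer_weights):
    # One output per neuron in the layer; each neuron has its own weight list.
    return [neuron_output(inputs, w) for w in layer_weights]

def feedforward(inputs, layers):
    signal = inputs
    for layer_weights in layers:   # any number of hidden layers
        signal = layer_output(signal, layer_weights)
    return signal                  # the network's final output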
Neural Networks by an Example
• Let's design a neural network that will detect the number '4'.
• Given a panel made up of a grid of lights which can be either on or off, we
want our neural net to let us know whenever it thinks it sees the character
'4'.
• The panel is eight cells square. [Figure: the 8×8 panel of lights.]
• The neural net will have 64 inputs, each one representing a particular cell in the panel, and a hidden layer consisting of a number of neurons (more on this later), all feeding their output into just one neuron in the output layer.
Neural Networks by an Example
• Initialize the neural net with random weights.
• Feed it a series of inputs which represent, in this example, the different panel configurations.
• For each configuration we check to see what its output is and adjust the weights accordingly, so that whenever it sees something looking like a number 4 it outputs a 1, and for everything else it outputs a 0 (a training-loop sketch follows below).
• More: http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html
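Here is a hedged sketch of that train-by-adjustment loop, using a simple perceptron-style update on a single threshold output unit; the learning rate, the threshold activation, and the training-data format are all assumptions rather than details from the slides.

import random

def train(examples, n_inputs=64, lr=0.1, epochs=100):
    # examples: list of (inputs, target) pairs; target is 1 for a '4', else 0.
    weights = [random.uniform(-1, 1) for _ in range(n_inputs)]
    for _ in range(epochs):
        for inputs, target in examples:
            total = sum(x * w for x, w in zip(inputs, weights))
            output = 1 if total > 0 else 0
            error = target - output
            # Nudge each weight in proportion to the error and its input.
            weights = [w + lr * error * x for x, w in zip(inputs, weights)]
    return weights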
We will introduce the MLP and the backpropagation algorithm which is used to train it.
MLP is used to describe any general feedforward (no recurrent connections) network.
However, we will concentrate on nets with units arranged in layers.
[Figure: a layered MLP with inputs x1 … xn.]
Different books refer to the above as either 4-layer (no. of layers of neurons) or 3-layer (no. of layers of adaptive weights). We will follow the latter convention.
1st question: what do the extra layers gain you? Start by looking at what a single layer can't do.
[Figure: a single-layer network with inputs x1 … xn.]
Perceptron Learning Theorem
• Recap: A perceptron (threshold unit) can learn anything that it can represent (i.e. anything separable with a hyperplane).
The Exclusive OR problem
A Perceptron cannot represent Exclusive OR
since it is not linearly separable.
Minsky & Papert (1969) offered a solution to the XOR problem by combining perceptron unit responses using a second layer of units: piecewise linear classification using an MLP with threshold (perceptron) units (see the sketch below the figure).
[Figure: units 1 and 2 feed output unit 3, each with weight +1.]
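A minimal sketch of such a two-layer threshold-unit solution to XOR; the particular weights and thresholds (an OR unit and an AND unit combined by a third unit) are one standard choice, not necessarily Minsky & Papert's exact construction.

def step(s):
    return 1 if s >= 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)    # unit 1 fires for OR(x1, x2)
    h2 = step(x1 + x2 - 1.5)    # unit 2 fires for AND(x1, x2)
    return step(h1 - h2 - 0.5)  # output unit 3: OR and not AND = XOR

assert [xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]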
Properties of architecture
• No connections within a layer
• No direct connections between input and output layers
• Fully connected between layers
• Often more than 3 layers
• Number of output units need not equal number of input units
• Number of hidden units per layer can be more or less than
input or output units
y_i = f\left(\sum_{j=1}^{m} w_{ij} x_j + b_i\right)
Each unit is a perceptron
Often include bias as an extra weight
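A minimal sketch of the layer equation above; taking f to be the sigmoid is an assumption here (the forward-pass slides later use a sigmoid 'squashing' function, while a pure perceptron would use a threshold).

import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def layer_forward(x, W, b, f=sigmoid):
    # y_i = f( sum_j W[i][j] * x[j] + b[i] )
    return [f(sum(w_ij * x_j for w_ij, x_j in zip(W_i, x)) + b_i)
            for W_i, b_i in zip(W, b)]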
What does each of the layers do?
1st layer draws
linear boundaries
2nd layer combines
the boundaries
3rd layer can generate
arbitrarily complex
boundaries
Backpropagation learning algorithm 'BP'
Solution to the credit assignment problem in the MLP. Rumelhart, Hinton and Williams (1986) (though actually invented earlier, in a PhD thesis relating to economics: Werbos, 1974).
BP has two phases:
Forward pass phase: computes 'functional signal', feed-forward propagation of input pattern signals through the network.
Backward pass phase: computes 'error signal', propagates the error backwards through the network starting at the output units (where the error is the difference between actual and desired output values).
Forward Propagation ofActivity
• Step 1: Initialise weights at random, choose a
learning rate η
• Until network is trained:
• For each training example i.e. input pattern and
target output(s):
• Step 2: Do forward pass through net (with fixed
weights) to produce output(s)
– i.e., in Forward Direction, layer by layer:
• Inputs applied
• Multiplied by weights
• Summed
• ‘Squashed’ by sigmoid activation function
• Output passed to each neuron in next layer
– Repeat above until network output(s) produced
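A hedged sketch of Steps 1-2: initialise weights at random, then compute the forward pass layer by layer (multiply by weights, sum, squash with the sigmoid). The layer sizes and the weight layout (bias folded in as an extra weight) are illustrative assumptions.

import math
import random

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def init_layer(n_in, n_out):
    # Step 1: random weights; the last weight of each row is the bias.
    return [[random.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_out)]

def forward_pass(x, layers):
    # Step 2: layer by layer, inputs are multiplied by weights, summed,
    # squashed by the sigmoid, and passed on to the next layer.
    for W in layers:
        x = [sigmoid(sum(w * xi for w, xi in zip(row, x + [1.0]))) for row in W]
    return x  # the network output(s)

eta = 0.5  # learning rate, chosen in Step 1 (used in the weight updates later)
net = [init_layer(2, 3), init_layer(3, 1)]
print(forward_pass([0.0, 1.0], net))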
Step 3: Back-propagation of error
• Compute the error (delta or local gradient) for each output unit, δ_k.
• Layer by layer, compute the error (delta or local gradient) for each hidden unit, δ_j, by backpropagating errors (as shown previously).
Step 4: Update all the weights Δw_ij by gradient descent, and go back to Step 2.
The overall MLP learning algorithm, involving the forward pass and backpropagation of error (repeated until network training is complete), is known as the Generalised Delta Rule (GDR) or, more commonly, the Back Propagation (BP) algorithm.
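A hedged sketch of Steps 3-4 for a single-hidden-layer net with sigmoid units: δ_k and δ_j are computed as described, then every weight moves by η · δ · input. The network shape, the nested-list weight layout, and the learning rate are illustrative assumptions.

import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def train_step(x, targets, W_hid, W_out, eta=0.5):
    # Forward pass (Step 2); the last weight in each row is the bias.
    h = [sigmoid(sum(w * v for w, v in zip(row, x + [1.0]))) for row in W_hid]
    y = [sigmoid(sum(w * v for w, v in zip(row, h + [1.0]))) for row in W_out]

    # Step 3: output deltas; for a sigmoid unit f'(net) = y * (1 - y).
    delta_k = [(t - yk) * yk * (1 - yk) for t, yk in zip(targets, y)]
    # ...then hidden deltas, by backpropagating the output deltas.
    delta_j = [hj * (1 - hj) * sum(dk * W_out[k][j] for k, dk in enumerate(delta_k))
               for j, hj in enumerate(h)]

    # Step 4: gradient-descent updates, delta_w = eta * delta * input.
    for k, row in enumerate(W_out):
        for j, v in enumerate(h + [1.0]):
            row[j] += eta * delta_k[k] * v
    for j, row in enumerate(W_hid):
        for i, v in enumerate(x + [1.0]):
            row[i] += eta * delta_j[j] * v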
Training
• This was a single iteration of back-prop.
• Training requires many iterations with many training examples, over many epochs (one epoch is one entire presentation of the complete training set).
• It can be slow!
• Note that computation in MLP is local (with
respect to each neuron)
• A parallel implementation is also possible.
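A brief sketch of epoch-based training built on the train_step sketch above; stopping after a fixed number of epochs is an illustrative choice (in practice one might stop when the error falls below a tolerance).

def train(training_set, W_hid, W_out, eta=0.5, n_epochs=1000):
    for _ in range(n_epochs):
        # One epoch: one entire presentation of the complete training set.
        for x, targets in training_set:
            train_step(x, targets, W_hid, W_out, eta)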
Training and testing data
• How many examples ?
– The more the merrier!
• Disjoint training and testing data sets
– learn from training data but evaluate
performance (generalization ability) on
unseen test data
• Aim: minimize error on test data
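A minimal sketch of creating disjoint training and testing sets; the 80/20 ratio is an assumption.

import random

def split(examples, train_fraction=0.8):
    shuffled = examples[:]
    random.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    # Learn on the first part, evaluate generalization on the unseen rest.
    return shuffled[:cut], shuffled[cut:]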