Artificial Neural Network - Basic Concepts
ISRAR ALI
Neural networks
Neural networks are parallel computing devices that are, in essence, an attempt to build a computer model of the brain. The main objective is to develop a system that performs various computational tasks faster than traditional systems. These tasks include pattern recognition and classification, approximation, optimization, and data clustering.
What is Artificial Neural Network?
An Artificial Neural Network (ANN) is an efficient computing system whose central theme is borrowed from the analogy of biological neural networks. ANNs are also known as “artificial neural systems,” “parallel distributed processing systems,” or “connectionist systems.” An ANN comprises a large collection of units that are interconnected in some pattern to allow communication between the units. These units, also referred to as nodes or neurons, are simple processors that operate in parallel.
Every neuron is connected to other neurons through connection links. Each connection link is associated with a weight that carries information about the input signal. This is the most useful information for neurons to solve a particular problem, because the weight usually excites or inhibits the signal being communicated. Each neuron has an internal state, called an activation signal. Output signals, produced by combining the input signals with an activation rule, may be sent to other units.
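The description above maps directly onto a small computation: each input is multiplied by its connection weight, the weighted signals are summed, and an activation rule turns the result into an output signal. Below is a minimal Python sketch of one artificial neuron; the three inputs, the weights, and the threshold activation are illustrative assumptions, not values from the slides.

```python
import numpy as np

# A minimal sketch of a single artificial neuron: weighted sum of the
# inputs plus a bias, passed through a simple threshold activation rule.
def neuron_output(inputs, weights, bias=0.0):
    net = np.dot(inputs, weights) + bias   # net input: weighted sum of signals
    return 1 if net > 0 else 0             # activation rule: step function

x = np.array([0.5, -1.0, 2.0])   # hypothetical input signals
w = np.array([0.8, 0.2, 0.4])    # hypothetical connection weights
print(neuron_output(x, w))       # -> 1
```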
A Brief History of ANN
• 1940s to 1960s
• 1960s to 1980s
• 1980s till Present
ANN during 1940s to 1960s
• 1943 − The concept of neural networks is generally considered to have started with the work of physiologist Warren McCulloch and mathematician Walter Pitts, who in 1943 modeled a simple neural network using electrical circuits to describe how neurons in the brain might work.
• 1949 − Donald Hebb’s book, The Organization of Behavior, put forth the idea that the connection between two neurons is strengthened each time one repeatedly activates the other.
• 1956 − An associative memory network was introduced by Taylor.
• 1958 − A learning method for the McCulloch-Pitts neuron model, named the Perceptron, was invented by Rosenblatt.
• 1960 − Bernard Widrow and Marcian Hoff developed models called “ADALINE” and “MADALINE.”
ANN during 1960s to 1980s
• 1961 − Rosenblatt proposed a “backpropagation” scheme for multilayer networks, although the attempt was unsuccessful.
• 1964 − Taylor constructed a winner-take-all circuit with inhibition among output units.
• 1969 − The multilayer perceptron (MLP) was invented by Minsky and Papert.
• 1971 − Kohonen developed associative memories.
• 1976 − Stephen Grossberg and Gail Carpenter developed adaptive resonance theory.
ANN from 1980s till Present
• 1982 − The major development was Hopfield’s energy approach.
• 1985 − The Boltzmann machine was developed by Ackley, Hinton, and Sejnowski.
• 1986 − Rumelhart, Hinton, and Williams introduced the Generalised Delta Rule.
• 1988 − Kosko developed the Bidirectional Associative Memory (BAM) and also introduced the concept of fuzzy logic in ANN.
Biological Neuron
A nerve cell (neuron) is a special biological cell that processes information. According to estimates, there are a huge number of neurons, approximately 10^11, with numerous interconnections, approximately 10^15.
Working of a Biological Neuron
Dendrites − They are tree-like branches responsible for receiving information from the other neurons the cell is connected to. In a sense, they act like the ears of the neuron.
Soma − It is the cell body of the neuron and is responsible for processing the information received from the dendrites.
Axon − It acts like a cable through which the neuron sends information.
Synapses − These are the connections between the axon and the dendrites of other neurons.
ANN versus BNN
Model of Artificial Neural Network
Processing of ANN
The processing of an ANN depends upon the following three building blocks:
• Network Topology
• Adjustments of Weights or Learning
• Activation Functions
Network Topology
• A network topology is the arrangement of a network along with its nodes and connecting lines. According to the topology, ANNs can be classified into the following kinds:
• Feedforward Network
• Feedback Network
Feedforward Network
• It is a non-recurrent network with processing units/nodes arranged in layers, where all the nodes in a layer are connected to the nodes of the previous layer. The connections carry different weights. There is no feedback loop, which means the signal can only flow in one direction, from input to output. It may be divided into the following two types:
• Single layer feedforward network
• Multilayer feedforward network
Single layer feedforward network
This is a feedforward ANN with only one weighted layer. In other words, the input layer is fully connected to the output layer.
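The single weighted layer amounts to one matrix of connection weights applied to the input vector. The sketch below assumes 3 input nodes and 2 output nodes with random weights; all sizes and values are hypothetical.

```python
import numpy as np

# A minimal sketch of a single-layer feedforward pass: 3 input nodes
# fully connected to 2 output nodes (sizes chosen only for illustration).
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3))        # the single weighted layer: outputs x inputs
x = np.array([0.2, 0.7, -0.1])     # hypothetical input vector

y = W @ x                          # each output node sums its weighted inputs
print(y)
```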
Multilayer feedforward network
This is a feedforward ANN with more than one weighted layer. The one or more layers between the input layer and the output layer are called hidden layers.
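A multilayer forward pass simply chains these weighted layers, applying an activation function after each one. The sketch below assumes layer sizes of 3 inputs, 4 hidden nodes, and 2 outputs with a sigmoid activation; these choices are illustrative only.

```python
import numpy as np

# A minimal sketch of a multilayer feedforward pass with one hidden layer.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 3))   # input -> hidden weights
W2 = rng.normal(size=(2, 4))   # hidden -> output weights

x = np.array([0.2, 0.7, -0.1])
h = sigmoid(W1 @ x)            # hidden-layer activations
y = sigmoid(W2 @ h)            # output-layer activations
print(y)
```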
Feedback Network
As the name suggests, a feedback network has feedback paths,
which means the signal can flow in both directions using loops.
This makes it a non-linear dynamic system, which changes
continuously until it reaches a state of equilibrium. It may be
divided into the following types:
• Recurrent networks
• Fully recurrent network
Recurrent
• Recurrent networks − They are feedback networks with closed loops. Two types of recurrent networks are the fully recurrent network and the Jordan network, described below.
• Fully recurrent network − It is the simplest neural network architecture because all nodes are connected to all other nodes, and each node works as both input and output.
Jordan network
It is a closed-loop network in which the output is fed back to the input as feedback.
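To make the feedback loop concrete, the sketch below feeds the previous output back in alongside the next input at every step. It is a minimal illustration of the feedback idea rather than a full Jordan architecture; the sizes, weights, and input sequence are hypothetical.

```python
import numpy as np

# A minimal sketch of output-to-input feedback: the previous output is
# concatenated with the current input before the weighted layer is applied.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
n_in, n_out = 3, 2
W = rng.normal(size=(n_out, n_in + n_out))   # weights over [input, fed-back output]

prev_output = np.zeros(n_out)
for x in [np.array([1.0, 0.0, 0.5]), np.array([0.0, 1.0, 0.5])]:
    z = np.concatenate([x, prev_output])     # feedback loop: output re-enters as input
    prev_output = sigmoid(W @ z)
    print(prev_output)
```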
Adjustments of Weights or Learning
Learning, in an artificial neural network, is the method of modifying the weights of the connections between the neurons of a specified network. Learning in ANN can be classified into three categories, namely supervised learning, unsupervised learning, and reinforcement learning:
• Supervised Learning
• Unsupervised Learning
• Reinforcement Learning
Supervised Learning
• As the name suggests, this type of learning is done under the supervision of a teacher. The learning process is dependent on that teacher signal.
• During the training of an ANN under supervised learning, the input vector is presented to the network, which produces an output vector. This output vector is compared with the desired output vector. An error signal is generated if there is a difference between the actual output and the desired output vector. On the basis of this error signal, the weights are adjusted until the actual output matches the desired output.
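The error-driven weight adjustment described above can be sketched with a classic perceptron/delta-style update: each weight moves in proportion to the error signal. The toy AND dataset, learning rate, and threshold output below are illustrative assumptions.

```python
import numpy as np

# A minimal sketch of supervised weight adjustment on a toy AND problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
d = np.array([0, 0, 0, 1], dtype=float)        # desired outputs (teacher signal)

w, b, lr = np.zeros(2), 0.0, 0.1
for epoch in range(20):
    for x, target in zip(X, d):
        y = 1.0 if np.dot(w, x) + b > 0 else 0.0   # actual output
        error = target - y                          # error signal
        w += lr * error * x                         # adjust weights from the error
        b += lr * error
print(w, b)
```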
Unsupervised Learning
• As the name suggests, this type of learning is done without the supervision of a teacher. The learning process is independent of any teacher signal.
• During the training of an ANN under unsupervised learning, input vectors of similar type are combined to form clusters. When a new input pattern is applied, the neural network gives an output response indicating the cluster to which the input pattern belongs.
• There is no feedback from the environment as to what the desired output should be or whether it is correct or incorrect. Hence, in this type of learning, the network itself must discover the patterns and features in the input data and the relation between the input data and the output.
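One simple way to realize this clustering behaviour is competitive (winner-take-all) learning: the unit whose weight vector is closest to an input responds, and only that unit's weights are moved toward the input. The two cluster units, the toy 2-D data, and the learning rate below are illustrative assumptions.

```python
import numpy as np

# A minimal sketch of unsupervised, competitive learning on two toy clusters.
rng = np.random.default_rng(3)
data = np.vstack([rng.normal(0.0, 0.1, size=(10, 2)),
                  rng.normal(1.0, 0.1, size=(10, 2))])

weights = rng.normal(0.5, 0.1, size=(2, 2))   # one weight vector per cluster unit
lr = 0.2

for _ in range(50):
    for x in data:
        winner = np.argmin(np.linalg.norm(weights - x, axis=1))  # closest unit responds
        weights[winner] += lr * (x - weights[winner])             # move it toward the input
print(weights)   # each row settles near one of the two clusters
```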
Reinforcement Learning
• As the name suggests, this type of learning is used to reinforce or strengthen the network on the basis of critic information. The learning process is similar to supervised learning; however, much less information may be available.
• During the training of a network under reinforcement learning, the network receives some feedback from the environment. This makes it somewhat similar to supervised learning. However, the feedback obtained here is evaluative, not instructive, which means there is no teacher as in supervised learning. After receiving the feedback, the network adjusts its weights to obtain better critic information in the future.
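The sketch below illustrates the evaluative-feedback idea with a simple hill-climbing update: the network tries a small random weight change and keeps it only if a critic score improves. The critic function, weight sizes, and perturbation scale are hypothetical; this is only one of many ways to act on evaluative feedback.

```python
import numpy as np

# A minimal sketch of reinforcement-style adjustment: no desired output is
# given, only an evaluative reward, and improving changes are kept.
rng = np.random.default_rng(4)

def reward(w):
    # Hypothetical critic: higher when the weights are closer to a hidden target.
    target = np.array([0.5, -0.3])
    return -np.sum((w - target) ** 2)

w = np.zeros(2)
best = reward(w)
for _ in range(200):
    candidate = w + rng.normal(0.0, 0.05, size=2)   # try a small random adjustment
    r = reward(candidate)
    if r > best:                                     # keep it only if the critic improves
        w, best = candidate, r
print(w)   # drifts toward the hidden target using evaluative feedback only
```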
Activation Functions
An activation function may be defined as the extra force or effort applied over the input to obtain an exact output. In an ANN, activation functions are applied over the net input to obtain the output. Following are some activation functions of interest:
• Linear Activation Function
• Sigmoid Activation Function
Linear Activation Function
• It is also called the identity function, as it performs no transformation of the input. It can be defined as
• F(x) = x
Sigmoid Activation Function
• It maps the net input to a smooth output between 0 and 1. It can be defined as
• F(x) = 1 / (1 + e^(-x))
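For reference, both activation functions can be written in a few lines; the sample inputs below are arbitrary.

```python
import numpy as np

# A minimal sketch of the two activation functions named above.
def linear(x):
    return x                          # identity: output equals input

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))   # squashes the net input into (0, 1)

x = np.array([-2.0, 0.0, 2.0])        # arbitrary sample inputs
print(linear(x))                      # [-2.  0.  2.]
print(sigmoid(x))                     # approx. [0.12 0.5  0.88]
```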