SOFT COMPUTING
PRESENTED BY: SUSHREE SAMIKSHYA PATTANAIK
C.V. RAMAN GLOBAL UNIVERSITY, BHUBANESWAR
INTRODUCTION TO SOFT COMPUTING
 Concept of Computing
 Hard Computing
 Soft Computing
 How Soft Computing ?
 Hard Computing Vs Soft Computing
 Hybrid Computing
CONCEPT OF COMPUTATION
[Diagram: Input/Antecedent → Computing y = f(x) → Output/Consequent (control action)]
y = f(x) is called a mapping function, where f is a formal method or an algorithm to solve a problem.
SOFT COMPUTING
[Diagram: techniques and applications of soft computing.]
IMPORTANT CHARACTERISTICS OF COMPUTING
 Should provide a precise solution.
 Control action should be unambiguous and accurate.
 Suitable for problems that are easy to model mathematically.
HARD COMPUTING
 In 1996, L. A. Zadeh (LAZ) introduced the term hard computing.
 According to LAZ, we term a computing approach hard computing if:
a precise result is guaranteed;
the control action is unambiguous;
the control action is formally defined (i.e., with a mathematical model or algorithm).
EXAMPLES OF HARD COMPUTING
 Solving numerical problems (e.g., roots of polynomials, integration, etc.).
 Searching and sorting techniques.
 Solving computational geometry problems (e.g., shortest tour in a graph, finding the closest pair of points in a given set of points, etc.).
ROLE OF SOFT COMPUTING
• "Soft computing is a collection of methodologies that aim to exploit the tolerance for imprecision and uncertainty to achieve tractability, robustness, and low solution cost.
• Its principal constituents are fuzzy logic, neurocomputing, and probabilistic reasoning. Soft computing is likely to play an increasingly important role in many application areas, including software engineering. The role model for soft computing is the human mind."
CHARACTERISTICS OF SOFT COMPUTING
• It does not require any mathematical modeling of the problem.
• It may not yield a precise solution.
• Algorithms are adaptive (i.e., they can adjust to changes in a dynamic environment).
• It uses some biologically inspired methodologies such as genetics, evolution, ant behaviour, particle swarming, the human nervous system, etc.
AI AND SOFT COMPUTING
• ANN: learning and adaptation.
• Fuzzy set theory: knowledge representation via fuzzy if-then rules.
• Genetic algorithms: systematic random search.
• AI: symbolic manipulation.
EXAMPLE OF SOFT COMPUTING
[Figure: animal character recognition. Neural character recognition of an ambiguous handwritten pattern ("cat"/"cut"), combined with the knowledge "Animal?", yields "cat".]
EXAMPLES OF SOFT COMPUTING
[Figure: money allocation problem. Soft computing (evolutionary computing) selects the bank with the maximum return.]
HOW SOFT COMPUTING?
How does a student learn from a teacher?
• The teacher asks questions and tells the answers.
• The teacher puts questions, hints at answers, and asks whether the answers are correct or not.
• The student thus learns a topic and stores it in memory.
• Based on this knowledge, the student solves new problems.
This is the way the human brain works. Based on this concept, the Artificial Neural Network is used to solve problems.
HOW SOFT COMPUTING?
How does the world select the best?
• It starts with a (random) population.
• It reproduces another population (the next generation).
• It ranks the population and selects the superior individuals.
 The genetic algorithm is based on this natural phenomenon.
• Population is synonymous with solutions.
• Selection of superior solutions is synonymous with exploring the optimal solution.
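The generate-rank-select loop above can be sketched as a minimal genetic algorithm. The fitness function, mutation scheme, population size and all names below are illustrative assumptions, not from the slides:

```python
import random

# Toy problem (illustrative): maximize f(x) = x * (10 - x) over integers 0..10.
def fitness(x):
    return x * (10 - x)

def evolve(generations=30, pop_size=8, seed=1):
    rng = random.Random(seed)
    # 1. Start with a random population.
    pop = [rng.randint(0, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # 2. Rank the population and keep the superior half.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # 3. Reproduce the next generation with small mutations (clamped to 0..10).
        children = [min(10, max(0, p + rng.choice([-1, 0, 1]))) for p in parents]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the superior parents are retained every generation (elitism), the best solution never gets worse; given enough generations the population converges on the optimum x = 5.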
HOW SOFT COMPUTING?
How does a doctor treat a patient?
• The doctor asks the patient about the suffering.
• The doctor finds the symptoms of the disease.
• The doctor prescribes tests and medicines.
 This is exactly the way fuzzy logic works.
• Symptoms are correlated with diseases with uncertainty.
• The doctor prescribes tests/medicines fuzzily.
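The doctor analogy can be sketched with fuzzy memberships: a symptom belongs to a category to a degree between 0 and 1, and a rule combines degrees rather than crisp yes/no values. The membership functions and the min-based AND below are hypothetical choices for illustration only:

```python
# Degree to which a temperature counts as "high fever" (illustrative ramp
# rising from 0 at 37 °C to 1 at 40 °C).
def high_fever(temp_c):
    return min(1.0, max(0.0, (temp_c - 37.0) / 3.0))

# Cough severity is assumed to be given directly as a degree in [0, 1].
def severe_cough(level):
    return min(1.0, max(0.0, level))

# Fuzzy rule "IF high fever AND severe cough THEN flu", using min as fuzzy AND.
def flu_likelihood(temp_c, cough_level):
    return min(high_fever(temp_c), severe_cough(cough_level))
```

For example, a patient at 38.5 °C with cough level 0.8 gets flu likelihood min(0.5, 0.8) = 0.5: an uncertain, graded diagnosis rather than a crisp one.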
DIFFERENCE BETWEEN HARD AND SOFT
COMPUTING
Hard Computing
 It requires a precisely stated analytical model and often a lot of computation time.
 It is based on binary logic, crisp systems, numerical analysis and crisp software.
 It has the characteristics of precision and categoricity.
 Deterministic.
 Requires exact input data.
 Strictly sequential.
 Produces precise answers.
Soft Computing
 It is tolerant of imprecision, uncertainty, partial truth and approximation.
 It is based on fuzzy logic, neural networks, probabilistic reasoning, etc.
 It has the characteristics of approximation and dispositionality.
 Stochastic.
 Can deal with ambiguous and noisy data.
 Parallel computations.
 Yields approximate answers.
Current Applications using Soft Computing
• Handwriting recognition
• Automotive systems and manufacturing
• Image processing and data compression
• Architecture
• Decision-support systems
• Data Mining
• Power systems
• Control Systems
Soft Computing in a Different Perspective
• Expert systems are built with predicate logic and symbol manipulation techniques.
[Figure: expert-system architecture. A knowledge engineer performs knowledge acquisition into the knowledge base (facts, rules) and global database; an inference engine, together with an explanation facility, answers the user's questions through a user interface (question/response).]
Unique Property of Soft computing
• Learning from experimental data → generalization.
• Soft computing techniques derive their power of generalization from approximating or interpolating to produce outputs for previously unseen inputs by using outputs from previously learned inputs.
• Generalization is usually done in a high-dimensional space.
HYBRID COMPUTING
• It is a combination of conventional hard computing and emerging soft computing.
• Some portions of the problem are solved using hard computing, since a mathematical formulation is available for those particular parts; other portions of the same problem, which cannot be solved in real time or for which no good algorithm is available, are handled by soft computing.
HYBRID COMPUTING
A = HARD COMPUTING
B = SOFT COMPUTING
A ∪ B = HYBRID COMPUTING
List of Experiments:
1. Study of different Activation Functions.
a. Identity Function
b. Binary Sigmoidal Function
c. Ramp Function
d. Binary Step Function
2. Design and working of Neural Networks in MATLAB Toolbox
3. Generation of logic gates using the McCulloch-Pitts neuron network.
4. Generation of logic gates using the M-P pattern for a multilayer feed-forward system.
5. Design of logic gates using the PERCEPTRON neuron network.
6. Design of logic gates using the PERCEPTRON network for a multilayer feed-forward system.
7. Generation of Hebb's network using MATLAB.
8. Generation of logic functions using the ADALINE network in MATLAB.
9. Generation of logic functions using the MADALINE network in MATLAB.
10. Introduction to fuzzy logic and its applications.
Course Outcome:
At the end of the course, the students will be able to:
CO1: study the characteristics of biological neurons, perceptron models and algorithms.
CO2: study the modelling of non-linear systems using ANN.
CO3: learn fuzzy set theory and its operations.
CO4: apply the knowledge of fuzzy logic for modelling systems.
CO5: apply the knowledge of GA and PSO.
THANK YOU
ARTIFICIAL NEURAL NETWORK (ANN)
• A neural network is a processing device, either an algorithm or actual hardware, whose design was inspired by the design and functioning of the animal brain.
• An artificial neural network (ANN) may be defined as an information-processing model that is inspired by the way biological nervous systems, such as the brain, process information.
• An ANN is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems.
ARTIFICIAL NEURAL NETWORK (ANN)
ADVANTAGES:
a. Adaptive learning
b. Self-organization
c. Real-time operation
d. Fault tolerance
[Figure: a simple artificial neuron. Inputs x1 and x2 (from units X1, X2) with weights ω1 and ω2 feed the output unit Y.]
Net input: yin = x1ω1 + x2ω2
Output: y = f(yin), i.e., the output is a function of the calculated net input.
This function f is called the activation function.
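The two-input neuron above can be sketched in a few lines. The weight values, the threshold, and the binary step used as the activation function f are illustrative assumptions, not values from the slides:

```python
# A simple neuron: net input yin = x1*w1 + x2*w2, output y = f(yin),
# where f is a binary step activation with threshold theta.
def neuron(x1, x2, w1=0.6, w2=0.6, theta=1.0):
    yin = x1 * w1 + x2 * w2          # net input
    return 1 if yin >= theta else 0  # activation function f applied to yin
```

With these particular weights and threshold, the neuron fires only when both inputs are 1, i.e., it behaves like an AND gate.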
The Brain vs. Computer
BRAIN:
1. 10 billion neurons
2. 60 trillion synapses
3. Distributed processing
4. Nonlinear processing
5. Parallel processing
COMPUTER:
1. Faster than a neuron (about 10^-9 sec; cf. neuron: about 10^-3 sec)
2. Central processing
3. Arithmetic operation (linearity)
4. Sequential processing
What are Neural Networks (NN)?
From Biological Neuron to Artificial Neuron
[Figure: a biological neuron. Dendrites, cell body, axon.]
From Biology to Artificial Neural Networks
Comparison between BN & AN
 Speed: the cycle time of execution in an ANN is a few nanoseconds, whereas in a BNN it is a few milliseconds.
 Processing: both can perform massively parallel operations simultaneously, but the ANN is faster.
 Size and complexity: the complexity of the brain is comparatively higher; the size and complexity of an ANN depend on the chosen application and design.
 Storage capacity: a biological neuron stores information in its interconnections (synapses), whereas in an artificial neuron it is stored in contiguous memory locations.
 In an artificial neuron, continuous loading of new information may lead to overloading and loss of older information.
 In a biological neuron, new information can be added by adjusting interconnection strengths without destroying older information.
 Tolerance: a biological neuron possesses fault-tolerance capability, whereas an ANN has no fault tolerance.
Basic Models of Artificial Neural Network
 The arrangement of neurons to form layers and the connection pattern formed within and b/w layers is called the Network Architecture.
 Types are:
1. Single Layer Feed forward Network
2. Multi Layer Feed forward Network
3. Single node with its own feedback
4. Single layer Recurrent Network
5. Multi layer Recurrent Network
 A network is said to be a feedforward N/W if no neuron in the O/P layer is an I/P to a node in the same layer or in the preceding layer.
 When O/Ps can be directed back as I/Ps to nodes in the same or a preceding layer, the result is a feedback network.
 Any layer that is formed b/w the I/P and O/P layers is called a Hidden Layer.
 If the O/P of a processing element is directed back as an I/P to a processing element in the same layer, it is called Lateral Feedback.
 Recurrent N/ws are feedback networks with closed loops.
 A net with competitive interconnections having a fixed weight of −ε is called a Maxnet.
[Figure: Single Layer Feed forward Network]
[Figure: Multi Layer Feed forward Network]
[Figure: Single node with its own feedback]
[Figure: Multi-layer Recurrent Network]
Learning
 Learning, or training, is a process by means of which a neural N/w adapts itself to an environment by making parametric adjustments, resulting in the production of desired responses.
1. Parametric learning: updates the connecting weights in a neural net.
2. Structure learning: focuses on changes in the N/w structure (which includes the no. of processing elements as well as their connection types).
 Another categorization:
1. Supervised Learning
2. Unsupervised Learning
3. Reinforcement Learning
1- Supervised Learning:
 Performed with the help of a teacher.
 Each I/P vector requires a corresponding target vector.
 The I/P vector along with the target vector is called a Training Pair.
2-Unsupervised Learning
 Learning without the help of a teacher.
 The I/P vectors of similar type are grouped without the use of training data to specify how a member of each group looks or to which group a member belongs.
 When a new I/P pattern is applied, the neural N/w gives an O/P response indicating the class to which the I/P pattern belongs.
 Self-organizing is the process in which exact clusters are formed by discovering similarities and dissimilarities among the objects.
3-Reinforcement Learning
 It is a form of supervised learning in which the correct target O/P values are not known for each I/P pattern; only critic (less informative) feedback is available.
Activation Functions:
 The activation function helps to achieve the desired O/P.
 An integrating function (say f) is associated with the I/P of a processing element.
 A non-linear activation function is used to ensure that a neuron's response is bounded, i.e., the actual response of the neuron is conditioned or dampened as a result of large or small activating stimuli and is thus controllable.
 When a signal is fed through a multilayer N/w with linear activation functions, the O/P obtained remains the same as that which could be obtained using a single-layer network. So nonlinear activation functions are widely used in multilayer N/ws compared to linear functions.
1---Identity function:
 It is a linear function defined as f(x) = x for all x.
 The output is the same as the input.
2---Binary step function
 It is defined as f(x) = 1 if x ≥ θ, and f(x) = 0 if x < θ,
 where θ represents the threshold value. It is used in single-layer nets to convert the net input to an output that is binary (0 or 1).
3---Bipolar step function
 It is defined as f(x) = 1 if x ≥ θ, and f(x) = −1 if x < θ,
 where θ represents the threshold value; used in single-layer nets to convert the net input to an output that is bipolar (+1 or −1).
4---Sigmoid function
 Used in backpropagation nets.
 Two types:
a) Binary sigmoid function
 Also called the logistic sigmoid function or unipolar sigmoid function.
 It is defined as f(x) = 1 / (1 + e^(−λx)),
 where λ is the steepness parameter.
 The derivative of this function is f′(x) = λ f(x)[1 − f(x)]. The range of the sigmoid function is 0 to 1.
b) Bipolar sigmoid function
 It is defined as f(x) = (1 − e^(−λx)) / (1 + e^(−λx)),
 where λ is the steepness parameter; the sigmoid range is between −1 and +1.
 It is closely related to the hyperbolic tangent function, which is written as h(x) = (e^x − e^(−x)) / (e^x + e^(−x)).
 The derivative of the hyperbolic tangent function is h′(x) = [1 + h(x)][1 − h(x)].
5---Ramp function
 It is defined as f(x) = 1 if x > 1, f(x) = x if 0 ≤ x ≤ 1, and f(x) = 0 if x < 0.
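The activation functions above can be written down directly from their definitions. This is a sketch, with λ (here `lambda_`) and θ (`theta`) given illustrative default values:

```python
import math

def identity(x):
    return x                                   # f(x) = x

def binary_step(x, theta=0.0):
    return 1 if x >= theta else 0              # output in {0, 1}

def bipolar_step(x, theta=0.0):
    return 1 if x >= theta else -1             # output in {+1, -1}

def binary_sigmoid(x, lambda_=1.0):
    # Logistic sigmoid, range (0, 1); derivative is lambda_*f(x)*(1 - f(x)).
    return 1.0 / (1.0 + math.exp(-lambda_ * x))

def bipolar_sigmoid(x, lambda_=1.0):
    # Range (-1, +1); closely related to tanh.
    return (1.0 - math.exp(-lambda_ * x)) / (1.0 + math.exp(-lambda_ * x))

def ramp(x):
    # Saturates at 0 below 0 and at 1 above 1; linear in between.
    return 1.0 if x > 1 else (x if x >= 0 else 0.0)
```

The derivative identity f′(x) = λ f(x)[1 − f(x)] can be confirmed numerically by comparing it with a finite-difference estimate of `binary_sigmoid`.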
Weights:
 Each neuron is connected to other neurons by means of directed communication links, and each communication link is associated with a weight.
 Weights contain information about the input signal.
 The weight matrix is also called the connection matrix.
Bias:
 Bias has an impact on calculating the net input.
 Bias is included by adding a component x0 = 1 to the input vector x, with weight w0 = b.
 The net input is then calculated by yin = b + Σi xi wi.
 The bias is of two types:
 Positive bias: increases the net input.
 Negative bias: decreases the net input.
Threshold
 It is a set value based upon which the final output is calculated.
 The calculated net input and the threshold are compared to get the network output.
 The activation function based on the threshold is defined as y = f(yin) = 1 if yin ≥ θ, and −1 if yin < θ,
 where θ is the fixed threshold value.
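The bias and threshold computations above can be sketched together; the input, weight, bias and threshold values below are illustrative:

```python
# Net input with bias: yin = b + sum_i x_i * w_i.
def net_input(x, w, b):
    return b + sum(xi * wi for xi, wi in zip(x, w))

# Threshold activation: compare the net input with the fixed threshold theta.
def threshold_output(yin, theta=0.0):
    return 1 if yin >= theta else -1

yin = net_input([1, -1], [0.5, 0.3], b=0.2)  # 0.2 + 0.5 - 0.3 = 0.4
```

Here a positive bias (b = 0.2) increases the net input, pushing it above the threshold so the output is +1.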
Learning rate
 Denoted by α.
 Controls the amount of weight adjustment at each step of training.
 The learning rate ranges from 0 to 1.
 Determines the rate of learning at each step.
General Notation
 xi = activation of unit Xi, the I/P signal.
McCulloch-Pitts Neuron
 Introduced by McCulloch and Pitts in 1943.
 Usually called the M-P neuron.
 M-P neurons are connected by directed weighted paths.
 The activation of an M-P neuron is binary, i.e., at any time step the neuron may fire or may not fire.
 Weights associated with the communication links may be excitatory (weights are positive) or inhibitory (weights are negative).
 The threshold plays a major role here. There is a fixed threshold for each neuron, and if the net input to the neuron is greater than the threshold, then the neuron fires.
 They are widely used in logic functions.
 A simple M-P neuron is shown in the figure.
 The O/P will fire if the threshold satisfies the following: θ > nw − p, where n is the number of excitatory inputs with weight w and p is the inhibitory weight.
 No particular training algorithm is available.
 An analysis is performed to determine the weights and the threshold.
 It performs a simple logic function.
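As a sketch of such an analysis: the weights and threshold below are chosen by hand (there is no training algorithm, as noted above) so that the M-P neuron realises the AND function. The values w = 1, 1 and θ = 2 are a standard choice, assumed here for illustration:

```python
# A McCulloch-Pitts neuron: fires (outputs 1) when the net input
# reaches the fixed threshold theta, otherwise outputs 0.
def mp_neuron(inputs, weights, theta):
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net >= theta else 0

# AND gate: both excitatory weights are 1 and the threshold is 2,
# so the neuron fires only when both inputs are 1.
def and_gate(x1, x2):
    return mp_neuron([x1, x2], [1, 1], theta=2)
```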
Linear separability
 An ANN does not give an exact solution for a non-linear problem; it only provides possible approximate solutions.
 A decision line is drawn to separate the positive and negative responses.
 The decision line is also called the decision-making line or decision-support line or linear-separability line.
 The net input calculation to the output unit is given as yin = b + Σi xi wi.
 There exists a boundary b/w the regions where yin > 0 and yin < 0; this boundary is called the decision boundary and is determined by b + Σi xi wi = 0.
 If there exist weights for which the training I/P vectors having positive response (+1) lie on one side of the decision boundary and all other vectors having negative response (−1) lie on the other side, then the problem is called Linearly Separable.
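A sketch of this check for the bipolar AND function: the weights w1 = w2 = 1 and bias b = −1 are illustrative values that place the single +1 pattern on one side of the decision boundary b + x1·w1 + x2·w2 = 0 and the three −1 patterns on the other:

```python
# Net input to the output unit: yin = b + x1*w1 + x2*w2.
def yin(x1, x2, w1=1.0, w2=1.0, b=-1.0):
    return b + x1 * w1 + x2 * w2

# Bipolar AND training pairs: input pattern -> target response.
patterns = {(1, 1): 1, (1, -1): -1, (-1, 1): -1, (-1, -1): -1}

# Linearly separable iff every +1 pattern has yin > 0 and every -1 has yin < 0.
separated = all((yin(*p) > 0) == (t == 1) for p, t in patterns.items())
```

For these weights, (1, 1) gives yin = 1 > 0 while the other three patterns give negative net inputs, so AND is linearly separable.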
Hebb Network
 Donald Hebb stated in 1949 that "in the brain, the learning is performed by the change in the synaptic gap".
 "When an axon of cell A is near enough to excite cell B, and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."
 According to the Hebb rule, the weight vector is found to increase proportionately to the product of the input and the learning signal.
 In Hebb learning, two interconnected neurons are 'on' simultaneously.
 The weight update in the Hebb rule is given by
 wi(new) = wi(old) + xi·y.
 It is suited more for bipolar data.
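The update rule above can be sketched as a single pass of Hebb training over the bipolar AND training pairs. Treating the bias as a weight on a constant input of 1 (so b(new) = b(old) + y) is an assumption of this sketch:

```python
# One pass of Hebb learning: wi(new) = wi(old) + xi*y for each training pair.
def hebb_train(pairs):
    w1 = w2 = b = 0.0
    for (x1, x2), y in pairs:
        w1 += x1 * y   # weight update proportional to input * learning signal
        w2 += x2 * y
        b += y         # bias updated as a weight on constant input 1
    return w1, w2, b

# Bipolar AND: inputs and targets in {+1, -1}, as Hebb learning prefers.
and_pairs = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]
w1, w2, b = hebb_train(and_pairs)
```

After one pass the weights are (w1, w2, b) = (2, 2, −2), and the sign of b + x1·w1 + x2·w2 reproduces the AND targets for all four patterns.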