Overview
of
Adaptive Signal Processing
by
Hajira Fathima
Assistant Professor &
I/c Head of ECE Department
Maulana Azad National Urdu University
Hyderabad
Content and Figures are from Adaptive Filter Theory, 4e by Simon Haykin, ©2002 Prentice Hall Inc.
The Filtering Problem
• Filters may be used for three information-processing tasks
– Filtering
– Smoothing
– Prediction
• Given an optimality criterion, we can often design optimal filters
– Requires a priori information about the environment
– Example: under certain conditions the so-called Wiener filter is optimal in
the mean-square-error sense (see the equations after this list)
• Adaptive filters are self-designing using a recursive algorithm
– Useful if complete knowledge of environment is not available a priori
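For reference, the criterion and its optimum for a transversal filter, i.e., the standard Wiener solution (notation as in Haykin):

    J(w) = E[ |d(n) - w^H u(n)|^2 ]
    \nabla_w J = 0 \;\Rightarrow\; R w_o = p \;\Rightarrow\; w_o = R^{-1} p
    where R = E[ u(n) u^H(n) ] and p = E[ u(n) d^*(n) ]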
Applications of Adaptive Filters: Identification
• Used to provide a linear model of an unknown plant
• Parameters
– u=input of adaptive filter=input to plant
– y=output of adaptive filter
– d=desired response=output of plant
– e=d-y=estimation error
• Applications:
– System identification
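A minimal sketch of this identification configuration, using the LMS update covered later in this deck; the toy plant, filter length M, and step size mu are all illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    plant = np.array([0.8, -0.4, 0.2])     # "unknown" FIR plant (assumed for the demo)
    u = rng.standard_normal(5000)          # common input to plant and adaptive filter
    d = np.convolve(u, plant)[:len(u)]     # desired response = plant output

    M, mu = 3, 0.01                        # filter length and step size (illustrative)
    w = np.zeros(M)                        # adaptive filter tap weights
    for n in range(M, len(u)):
        x = u[n - M + 1 : n + 1][::-1]     # tap-input vector [u(n), ..., u(n-M+1)]
        e = d[n] - w @ x                   # estimation error e = d - y
        w += mu * x * e                    # LMS tap-weight update
    print(w)                               # converges toward the plant coefficients

After convergence the tap weights form a linear model of the plant, which is the identification goal.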
Applications of Adaptive Filters: Inverse Modeling
• Used to provide an inverse model of an unknown plant
• Parameters
– u=input of adaptive filter=output of plant
– y=output of adaptive filter
– d=desired response=delayed system input
– e=d-y=estimation error
• Applications:
– Equalization
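A sketch of the equalization configuration under similar assumptions (a simple two-tap channel, binary symbols, and an illustrative equalizer length and delay):

    import numpy as np

    rng = np.random.default_rng(0)
    s = rng.choice([-1.0, 1.0], size=5000)   # system input (transmitted symbols)
    channel = np.array([1.0, 0.5])           # "unknown" dispersive plant (assumed)
    u = np.convolve(s, channel)[:len(s)]     # adaptive filter input = plant output

    M, mu, delay = 11, 0.01, 5               # equalizer length, step size, delay
    w = np.zeros(M)
    for n in range(M, len(u)):
        x = u[n - M + 1 : n + 1][::-1]       # tap-input vector
        e = s[n - delay] - w @ x             # d = delayed system input
        w += mu * x * e                      # LMS update
    print(np.round(np.convolve(channel, w), 2))  # ~ a delayed unit impulse

The cascade of channel and converged equalizer approximating a delayed impulse is the sense in which the filter models the plant's inverse.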
Applications of Adaptive Filters: Prediction
• Used to provide a prediction of the present value of a random signal
from its past values
• Parameters
– u=input of adaptive filter=delayed version of random signal
– y=output of adaptive filter
– d=desired response=random signal
– e=d-y=estimation error=system output
• Applications:
– Linear predictive coding
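A sketch of the predictor configuration; the random signal is assumed to be a toy AR(2) process so the converged weights have a known target:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 5000
    x = np.zeros(N)                          # AR(2) random signal (assumed)
    for n in range(2, N):
        x[n] = 1.2 * x[n-1] - 0.6 * x[n-2] + rng.standard_normal()

    M, mu = 2, 0.02                          # predictor order and step size
    w = np.zeros(M)
    for n in range(M, N):
        past = x[n - M : n][::-1]            # delayed inputs [x(n-1), x(n-2)]
        e = x[n] - w @ past                  # d = the signal itself; e = residual
        w += mu * past * e                   # LMS update
    print(w)                                 # approaches the AR coefficients [1.2, -0.6]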
Applications of Adaptive Filters: Interference Cancellation
• Used to cancel unknown interference from a primary signal
• Parameters
– u=input of adaptive filter=reference signal
– y=output of adaptive filter
– d=desired response=primary signal
– e=d-y=estimation error=system output
• Applications:
– Echo cancellation
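A sketch of the noise-canceller configuration; the sinusoid, the interference path, and all parameters are assumed for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 5000
    sig = np.sin(2 * np.pi * 0.01 * np.arange(N))    # signal of interest
    noise = rng.standard_normal(N)                   # interference source
    d = sig + np.convolve(noise, [0.9, 0.3])[:N]     # primary = signal + filtered noise
    u = noise                                        # reference: correlated with the
                                                     # interference, not with the signal
    M, mu = 4, 0.005
    w = np.zeros(M)
    e = np.zeros(N)
    for n in range(M, N):
        x = u[n - M + 1 : n + 1][::-1]       # tap-input vector
        y = w @ x                            # estimate of the interference in d
        e[n] = d[n] - y                      # e = system output ~ cleaned signal
        w += mu * x * e[n]                   # LMS update
    # e now tracks sig far more closely than d does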
Stochastic Gradient Approach
• The most commonly used type of adaptive filter
• Define the cost function as the mean-squared error, i.e., the mean square
of the difference between the filter output and the desired response
• Based on the method of steepest descent
– Move along the error surface toward its minimum
– Requires the gradient of the error surface to be known
• Most popular adaptation algorithm is LMS
– Derived from steepest descent
– Doesn’t require the gradient to be known: it is estimated at every
iteration
• Least-Mean-Square (LMS) Algorithm:
\hat{w}(n+1) = \hat{w}(n) + \mu \, u(n) \, e^*(n)
(update value of tap-weight vector = old value of tap-weight vector
+ learning-rate parameter × tap-input vector × error signal)
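To make the contrast concrete, the sketch below runs steepest descent, which needs the statistics R and p (estimated here from all the data), next to LMS, which replaces them with single-sample instantaneous estimates; the toy linear model and parameters are assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    M = 3
    U = rng.standard_normal((10000, M))      # rows = tap-input vectors u(n)
    d = U @ np.array([0.5, -0.3, 0.1]) + 0.01 * rng.standard_normal(10000)

    mu = 0.05
    R = U.T @ U / len(d)                     # input correlation matrix
    p = U.T @ d / len(d)                     # cross-correlation with d
    w_sd = np.zeros(M)
    for _ in range(200):                     # steepest descent: exact gradient
        w_sd += mu * (p - R @ w_sd)          # MSE gradient is 2(R w - p)

    w_lms = np.zeros(M)
    for n in range(len(d)):                  # LMS: gradient estimated each iteration
        x = U[n]
        w_lms += mu * x * (d[n] - x @ w_lms)
    print(w_sd, w_lms)                       # both approach [0.5, -0.3, 0.1]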
Least-Mean-Square (LMS) Algorithm
• The LMS Algorithm consists of two basic processes
– Filtering process
• Calculate the output of FIR filter by convolving input and taps
• Calculate estimation error by comparing the output to desired signal
– Adaptation process
• Adjust tap weights based on the estimation error
LMS Algorithm Steps
• Filter output: y(n) = \sum_{k=0}^{M-1} w_k^*(n) \, u(n-k)
• Estimation error: e(n) = d(n) - y(n)
• Tap-weight adaptation: w_k(n+1) = w_k(n) + \mu \, u(n-k) \, e^*(n)
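The three steps transcribe directly into code; below is a minimal complex-valued version matching the conjugates above (signal contents and parameter values are left to the application):

    import numpy as np

    def lms(u, d, M, mu):
        """One pass of the LMS algorithm over input u and desired response d."""
        w = np.zeros(M, dtype=complex)       # tap-weight vector
        e = np.zeros(len(u), dtype=complex)
        for n in range(M - 1, len(u)):
            x = u[n - M + 1 : n + 1][::-1]   # [u(n), u(n-1), ..., u(n-M+1)]
            y = np.conj(w) @ x               # filter output: sum_k w_k*(n) u(n-k)
            e[n] = d[n] - y                  # estimation error
            w += mu * x * np.conj(e[n])      # w_k(n+1) = w_k(n) + mu u(n-k) e*(n)
        return w, e

Computing y and e is the filtering process of the previous slide; the weight update is the adaptation process.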




Stability of LMS
• The LMS algorithm is convergent in the mean square if and only if the
step-size parameter satisfies 0 < \mu < 2/\lambda_{max}
• Here \lambda_{max} is the largest eigenvalue of the correlation matrix of
the input data
• A more practical test for stability is 0 < \mu < 2/(input signal power)
• Larger values of the step size
– Increase the adaptation rate (faster adaptation)
– Increase the residual mean-squared error
• Demos
– http://www.eas.asu.edu/~dsp/grad/anand/java/ANC/ANC.html
– http://www.eas.asu.edu/~dsp/grad/anand/java/AdaptiveFilter/Zero.html
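A quick numerical check of both bounds on a synthetic, correlated input; the AR(1) input model and filter length M are assumptions, and "input signal power" is taken as the power summed over the M taps:

    import numpy as np

    rng = np.random.default_rng(0)
    u = np.zeros(10000)
    for n in range(1, len(u)):               # correlated AR(1) input (assumed)
        u[n] = 0.9 * u[n-1] + rng.standard_normal()

    M = 8
    r = np.array([np.mean(u[k:] * u[:len(u)-k]) for k in range(M)])
    R = r[abs(np.arange(M)[:, None] - np.arange(M)[None, :])]  # Toeplitz correlation matrix
    lam_max = np.linalg.eigvalsh(R).max()

    print("eigenvalue bound: 0 < mu <", 2 / lam_max)
    print("practical bound:  0 < mu <", 2 / (M * np.mean(u**2)))

The practical bound uses the trace of R (M times the input power), which always upper-bounds lambda_max, so it is the more conservative and the cheaper test.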


Questions?
