
2013 | Book

A Rapid Introduction to Adaptive Filtering


About this book

In this book, the authors provide insights into the basics of adaptive filtering, which are particularly useful for students taking their first steps into this field. They start by studying the problem of minimum mean-square-error filtering, i.e., Wiener filtering. Then, they analyze iterative methods for solving the optimization problem, e.g., the Method of Steepest Descent. By proposing stochastic approximations, several basic adaptive algorithms are derived, including Least Mean Squares (LMS), Normalized Least Mean Squares (NLMS) and Sign-error algorithms. The authors provide a general framework to study the stability and steady-state performance of these algorithms. The Affine Projection Algorithm (APA), which provides faster convergence at the expense of computational complexity (although fast implementations can be used), is also presented. In addition, the Least Squares (LS) method and its recursive version (RLS), including fast implementations, are discussed. The book closes with a discussion of several topics of interest in the adaptive filtering field.

Table of Contents

Frontmatter
Chapter 1. Introduction
Abstract
The area of adaptive filtering is a very important one in the vast field of Signal Processing. Adaptive filters are ubiquitous in current technology. System identification, equalization for communication systems, active noise cancellation, speech processing, sonar, seismology, beamforming, etc., are a few examples from a large set of applications where adaptive filters are used to solve different kinds of problems. In this chapter we provide a short introduction to the adaptive filtering problem and to the different aspects that should be taken into account when choosing or designing an adaptive filter for a particular application.
Leonardo Rey Vega, Hernan Rey
Chapter 2. Wiener Filtering
Abstract
Before moving to the actual adaptive filtering problem, we need to solve the optimum linear filtering problem (particularly, in the mean-square-error sense). We start by explaining the analogy between linear estimation and linear optimum filtering. We develop the principle of orthogonality, derive the Wiener–Hopf equation (whose solution leads to the optimum Wiener filter) and study the error surface. Finally, we apply the Wiener filter to the problem of linear prediction (forward and backward).
Leonardo Rey Vega, Hernan Rey
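
To make the Wiener solution concrete, the following is a minimal sketch (ours, not taken from the book) that estimates the autocorrelation matrix R and cross-correlation vector p from synthetic data and solves the Wiener–Hopf equation R w = p with NumPy. The filter length M, the unknown system w_true, and the noise level are illustrative assumptions for a system-identification setup.

import numpy as np

rng = np.random.default_rng(0)
M = 4                                  # filter length (illustrative)
w_true = rng.standard_normal(M)        # hypothetical unknown system

n = 10_000
x = rng.standard_normal(n)             # white input signal
d = np.convolve(x, w_true)[:n] + 0.01 * rng.standard_normal(n)  # noisy desired signal

# Regressor matrix: row k is [x[k], x[k-1], ..., x[k-M+1]]
X = np.column_stack([np.concatenate([np.zeros(i), x[: n - i]]) for i in range(M)])

R = X.T @ X / n                        # sample autocorrelation matrix
p = X.T @ d / n                        # sample cross-correlation vector
w_opt = np.linalg.solve(R, p)          # Wiener-Hopf solution R w = p
print(w_opt, w_true)                   # w_opt should be close to w_true
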
Chapter 3. Iterative Optimization
Abstract
In this chapter we introduce iterative search methods for minimizing cost functions, and in particular, the \(J_{\mathrm{MSE}}\) function. We focus on the methods of Steepest Descent and Newton–Raphson, which belong to the family of deterministic gradient algorithms. Although these methods still require knowledge of the second-order statistics, as does the Wiener filter, they find this solution iteratively. We also study the convergence of both algorithms and include simulation results to provide more insight into their performance. Understanding their functioning and convergence properties is very important, as they will be the basis for the development of stochastic gradient adaptive filters in the next chapter.
Leonardo Rey Vega, Hernan Rey
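
As a companion sketch (our own, with an assumed 2x2 autocorrelation matrix R and cross-correlation vector p): iterating w <- w + mu (p - R w) moves along the negative gradient of the MSE surface and converges to the Wiener solution whenever 0 < mu < 2/lambda_max(R).

import numpy as np

R = np.array([[1.0, 0.5], [0.5, 1.0]])   # assumed autocorrelation matrix
p = np.array([1.0, 0.2])                 # assumed cross-correlation vector
w_opt = np.linalg.solve(R, p)            # Wiener solution, for reference

mu = 0.5                                 # step size, below 2 / lambda_max(R) = 4/3
w = np.zeros(2)
for _ in range(100):
    w = w + mu * (p - R @ w)             # steepest-descent update
print(w, w_opt)                          # w approaches w_opt

Newton–Raphson replaces mu with R^{-1} in the update, reaching w_opt in a single step for this quadratic cost.
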
Chapter 4. Stochastic Gradient Adaptive Algorithms
Abstract
One way to construct adaptive algorithms leads to the so-called Stochastic Gradient algorithms, which will be the subject of this chapter. The most important algorithm in this family, the Least Mean Square (LMS) algorithm, is obtained from the Steepest Descent (SD) algorithm by employing suitable estimators of the correlation matrix and cross-correlation vector. Other important algorithms, such as the Normalized Least Mean Square (NLMS) or the Affine Projection (APA) algorithms, are obtained from straightforward generalizations of the LMS algorithm. One of the most useful properties of adaptive algorithms is the ability to track variations in the signals' statistics. As they are implemented using stochastic signals, the update directions in these adaptive algorithms become subject to random fluctuations called gradient noise. This leads to the question of the performance (in statistical terms) of these systems. In this chapter we will try to give a succinct introduction to this kind of adaptive filter and to its most relevant characteristics.
Leonardo Rey Vega, Hernan Rey
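
The stochastic-gradient step can be illustrated in a few lines. Below is a hedged LMS/NLMS sketch in a system-identification setup; the signals, step size mu, and regularization eps are our assumptions, not the book's notation. Replacing R and p of steepest descent with instantaneous estimates yields the update w <- w + mu * e[k] * x_k.

import numpy as np

rng = np.random.default_rng(1)
M, n = 4, 5000
w_true = rng.standard_normal(M)          # hypothetical unknown system
x = rng.standard_normal(n)               # white input signal

w = np.zeros(M)
mu, eps = 0.05, 1e-8                     # step size; eps regularizes NLMS

for k in range(M, n):
    x_k = x[k - M + 1 : k + 1][::-1]     # regressor [x[k], ..., x[k-M+1]]
    d_k = w_true @ x_k + 0.01 * rng.standard_normal()  # noisy desired sample
    e_k = d_k - w @ x_k                  # a priori error
    w = w + mu * e_k * x_k               # LMS update
    # NLMS variant: w = w + (mu / (eps + x_k @ x_k)) * e_k * x_k
print(np.linalg.norm(w - w_true))        # misalignment shrinks over time
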
Chapter 5. Least Squares
Abstract
In this chapter we will cover the basics of the celebrated method of Least Squares (LS). The approach to this method is different from the stochastic gradient approach of the previous chapter. As always, the idea will be to obtain an estimate of a given system using measured input-output pairs (and no statistical information), assuming a model in which the input and output pairs are linearly related. We will also present the Recursive Least Squares (RLS) algorithm, which is a recursive and more computationally efficient implementation of the LS method. One of its advantages is that it can be used in real time, as the input-output pairs are received. In this sense, it is very similar to the adaptive filters obtained in the previous chapter. Several important properties of LS and RLS will be discussed.
Leonardo Rey Vega, Hernan Rey
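
The recursion at the heart of RLS can be sketched as follows; the forgetting factor lambda_ and the initialization P = delta * I are conventional choices of ours, not necessarily the book's. Updating the inverse correlation matrix P recursively makes each sample cost O(M^2) instead of re-solving the full LS problem.

import numpy as np

rng = np.random.default_rng(2)
M, n = 4, 2000
w_true = rng.standard_normal(M)          # hypothetical unknown system
x = rng.standard_normal(n)               # white input signal

lambda_, delta = 0.999, 1e2
P = delta * np.eye(M)                    # initial inverse correlation matrix
w = np.zeros(M)

for k in range(M, n):
    x_k = x[k - M + 1 : k + 1][::-1]     # regressor [x[k], ..., x[k-M+1]]
    d_k = w_true @ x_k + 0.01 * rng.standard_normal()  # noisy desired sample
    g = P @ x_k / (lambda_ + x_k @ P @ x_k)            # gain vector
    e_k = d_k - w @ x_k                  # a priori error
    w = w + g * e_k                      # coefficient update
    P = (P - np.outer(g, x_k @ P)) / lambda_           # rank-1 update of P
print(np.linalg.norm(w - w_true))        # misalignment shrinks over time
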
Chapter 6. Advanced Topics and New Directions
Abstract
In this final chapter we provide a concise discussion of other topics not covered in the previous chapters. These topics are more advanced or are the object of active research in the area of adaptive filtering. A brief introduction to each topic and several relevant references for the interested reader are provided.
Leonardo Rey Vega, Hernan Rey
Backmatter
Metadata
Title
A Rapid Introduction to Adaptive Filtering
Authors
Leonardo Rey Vega
Hernan Rey
Copyright Year
2013
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-30299-2
Print ISBN
978-3-642-30298-5
DOI
https://doi.org/10.1007/978-3-642-30299-2
