
## About this book

Convolution is the most important operation describing the behavior of a linear time-invariant dynamical system. Deconvolution is the unraveling of convolution: it is the inverse problem of reconstructing the system's input from knowledge of the system's output and dynamics. Deconvolution requires a careful balancing of bandwidth and signal-to-noise ratio effects. Maximum-likelihood deconvolution (MLD) is a design procedure that handles both effects. It draws upon ideas from maximum likelihood when unknown parameters are random, and it leads to linear and nonlinear signal processors that provide high-resolution estimates of a system's input. All aspects of MLD are described from first principles in this book.

The purpose of this volume is to explain MLD as simply as possible. To do this, the entire theory of MLD is presented in terms of a convolutional signal-generating model and some relatively simple ideas from optimization theory. Earlier approaches to MLD, which are couched in the language of state-variable models and estimation theory, are unnecessary for understanding the essence of MLD. MLD is a model-based signal processing procedure, because it is based on a signal model, namely the convolutional model.

The book focuses on three aspects of MLD: (1) specification of a probability model for the system's measured output; (2) determination of an appropriate likelihood function; and (3) maximization of that likelihood function. Many practical algorithms are obtained. Computational aspects of MLD are described in great detail, and extensive simulations are provided, including real-data applications.

## Table of contents

### 1. Introduction

Abstract
Convolution is by far the most important operation that describes the behavior of a linear time-invariant (LTI) dynamical system. It is the operation of convolution that tells us how to compute the output of an LTI system from its input and impulse response (IR), i.e.,
$${\text{output = input*IR}}$$
(1-1)
where * denotes the mathematical operation of convolution. Convolution is associated with the “forward problem” of generating the response of an LTI system from known values of its input and IR.
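The forward problem in (1-1) can be illustrated numerically. The following minimal sketch, using NumPy's `convolve`, computes the output of an LTI system from a spike-train input and a short impulse response; the particular signal values are ours, chosen only for illustration:

```python
import numpy as np

# A sparse input (spike train) and a short impulse response (IR)
input_signal = np.array([0.0, 1.0, 0.0, 0.0, -0.5, 0.0])
ir = np.array([1.0, 0.6, 0.2])  # impulse response of the LTI system

# Forward problem, Eq. (1-1): output = input * IR
output = np.convolve(input_signal, ir)
print(output)  # [ 0.   1.   0.6  0.2 -0.5 -0.3 -0.1  0. ]
```

Each spike in the input simply reproduces a scaled, shifted copy of the IR in the output, which is exactly the superposition property that makes (1-1) hold for LTI systems.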
Jerry M. Mendel

### 2. Convolutional Model

Abstract
The basic convolutional model is (see Figure 2-1)
$${\text{measured}}\,{\text{output = output + noise = input*IR + noise}}$$
In this chapter we describe the three components of this model, i.e., input, IR, and noise, so that we can compute a formula for the likelihood function. Before doing this, we pause briefly to relate the reflection seismology experiment to the convolutional model.
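The basic convolutional model above can be sketched in a few lines of code. This is a minimal illustration, with signal values and noise level chosen by us for demonstration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# input and impulse response, as in the Chapter 1 sketch
input_signal = np.array([0.0, 1.0, 0.0, 0.0, -0.5, 0.0])
ir = np.array([1.0, 0.6, 0.2])

# noiseless output from the forward problem: output = input * IR
output = np.convolve(input_signal, ir)

# measured output = output + noise
noise = 0.05 * rng.standard_normal(output.shape)
measured_output = output + noise
```

The deconvolution problem of the book runs this model in reverse: given `measured_output` (and a model for `ir` and the noise), recover `input_signal`.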
Jerry M. Mendel

### 3. Likelihood

Abstract
Now that we have established what the unknown parameters are in the deconvolution problem (i.e., a, b, s, q, r, uB), we create a likelihood function. R. A. Fisher (1922 and 1925) developed the method of maximum likelihood for problems that are characterized just by deterministic parameters. Another method, associated with the name of Thomas Bayes, called the Maximum a Posteriori Method, i.e., MAP (e.g., Sorenson, 1980, and Mendel, 1987a), was developed for problems that are characterized just by random parameters. The known probability models for these random parameters are used in the MAP likelihood function. Our deconvolution problem is a mixture of the Fisher and Bayesian likelihood methods, because it contains both deterministic and random parameters (Mendel, 1983). Our approach is to account for both types of parameters in a correct way, i.e., our likelihood function will treat a, b, and s as deterministic, and r, q, and uB as random. Recall that we do indeed have probability models for r, q, and uB: r is multivariate Gaussian, q is multivariate Bernoulli, and uB is multivariate Gaussian. The resulting likelihood function is called an unconditional likelihood function (Nahi, 1969), because the random parameters have been properly accounted for. Because the phrase “unconditional likelihood function” is such a mouthful, we shorten it to “likelihood function.”
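As a rough illustration of the probability models just listed, the sketch below draws a Bernoulli-Gaussian input sequence: a Bernoulli event sequence q gated against Gaussian amplitudes r. The parameter names `lam` and `sigma_r` and their values are ours, not the book's:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 20          # sequence length (illustrative)
lam = 0.2       # Bernoulli event probability (illustrative value)
sigma_r = 1.0   # standard deviation of the Gaussian amplitudes

q = rng.binomial(1, lam, size=n)       # Bernoulli event sequence (0/1)
r = sigma_r * rng.standard_normal(n)   # Gaussian amplitude sequence
mu = q * r                             # sparse Bernoulli-Gaussian input
```

The product q * r is nonzero only where an event occurs, which is what gives the input its sparse, spiky character.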
Jerry M. Mendel

### 4. Maximizing Likelihood

Abstract
We have shown that the deconvolution problem can be viewed as an optimization problem. The maximum-likelihood (ML) values of a, b, s, q, r, and uB are the values where either the likelihood function L{a, b, s, q, r, uB|z} or the log-likelihood function L{a, b, s, q, r, uB|z} attains its maximum. Because of the exponential nature of our likelihood function, we shall focus on maximizing L{}. Maximum-likelihood values of the parameter vectors are denoted with a superscript ML, e.g., aML, uB ML. There are many different methods one can use to maximize L{}. We shall examine some of these in this chapter. First, however, we must be convinced that maximizing L{} is a meaningful thing to do.
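To make the idea of maximizing a likelihood function concrete, here is a toy example that is not from the book: a Gaussian log-likelihood maximized over a single unknown mean by a simple grid search. For this model the ML value coincides with the sample mean, which gives a built-in sanity check:

```python
import numpy as np

rng = np.random.default_rng(2)
z = rng.standard_normal(100) + 3.0   # measurements with unknown mean near 3

# Gaussian log-likelihood (up to an additive constant) as a function of the mean
def log_likelihood(mean, data):
    return -0.5 * np.sum((data - mean) ** 2)

# Maximize over a grid of candidate means
grid = np.linspace(0.0, 6.0, 601)
ll = np.array([log_likelihood(m, z) for m in grid])
mean_ml = grid[np.argmax(ll)]

print(mean_ml, z.mean())  # the two values agree to within the grid spacing
```

Grid search only works here because there is one scalar parameter; the MLD problem of the book has many deterministic and random parameters, which is why the dedicated maximization methods of this chapter are needed.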
Jerry M. Mendel

### 5. Properties and Performance

Abstract
In Chapter 4 we introduced three major types of signal processing: minimum-variance deconvolution (MVD), which is a form of linear signal processing; detection, which is a form of nonlinear signal processing; and optimization, which is also a form of nonlinear signal processing. In this chapter we describe everything that is known to date about the properties and performance of these three types of signal processing. We do this in order to gain a better appreciation and understanding of these techniques.
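MVD itself is developed in the book; as a loose, self-contained analogue of linear deconvolution, the sketch below applies frequency-domain Wiener deconvolution (a different but related linear technique, not the book's MVD algorithm) to a noisy convolutional measurement. All signal values and the noise-to-signal ratio `nsr` are illustrative choices of ours:

```python
import numpy as np

rng = np.random.default_rng(3)

# forward model: measured = input * IR + noise (circular convolution via FFT)
u = np.zeros(64)
u[10], u[40] = 1.0, -0.7          # two spikes in the input
h = np.zeros(64)
h[:3] = [1.0, 0.5, 0.25]          # impulse response, zero-padded
z = np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(h)))
z += 0.01 * rng.standard_normal(64)

# Wiener deconvolution in the frequency domain
H = np.fft.fft(h)
nsr = 1e-3                        # assumed noise-to-signal power ratio
G = np.conj(H) / (np.abs(H) ** 2 + nsr)
u_hat = np.real(np.fft.ifft(G * np.fft.fft(z)))
```

The `nsr` term regularizes the inverse filter, which is one concrete instance of the bandwidth versus signal-to-noise trade-off that deconvolution must balance.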
Jerry M. Mendel

### 6. Examples

Abstract
This chapter is filled with examples that hopefully illuminate much of the material that has been described in Chapters 1 through 5, especially the material in Chapters 4 and 5. Real and synthetic data cases are presented, because much can be learned about MLD from both types of data. Rather than collect all of the real data examples in one section, at the end of the chapter, as is customarily done in journal articles, we shall weave them in with the synthetic data examples. In fact, we begin with some real data examples that illustrate the high resolution processing capabilities of MLD.
Jerry M. Mendel

### 7. Mathematical Details for Chapter 4

Abstract
This chapter is for the mathematically serious reader. In it we quantify many of the previous qualitative statements that were made in Chapter 4. For the convenience of the reader, we restate many of the Chapter 4 results, prior to their derivations.
Jerry M. Mendel

### 8. Mathematical Details for Chapter 5

Abstract
This chapter is similar in spirit to Chapter 7. It provides the mathematical details for many of the statements that were made, without proof, in Chapter 5. For the convenience of the reader, we again state many of the Chapter 5 results, prior to their derivations.
Jerry M. Mendel

### 9. Computational Considerations

Abstract
The entire theory of Maximum-Likelihood Deconvolution has been developed in the context of the familiar convolutional model. While this has expedited the development of all its results, it has unfortunately led us to algorithms that are very impractical to use on today’s digital computers. This was already mentioned, in part, in Chapter 7.
Jerry M. Mendel
