
About this Book

Our understanding of nature is often through nonuniform observations in space or time. In space, one normally observes the important features of an object, such as edges; the less important features are interpolated. History is a collection of important events that are nonuniformly spaced in time. Historians infer between events (interpolation), and politicians and stock market analysts forecast the future from past and present events (extrapolation). The 20 chapters of Nonuniform Sampling: Theory and Practice contain contributions by leading researchers in nonuniform and Shannon sampling, zero crossing, and interpolation theory. Its practical applications include NMR, seismology, speech and image coding, modulation and coding, optimal control, array processing, and digital filter design. It has a tutorial outlook for practising engineers and advanced students in science, engineering, and mathematics. It is also a useful reference for scientists and engineers working in the areas of medical imaging, geophysics, astronomy, biomedical engineering, computer graphics, digital filter design, speech and video processing, and phased array radar.



1. Introduction

After nearly 14 years I could not have come up with a better introduction than the one I wrote in my previous book on zero-crossings and nonuniform sampling [1]:
Man’s understanding of nature is often through nonuniform observations in space or time. In space, one normally observes the important features of an object (such as edges); the less important features are interpolated…History is a collection of important events that are nonuniformly spaced in time. Historians infer between events (interpolation) and politicians, similar to stock market analysts, forecast the future from past and present events (extrapolation). This fact can be generalized to other realms of human endeavor. Indeed most readers of this book may read the introduction, the conclusion, and possibly a few sections of a few chapters of this book and then try to figure out the rest by interpolation!
Coming down to earth, I wanted to write another book that is more practical, with more relevant theories, as opposed to [1], which was more comprehensive with a cursory treatment of each subject. I realized that the theory and applications in the field of nonuniform sampling are so vast that it is impossible to master diversified topics such as seismology, magnetic resonance imaging, array signal processing, filter design, modulation and coding, sampled data control, and multimedia applications [31]; see the accompanying CD-ROM file: Chapter1\Sampta95.txt. Hence I had to ask for help from the experts.
F. Marvasti

2. An Introduction to Sampling Analysis

A basic property of algebraic polynomials \({{P}_{n}}\left( t \right): = \sum\nolimits_{{k = 0}}^{n} {{{a}_{k}}{{t}^{k}}}\) of degree ≤ n is that any such polynomial is fully and uniquely determined by its values \(P_n(t_\nu)\) at an arbitrary given set of n + 1 distinct points \(t_0, t_1, \ldots, t_\nu, \ldots, t_n\). The converse question, whether it is always possible to find a suitable polynomial \(P_n(t)\) which takes on the function (polynomial) values \(y_0, y_1, \ldots, y_n\) associated with any n + 1 distinct abscissas (interpolation or nodal points) \(t_0, t_1, \ldots, t_n\), is answered by the Lagrangian interpolation formula:
$$ P_n \left( t \right): = \sum\limits_{k = 0}^n {y_k l_k \left( t \right) = \sum\limits_{k = 0}^n {P_n \left( {t_k } \right)l_k \left( t \right)} } $$
where, for k = 0, 1,…, n,
$$\begin{array}{*{20}{c}} {{{l}_{k}}\left( t \right): = {{l}_{{k,n}}}\left( t \right) = \frac{{{{\omega }_{{n + 1}}}\left( t \right)}}{{\left( {t - {{t}_{k}}} \right){{{\omega '}}_{{n + 1}}}\left( {{{t}_{k}}} \right)}},} & {{{\omega }_{{n + 1}}}\left( t \right): = \left( {t - {{t}_{0}}} \right) \ldots \left( {t - {{t}_{n}}} \right).} \\ \end{array}$$
Indeed, \(l_k(t)\) is the only algebraic polynomial of degree n which possesses the property
$${l_k}\left( {t_\nu } \right) = {\delta _{k,\nu }} = \left\{ {\begin{array}{*{20}{c}} 0 & {\text{for }\nu \ne k} \\ 1 & {\text{for }\nu = k.} \\ \end{array} } \right.$$
Hence the polynomial \(P_n(t)\) of degree ≤ n, thus determined, is unique.
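The formulas above translate directly into code. The following sketch (plain Python, with illustrative function names not taken from the chapter) evaluates the cardinal polynomials \(l_k\) and the interpolant \(P_n\), and illustrates the cardinal property \(l_k(t_\nu) = \delta_{k,\nu}\):

```python
# Lagrange interpolation: P_n(t) = sum_k y_k * l_k(t), where
# l_k(t) = prod_{v != k} (t - t_v) / (t_k - t_v).

def lagrange_basis(t, nodes, k):
    """Evaluate the k-th cardinal polynomial l_k at the point t."""
    result = 1.0
    for v, t_v in enumerate(nodes):
        if v != k:
            result *= (t - t_v) / (nodes[k] - t_v)
    return result

def lagrange_interpolate(t, nodes, values):
    """Evaluate the unique degree-<=n polynomial through (t_k, y_k) at t."""
    return sum(y_k * lagrange_basis(t, nodes, k)
               for k, y_k in enumerate(values))

# Four nodes determine any polynomial of degree <= 3 exactly; here we
# sample t^2 - t and evaluate the interpolant between the nodes.
nodes = [0.0, 1.0, 2.0, 3.0]
values = [t**2 - t for t in nodes]
print(lagrange_interpolate(1.5, nodes, values))   # ~0.75 = 1.5**2 - 1.5
```

By the cardinal property, `lagrange_basis(nodes[v], nodes, k)` is 1 when v = k and 0 otherwise, which is exactly why the interpolant reproduces every sample.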
P. L. Butzer, G. Schmeisser, R. L. Stens

3. Lagrange Interpolation and Sampling Theorems

The aim of this chapter is to discuss the relationship between Lagrange interpolation and sampling theorems. The first three sections can be regarded as an alternative introduction to sampling theory, avoiding the Fourier analysis of Chapter 2, at least to begin with. Because Lagrange interpolation is the central theme of the chapter, it is natural to start off with an introduction to the Lagrange interpolation method and then proceed to a more general form of it, which we call Lagrange-type interpolation. Having done that, we can then investigate the relationship between Lagrange-type interpolation and sampling theorems, in particular, the Whittaker-Shannon-Kotel’nikov (WSK) sampling theorem.
A. I. Zayed, P. L. Butzer

4. Random Topics in Nonuniform Sampling

In this chapter, topics that are important but not fully covered in other chapters are discussed. We shall consider theorems related to the uniqueness of a nonuniform sampling set for 1-D and 2-D signals. The emphasis will be on 2-D signals, since 1-D signals were extensively treated by the author in [1]–[2]. Section 4.2 will be on the Lagrange interpolation of 2-D signals; Lagrange interpolation for 1-D signals was discussed in Chapter 3. Section 4.3 will cover nonuniform sampling of non-band-limited signals and jittered sampling, where exact recovery for a certain class of non-band-limited signals is possible. Section 4.4 is on the stability of the nonuniform sampling set. Section 4.5 discusses the Parseval relation and spectral analysis of both 1-D and 2-D signals. This spectral analysis leads to some practical methods of recovery, such as the time-varying and the iterative methods (Section 4.6); variations of these methods will be used in Chapters 5–6 and 16–18. Section 4.7 discusses the relation among the Papoulis generalized sampling theorem, wavelets, sub-band coding, and Quadrature Mirror Filters (QMF). Under certain conditions, it is possible to sample a low-pass or a band-pass signal at lower than the Nyquist rate and still recover the signal; these topics are discussed in Sections 4.8–4.9. A new and interesting topic, sampling at the maxima and minima (extrema) points, is covered in Section 4.10. This section is based on the extremum sampling of finite trigonometric polynomials. Discrete and modified Lagrange interpolation, along with some other novel techniques, are discussed in this section. Extremum sampling combined with zero-crossings is another topic covered there. The last section (Section 4.11) is a discussion of the recovery of deterministic or random signals from random samples, a topic that will be discussed more extensively in Chapter 8.
F. Marvasti

5. Iterative and Noniterative Recovery of Missing Samples for 1-D Band-Limited Signals

This chapter discusses a class of iterative and noniterative methods for the reconstruction of partially known band-limited signals. Band-limiting is equivalent to the vanishing of a known subset of the samples of the Discrete Fourier Transform (DFT) of the signals. The signals are partially known in the sense that only a subset of their samples is available for measurement. The problem is to determine the signal from the available samples. Although this is in essence a nonuniform sampling problem, the methods employed differ considerably from the “infinite series” approaches. This is due to the nature of the finite-dimensional spaces in which we work, for which matrix formulations are the most natural tool.
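One concrete instance of such an iterative method can be sketched as follows (a minimal Papoulis–Gerchberg-style alternating projection in the finite-dimensional DFT setting, with illustrative parameters; not necessarily the chapter's exact algorithm): alternately project onto the band-limited set by zeroing the out-of-band DFT bins, then re-impose the known samples.

```python
import numpy as np

# Alternating-projection recovery of missing samples of a band-limited
# discrete signal: enforce band-limitation in the DFT domain, then
# enforce the known samples in the time domain, and iterate.

rng = np.random.default_rng(0)
N = 64
passband = np.zeros(N, dtype=bool)
passband[:5] = passband[-4:] = True       # 9 of 64 DFT bins may be nonzero

# A band-limited test signal: filter white noise with the ideal low-pass.
x = np.fft.ifft(np.fft.fft(rng.standard_normal(N)) * passband).real

known = np.ones(N, dtype=bool)
known[rng.choice(N, 8, replace=False)] = False   # 8 samples are missing

y = np.where(known, x, 0.0)               # start with zeros at the gaps
for _ in range(1000):
    y = np.fft.ifft(np.fft.fft(y) * passband).real   # project onto band-limited set
    y[known] = x[known]                              # restore known samples

print(np.max(np.abs(y - x)))              # reconstruction error
```

Since the number of known samples (56) far exceeds the dimension of the band-limited subspace (9), the iteration converges to the unique band-limited signal consistent with the data.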
P. J. S. G. Ferreira

6. Numerical and Theoretical Aspects of Nonuniform Sampling of Band-Limited Images

This chapter presents recent developments and progress in nonuniform sampling of band-limited functions. Since the theory and practice of nonuniform sampling of 1-D signals are well understood and already treated in many articles and surveys, the emphasis will be on the nonuniform sampling of images. Numerical, theoretical, and applied aspects of the sampling problem will all be considered. For Matlab® simulations, see the CD-ROM: ‘Chapter6\Matlab’.
K. Gröchenig, T. Strohmer

7. The Nonuniform Discrete Fourier Transform

In many applications, when the representation of a discrete-time signal or a system in the frequency domain is of interest, the Discrete-Time Fourier Transform (DTFT) and the z-transform are often used. In the case of a discrete-time signal of finite length, the most widely used frequency-domain representation is the Discrete Fourier Transform (DFT), which is simply composed of samples of the DTFT of the sequence at equally spaced frequency points, or equivalently, samples of its z-transform at equally spaced points on the unit circle. A generalization of the DFT, introduced in this chapter, is the Nonuniform Discrete Fourier Transform (NDFT), which can be used to obtain frequency domain information of a finite-length signal at arbitrarily chosen frequency points. We provide an introduction to the NDFT and discuss its applications in the design of 1-D and 2-D FIR digital filters. We begin by introducing the problem of computing frequency samples of the z-transform of a finite-length sequence. We develop the basics of the NDFT, including its definition, properties and computational aspects. The NDFT is also extended to two dimensions. We propose NDFT-based nonuniform frequency sampling techniques for designing 1-D and 2-D FIR digital filters, and present design examples. The resulting filters are compared with those designed by other existing methods.
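The definition described above, X(z_k) = Σ_n x[n] z_k^(−n) at arbitrarily chosen points z_k, can be sketched directly (a minimal NumPy illustration, not the chapter's optimized computation; the function name `ndft` is ours):

```python
import numpy as np

# The NDFT evaluates the z-transform X(z) = sum_n x[n] * z**(-n) of a
# finite-length sequence at arbitrary points z_k, instead of the N
# equally spaced points exp(2j*pi*k/N) used by the ordinary DFT.

def ndft(x, z):
    """Nonuniform DFT: X[k] = sum_n x[n] * z[k]**(-n)."""
    n = np.arange(len(x))
    return np.array([np.sum(x * zk ** (-n)) for zk in z])

x = np.array([1.0, 2.0, 3.0, 4.0])
N = len(x)

# Choosing the z_k uniformly on the unit circle recovers the ordinary DFT.
z_uniform = np.exp(2j * np.pi * np.arange(N) / N)
print(np.allclose(ndft(x, z_uniform), np.fft.fft(x)))   # True

# Arbitrary (nonuniform) frequency points on the unit circle give
# samples of the DTFT at freely chosen frequencies (in radians):
omega = np.array([0.1, 0.15, 0.8, 2.5])
X = ndft(x, np.exp(1j * omega))
```

Placing the frequency points densely where a filter's response must be controlled tightly is exactly the freedom exploited in the NDFT-based filter designs of this chapter.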
S. Bagchi, S. K. Mitra

8. Reconstruction of Stationary Processes Sampled at Random Times

The reconstruction of a deterministic function from a finite or infinite number of its values is an old problem, initially studied by I. Newton, E. Waring, and J. L. Lagrange. This chapter addresses the problem of reconstructing a stationary process (rather than a deterministic function) from observations at known or unknown instants. In the first case, the reconstruction depends explicitly on the known instants. In the second case, the unknown instants are modelled by random variables whose joint distributions make it possible to define interpolation formulas.
B. Lacaze

9. Zero-Crossings of Random Processes with Application to Estimation and Detection

The origin of Rice’s formula for the average level-crossing rate of a general class of random processes can be traced back to his 1936 notes on “Singing Transmission Lines” [35]. For a stationary, ergodic random process {X(t)}, −∞ < t < ∞, with sufficiently smooth sample paths, the average number of crossings of the level u, per unit time, is given by Rice’s formula [36]
$$ E\left[ {D_u } \right] = \int_{ - \infty }^\infty {\left| {x'} \right|p\left( {u,x'} \right)dx'} $$
where p(x, x′) is the joint probability density of X(t) and its mean-square derivative X′(t), and \(D_u\) is the number of u-level crossings of {X(t)} for t in the unit interval [0, 1]. Rice’s formula is quite general in nature and, as we shall see, takes a simplified form when the process is Gaussian.
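For a zero-mean stationary Gaussian process the formula reduces to E[D_u] = (1/π)·√(λ₂/λ₀)·exp(−u²/(2λ₀)), with λ₀ = r(0) and λ₂ = −r″(0). As an illustrative numerical check (our construction, not from the chapter), a flat-spectrum Gaussian process can be approximated by a sum of equal-amplitude cosines with random phases; for a spectrum flat on (0, B) the expected zero-crossing rate is 2B/√3.

```python
import numpy as np

# Monte Carlo check of Rice's zero-crossing rate for an approximately
# Gaussian process with a flat spectrum on (0, B): expected rate 2*B/sqrt(3).

rng = np.random.default_rng(1)
B = 10.0                                  # bandwidth in Hz
K = 500                                   # number of tones (CLT -> approx. Gaussian)
T = 50.0                                  # observation length in seconds

f = (np.arange(K) + 0.5) * B / K          # frequencies spread evenly over (0, B)
phi = rng.uniform(0.0, 2 * np.pi, K)      # independent random phases
t = np.arange(0.0, T, 1.0 / (20 * B))     # time grid fine w.r.t. the bandwidth

# Unit-variance process: var = (2/K) * K * (1/2) = 1.
x = np.sqrt(2.0 / K) * np.cos(2 * np.pi * np.outer(t, f) + phi).sum(axis=1)

crossings = np.count_nonzero(np.diff(np.sign(x)))   # sign changes on the grid
rate = crossings / T
print(rate, 2 * B / np.sqrt(3))           # empirical rate vs. Rice's prediction
```

The empirical rate agrees with the prediction to within a few percent; the residual error comes from the finite tone count, the finite observation window, and crossings missed by the sampling grid.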
J. T. Barnett

10. Magnetic Resonance Image Reconstruction from Nonuniformly Sampled k-space Data

Medical doctors exert perennial pressure on scanner manufacturers to reduce the measurement time of Magnetic Resonance Imaging (MRI). Applications requiring measurement time reduction are, among others, real-time imaging of the heart, bolus tracking, contrast-agent uptake, and functional imaging. To achieve this goal, one can either devise faster measurement techniques or skip substantial numbers of sample points, or both. Often, the adopted tactics amount to nonuniform undersampling. Unfortunately, this complicates reconstruction of the MR image.
F. T. A. W. Wajer, G. H. L. A. Stijnman, M. Bourgeois, D. Graveron-Demilly, D. van Ormondt

11. Irregular and Sparse Sampling in Exploration Seismology

The aim of exploration seismology is to obtain an image of the subsurface by probing it with seismic waves. These waves are generated at the surface, using a suitably powerful source, at various locations. The waves propagate downwards through the subsurface and are reflected at interfaces between geological layers. They subsequently propagate upwards to the surface, where they are detected and recorded. The seismic response depends, of course, on the source and receiver locations. In practice, the response for one source location is detected by hundreds of receivers, typically arranged along several receiver lines. In common use today is three-dimensional (3-D) surface seismology, where a volume image is obtained, rather than a cross-section image as in 2-D seismology. The sources and receivers can, for the sake of simplicity, be assumed to be located in a horizontal plane at the surface. The recorded seismic wavefield is then a function of five variables, \(p(x_s, y_s, x_r, y_r, t)\), with two horizontal coordinates, \(x_s\) and \(y_s\), for the source position, two horizontal coordinates, \(x_r\) and \(y_r\), for the receiver position, and the time variable, denoted by t. In 2-D seismology the signal \(p(x_s, x_r, t)\) is a function of two spatial coordinates and the time coordinate. For readers not familiar with exploration seismology, we will give a brief overview of some notions required to understand this chapter in the next few subsections.
A. J. W. Duijndam, M. A. Schonewille, C. O. H. Hindriks

12. Randomized Digital Optimal Control

In sampled data systems the sampling periods are almost always assumed to be deterministic, i.e., known in advance. However, in practice, the sampling periods should often be conceived of as stochastic variables. This so-called stochastic sampling phenomenon occurs in many areas of sampled data systems, such as in biological control systems or when a human being selects the sampling instants. Data may be absent at sampling instants when the sampling is governed by a stochastic process, like a radar or sonar echo, or when the sampling mechanism fails. Technical imperfections in the instrumentation may also cause stochastic sampling.
W. L. De Koning, L. G. van Willigenburg

13. Prediction of Band-Limited Signals from Past Samples and Applications to Speech Coding

This chapter is concerned with the prediction of the next value of a band-limited signal from its past, uniformly sampled or periodic nonuniformly sampled, discrete values. The idea of a next value assumes a sampling pattern that is not completely arbitrary but periodic. In addition to past samples that are uniformly spaced, nonuniform but periodic sampling will be considered here, and each of these patterns will be related to the Nyquist criterion.
D. H. Mugler, Y. Wu

14. Frames, Irregular Sampling, and a Wavelet Auditory Model

A Wavelet Auditory Model (WAM) is constructed in terms of wavelet frames and an irregular sampling algorithm for Fourier frames. Its theoretical effectiveness is demonstrated in the context of speech coding, and its original formulation is found in [8–9]. The presentation of WAM in this chapter emphasizes its underlying mathematical ideas, and, in particular, develops the notions from the theory of frames and irregular sampling that arise naturally in constructing WAM.
J. J. Benedetto, S. Scott

15. Application of the Nonuniform Sampling to Motion Compensated Prediction for Video Compression

Motion Compensation (MC) is an essential component in most of today’s video compression algorithms. It is a predictive process where a current frame is predicted from one or more previously coded frames in a sequence (also known as interframe prediction). In general, it helps to reduce the overall amount of data required to code a video sequence compared to using intraframe coding alone (where frames are coded with no reference to each other) [1].
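The predictive step described above can be sketched as classical full-search block matching (a generic MC baseline with illustrative parameters, not the chapter's nonuniform-sampling variant): each block of the current frame is matched against candidate blocks in the reference frame within a search window, minimizing the sum of absolute differences (SAD).

```python
import numpy as np

# Full-search block-matching motion estimation: for each block of the
# current frame, find the displacement into the reference frame that
# minimizes the sum of absolute differences (SAD).

def motion_estimate(ref, cur, block=8, search=4):
    """Return per-block motion vectors (dy, dx) by exhaustive search."""
    H, W = cur.shape
    vectors = {}
    for by in range(0, H, block):
        for bx in range(0, W, block):
            target = cur[by:by + block, bx:bx + block]
            best, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y and y + block <= H and 0 <= x and x + block <= W:
                        sad = np.abs(ref[y:y + block, x:x + block] - target).sum()
                        if sad < best:
                            best, best_mv = sad, (dy, dx)
            vectors[(by, bx)] = best_mv
    return vectors

# A reference frame and a copy shifted down by 2 and right by 3 pixels:
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
cur = np.roll(np.roll(ref, 2, axis=0), 3, axis=1)

mv = motion_estimate(ref, cur, block=8, search=4)
print(mv[(8, 8)])   # interior blocks recover the shift: (-2, -3)
```

Transmitting only the motion vectors plus the (small) prediction residual, instead of the full frame, is what yields the interframe compression gain.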
A. Sharaf, F. Marvasti, M. Hasan

16. Applications of Nonuniform Sampling to Nonlinear Modulation, A/D and D/A Techniques

One of the important applications of nonuniform sampling is in the area of telecommunications, specifically in the field of modulation and coding. The modulation aspect is treated in this chapter, and the coding part will be discussed in Chapter 17.
F. Marvasti, M. Sandler

17. Applications to Error Correction Codes

Sampling a band-limited signal above the Nyquist rate can serve as an alternative to error correction codes [1]–[2], [33]–[34]. In fact, we can show the equivalence of the sampling theorem and the fundamental theorem of information theory [1]. Essentially, over-sampling is equivalent to a convolutional code using the field of real numbers as opposed to finite Galois fields. The encoder is a simple interpolator (a low-pass filter), as shown in Fig. 1.
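The redundancy argument can be made concrete in the finite-dimensional DFT setting (our illustrative sketch, with assumed parameters; not the chapter's encoder): a length-N sequence whose DFT occupies only 2m + 1 < N bins is an over-sampled "codeword", so up to N − (2m + 1) erased samples can be recovered by solving for the DFT coefficients from the surviving samples.

```python
import numpy as np

# Over-sampling as a real-number erasure code: a signal band-limited to
# 2m+1 of N DFT bins is redundant; erased samples are recovered by
# least-squares inversion of the partial inverse-DFT matrix.

N, m = 32, 5                        # 32 samples, 11 occupied DFT bins
bins = np.r_[0:m + 1, N - m:N]      # low-pass support (conjugate-symmetric)

rng = np.random.default_rng(2)
S = np.zeros(N, complex)
S[1:m + 1] = rng.standard_normal(m) + 1j * rng.standard_normal(m)
S[N - m:] = np.conj(S[1:m + 1][::-1])    # Hermitian symmetry -> real signal
S[0] = rng.standard_normal()
x = np.fft.ifft(S).real                  # the band-limited "codeword"

erased = np.array([3, 7, 8, 20, 21, 30])          # 6 erasures (< 32-11 = 21)
kept = np.setdiff1d(np.arange(N), erased)

# x[n] = (1/N) * sum_{k in bins} S[k] * exp(2j*pi*k*n/N): solve for the
# occupied S[k] from the surviving samples, then re-synthesize.
F = np.exp(2j * np.pi * np.outer(np.arange(N), bins) / N) / N
S_hat, *_ = np.linalg.lstsq(F[kept], x[kept], rcond=None)
x_hat = (F @ S_hat).real

print(np.max(np.abs(x_hat - x)))    # erased samples recovered
```

Here the "decoder" is a linear solve; the chapter's point is that the interpolating low-pass filter plays the role the generator polynomial plays for a Galois-field code.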
F. Marvasti

18. Application of Nonuniform Sampling to Error Concealment

Despite the huge advances in the communication networks industry, the new breed of multimedia applications and services is pushing the existing networks to their bandwidth limits. In ATM networks, the bandwidth is dynamically allocated according to the requirements of the transmitted services. ATM cells are usually stored in the switch buffers before being routed to the destination node. In practice, ATM buffers have limited sizes and they may overflow in the case of congested traffic. When ATM buffers overflow, low priority cells are dropped to ensure transmission continuity of the service using the rest of the cells. Also, ATM cells may be mis-routed due to non-correctable bit errors in the routing information of the cells. When ATM cells carry in their payload a compressed image, cell losses due to buffer overflow or cell mis-routing can cause major degradation in the quality of the transported image. This degradation is usually in the form of corrupted blocks or macroblocks. Losing one cell may result in the loss of several blocks of the compressed image.
M. Hasan, F. Marvasti

19. Sparse Sampling in Array Processing

Sparsely sampled irregular arrays and random arrays have been used or proposed in several fields such as radar, sonar, ultrasound imaging, and seismics. We start with an introduction to array processing and then consider the combinatorial problem of finding the best layout of elements in sparse 1-D and 2-D arrays. The optimization criteria are then reviewed: creation of beampatterns with low mainlobe width and low sidelobes, or a coarray that is as uniform as possible. The latter case is shown here to be nearly equivalent to finding a beampattern with minimal peak sidelobes.
We have applied several optimization methods to the layout problem, including linear programming, genetic algorithms and simulated annealing. The examples given here are both for 1-D and 2-D arrays. The largest problem considered is the selection of K = 500 elements in an aperture of 50 by 50 elements. Based on these examples we propose that an estimate of the achievable peak level in an algorithmically optimized array is inversely proportional to K and is close to the estimate of the average level in a random array.
Active array systems use both a transmitter and receiver aperture and they need not necessarily be the same. This gives additional freedom in design of the thinning patterns, and favorable solutions can be found by using periodic patterns with different periodicity for the two apertures, or a periodic pattern in combination with an algorithmically optimized pattern with the condition that there be no overlap between transmitter and receiver elements. With the methods given here one has the freedom to choose a design method for a sparse array system using either the same elements for the receiver and the transmitter, no overlap between the receiver and transmitter or partial overlap as in periodic arrays.
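The quantities involved can be sketched numerically (an illustrative example with hypothetical layout parameters, not one of the chapter's optimized designs): for K unit-weight elements at slot positions x_k, in half-wavelength units, the array factor is A(u) = Σ_k exp(jπ x_k u) with u = sin θ, and a random K-element thinning has an average sidelobe power level near 1/K of the mainlobe peak.

```python
import numpy as np

# Beampattern and peak sidelobe level of a randomly thinned linear array.

rng = np.random.default_rng(3)
aperture, K = 100, 25                       # 100 possible slots, 25 elements kept
pos = np.sort(rng.choice(aperture, K, replace=False))

u = np.linspace(-1.0, 1.0, 4001)            # u = sin(theta), visible region
A = np.exp(1j * np.pi * np.outer(u, pos)).sum(axis=1)   # array factor
power = np.abs(A) ** 2 / K ** 2             # normalized: mainlobe peak = 1

mainlobe = np.abs(u) < 2.0 / aperture       # crude mainlobe exclusion region
peak_sidelobe_db = 10 * np.log10(power[~mainlobe].max())
print(f"peak sidelobe: {peak_sidelobe_db:.1f} dB")
```

An optimization method (genetic algorithm, simulated annealing, etc.) would search over the choice of `pos` to push this peak sidelobe down toward the average-level estimate.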
S. Holm, A. Austeng, K. Iranpour, J.-F. Hopperstad

20. Fractional Delay Filters—Design and Applications

In numerous applications, such as communications, audio and music technology, speech coding and synthesis, antenna and transducer arrays, and time delay estimation, not only the sampling frequency but also the actual sampling instants are of crucial importance. Digital fractional delay (FD) filters provide a useful building block for fine-tuning the sampling instants, i.e., implementing the required band-limited interpolation. FD filters can be designed and implemented flexibly using various established techniques that best suit the particular application. In this review article, the generic problem of designing digital filters to approximate a fractional delay is addressed. We compare FIR and all-pass filter approaches to FD approximation. Time- and frequency-domain characteristics of various designs are shown to illustrate the nature of the different approaches. Special attention is paid to time-varying FD filters and the elimination of induced transients. Nonuniform signal reconstruction using polynomial filtering techniques is also discussed. Several applications, ranging from synchronization in digital communications to music synthesis, are described in detail. An extensive list of references is provided.
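One standard FIR design among those surveyed is the Lagrange-interpolation fractional delay filter (a minimal sketch with illustrative parameters): the order-N FIR filter that delays a band-limited signal by D samples, where D need not be an integer, has coefficients h[n] = Π_{k≠n} (D − k)/(n − k).

```python
import numpy as np

# Lagrange-interpolation FIR fractional delay filter:
#   h[n] = prod_{k != n} (D - k) / (n - k),  n = 0..N.

def lagrange_fd_fir(D, N=3):
    """Order-N Lagrange fractional-delay FIR; best when D is near N/2."""
    n = np.arange(N + 1)
    h = np.ones(N + 1)
    for k in range(N + 1):
        mask = n != k
        h[mask] *= (D - k) / (n[mask] - k)
    return h

# Delay a slowly varying sinusoid by 1.4 samples with a cubic (N=3) filter.
D = 1.4
h = lagrange_fd_fir(D, N=3)
t = np.arange(200)
x = np.sin(2 * np.pi * 0.02 * t)
y = np.convolve(x, h)[:200]          # y[n] ~ x(n - D)

# Compare against the exactly delayed sinusoid, away from edge transients.
err = np.max(np.abs(y[20:180] - np.sin(2 * np.pi * 0.02 * (t[20:180] - D))))
print(err)    # small at low frequencies
```

For integer D the coefficients collapse to a pure delay (an impulse at tap D), and the approximation error grows with frequency, which is why higher orders or all-pass designs are used for wideband signals.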
V. Välimäki, T. I. Laakso

