
About this Book

Despite its short history, wavelet theory has found applications in a remarkable diversity of disciplines: mathematics, physics, numerical analysis, signal processing, probability theory and statistics. The abundance of intriguing and useful features enjoyed by wavelet and wavelet packet transforms has led to their application to a wide range of statistical and signal processing problems. On November 16-18, 1994, a conference on Wavelets and Statistics was held at Villard de Lans, France, organized by the Institute IMAG-LMC, Grenoble, France. The meeting was the 15th in the series of the Rencontres Franco-Belges de Statisticiens and was attended by 74 mathematicians from 12 different countries. Following tradition, both theoretical statistical results and practical contributions of this active field of statistical research were presented. The editors and the local organizers hope that this volume reflects the broad spectrum of the conference, as it includes 21 articles contributed by specialists in various areas of this field. The material compiled is fairly wide in scope and ranges from the development of new tools for nonparametric curve estimation to applied problems, such as detection of transients in signal processing and image segmentation. The articles are arranged in alphabetical order by author rather than by subject matter. However, to help the reader, a subjective classification of the articles is provided at the end of the book. Several articles of this volume are directly or indirectly concerned with aspects of wavelet-based function estimation and signal denoising.



Thresholding of Wavelet Coefficients as Multiple Hypotheses Testing Procedure

Given a noisy signal, its finite discrete wavelet transform is an estimator of the signal's wavelet expansion coefficients. An appropriate thresholding of the coefficients for further reconstruction of the de-noised signal plays a key role in the wavelet decomposition/reconstruction procedure. [DJ1] proposed a global threshold \( \lambda = \sigma \sqrt{2\log n} \) and showed that such a threshold asymptotically reduces the expected risk of the corresponding wavelet estimator close to the possible minimum. To apply their threshold to finite samples they suggested always keeping the coefficients of the first \( j_0 \) coarse levels.
We demonstrate that the choice of \( j_0 \) may strongly affect the corresponding estimators. We then consider the thresholding of wavelet coefficients as a multiple hypotheses testing problem and use the False Discovery Rate (FDR) approach to multiple testing of [BH1]. The suggested procedure controls the expected proportion of incorrectly kept coefficients among those chosen for the wavelet reconstruction. The resulting procedure is inherently adaptive and responds to the complexity of the estimated function. Finally, comparing the proposed FDR threshold with the fixed global threshold of Donoho and Johnstone by evaluating the relative mean squared error across various test functions and noise levels, we find the FDR estimator to enjoy robust MSE efficiency.
Felix Abramovich, Yoav Benjamini
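As a hedged sketch (not the authors' code), the two thresholding rules contrasted in this abstract can be written in a few lines of Python; the function names and the use of two-sided Gaussian p-values for the FDR step are my illustrative assumptions:

```python
import math

def normal_sf(z):
    """Upper-tail probability P(Z >= z) for a standard normal Z."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def universal_threshold(n, sigma):
    """Donoho-Johnstone global threshold lambda = sigma * sqrt(2 log n)."""
    return sigma * math.sqrt(2.0 * math.log(n))

def fdr_threshold(coeffs, sigma, q=0.05):
    """FDR-style threshold in the spirit of Benjamini-Hochberg:
    compute two-sided p-values p_i = 2 P(Z >= |d_i|/sigma), sort them,
    find the largest k with p_(k) <= q*k/m, and threshold at the
    magnitude of the coefficient attaining p_(k)."""
    m = len(coeffs)
    pairs = sorted((2.0 * normal_sf(abs(d) / sigma), abs(d)) for d in coeffs)
    k_star = 0
    for k, (p, _) in enumerate(pairs, start=1):
        if p <= q * k / m:
            k_star = k
    if k_star == 0:
        return float('inf')      # no coefficient survives: keep nothing
    return pairs[k_star - 1][1]  # keep coefficients with |d| >= threshold
```

Unlike the fixed global threshold, the FDR threshold adapts to the data: a single very large coefficient among small ones is kept, while pure-noise coefficients yield an infinite threshold.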

Wavelets, spectrum analysis and 1/f processes

The purpose of this paper is to show why wavelet-based estimators are naturally matched to the spectrum analysis of 1/f processes. It is shown how revisiting classical spectral estimators from a time-frequency perspective allows one to define several wavelet-based generalizations, which are proved to be statistically and computationally efficient. Discretization issues (in time and scale) are discussed in some detail, theoretical claims are supported by numerical experiments, and the importance of the proposed approach in turbulence studies is underlined.
Patrice Abry, Paulo Gonçalvès, Patrick Flandrin

Variance Function Estimation in Regression by Wavelet Methods

The objective of this paper is to contribute to the methodology available for dealing with a very common statistical problem, the estimation of the variance function in heteroscedastic multiple linear regression problems. The variance function is recovered by means of a smoothing nonparametric method, based on wavelet decompositions. The proposed method does not require preliminary or simultaneous estimation of the mean function. The resulting wavelet estimator is shown to be consistent, and is used to improve the estimation of the mean function itself. The method is illustrated with real and simulated data.
Anestis Antoniadis, Christian Lavergne

Locally Self Similar Gaussian Processes

The Fractional Brownian Motion has been proposed as a model in scientific domains as different as meteorology, economics, turbulence theory and the texture of medical images. In this lecture, we first describe the class of Self Similar Gaussian Processes (SSGP) and give (in one dimension) a multiresolution analysis of the Fractional Brownian Motion of index \( \alpha \) (FBM\( _\alpha \)). We then enlarge the SSGP setting to that of elliptic Gaussian processes.
A. Benassi

WaveLab and Reproducible Research

WaveLab is a library of routines for wavelet-packet analysis, cosine-packet analysis and matching pursuit. The library is available free of charge over the Internet. Versions are provided for Macintosh, UNIX and Windows machines.
WaveLab makes available, in one package, all the code to reproduce all the figures in our published wavelet articles. The interested reader can inspect the source code to see exactly what algorithms were used and how parameters were set in producing our figures, and can then modify the source to produce variations on our results. WaveLab has been developed, in part, because of exhortations by Jon Claerbout of Stanford that computational scientists should engage in "really reproducible" research.
Jonathan B. Buckheit, David L. Donoho

Extrema Reconstructions and Spline Smoothing: Variations on an Algorithm of Mallat & Zhong

The purpose of this note is to revisit the algorithm introduced by Mallat and Zhong to reconstruct a signal from the extrema of its wavelet transform. These authors construct an approximation of the wavelet transform of the signal via an alternate projection iteration procedure, and they obtain an approximation of the original signal by inverting the approximate wavelet transform. We explain how to solve the same problem by directly constructing the approximation of the original signal as the critical point of a constrained optimization problem. This new approach is very elementary. The numerical calculations are limited to the solution of an \( N \times N \) linear system, where \( N \) is the number of extrema of the wavelet transform. Finally, we explain how this reconstruction can be recast as a particular case of statistical spline smoothing.
René A. Carmona

Identification of Chirps with Continuous Wavelet Transform

Chirps are signals (or sums of signals) that may be characterized by a local (i.e. time-dependent) amplitude and a local frequency. Time-frequency representations such as wavelet representations are well adapted to the characterization problem of such chirps. Ridges in the modulus of the transform determine regions in the transform domain with a high concentration of energy, and are regarded as natural candidates for the characterization and the reconstruction of the original signal. A couple of algorithmic procedures for the estimation of ridges from the modulus of the (continuous) wavelet transform of one-dimensional signals are described, together with a new reconstruction procedure using only the restriction of the wavelet transform to a sample of points from the ridge. This provides a very efficient way to code the information contained in the signal.
René Carmona, Wen Liang Hwang, Bruno Torrésani

Nonlinear Approximation of Stochastic Processes

In signal and image compression, the choice of a suitable representation is frequently related to second-order statistical information: one tries to approximate the Karhunen-Loève decomposition with an easily implementable transform. This decomposition is optimal in terms of linear approximation, but it may not be optimal for a nonlinear approximation process, i.e. approximating a vector \( x \) by keeping only its \( N \) largest coordinates and letting \( N \) go to infinity. In this paper, we refine the second-order information by considering "piecewise stationary processes" that describe functions which are smooth except at isolated points. We show that nonlinear approximation in a suitable wavelet basis is optimal in terms of mean square error and that this optimality is lost either by using the trigonometric system or by using any type of linear approximation method, i.e. keeping the first \( N \) coordinates.
Albert Cohen, Jean-Pierre d’Ales
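The nonlinear N-term approximation described in this abstract (keep the N largest-magnitude coordinates in a given basis) is easy to state in code; the following is an illustrative sketch of my own, not the authors' implementation:

```python
def n_term_approx(coeffs, N):
    """Nonlinear N-term approximation: keep the N largest-magnitude
    coefficients, zero the rest, and preserve their positions."""
    largest = sorted(range(len(coeffs)), key=lambda i: abs(coeffs[i]),
                     reverse=True)[:N]
    keep = set(largest)
    return [c if i in keep else 0.0 for i, c in enumerate(coeffs)]

def linear_approx(coeffs, N):
    """Linear approximation for comparison: keep the first N coordinates,
    regardless of their size."""
    return [c if i < N else 0.0 for i, c in enumerate(coeffs)]
```

The contrast between the two functions is exactly the contrast studied in the paper: for functions that are smooth except at isolated points, the energy concentrates in a few large wavelet coefficients, so selecting by magnitude beats selecting by position.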

Translation-Invariant De-Noising

De-noising with the traditional (orthogonal, maximally decimated) wavelet transform sometimes exhibits visual artifacts; we attribute some of these (for example, Gibbs phenomena in the neighborhood of discontinuities) to the lack of translation invariance of the wavelet basis. One method to suppress such artifacts, termed "cycle spinning" by Coifman, is to "average out" the translation dependence. For a range of shifts, one shifts the data (right or left as the case may be), de-noises the shifted data, and then unshifts the de-noised data. Doing this for each of a range of shifts, and averaging the several results so obtained, produces a reconstruction subject to far weaker Gibbs phenomena than threshold-based de-noising using the traditional orthogonal wavelet transform.
Cycle-spinning over the range of all circulant shifts can be accomplished in order \( n \log_2(n) \) time; it is equivalent to de-noising using the undecimated or stationary wavelet transform.
Cycle-spinning exhibits benefits outside of wavelet de-noising, for example in cosine packet de-noising, where it helps suppress 'clicks'. It also has a counterpart in frequency-domain de-noising, where the goal of translation invariance is replaced by modulation invariance, and the central shift-de-noise-unshift operation is replaced by modulate-de-noise-demodulate.
We illustrate these concepts with extensive computational examples; all figures presented here are reproducible using the WaveLab software.
R. R. Coifman, D. L. Donoho
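The shift-denoise-unshift-average loop described in this abstract can be sketched as follows; the one-level Haar hard-threshold denoiser is a stand-in of my own devising, not the transform used in the paper:

```python
import math

def haar_denoise(x, lam):
    """One-level Haar transform, hard threshold on the details, inverse.
    Assumes len(x) is even; a placeholder for any de-noising operator."""
    s2 = math.sqrt(2.0)
    a = [(x[2*i] + x[2*i+1]) / s2 for i in range(len(x)//2)]
    d = [(x[2*i] - x[2*i+1]) / s2 for i in range(len(x)//2)]
    d = [c if abs(c) > lam else 0.0 for c in d]
    out = []
    for ai, di in zip(a, d):
        out += [(ai + di) / s2, (ai - di) / s2]
    return out

def cycle_spin(x, denoiser, shifts):
    """Average the denoiser over circular shifts of the data:
    shift, de-noise, unshift, then average the results."""
    n = len(x)
    acc = [0.0] * n
    for s in shifts:
        shifted = x[s:] + x[:s]                        # circular left shift
        den = denoiser(shifted)
        unshifted = den[-s:] + den[:-s] if s else den  # shift back
        acc = [a + u for a, u in zip(acc, unshifted)]
    return [a / len(shifts) for a in acc]
```

Averaging over all n circulant shifts this way costs n times the single transform; the order n log2(n) figure quoted above comes from organizing the same computation through the undecimated transform, which shares work across shifts.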

Estimating Wavelet Coefficients

We consider fast algorithms for the wavelet decomposition of a function \( f \) when discrete observations of \( f \) (\( \operatorname{supp} f \subseteq [0,1] \)) are available. The properties of the algorithms are studied for three types of observation design: the regular design, when the observations \( f(x_i) \) are taken on the regular grid \( x_i = i/N,\ i = 1, \ldots, N \); the case of a jittered regular grid, when it is only known that \( i/N \le x_i < (i+1)/N \) for all \( 1 \le i \le N \); and the random design case, where \( x_i,\ i = 1, \ldots, N \), are independent and identically distributed random variables on [0,1]. We show that these algorithms are, in a certain sense, efficient as far as the accuracy of approximation is concerned.
The proposed algorithms are computationally straightforward: the whole effort to compute the decomposition is of order \( N \) for sample size \( N \).
Bernard Delyon, Anatoli Juditsky

Nonparametric Supervised Image Segmentation by Energy Minimization using Wavelets

Energy models, like the Mumford and Shah model, have been introduced for segmenting images. The boundary, defined as the minimizer of the energy, is projected onto a wavelet basis. We assume a white noise model on the observed image. The aim of this paper is to study the asymptotic behavior of nonparametric estimators of the boundary when the number of pixels grows to infinity.
Jacques Istas

On the Statistics of Best Bases Criteria

Wavelet packets are a useful extension of wavelets, providing an adaptive time-scale analysis. When using noisy observations of a signal of interest, the criteria for best basis representation are random variables. The search may thus be very sensitive to noise. In this paper, we characterize the asymptotic statistics of the criteria to gain insight which can, in turn, be used to improve the performance of the analysis. By way of a well-known information-theoretic principle, namely the Minimum Description Length, we provide an alternative approach to minimax methods for deriving various attributes of nonlinear wavelet packet estimates.
H. Krim, J.-C. Pesquet

Discretized Wavelet Density Estimators for Continuous Time Stochastic Processes

We consider a strictly stationary continuous-time stochastic process \( (X_t) \) such that the law of \( X_t \) has a bounded density \( f \). Under certain assumptions, which are satisfied for rather general diffusion processes, the \( L_2 \) error of the linear wavelet estimator of \( f \) constructed from the observation \( (X_t,\ 0 \le t \le T) \) converges at the rate \( \sqrt{1/T} \) when \( f \in B^s_{p,q} \). In this work we study two discretized versions of this estimator, constructed from the discrete observations \( X_{t_1}, \ldots, X_{t_n},\ 0 \le t_i \le T \). We show that there is a minimal sampling size \( n \) that we should take to preserve the upper bound \( \sqrt{1/T} \) for the \( L_2 \) error.
Frédérique Leblanc

Wavelets and Markov Random Fields in a Bayesian Framework

The paper introduces a Bayesian framework for wavelet coefficients. Aimed particularly at efficient implementations and at higher-dimensional wavelet transforms, the method is based on a Markov Random Field (MRF) description of the coefficients. The Bayesian approach allows one to impose various types of constraints on the interactions of coefficients that are neighbours in the MRF. Several applications that are based on a manipulation of wavelet coefficients can benefit from this approach. This is illustrated with an example.
Maurits Malfait, Dirk Roose

Micronde: a Matlab Wavelet Toolbox for Signals and Images

The purpose of this paper is to present Micronde, a Matlab wavelet and wavelet packet toolbox for signals and images. Micronde's capabilities and organization are described, and its use in both command-line and interface modes is illustrated. Real or synthetic signals as well as images are used to present wavelet-based analysis, de-noising and compression.
Yves Misiti, Michel Misiti, Georges Oppenheim, Jean-Michel Poggi

Choice of the Threshold Parameter in Wavelet Function Estimation

The procedures of Donoho, Johnstone, Kerkyacharian and Picard [DJKP] estimate functions by inverting thresholded wavelet transform coefficients of the data. The choice of threshold is crucial to the success of the method and is currently the subject of an intense research effort. We describe how we have applied the statistical technique of cross-validation to choose a threshold, and we present results that indicate its performance for correlated data. Finally, to illustrate the techniques, we apply various wavelet-based estimation methods to some noisy one- and two-dimensional signals and display the results.
G. P. Nason

The Stationary Wavelet Transform and some Statistical Applications

Wavelets are of wide potential use in statistical contexts. The basics of the discrete wavelet transform are reviewed using a filter notation that is useful subsequently in the paper. A ‘stationary wavelet transform’, where the coefficient sequences are not decimated at each stage, is described. Two different approaches to the construction of an inverse of the stationary wavelet transform are set out. The application of the stationary wavelet transform as an exploratory statistical method is discussed, together with its potential use in nonparametric regression. A method of local spectral density estimation is developed. This involves extensions to the wavelet context of standard time series ideas such as the periodogram and spectrum. The technique is illustrated by its application to data sets from astronomy and veterinary anatomy.
G. P. Nason, B. W. Silverman
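One level of the undecimated transform described in this abstract can be sketched as follows; the choice of Haar filters and circular (periodic) boundary handling are my illustrative assumptions, not the paper's filter notation:

```python
import math

def swt_haar_level1(x):
    """One level of a stationary (undecimated) Haar wavelet transform:
    the coefficient sequences are not decimated, so the approximation
    and detail sequences have the same length as the input."""
    n = len(x)
    s2 = math.sqrt(2.0)
    approx = [(x[i] + x[(i + 1) % n]) / s2 for i in range(n)]
    detail = [(x[i] - x[(i + 1) % n]) / s2 for i in range(n)]
    return approx, detail
```

Because no decimation takes place, the output coefficients align with the input samples at every time point, which is what makes the transform translation-equivariant and convenient for the exploratory and spectral applications discussed above.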

Wavelet Thresholding: Beyond the Gaussian I.I.D. Situation

In this article we first give a brief review of wavelet thresholding methods in non-Gaussian and non-i.i.d. situations, respectively. Many of these applications are based on Gaussian approximations of the empirical coefficients. For regression and density estimation with independent observations, we establish joint asymptotic normality of the empirical coefficients by means of strong approximations. We then describe how one can prove asymptotic normality under mixing conditions on the observations by cumulant techniques.
In the second part, we apply these non-linear adaptive shrinkage schemes to spectral estimation problems in both a stationary and a non-stationary time series setup. For the latter, in a model of Dahlhaus ([Da93]) for the evolutionary spectrum of a locally stationary time series, we present two different approaches. Moreover, we show that in classes of anisotropic function spaces an appropriately chosen wavelet basis automatically adapts to possibly different degrees of regularity in the different directions. The resulting fully adaptive spectral estimator attains the rate that is optimal in the idealized Gaussian white noise model, up to a logarithmic factor.
Michael H. Neumann, Rainer von Sachs

L_2(0,1) Weak Convergence of the Empirical Process for Dependent Variables

We consider the empirical process induced by dependent variables as a random element in \( L_2(0,1) \). Using some special properties of the Haar basis, we obtain a general tightness condition. In the strong mixing case, this allows us to improve on the well-known result of Yoshihara (of course, for the \( L_2 \)-continuous functionals). In the same spirit, we also give an application to associated variables which improves a recent result of Yu. Some statistical applications are presented.
Paulo Oliveira, Charles Suquet

Top-Down and Bottom-Up Tree Search Algorithms for Selecting Bases in Wavelet Packet Transforms

Search algorithms for finding signal decompositions called near-best bases using decision criteria called non-additive information costs have recently been proposed by Taswell [12] for selecting bases in wavelet packet transforms represented as binary trees. These methods are extended here to distinguish between top-down and bottom-up tree searches. Other new non-additive information cost functions are also proposed. In particular, the near-best basis with the non-additive cost of the Shannon entropy on probabilities is compared against the best basis with the additive cost of the Coifman-Wickerhauser entropy on energies [3]. All wavelet packet basis decompositions are also compared with the nonorthogonal matching pursuit decomposition of Mallat and Zhang [7] and the orthogonal matching pursuit decomposition of Pati et al. [8]. Monte Carlo experiments using a constant-bit-rate variable-distortion paradigm for lossy compression suggest that the statistical performance of top-down near-best bases with non-additive costs is superior to that of bottom-up best bases with additive costs. Top-down near-best bases provide a significant increase in computational efficiency, with reductions in memory, flops, and time, while nevertheless maintaining similar coding efficiency with comparable reconstruction errors measured by \( \ell_p \)-norms. Finally, a new compression scheme called parameterized model coding is introduced and demonstrated, with results showing better compression than standard scalar quantization coding at comparable levels of distortion.
Carl Taswell
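For concreteness, here is a sketch of the two kinds of cost this abstract contrasts: the additive Coifman-Wickerhauser entropy on energies and the non-additive Shannon entropy on normalized probabilities. The function names and the simple split rule are mine, not the paper's:

```python
import math

def cw_entropy_additive(coeffs):
    """Additive Coifman-Wickerhauser cost: -sum c^2 log c^2.
    Additivity over disjoint coefficient blocks is what makes the
    classical bottom-up best-basis search exactly optimal."""
    h = 0.0
    for c in coeffs:
        e = c * c
        if e > 0.0:
            h -= e * math.log(e)
    return h

def shannon_entropy_nonadditive(coeffs):
    """Non-additive cost: Shannon entropy of the normalized energy
    distribution p_i = c_i^2 / sum_j c_j^2. Normalization breaks
    additivity, leading to 'near-best' rather than best bases."""
    total = sum(c * c for c in coeffs)
    if total == 0.0:
        return 0.0
    h = 0.0
    for c in coeffs:
        p = c * c / total
        if p > 0.0:
            h -= p * math.log(p)
    return h

def keep_split(parent, left, right, cost):
    """Bottom-up best-basis rule at one tree node: split only if the
    children's combined cost beats the parent's."""
    return cost(left) + cost(right) < cost(parent)
```

With the additive cost, a Haar-style split that concentrates energy into fewer coefficients lowers the entropy, so the split is kept; the non-additive Shannon cost must instead be compared heuristically node by node.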

WavBox 4: A Software Toolbox for Wavelet Transforms and Adaptive Wavelet Packet Decompositions

WavBox provides both a function library and a computing environment for wavelet transforms and adaptive wavelet packet decompositions. WavBox contains a collection of these transforms, decompositions, and related functions that perform multiresolution analyses of 1-D multichannel signals and 2-D images. The current version 4.1c includes overscaled pyramid transforms, discrete wavelet transforms, and adaptive wavelet and cosine packet decompositions by best level, best basis, and matching pursuit as described by Mallat, Coifman, Wickerhauser, and other authors. WavBox also implements Taswell's new search algorithms with decision criteria, called near-best basis and non-additive information costs respectively, for selecting bases in wavelet packet transforms, as well as Donoho and Johnstone's wavelet shrinkage denoising methods. Various choices of filter classes (orthogonal, biorthogonal, etc.), filter families (Daubechies, Vetterli, etc.), and convolution versions (interval, circular, extended, etc.) exist for each transform and decomposition. The software has been designed for efficient automated computation, interactive exploratory data analysis, and pedagogy. Essential features of the design include: perfect reconstruction for multiresolution decomposition of data of arbitrary size not restricted to powers of 2; both command line and graphical user interfaces with a comprehensive set of plots and visual displays; an object property expert system with artificial intelligence for configuring valid property combinations; hierarchical modules and switch-driven function suites; vector-filter and matrix-operator implementations of convolutions; extensibility for the inclusion of other wavelet filters, convolution versions, and transforms; optional arguments with built-in defaults for most m-files; and extensive on-line help and self-running tutorial demos.
Carl Taswell

Using Wavelets for Classifying Human in vivo Magnetic Resonance Spectra

Traditional methods for quantifying magnetic resonance spectra, which rely on identifying and quantifying peaks in individual spectra, often prove problematic and unsuccessful when the data is acquired in vivo. A different approach is reported, in which pattern recognition techniques were used to successfully classify a set of 75 spectra (according to the subject's dietary group) without the need to identify or measure the peaks. A discrete wavelet transform was performed on each spectrum, and combinations of the first 64 wavelet coefficients were used as the features for classification.
Rosemary Tate, Des Watson, Stephen Eglen

Adaptive Density Estimation

This paper presents some results concerning the adaptive estimation of a density with wavelet methods. We explain three procedures, each one having its own advantages (see [DJKP], [KPT], [TRIB]). The first is an empirical method based on simulations, in which the bandwidth is chosen by a cross-validation criterion. The second is the Donoho-Johnstone-Kerkyacharian-Picard procedure, in which the estimate is constructed by thresholding each detail coefficient. The third is constructed by thresholding the total energy of the details at each level. For the last two procedures, we investigate their minimax properties.
K. Tribouley

Wavelets and Regression Analysis

Applications of the rapidly developing wavelet theory are usually limited to low-dimensional cases, due to practical restrictions on the implementation of high-dimensional wavelet bases. In this paper, an approach is proposed that combines wavelets with techniques of regression analysis. Regression analysis is a widely applied method for examining data and assessing relationships among variables. The resulting wavelet regression estimator is well suited to regression estimation of moderately large dimension, in particular to regressions with localized irregularities and sparse data.
Qinghua Zhang

