
2013 | Book

Long-Memory Processes

Probabilistic Properties and Statistical Methods

Authors: Jan Beran, Yuanhua Feng, Sucharita Ghosh, Rafal Kulik

Publisher: Springer Berlin Heidelberg


About this book

Long-memory processes are known to play an important part in many areas of science and technology, including physics, geophysics, hydrology, telecommunications, economics, finance, climatology, and network engineering. In the last 20 years enormous progress has been made in understanding the probabilistic foundations and statistical principles of such processes. This book provides a timely and comprehensive review, including a thorough discussion of mathematical and probabilistic foundations and statistical methods, emphasizing their practical motivation and mathematical justification. Proofs of the main theorems are provided and data examples illustrate practical aspects. This book will be a valuable resource for researchers and graduate students in statistics, mathematics, econometrics and other quantitative areas, as well as for practitioners and applied researchers who need to analyze data in which long memory, power laws, self-similar scaling or fractal properties are relevant.

Table of Contents

Frontmatter
Chapter 1. Definition of Long Memory
Abstract
Long before suitable stochastic processes were available, deviations from independence that were noticeable far beyond the usual time horizon were observed, often even in situations where independence would have seemed a natural assumption. For instance, the Canadian–American astronomer and mathematician Simon Newcomb (Astronomical constants (the elements of the four inner planets and the fundamental constants of astronomy), 1895) noticed that in astronomy errors typically affect whole groups of consecutive observations and therefore drastically increase the “probable error” of estimated astronomical constants, so that the usual \(\sigma/\sqrt{n}\)-rule no longer applies. Although there may be a number of possible causes for Newcomb’s qualitative finding, stationary long-memory processes provide a plausible “explanation”. Similar conclusions had been drawn earlier by Peirce (Theory of errors of observations, pp. 200–204, 1873) (see also the discussion of Peirce’s data by Wilson and Hilferty (Proc. Natl. Acad. Sci. USA 15(2):120–125, 1929) and later in the book by Mosteller and Tukey (Data analysis and regression: a second course in statistics, 1977) in a section entitled “How \(\sigma/\sqrt{n}\) can mislead”). Newcomb’s comments were confirmed a few years later by Pearson (Philosophical transactions of the royal society of London, pp. 235–299, 1902), who carried out experiments simulating astronomical observations. Using an elaborate experimental setup, he demonstrated not only that observers had their own personal bias, but also that each individual measurement series showed persistent serial correlations.
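To make the \(\sigma/\sqrt{n}\) point concrete (a standard calculation added here for orientation, not taken from the abstract): if the autocovariances of a stationary process decay like \(\gamma(k)\sim c|k|^{2d-1}\) for some \(0<d<\tfrac{1}{2}\), then
$$\operatorname{var}(\bar{X}_n)=\frac{1}{n^{2}}\sum_{s,t=1}^{n}\gamma(s-t)\sim c_d\,n^{2d-1},$$
so the standard error of the sample mean decays like \(n^{d-1/2}\) rather than \(n^{-1/2}\), and confidence intervals based on the \(\sigma/\sqrt{n}\)-rule are systematically too narrow.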
Jan Beran, Yuanhua Feng, Sucharita Ghosh, Rafal Kulik
Chapter 2. Origins and Generation of Long Memory
Abstract
In this chapter we discuss typical methods for constructing long-memory processes. Many models are motivated by probabilistic and statistical principles. On the other hand, sometimes one prefers to be led by subject-specific considerations. Typical for the first approach is the definition of linear processes with long memory, or fractional ARIMA models. Subject-specific models have been developed, for instance, in physics, finance and network engineering. Often the occurrence of long memory is detected by nonspecific, purely statistical methods, and subject-specific models are then developed to explain the phenomenon. For example, in economics aggregation is a possible reason for long-range dependence; in computer networks long memory may be due to certain distributional properties of interarrival times. Often long memory is also linked to fractal structures.
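As a concrete illustration of the first approach (a minimal sketch, not taken from the book; the function name, truncation length and seed are illustrative choices), a FARIMA(0, d, 0) series can be generated from its truncated MA(∞) representation \(X_t=\sum_{j\ge 0}\psi_j\varepsilon_{t-j}\) with \(\psi_0=1\) and \(\psi_j=\psi_{j-1}(j-1+d)/j\):

```python
import numpy as np

def farima_0d0(n, d, trunc=5000, seed=None):
    """Simulate a FARIMA(0, d, 0) series of length n from a truncated
    MA(infinity) representation with Gaussian innovations.

    psi_0 = 1,  psi_j = psi_{j-1} * (j - 1 + d) / j   (j >= 1)
    """
    rng = np.random.default_rng(seed)
    j = np.arange(1, trunc)
    psi = np.concatenate(([1.0], np.cumprod((j - 1 + d) / j)))
    eps = rng.standard_normal(n + trunc)          # i.i.d. N(0, 1) innovations
    # X_t = sum_{j=0}^{trunc-1} psi_j * eps_{t-j}  (valid part of the convolution)
    return np.convolve(eps, psi, mode="valid")[:n]

x = farima_0d0(10_000, d=0.3, seed=42)
# the sample autocorrelations of x decay slowly, roughly like k^(2d-1)
```

For d close to 1/2 the truncation error decays slowly, so a larger truncation (or an exact method such as circulant embedding) may be preferable.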
Jan Beran, Yuanhua Feng, Sucharita Ghosh, Rafal Kulik
Chapter 3. Mathematical Concepts
Abstract
In this chapter we present some mathematical concepts that are useful when deriving limit theorems for long-memory processes.
We start with a general description of univariate orthogonal polynomials in Sect. 3.1, with particular emphasis on Hermite polynomials in Sect. 3.1.2. Under suitable conditions, a function G can be expanded into a series
$$G(x)=\sum_{j=0}^{\infty}g_j H_j(x) $$
with respect to an orthogonal basis consisting of Hermite polynomials \(H_j(\cdot)\) (\(j\in\mathbb{N}\)). Such expansions are used to study sequences \(G(X_t)\) where \(X_t\) (\(t\in\mathbb{Z}\)) is a Gaussian process with long memory (see Sect. 4.2.3). Hermite polynomials can also be extended to the multivariate case. This is discussed in Sect. 3.2.
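For orientation, the coefficients \(g_j\) can be approximated numerically. The sketch below (not part of the book's text; it assumes the probabilists' Hermite polynomials with the normalization \(g_j=E[G(X)H_j(X)]/j!\), and the function names are made up for the example) uses Gauss–Hermite quadrature:

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He   # probabilists' Hermite polynomials

def hermite_coefficients(G, max_order=10, quad_deg=200):
    """Approximate g_j in G(x) = sum_j g_j He_j(x) for X ~ N(0, 1),
    using g_j = E[G(X) He_j(X)] / j! and Gauss-Hermite quadrature."""
    x, w = He.hermegauss(quad_deg)             # nodes/weights for weight exp(-x^2/2)
    w = w / np.sqrt(2.0 * np.pi)               # normalise to the N(0, 1) density
    coeffs = []
    for j in range(max_order + 1):
        unit = np.zeros(j + 1)
        unit[j] = 1.0                          # coefficient vector selecting He_j
        Hj = He.hermeval(x, unit)
        coeffs.append(np.sum(w * G(x) * Hj) / factorial(j))
    return np.array(coeffs)

# G(x) = x^2 - 1 equals He_2(x), so g_2 = 1 and all other coefficients vanish
g = hermite_coefficients(lambda x: x**2 - 1)
```

The smallest index j with \(g_j\neq 0\) (the Hermite rank of G) is what governs the limiting behaviour of partial sums of \(G(X_t)\) in the long-memory case, which is where these expansions enter in Sect. 4.2.3.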
Jan Beran, Yuanhua Feng, Sucharita Ghosh, Rafal Kulik
Chapter 4. Limit Theorems
Abstract
Most statistical procedures in time series analysis (and in fact statistical inference in general) are based on asymptotic results. Limit theorems are therefore a fundamental part of statistical inference. Here we first review very briefly a few of the basic principles and results needed for deriving limit theorems in the context of long-memory and related processes.
Jan Beran, Yuanhua Feng, Sucharita Ghosh, Rafal Kulik
Chapter 5. Statistical Inference for Stationary Processes
Abstract
This chapter deals with statistical inference for long-range dependent linear and subordinated processes. Some of the tools will also be used in Chaps. 6 and 7, where we consider corresponding problems for nonlinear and nonstationary long-memory time series.
Jan Beran, Yuanhua Feng, Sucharita Ghosh, Rafal Kulik
Chapter 6. Statistical Inference for Nonlinear Processes
Abstract
In this chapter, we consider nonlinear processes with long memory. We will mainly focus on volatility models: stochastic volatility (see Definitions 2.3–2.4 and Sect. 4.2.6 for limit theorems), ARCH(∞) processes (see Definition 2.1 and Sect. 4.2.7) and LARCH(∞) models (see (2.47) and (2.48), and Sect. 4.2.8). Statistical inference for traffic models is not yet well developed (see Faÿ et al. in Queueing Syst. 54(2):121–140, 2006, Bernoulli 13(2):473–491, 2007; Hsieh et al. in J. Econom. 141(2):913–949, 2007 for some results in this direction).
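For orientation, here is a minimal simulation sketch (not from the book) of the ARCH(∞) recursion \(X_t=\sigma_t\varepsilon_t\), \(\sigma_t^2=a_0+\sum_{j\ge1}a_jX_{t-j}^2\), with hyperbolically decaying coefficients and a finite truncation; the particular coefficients, truncation and burn-in are illustrative choices, and the memory properties of the resulting process depend on the conditions discussed in the text:

```python
import numpy as np

def arch_inf(n, a0=0.1, c=0.05, gamma=1.2, trunc=2000, burn=2000, seed=None):
    """Simulate X_t = sigma_t * eps_t with
    sigma_t^2 = a0 + sum_{j=1}^{trunc} a_j * X_{t-j}^2,  a_j = c * j^(-gamma).
    Pre-sample squared values are initialised to zero and a burn-in is discarded."""
    rng = np.random.default_rng(seed)
    a = c * np.arange(1, trunc + 1, dtype=float) ** (-gamma)
    assert a.sum() < 1.0, "coefficients too large for a stationary solution"
    total = n + burn
    x = np.zeros(total)
    x2 = np.zeros(total + trunc)               # x2[t + trunc] stores X_t^2
    eps = rng.standard_normal(total)
    for t in range(total):
        past = x2[t:t + trunc][::-1]           # X_{t-1}^2, ..., X_{t-trunc}^2
        sigma2 = a0 + a @ past
        x[t] = np.sqrt(sigma2) * eps[t]
        x2[t + trunc] = x[t] ** 2
    return x[burn:]

r = arch_inf(5000, seed=1)
# the squared series r**2 typically shows slowly decaying sample autocorrelations
```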
Jan Beran, Yuanhua Feng, Sucharita Ghosh, Rafal Kulik
Chapter 7. Statistical Inference for Nonstationary Processes
Abstract
In this chapter, statistical inference for nonstationary processes is discussed. For long-memory, or, more generally, fractional stochastic processes this is of particular interest because long-range dependence often generates sample paths that mimic certain features of nonstationarity. It is therefore often not easy to distinguish between stationary long-memory behaviour and nonstationary structures. For statistical inference, including estimation, testing and forecasting, the distinction between stationary and nonstationary, as well as between stochastic and deterministic components, is essential.
Jan Beran, Yuanhua Feng, Sucharita Ghosh, Rafal Kulik
Chapter 8. Forecasting
Abstract
Here we briefly recall some basic results from forecasting. For details, see standard time series books such as Priestley (Spectral analysis and time series, 1981) and Brockwell and Davis (Time series: theory and methods, 1991).
Jan Beran, Yuanhua Feng, Sucharita Ghosh, Rafal Kulik
Chapter 9. Spatial and Space-Time Processes
Abstract
Spatial data play an important role in many areas such as ecology, biology, environmental monitoring, agronomy, remote sensing and geology. Sometimes observations are obtained on a regular lattice (see, e.g. Whittle in Biometrika 49:305–314, 1962; Bartlett in Adv. Appl. Prob. 6(2):336–358, 1974; Besag in J. R. Stat. Soc., Ser. B 36(2):192–236, 1974; Cressie in Statistics for spatial data, 1993; Christakos in Random field models in Earth sciences, 1992; Modern spatiotemporal geostatistics, 2000; Benson et al. in Water Resour. Res. 42(W01415):1–18, 2006; Sain and Cressie in J. Econom. 140(1):226–259, 2007, and references therein). This leads to considering spatial processes on a grid or, more specifically, random fields \(X_t\) with index \(t\in\mathbb{Z}^{2}\). On the other hand, if the spatial points are not on a regular grid, then random fields with \(t\in\mathbb{R}^{2}\) are used.
Jan Beran, Yuanhua Feng, Sucharita Ghosh, Rafal Kulik
Chapter 10. Resampling
Abstract
Resampling or bootstrap methods refer to techniques where statistical inference is based on a simulated distribution of a statistic \(T_n\) obtained by resampling from an observed sample \(X_1,\dots,X_n\). Inference of this type is always conditional on the sample. In the most general version, no model assumptions are used except for global conditions such as stationarity, existence of some moments, etc. In the most restricted version, a parametric model is specified and resampling is used only as a simple way of obtaining an approximate distribution of \(T_n\). Note that different terms such as ‘bootstrap’, ‘resampling’, ‘subsampling’, etc. are used in the literature for different variations of the same general idea. Since there does not seem to be a unified terminology, we use ‘resampling’ and ‘bootstrap’ as synonyms.
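As a concrete illustration of the general idea, here is a minimal sketch of one common variant for dependent data, the moving-block bootstrap (not taken from the book; the block length is an arbitrary illustrative choice, and the validity of such schemes under long memory is exactly the kind of question the chapter addresses):

```python
import numpy as np

def moving_block_bootstrap(x, stat, block_len=50, n_boot=1000, seed=None):
    """Resample overlapping blocks of length `block_len` from x, glue them
    together into a series of the original length, and recompute the
    statistic `stat` on each bootstrap replicate."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x)
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    max_start = n - block_len + 1              # admissible block start positions
    reps = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, max_start, size=n_blocks)
        resample = np.concatenate([x[s:s + block_len] for s in starts])[:n]
        reps[b] = stat(resample)
    return reps

# e.g. an approximate (conditional) sampling distribution of the sample mean:
# boot_means = moving_block_bootstrap(data, np.mean, block_len=100, seed=0)
```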
Jan Beran, Yuanhua Feng, Sucharita Ghosh, Rafal Kulik
Backmatter
Metadata
Title
Long-Memory Processes
Authors
Jan Beran
Yuanhua Feng
Sucharita Ghosh
Rafal Kulik
Copyright Year
2013
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-35512-7
Print ISBN
978-3-642-35511-0
DOI
https://doi.org/10.1007/978-3-642-35512-7
