
About this Book

This volume presents the latest advances and trends in stochastic models and related statistical procedures. Selected peer-reviewed contributions focus on statistical inference, quality control, change-point analysis and detection, empirical processes, time series analysis, survival analysis and reliability, statistics for stochastic processes, big data in technology and the sciences, statistical genetics, experiment design, and stochastic models in engineering.

Stochastic models and related statistical procedures play an important part in furthering our understanding of the challenging problems currently arising in areas of application such as the natural sciences, information technology, engineering, image analysis, genetics, energy and finance, to name but a few.

This collection arises from the 12th Workshop on Stochastic Models, Statistics and Their Applications, held in Wrocław, Poland.

Table of Contents

Frontmatter

Erratum to: Detection of Essential Changes in Spatio-Temporal Processes with Applications to Camera Based Quality Control

Ewaryst Rafajłowicz

Plenary Papers

Frontmatter

Chapter 1. Large Deviations of χ² Divergence Errors on Partitions

We discuss Chernoff-type large deviation results for χ²-divergence errors on partitions. In contrast to the total variation and the I-divergence, the χ²-divergence has an unconventional large deviation rate. In this paper we extend the result of Quine and Robinson (Ann. Stat. 13:727–742, 1985) from the uniform distribution to arbitrary distributions.

László Györfi
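
For readers who want to experiment, the χ²-divergence between two distributions on the cells of a partition can be computed directly. The sketch below (function name and Poisson-free toy example are illustrative assumptions, not the chapter's construction) compares empirical cell frequencies with the true cell probabilities:

```python
import numpy as np

def chi2_divergence(p, q):
    """Chi-square divergence sum_i (p_i - q_i)**2 / q_i between two
    probability vectors defined on the cells of a common partition."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum((p - q) ** 2 / q))

rng = np.random.default_rng(0)
q = np.array([0.2, 0.3, 0.5])                 # true cell probabilities
sample = rng.choice(3, size=10_000, p=q)      # i.i.d. draws from q
p_hat = np.bincount(sample, minlength=3) / sample.size
d = chi2_divergence(p_hat, q)                 # close to 0 for large samples
```

Large deviation results of the kind discussed in the chapter concern the probability that such an empirical divergence exceeds a fixed threshold.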

Chapter 2. Detection of Changes in INAR Models

In the present paper we develop on-line procedures for detecting changes in the parameters of integer-valued autoregressive models of order one. Test statistics based on probability generating functions are constructed and studied. The asymptotic behavior of the tests under the null hypothesis as well as under certain alternatives is derived.

Šárka Hudecová, Marie Hušková, Simos Meintanis
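
The building block of such test statistics is the empirical probability generating function, which can be sketched in a few lines (the function name and the Poisson example are illustrative assumptions, not the authors' procedure):

```python
import numpy as np

def empirical_pgf(u, sample):
    """Empirical probability generating function g_n(u) = mean(u**X)."""
    return float(np.mean(np.power(float(u), np.asarray(sample))))

rng = np.random.default_rng(1)
x = rng.poisson(lam=2.0, size=2000)           # in-control count data
# For Poisson(lambda), the true PGF is exp(lambda * (u - 1)):
g_emp = empirical_pgf(0.5, x)
g_true = float(np.exp(2.0 * (0.5 - 1.0)))
```

A change detector can then monitor the distance between empirical PGFs computed on a training window and on incoming observations.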

Chapter 3. Visualizing Association Structure in Bivariate Copulas Using New Dependence Function

Measuring the strength of dependence between random variables is an important problem in statistical practice. We propose a new function-valued measure of dependence of two random variables. It allows one to study and visualize the explicit dependence structure, both in theoretical models and empirically, without prior model assumptions. This provides a comprehensive view of the association structure and makes possible much more detailed inference than that based on standard numeric measures of association. In this contribution, we focus on a copula-based variant of the measure. We present theoretical properties of the new measure of dependence and discuss its estimation. Some artificial and real data examples illustrate the behavior and practical utility of the measure and its estimator.

Teresa Ledwina

Theory and Related Topics

Frontmatter

Chapter 4. Smoothed Nonparametric Derivative Estimation Based on Weighted Difference Sequences

We present a simple but effective, fully automated framework for estimating derivatives nonparametrically based on weighted difference sequences. Although regression estimation is studied more often, derivative estimation is of equal importance, for example in the exploration of structures in curves, the comparison of regression curves, the analysis of human growth data, etc. Via the introduced weighted difference sequence, we approximate the true derivative and create a new data set which can be smoothed by any nonparametric regression estimator. However, the new data sets created by this technique no longer consist of independent and identically distributed (i.i.d.) random variables. Due to the non-i.i.d. nature of the data, model selection methods tend to produce bandwidths (or smoothing parameters) which are too small. In this paper, we propose a method based on bimodal kernels to cope with the non-i.i.d. data in the local polynomial regression framework.

Kris De Brabanter, Yu Liu
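
The two-step idea — build noisy derivative proxies by differencing, then smooth them — can be illustrated as follows. The symmetric difference below is the simplest member of the weighted family, and the Gaussian Nadaraya–Watson smoother with a fixed bandwidth is an illustrative stand-in, not the paper's bimodal-kernel method:

```python
import numpy as np

def difference_derivative_data(x, y):
    """Symmetric-difference proxies for the derivative at interior design
    points: the simplest (unweighted) difference sequence."""
    return x[1:-1], (y[2:] - y[:-2]) / (x[2:] - x[:-2])

def nadaraya_watson(xq, x, y, h):
    """Gaussian-kernel smoother applied to the noisy derivative proxies."""
    w = np.exp(-0.5 * ((xq[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.05, x.size)
xs, dprox = difference_derivative_data(x, y)   # new, non-i.i.d. data set
dhat = nadaraya_watson(xs, xs, dprox, h=0.05)  # estimates 2*pi*cos(2*pi*x)
```

The correlation among the proxies `dprox` is exactly what makes naive bandwidth selection undersmooth, motivating the paper's approach.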

Chapter 5. Model Selection Using Cramér–von Mises Distance

In this paper we consider a model selection problem for the distribution function of lifetimes in the presence of covariates. We propose a new model selection method by defining the closeness between two distribution functions via the Cramér–von Mises distance. This distance is mostly used in the literature to conduct goodness-of-fit tests. Given a set of data and two competing classes of parametric distribution functions, we define a test statistic to decide which class approximates the underlying distribution better. The asymptotic normality of our test statistic as the sample size increases is shown under suitable conditions. As an example, we apply our method to a real data set of lifetimes of DC motors, which depend on the covariate load.

Hong Chen, Maik Döring, Uwe Jensen
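
The Cramér–von Mises distance between an empirical distribution and a candidate parametric CDF has a well-known closed form on the ordered sample; a minimal sketch (function names and the exponential example are illustrative assumptions) shows how two competing models could be compared:

```python
import numpy as np

def cramer_von_mises(sample, cdf):
    """Cramer-von Mises statistic n * integral (F_n - F)^2 dF via the
    standard closed form on the ordered sample."""
    x = np.sort(np.asarray(sample, float))
    n = x.size
    u = cdf(x)                                   # model CDF at order stats
    i = np.arange(1, n + 1)
    return 1.0 / (12 * n) + float(np.sum((u - (2 * i - 1) / (2 * n)) ** 2))

rng = np.random.default_rng(3)
data = rng.exponential(scale=2.0, size=500)
good = cramer_von_mises(data, lambda t: 1.0 - np.exp(-t / 2.0))   # true scale
bad = cramer_von_mises(data, lambda t: 1.0 - np.exp(-t / 10.0))   # wrong scale
```

Picking the model class with the smaller distance mirrors the selection rule the chapter formalizes (there, with estimated parameters and covariates).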

Chapter 6. Rate of Convergence of a Change Point Estimator in a Misspecified Regression Model

A parametric estimation problem is considered in a misspecified regression model whose regression function has a smooth change point; the focus lies on regression functions which are continuous at the change point. It is not assumed that the true regression function belongs to the model class. However, there exists a pseudo change point such that the related regression function gives a reasonable approximation. The asymptotic behavior of the least squares estimate of the change point is investigated as the sample size increases. The consistency of the change point estimator for the pseudo change point is shown. It turns out that the rate of convergence depends on the order of smoothness of the regression function at the change point.

Maik Döring

Chapter 7. An Exact Formula for the Average Run Length to False Alarm of the Generalized Shiryaev–Roberts Procedure for Change-Point Detection under Exponential Observations

We derive analytically an exact closed-form formula for the standard minimax Average Run Length (ARL) to false alarm delivered by the Generalized Shiryaev–Roberts (GSR) change-point detection procedure devised to detect a shift in the baseline mean of a sequence of independent exponentially distributed observations. Specifically, the formula is found through direct solution of the respective integral (renewal) equation, and is a general result in that the GSR procedure's nonnegative headstart is not restricted to a bounded range, nor is there a "ceiling" value for the detection threshold. Apart from its theoretical significance (in change-point detection, exact closed-form performance formulae are typically difficult or altogether impossible to obtain, especially for the GSR procedure), the obtained formula is also useful to a practitioner: in cases of practical interest, the formula is linear in both the detection threshold and the headstart, and therefore the ARL to false alarm of the GSR procedure can be computed easily.

Wenyu Du, Grigory Sokolov, Aleksey S. Polunchenko

Chapter 8. Adaptive Density Estimation from Data Containing Bounded Measurement Errors

We consider the problem of density estimation from noisy data containing small measurement errors. The only assumption on these errors is that the maximal measurement error is bounded by a sequence of real numbers converging to zero as the sample size tends to infinity. We estimate the density by a standard kernel density estimate applied to the noisy data and propose a data-dependent method for choosing its bandwidth. We derive an adaptation result for this estimate and analyze the expected L1 error of our density estimate depending on the smoothness of the density and the size of the maximal measurement error.

Tina Felber, Michael Kohler, Adam Krzyżak
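
The estimator studied here is the standard kernel density estimate applied to the perturbed observations; a minimal sketch (the fixed bandwidth and the uniform error model are illustrative assumptions, not the paper's data-dependent choice):

```python
import numpy as np

def kde(xq, data, h):
    """Standard Gaussian kernel density estimate at query points xq."""
    data = np.asarray(data, float)
    z = (np.asarray(xq, float)[:, None] - data[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (data.size * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(4)
clean = rng.normal(size=300)
noisy = clean + rng.uniform(-0.01, 0.01, clean.size)  # bounded measurement error
grid = np.linspace(-5.0, 5.0, 501)
dens = kde(grid, noisy, h=0.3)
```

Because the errors are uniformly small, the estimate built from `noisy` differs little from the one built from `clean`; the paper quantifies how the bandwidth choice should account for the error bound.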

Chapter 9. Poisson Model with Three Binary Predictors: When Are Saturated Designs Optimal?

In this paper, Poisson regression models with three binary predictors are considered. These models are applied to rule-based tasks in educational and psychological testing. To efficiently estimate the parameters of these models, locally D-optimal designs are derived. Eight out of all 70 possible saturated designs are proved to be locally D-optimal in the case of active effects. Two further saturated designs, the classical fractional factorial designs, turn out to be locally D-optimal for vanishing effects.

Ulrike Graßhoff, Heinz Holling, Rainer Schwabe

Chapter 10. Computing D-Optimal Experimental Designs for Estimating Treatment Contrasts Under the Presence of a Nuisance Time Trend

We prove a mathematical programming characterization of approximate partial D-optimality under general linear constraints. We use this characterization with a branch-and-bound method to compute a list of all exact D-optimal designs for estimating a pair of treatment contrasts in the presence of a nuisance time trend up to the size of 24 consecutive trials.

Radoslav Harman, Guillaume Sagnol

Chapter 11. Variable Inspection Plans for Continuous Populations with Unknown Short Tail Distributions

The ordinary variable inspection plans are sensitive to deviations from the normality assumption. A new variable inspection plan is constructed that can be used for arbitrary continuous populations with short-tail distributions. The peaks-over-threshold method is used: the tails are approximated by a generalized Pareto distribution, and their parameters and the fraction defective are estimated by a moment method proposed in a similar form by Smith and Weissman (J. R. Stat. Soc. B 47:285–298, 1985). The estimates of the fraction defective are asymptotically normal. It turns out that their asymptotic variances do not differ very much across the various distributions. Therefore we may fix the variance and use the known asymptotic distribution for the construction of the inspection plans. The sample sizes needed to satisfy the two-point conditions are much smaller than those for attribute plans.

Wolfgang Kössler

Chapter 12. Goodness-of-Approximation of Copulas by a Parametric Family

In the paper we introduce a measure of goodness of approximation based on the Cramér–von Mises statistic. In place of the unknown parameter of interest, a minimum-distance estimator of the parameter is plugged in. We prove asymptotic normality of this statistic and establish a goodness-of-approximation test.

Eckhard Liebscher

Chapter 13. Selection Consistency of Generalized Information Criterion for Sparse Logistic Model

We consider a selection rule for small-n-large-P logistic regression which consists in choosing the subset of predictors minimizing the Generalized Information Criterion over all subsets of variables of size not exceeding k. We establish consistency of such a rule under weak conditions and thus generalize the results of Chen and Chen (Biometrika 95:759–771, 2008) to a much broader regression scenario, which also allows for a more general criterion function than considered there and for k depending on the sample size. The results are valid for a number of predictors of exponential order in the sample size.

Jan Mielniczuk, Hubert Szymanowski

Chapter 14. Kernel Estimation of Wiener–Hammerstein System Nonlinearity

The paper addresses the problem of nonparametric estimation of the static characteristic in a Wiener–Hammerstein (sandwich) system excited and disturbed by random processes. Two kernel-based methods are presented and compared. The proposed estimates are consistent under a small amount of a priori information. IIR dynamics, a non-invertible static nonlinearity, and non-Gaussian excitations are admitted. The convergence of the estimates is proved for each continuity point of the static characteristic, and the asymptotic rate of convergence is analysed. The results of a computer simulation example illustrate the behaviour of the estimates for a moderate number of observations.

Grzegorz Mzyk

Chapter 15. Monitoring Changes in RCA Models

In the paper a sequential monitoring scheme is proposed to detect instability of parameters in a random coefficient autoregressive (RCA) time series model of general order p. A given set of historical stable observations is available that serves as a training sample. The proposed monitoring procedure is based on the quasi-likelihood scores and the quasi-maximum likelihood estimators of the respective parameters computed from the training sample, and it is designed so that the sequential test has a small probability of a false alarm and asymptotic power one when the size of the training sample is sufficiently large. The asymptotic distribution of the detector statistic is established both under the null hypothesis of no change and under the alternative that a change occurs.

Zuzana Prášková

Chapter 16. Detecting Changes in Spatial-Temporal Image Data Based on Quadratic Forms

We consider the problem of monitoring a sequence of images that may be affected by spatial as well as temporal dependencies. In order to detect a change, we consider a detector based on linear combinations of quadratic forms, which allows us to consider linear contrasts of subimages in terms of their average grey value. We derive the asymptotic distribution of the proposed detector and the underlying empirical processes under the no-change null hypothesis and under general alternatives.

Annabel Prause, Ansgar Steland

Chapter 17. Optimal Designs for Steady-State Kalman Filters

We consider a stationary discrete-time linear process that can be observed by a finite number of sensors. The experimental design for the observations consists of an allocation of available resources to these sensors. We formalize the problem of selecting a design that maximizes the information matrix of the steady state of the Kalman filter with respect to a standard optimality criterion, such as D- or A-optimality. This problem generalizes the optimal experimental design problem for a linear regression model with a finite design space and uncorrelated errors. Finally, we show that under natural assumptions, a steady-state optimal design can be computed by semidefinite programming.

Guillaume Sagnol, Radoslav Harman

Chapter 18. On the Impact of Correlation on the Optimality of Product-Type Designs in SUR Models

For multivariate observations with seemingly unrelated variables, product-type designs generated from their univariate optimal counterparts often turn out to be optimal. This is, in particular, the case when all variables contain an intercept term. If these intercepts are missing, the product-type designs may lose their optimality when the correlation between the components becomes stronger.

Moudar Soumaya, Rainer Schwabe

Chapter 19. On the Time-Reversibility of Integer-Valued Autoregressive Processes of General Order

Integer-valued autoregressive processes of a general order p ≥ 1 (INAR(p) processes) are considered, with focus on the time-reversibility of these processes. It is shown that for the case p = 1 the time-reversibility of such a process already implies that the innovations are Poisson distributed. For the case of a general p ≥ 2, the two competing formulations of the INAR(p) process due to Alzaid and Al-Osh (J. Appl. Prob. 27(2):314–324, 1990) and Du and Li (J. Time Ser. Anal. 12(2):129–142, 1991) are considered. While the INAR(p) process as defined by Alzaid and Al-Osh behaves analogously to the INAR(1) process, the INAR(p) process of Du and Li is shown to be time-irreversible in general.

Sebastian Schweer

Chapter 20. Change-Point Detection of the Mean Vector with Fewer Observations than the Dimension Using Instantaneous Normal Random Projections

Our aim in this paper is to propose a simple method for change-point detection in the mean vector when the number of samples (the historical data set) is smaller than the dimension. We restrict our attention to the problem of monitoring independent individual observations under a normality assumption. The presented approach is based on the Hotelling statistic, applied to the data set projected onto a randomly chosen subspace of sufficiently smaller dimension. We propose performing the normal random projection of the data (the historical data set and a new observation) instantaneously, as soon as a new observation appears. Next, we provide a model of the changes in the mean vector and derive the distribution of noncentrality parameter values. Further, a non-local power of the Hotelling test performed on the projected samples is defined, which serves as the criterion for selecting the dimensionality of the projection subspace. Finally, simulation results are provided.

Ewa Skubalska-Rafajłowicz
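
The core computation — draw a fresh Gaussian projection, project history and new observation, and evaluate the Hotelling statistic in the low-dimensional space — can be sketched as follows (the function name, the choice of k, and the simulated shift are illustrative assumptions):

```python
import numpy as np

def projected_hotelling(history, new_obs, k, rng):
    """Project d-dimensional data onto a random k-dimensional subspace
    (Gaussian projection drawn per observation) and compute the Hotelling
    statistic of the new observation against the projected history."""
    d = history.shape[1]
    R = rng.normal(size=(d, k))          # instantaneous normal projection
    Hp = history @ R
    diff = new_obs @ R - Hp.mean(axis=0)
    S = np.cov(Hp, rowvar=False)         # k x k, invertible since n > k
    return float(diff @ np.linalg.solve(S, diff))

rng = np.random.default_rng(5)
history = rng.normal(size=(60, 200))     # n = 60 observations, d = 200
t2_null = projected_hotelling(history, rng.normal(size=200), k=5, rng=rng)
t2_shift = projected_hotelling(history, rng.normal(size=200) + 3.0, k=5, rng=rng)
```

Note that the k × k projected covariance is invertible even though the 200 × 200 sample covariance of the raw data is singular, which is exactly what makes the approach workable when n < d.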

Chapter 21. On Some Distributed Disorder Detection

Multivariate data sources with components of different information value appear frequently in practice. Models in which the components change their homogeneity at different times are of significant importance. Whether any change is influential for the whole process is determined not only by the moment of the change but also by the coordinates at which it occurs. This is particularly important in issues such as the reliability analysis of complex systems and the localization of an intruder in surveillance systems. In this paper we develop a mathematical model for such sources of signals with discrete time, having the Markov property given the times of change. The research also comprises multivariate detection of changes in the transition probabilities at a certain sensitivity level in the multidimensional process. Each chosen coordinate of the observed random vector forms a Markov process with different transition probabilities before and after some unknown moment. The aim of the statistician is to estimate these moments based on observation of the process. A Bayesian approach is used, with a risk function depending on a measure of the chance of a false alarm and a cost of overestimation. The moment of the system's disorder is determined by the detection of transition probability changes at some coordinates. The overall modeling of the critical coordinates is based on a simple game.

Krzysztof Szajowski

Chapter 22. Changepoint Inference for Erdős–Rényi Random Graphs

We formulate a model for the off-line estimation of a changepoint in a network setting. The framework naturally allows the parameter space (network size) to grow with the number of observations. We compute the signal-to-noise ratio detectability threshold and establish the dependence of the rate of convergence and the asymptotic distribution on the network size and parameters. In addition, we show that inference can be adaptive, i.e. asymptotically correct confidence intervals can be computed based on the data. We apply the method to the question of whether the US Congress abruptly became more polarized at some point in recent history.

Elena Yudovina, Moulinath Banerjee, George Michailidis

Chapter 23. Quasi-maximum Likelihood Estimation of Periodic Autoregressive, Conditionally Heteroscedastic Time Series

We consider a general multivariate periodically stationary and ergodic causal time series model and prove consistency and asymptotic normality of its quasi-maximum likelihood (QML) estimator. Applications to the multivariate nonlinear periodic AR(∞)–ARCH(∞) process are shown.

Florian Ziel

Stochastic Models, Methods and Simulations

Frontmatter

Chapter 24. Mixture and Non-mixture Cure Rate Model Considering the Burr XII Distribution

This paper presents estimates for the parameters included in long-term mixture and non-mixture lifetime models, applied to analyze survival data when some individuals may never experience the event of interest. We consider the case where the lifetime data have a three-parameter Burr XII distribution, which includes the popular Weibull mixture model as a special case.

Emílio Augusto Coelho-Barros, Jorge Alberto Achcar, Josmar Mazucheli

Chapter 25. Obtaining Superior Wind Power Predictions from a Periodic and Heteroscedastic Wind Power Prediction Tool

The Wind Power Prediction Tool (WPPT) has successfully been used for accurate wind power forecasts in the short- to medium-term scenario (up to 12 hours ahead). Since its development about a decade ago, a lot of additional stochastic modeling has been applied to the interdependency of wind power and wind speed. We improve the model in three ways: First, we replace the rather simple Fourier series of the basic model by more general and flexible periodic basis splines (B-splines). Second, we model conditional heteroscedasticity by a threshold-GARCH (TGARCH) model, an aspect that is entirely left out by the underlying model. Third, we evaluate several distributional forms of the model's error term: while the original WPPT assumes Gaussian errors only, we also investigate whether the errors may follow a Student's t-distribution or a skew t-distribution. In this article we show that our periodic WPPT-CH model improves forecast accuracy significantly when compared to the plain WPPT model.

Daniel Ambach, Carsten Croonenbroeck

Chapter 26. Stochastic Dynamics of G-Protein-Coupled Cell-Surface Receptors

The field of bio-medicine has seen an immense increase in single-particle tracking techniques and experimental results. We analyze here data obtained from the experiment described by D. Calebiro et al. (Proc. Natl. Acad. Sci. 110:743–748, 2013) on the motion of fluorescently labeled G-protein-coupled cell-surface receptors. Our study revealed that some proteins' trajectories do not have Gaussian increments, and we tried to determine the distribution of such increments. By using various techniques, such as p-variation analysis (Burnecki and Weron, Phys. Rev. E 82:021130, 2010; Magdziarz et al., Phys. Rev. Lett. 103:180602, 2009), dynamical functional analysis (Burnecki et al., Biophys. J. 103:1839–1847, 2012; Magdziarz and Weron, Ann. Phys. 326:2431–2443, 2011; Magdziarz and Weron, Phys. Rev. E 84:051138, 2011), and MSD analysis (Burnecki and Weron, Phys. Rev. E 82:021130, 2010; Burnecki et al., Biophys. J. 103:1839–1847, 2012; Burnecki et al., Phys. Rev. E 86:041912, 2012), we attempt to narrow down possible models of the particles in this biological system. For further methods used in the analysis (and their description) that are not included in this paper, see Burnecki and Weron (J. Stat. Mech., 2014, to appear).

Michał Balcerek, Aleksander Weron

Chapter 27. Novel Methodology of Change-Points Detection for Time Series with Arbitrary Generating Mechanisms

A novel approach to the change-point detection problem is proposed. It is based on the concept of the ε-complexity of continuous functions, introduced recently by the authors, and on non-parametric change-point detection methodology. We show that, for a function satisfying a Hölder condition, the ε-complexity can be characterized by a pair of real numbers, called here the ε-complexity coefficients. These coefficients are used as diagnostic sequences to detect changes in the generating mechanism. The proposed methodology is model-free and does not depend on the data generating mechanism. The results of simulations and an application to stock market data demonstrate the efficiency of the proposed methodology.

Boris Darkhovsky, Alexandra Piryatinska

Chapter 28. Self-concordant Profile Empirical Likelihood Ratio Tests for the Population Correlation Coefficient: A Simulation Study

We present results of a simulation study regarding the finite-sample type I error behavior of the self-concordant profile empirical likelihood ratio (ELR) test for the population correlation coefficient. Three different families of bivariate elliptical distributions are taken into account. Uniformly over all considered models and parameter configurations, the self-concordant profile ELR test does not keep the significance level for finite sample sizes, although the level exceedance decreases monotonically to zero as the sample size increases. We discuss some potential modifications to address this problem.

Thorsten Dickhaus

Chapter 29. Risk-Averse Equilibrium Modeling and Social Optimality of Cap-and-Trade Mechanisms

We present and explore a link between social optimality and risk-neutral dynamics satisfied in the equilibrium of emission markets. Our contribution addresses market modeling in the setting of risk-averse market players and goes beyond all existing models in this field, which neglect risk-aversion aspects at the cost of having a wide range of singularities.

Paolo Falbo, Juri Hinz, Cristian Pelizzari

Chapter 30. Simultaneous Surveillance of Means and Covariances of Spatial Models

This paper deals with the problem of statistical process control applied to multivariate spatial models. After introducing the target process, which coincides with spatial white noise, we concentrate on the out-of-control behavior, taking into account changes in both means and covariances. We propose conventional multivariate control charts, based either on exponential smoothing or on cumulative sums, to monitor means and covariances simultaneously. The proposed control schemes are calibrated via Monte Carlo simulation, and their out-of-control behavior is studied for specific mean shifts and scale transformations.

Robert Garthoff, Philipp Otto
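
A multivariate EWMA (exponential smoothing) chart for a standardized target process can be sketched in a few lines. The function name, the smoothing constant, and the N(0, I) target are illustrative assumptions, not the chapter's exact schemes:

```python
import numpy as np

def mewma_statistics(data, lam=0.1):
    """Multivariate EWMA chart statistics for an in-control N(0, I) target:
    z_t = lam * x_t + (1 - lam) * z_{t-1}, T_t = z_t' z_t / Var(z_t)."""
    z = np.zeros(data.shape[1])
    out = []
    for t, x in enumerate(data, start=1):
        z = lam * x + (1.0 - lam) * z
        c = lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * t))  # exact variance
        out.append(float(z @ z) / c)
    return np.array(out)

rng = np.random.default_rng(6)
in_control = rng.normal(size=(100, 3))        # spatial white noise target
stats = mewma_statistics(in_control)
```

In practice the control limit against which `stats` is compared is calibrated by Monte Carlo simulation to achieve a prescribed in-control average run length, as described in the chapter.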

Chapter 31. Risk Modelling of Energy Futures: A Comparison of RiskMetrics, Historical Simulation, Filtered Historical Simulation, and Quantile Regression

Prices of energy commodity futures often display high volatility and changes in return distribution over time, making accurate risk modelling both important and challenging. Non-complex risk measuring methods that work quite well for financial assets perform worse when applied to energy commodities. More advanced approaches have been developed to deal with these issues, but they are either too complex for practitioners or do not perform consistently, working for one commodity but not for another. The goal of this paper is to examine, from the viewpoint of a European energy practitioner, whether some methods of low estimation complexity for calculating Value-at-Risk can be found that provide consistent results for different energy commodity futures. We compare RiskMetrics™, historical simulation, filtered historical simulation and quantile regression applied to crude oil, gas oil, natural gas, coal, carbon and electricity futures.

We find that historical simulation filtered with an exponentially weighted moving average (EWMA) for recent trends and volatility performs best and most consistently among the commodities in this paper.

Kai Erik Dahlen, Ronald Huisman, Sjur Westgaard
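
A one-day filtered historical simulation VaR with an EWMA volatility filter can be sketched as follows (the function name, the RiskMetrics-style decay factor 0.94, and the simulated heavy-tailed returns are illustrative assumptions, not the paper's exact specification):

```python
import numpy as np

def fhs_var(returns, alpha=0.99, lam=0.94):
    """One-day filtered historical simulation VaR: devolatilize returns by
    an EWMA variance estimate, take the empirical lower-tail quantile of
    the standardized returns, and rescale by the next-day volatility."""
    r = np.asarray(returns, float)
    v = np.empty(r.size)
    v[0] = r.var()
    for t in range(1, r.size):
        v[t] = lam * v[t - 1] + (1.0 - lam) * r[t - 1] ** 2
    z = r / np.sqrt(v)                           # standardized residuals
    v_next = lam * v[-1] + (1.0 - lam) * r[-1] ** 2
    return float(-np.quantile(z, 1.0 - alpha) * np.sqrt(v_next))

rng = np.random.default_rng(7)
rets = 0.02 * rng.standard_t(df=5, size=1000)    # heavy-tailed futures returns
var99 = fhs_var(rets)                            # positive loss threshold
```

The filtering step is what lets the empirical quantile adapt to current volatility, which is the property the paper finds decisive for energy commodities.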

Chapter 32. Periodic Models for Hydrological Storage Reservoir Levels. Case Study of New Zealand

Many electricity markets across the world are strongly hydro-generation-dependent, and the ability to predict hydrological storage levels is of key importance in generation planning and risk management. The purpose of this work is to introduce models reproducing the periodic and irregular behavior of reservoir levels in New Zealand. The case study covers the period from January 2002 until July 2008. Two approaches are proposed here, namely a continuous-time random walk with periodic probability of jumps and a periodic autoregressive model. Results show that both models are capable of reproducing statistical features of the original data and provide a supporting tool for market analysts and generation planners.

Matylda Jabłońska-Sabuka, Agnieszka Wyłomańska

Chapter 33. Dynamic Price Linkage and Volatility Structure Model Between Carbon Markets

This paper investigates the dynamic price linkage and volatility structure between the two leading carbon markets of EU allowances (EUA) and secondary certified emission reductions (sCER). We propose a correlation model for EUA and sCER price returns using the marginal abatement cost (MAC) curve and the emission reduction volume. The model reflects two market observations: financial players' EUA–sCER swap transactions in carbon price boom periods, and stronger energy price impacts on EUA prices than on sCER prices. The model demonstrates that the volatilities are affected by the MAC curve shape and the emission reduction volume, while the correlations are unaffected by the MAC curve shape and driven by the emission reduction behavior. The model also suggests that the EUA–sCER price correlations increase when swap transactions increase or when energy prices fall, which translate into opposite EUA price movements, i.e. an EUA price rise or fall, respectively.

Takashi Kanamura

Chapter 34. Combining Time Series Forecasting Methods for Internet Traffic

The aim of this work is to explore whether forecasts from individual forecasting models can be improved with the use of combination rules. Working with Internet traffic data, we first use FARIMA, FARIMA with Student-t innovations, and Artificial Neural Networks as individual forecasting models, since each of them explains some statistical characteristic of our data, and next we combine the forecasts using three different combination rules. Based on our experimental work, simple combination rules may improve on individual models. Finally, we consider a scheme where the selection of the model is based on White's Neural Network test for non-linearity and compare it with the results from the combination of forecasts.

C. Katris, S. Daskalaki

Chapter 35. Stochastic Model of Cognitive Agents Learning to Cross a Highway

We describe a stochastic model of simple cognitive agents ("creatures") learning to cross a highway. The creatures are capable of experiencing fear and/or desire to cross, and they use an observational learning mechanism. Our simulation results are consistent with real-life observations and are affected by the creatures' fears and desires and the conditions of the environment. The transfer of the knowledge base acquired by creatures in one environment to creatures operating in another one improves the creatures' success in crossing the highway.

Anna T. Lawniczak, Bruno N. Di Stefano, Jason B. Ernst

Chapter 36. Threshold Models for Integer-Valued Time Series with Infinite or Finite Range

Threshold models are very popular in research and applications. We survey threshold models for integer-valued time series with an infinite range and compare two of them in a real data example. In addition, we propose and briefly discuss two new models for count data time series with a finite range.

Tobias Möller, Christian H. Weiß

Chapter 37. A Study on Robustness in the Optimal Design of Experiments for Copula Models

Copulas are a very flexible tool for highlighting structural properties of a design for a wide range of dependence structures. In this work we introduce a procedure for checking the robustness of the D-optimal design with respect to slight changes of the marginal distributions in the case of copula models. To this end, we first provide a clear insight into the concept of "robustness" in our domain. Then, we define a stepwise method for investigating design robustness. Finally, by reporting an example focused on the comparison between logistic margins and Gaussian margins, we highlight the usefulness of the analysis.

Elisa Perrone

Chapter 38. Use of a Generalized Multivariate Gamma Distribution Based on Copula Functions in the Average Bioequivalence

Bioequivalence studies are generally used to compare a test formulation with a reference one, in order to validate their interchangeability. Some pharmacokinetic (PK) parameters are compared in this type of study, typically using a model which assumes independence among PK parameters, the same variance for the different formulations, a logarithmic transformation of the data, and a normal distribution for the residuals. We propose an alternative model based on a generalized gamma distribution, which permits positive asymmetry in the data and possible differences between the variances of the different formulations, and could therefore offer more flexibility in this case. For the multivariate structure, we use a Gaussian copula function to capture the possible dependence between the PK parameters. We use Bayesian inference methods to obtain the results of interest. We also present a real data example for which we observe a good fit of the proposed model to the dataset. From this study, we conclude that the proposed model can be a good alternative in applications where the bioequivalence data present a positively asymmetric distribution.

Roberto Molina de Souza, Jorge Alberto Achcar, Edson Zangiacomi Martinez, Josmar Mazucheli
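The copula construction described in the chapter above can be illustrated with a minimal sketch: a Gaussian copula joining two positively skewed margins. Here exponential(1) margins stand in for the chapter's generalized gamma margins (only the inverse-CDF step would differ); the function name and parameters are our own illustrative choices, not the authors' model.

```python
import numpy as np
from math import erf, sqrt

def gaussian_copula_sample(rho, n, rng=0):
    """Draw n pairs with Gaussian-copula dependence (correlation rho)
    and exponential(1) margins -- a stand-in for generalized gamma margins,
    which would only change the quantile-transform step below."""
    rng = np.random.default_rng(rng)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    u = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2)))  # standard normal CDF
    return -np.log1p(-u)  # exponential(1) quantile transform

x = gaussian_copula_sample(rho=0.8, n=50_000)
# Both margins are exponential(1) (mean close to 1), yet the two PK-like
# parameters remain strongly positively dependent through the copula.
print(np.round(x.mean(axis=0), 2), np.corrcoef(x.T)[0, 1] > 0.5)
```

The same skeleton extends to more than two PK parameters by enlarging the correlation matrix of the latent Gaussian vector.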

Chapter 39. The Marginal Distribution of Compound Poisson INAR(1) Processes

A compound Poisson distribution is a natural choice for the innovations of an INAR(1) model. If the support of the compounding distribution is finite (Hermite-type distributions), the observations’ marginal distribution belongs to the same family and it can be computed exactly. In the infinite case, however, which includes the popular choice of negative binomial innovations, this is not so simple. We propose two types of Hermite approximations for this case and investigate their quality in a numerical study.

Christian H. Weiß, Pedro Puig
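The INAR(1) recursion underlying the chapter above can be sketched directly: X_t = α ∘ X_{t−1} + ε_t, where "∘" is binomial thinning. The sketch below uses plain Poisson innovations (the simplest compound Poisson case), for which the stationary marginal is known exactly, namely Poisson(λ/(1−α)); parameter names are our own.

```python
import numpy as np

def simulate_inar1(alpha, lam, n, rng=None, burn=500):
    """Simulate an INAR(1) process X_t = alpha o X_{t-1} + eps_t,
    where 'o' is binomial thinning and eps_t ~ Poisson(lam)."""
    rng = np.random.default_rng(rng)
    x = np.empty(n + burn, dtype=int)
    x[0] = rng.poisson(lam / (1.0 - alpha))  # start near the stationary mean
    for t in range(1, n + burn):
        survivors = rng.binomial(x[t - 1], alpha)  # binomial thinning
        x[t] = survivors + rng.poisson(lam)        # add innovations
    return x[burn:]

x = simulate_inar1(alpha=0.5, lam=2.0, n=200_000, rng=1)
# For Poisson innovations the stationary marginal is Poisson(lam/(1-alpha)),
# so the sample mean should be close to 2.0 / (1 - 0.5) = 4.0.
print(round(float(x.mean()), 2))
```

Replacing `rng.poisson(lam)` by a compound Poisson draw gives the general innovation structure studied in the chapter.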

Algorithms and Applications

Frontmatter

Chapter 40. Monitoring Euro Area Real Exchange Rates

We apply the stationarity and cointegration monitoring procedure of Wagner and Wied (Monitoring stationarity and cointegration. SFB823 Discussion Paper 23/14, http://hdl.handle.net/2003/33430, 2014) to monthly real exchange rate (RER) indices, vis-à-vis Germany, of the first round Euro area member states. For all countries except Portugal, structural breaks are detected prior to the onset of the Euro area crisis, which was triggered in turn by the global financial crisis. The results indicate that a more detailed investigation of RER behavior in the Euro area may be useful for understanding the unfolding of the deep crisis currently plaguing many countries in the Euro area.

Philipp Aschersleben, Martin Wagner, Dominik Wied

Chapter 41. Approximating Markov Chains for Bootstrapping and Simulation

In this work we develop a bootstrap method based on the theory of Markov chains. The method is motivated by the two competing objectives that a researcher pursues when performing a bootstrap procedure: (i) to preserve the structural similarity – in the statistical sense – between the original and the bootstrapped sample; (ii) to ensure diversification of the latter with respect to the former. The original sample is assumed to be driven by a Markov chain. Our approach is to formulate an optimization problem that estimates the memory of the Markov chain (i.e. its order) and identifies its relevant states. The basic ingredients of the model are the transition probabilities, whose distance is measured through a suitably defined functional. We apply the method to the series of electricity prices in Spain. A comparison with the Variable Length Markov Chain bootstrap, a well-established bootstrap method, shows the superiority of our proposal in reproducing the dependence among the data.

Roy Cerqueti, Paolo Falbo, Gianfranco Guastaroba, Cristian Pelizzari
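The basic idea behind a Markov-chain bootstrap can be sketched in a few lines: estimate the transition probabilities from the observed sequence, then simulate a new path from the estimated chain. The sketch below is a plain first-order version, not the chapter's order-estimation and state-aggregation procedure; function and variable names are our own.

```python
import numpy as np

def markov_bootstrap(seq, n_boot, rng=None):
    """First-order Markov-chain bootstrap sketch: estimate the transition
    matrix from `seq`, then simulate a bootstrap path of length n_boot."""
    rng = np.random.default_rng(rng)
    states = sorted(set(seq))
    idx = {s: i for i, s in enumerate(states)}
    k = len(states)
    counts = np.zeros((k, k))
    for a, b in zip(seq[:-1], seq[1:]):   # count observed transitions
        counts[idx[a], idx[b]] += 1
    rows = counts.sum(axis=1, keepdims=True)
    # Row-normalise to probabilities; fall back to uniform for unseen states.
    P = np.where(rows > 0, counts / np.maximum(rows, 1), 1.0 / k)
    path = [rng.choice(k)]
    for _ in range(n_boot - 1):
        path.append(rng.choice(k, p=P[path[-1]]))
    return [states[i] for i in path]

boot = markov_bootstrap(["lo", "lo", "hi", "lo", "hi", "hi", "lo"],
                        n_boot=50, rng=0)
print(len(boot), set(boot) <= {"lo", "hi"})
```

Resampling from estimated transition probabilities preserves first-order dependence while still diversifying the bootstrap sample, which is exactly the trade-off the chapter formalizes.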

Chapter 42. Statistical Method to Estimate a Regime-Switching Lévy Model

A regime-switching Lévy model combines jump-diffusion under the form of a Lévy process, and Markov regime-switching where all parameters depend on the value of a continuous time Markov chain. We start by giving general stochastic results. Estimation is performed following a two-step procedure. The EM-algorithm is extended to this new class of jump-diffusion regime-switching models. An empirical application is dedicated to the study of Asian equity markets.

Julien Chevallier, Stéphane Goutte

Chapter 43. Wavelet Algorithm for Hierarchical Pattern Recognition

The idea presented in this article combines a hierarchical classifier with a multiresolution representation of signals in Daubechies wavelet bases. The paper concerns multi-class recognition of random signals and presents a multistage classifier with a hierarchical tree structure based on a multiscale wavelet representation. Classes are hierarchically grouped into macro-classes, and the established aggregation defines a decision tree. In each macro-class, the existence of a deterministic signal pattern is assumed. A global loss function with a reject option is proposed for the multistage classifier, and two strategies for the choice of the loss function parameters are discussed. A risk analysis is performed for a local (binary) attraction-limited minimum distance classifier applied to wavelet approximations of signals. This leads to an upper estimate of the risk, called the guaranteed risk. Its value depends on several parameters, such as the wavelet scale of the signal representation, the support length of the wavelet function, and the variance of the random noise in the macro-class. Finally, the guaranteed risk of the multistage classifier is derived.

Urszula Libal, Zygmunt Hasiewicz

Chapter 44. Risk of Selection of Irrelevant Features from High-Dimensional Data with Small Sample Size

In this work we demonstrate the effect of small sample size on the risk that feature selection algorithms will select irrelevant features when dealing with high-dimensional data. We develop a simple analytical model to quantify this risk and verify it by means of simulation. These results (i) explain the inherent instability of feature selection from high-dimensional, small-sample data and (ii) can be used to estimate the minimum sample size required for stable feature selection. Such results are useful when dealing with data from high-throughput studies.

Henryk Maciejewski
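The phenomenon studied in the chapter above is easy to reproduce in a toy simulation: with many candidate features and few samples, the single best *irrelevant* feature can look strongly associated with a random label purely by chance. The sketch below is our own illustration, not the paper's analytical model; all names and parameter values are hypothetical.

```python
import numpy as np

def max_spurious_corr(n_samples, n_features, reps=100, rng=0):
    """Average, over `reps` draws, of the largest absolute Pearson
    correlation between purely random features and a random binary label.
    Large values indicate a high risk of selecting irrelevant features."""
    rng = np.random.default_rng(rng)
    best = []
    for _ in range(reps):
        X = rng.standard_normal((n_samples, n_features))   # irrelevant features
        y = rng.choice([-1.0, 1.0], size=n_samples)        # random labels
        # Absolute Pearson correlation of each feature with the label.
        r = np.abs((X - X.mean(0)).T @ (y - y.mean())) / (
            n_samples * X.std(0) * y.std() + 1e-12)
        best.append(r.max())
    return float(np.mean(best))

small = max_spurious_corr(n_samples=20, n_features=5000)
large = max_spurious_corr(n_samples=200, n_features=5000)
# The spurious "relevance" of the best noise feature shrinks as n grows.
print(small > large)
```

Under the null, each correlation has standard deviation of roughly 1/sqrt(n), so the maximum over thousands of features is large when n is small, which is exactly the instability mechanism the chapter quantifies.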

Chapter 45. Fundamental and Speculative Shocks – Structural Analysis of Electricity Market

In this paper, Structural Vector Autoregressive (SVAR) models are used to analyze the effects of structural shocks on electricity prices in the UK. The shocks are identified via short-run restrictions imposed on the matrix of instantaneous effects. Two main types of shocks are considered: fundamental shocks, identified as demand and wind generation shocks, and speculative shocks, which are associated solely with electricity prices. The results indicate that speculative shocks play an important role in the price setting process and account for more than 90% of the unexpected electricity price variability. Moreover, wind generation shocks contribute more to the electricity price variance than demand shocks, particularly during peak hours.

Katarzyna Maciejowska

Chapter 46. Decentralized Time-Constrained Scheduling for Sensor Network in Identification of Distributed Parameter Systems

An efficient approach is proposed to determine an activation policy for a scanning sensor network monitoring a distributed process over a spatial domain. The scheduling problem is defined so as to maximize a criterion defined on the Fisher information matrix associated with the estimated parameters. Then, adopting pairwise communication schemes, a multi-exchange procedure is developed, which distributes the configuration process among the network nodes and takes power consumption constraints into account. The approach is illustrated with an example of a sensor network scheduling problem for a convective diffusion process.

Maciej Patan, Adam Romanek

Chapter 47. Least Squares Estimators of Peptide Species Concentrations Based on Gaussian Mixture Decompositions of Protein Mass Spectra

In this paper we propose to use Gaussian mixture decompositions of protein mass spectral signals to construct least squares estimators of peptide species concentrations in proteomic samples, and further to use these estimators as spectral features in cancer-versus-normal spectral classifiers. For a real dataset we compare the variances of the least squares estimators to the variances of analogous estimators based on spectral peaks. We also evaluate the performance of spectral classifiers with features defined by either least squares estimators or spectral peaks, measured by their power to differentiate between patterns specific to case and control samples of head and neck cancer patients. Cancer/normal classifiers based on spectral features defined by Gaussian components achieved lower average error rates than classifiers based on spectral peaks.

Andrzej Polanski, Michal Marczyk, Monika Pietrowska, Piotr Widlak, Joanna Polanska

Chapter 48. Detection of Essential Changes in Spatio-Temporal Processes with Applications to Camera Based Quality Control

Our aim in this paper is to propose a simple detector of changes in time that is well suited for parallel use at a large number of spatial sites. Our main motivation is change detection in sequences of images used for quality control of continuously running industrial processes.

Ewaryst Rafajłowicz

Chapter 49. The Impact of Renewables on Electricity Prices and Congestion in a Regime Switching Model: Evidence from the Italian Grid

In this paper, the cross-zonal impact of renewable energy (RE) on electricity prices is assessed by means of a time-varying regime switching model, focusing on the highly congested line connecting Sicily with the Italian peninsula.

In the base regime, there is no congestion and the price in Sicily (which equals the system marginal price) depends on national electricity demand and RE supply. In the congested regime, the Sicilian price depends on the local electricity demand and RE supply, as well as on market power by local generators. The transition between regimes is modeled through a dynamic probit, including, as explanatory variables, the RE supply on both sides of the potentially congested line.

The regime switching model is estimated using hourly data from the Italian day-ahead electricity market for the year 2012. As the results show, congestion is determined by the total amount of renewables in mainland Italy; however, when the RE supply is disaggregated into different sources, one finds that congestion is mainly due to photovoltaics (from the peninsula) and hydropower (wherever located), whereas wind power has a negative effect on congestion regardless of its location.

Alessandro Sapio

Chapter 50. On Hammerstein System Nonlinearity Identification Algorithms Based on Order Statistics and Compactly Supported Functions

Nonparametric algorithms recovering the nonlinearity in Hammerstein systems are examined. The algorithms are based on ordered measurements and on compactly supported functions. The contribution of the note is that the probability density function of the input signal need not be strictly bounded away from zero but can vanish at a finite number of points. In this setting, convergence is established for nonlinearities that are piecewise-Lipschitz functions. It is also verified that for p times locally differentiable nonlinearities, the algorithms attain the convergence rate O(n^{−2p/(2p+1)}), the best possible nonparametric rate. Notably, the rate is not worsened by irregularities of the input probability density function.

Przemysław Śliwiński, Paweł Wachel, Zygmunt Hasiewicz

Chapter 51. An Algorithm for Construction of Constrained D-Optimum Designs

A computational algorithm is proposed for determinant maximization over the set of all convex combinations of a finite number of nonnegative definite matrices subject to additional box constraints on the weights of those combinations. The underlying idea is to apply a simplicial decomposition algorithm in which the restricted master problem reduces to an uncomplicated multiplicative weight optimization algorithm.

Dariusz Uciński
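The multiplicative weight optimization step mentioned in the chapter above has a well-known classical form for the unconstrained (simplex-only) D-optimality problem: repeatedly rescale each weight by tr(M⁻¹Aᵢ)/m, where M(w) = Σᵢ wᵢAᵢ. The sketch below implements only this classical update, not the chapter's box-constrained simplicial decomposition algorithm; names and the toy example are our own.

```python
import numpy as np

def d_optimal_weights(A, iters=500):
    """Multiplicative update maximizing det(sum_i w_i A_i) over the simplex.
    A: array of shape (N, m, m) of nonnegative definite information matrices."""
    N, m, _ = A.shape
    w = np.full(N, 1.0 / N)
    for _ in range(iters):
        M = np.einsum("i,ijk->jk", w, A)          # M(w) = sum_i w_i A_i
        Minv = np.linalg.inv(M)
        g = np.einsum("jk,ikj->i", Minv, A)       # tr(M^{-1} A_i) for each i
        w = w * g / m                             # multiplicative update
        w = w / w.sum()                           # guard against rounding drift
    return w

# Toy example: rank-one information matrices x x^T for a line fit y = b0 + b1 x
# at candidate points x in {-1, 0, 1}; the D-optimum puts mass on the endpoints.
xs = np.array([[1.0, -1.0], [1.0, 0.0], [1.0, 1.0]])
A = np.einsum("ij,ik->ijk", xs, xs)
w = d_optimal_weights(A)
print(np.round(w, 3))
```

The update increases the determinant monotonically and drives the weights of non-support points to zero, which is why the restricted master problem in the chapter can rely on such an uncomplicated inner routine.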

Chapter 52. The Analysis of Stochastic Signal from LHD Mining Machine

In this paper a novel procedure for analyzing the temperature signal of an LHD (Load-Haul-Dump) mining machine is proposed. The procedure segments the signal and decomposes it into trend and residuals. A technique for further decomposition of the residuals is then proposed, and a stochastic analysis is applied, based on ARMA (autoregressive moving average) models with Gaussian and strictly stable distributions. The different nature of the extracted sub-signals makes them suitable for both condition monitoring and process monitoring purposes, and appropriate processing techniques reveal specific characteristics of the acquired data. We present the basic theory behind the applied methodology as well as a practical example obtained by applying the proposed techniques.

Agnieszka Wyłomańska, Radosław Zimroz

Chapter 53. Evaluating the Performance of VaR Models in Energy Markets

We analyze the relative performance of 13 VaR models using daily returns of WTI, Brent, natural gas and heating oil one-month futures contracts. After obtaining VaR estimates we evaluate the statistical significance of the differences in performance of the analyzed VaR models. We employ the simulation-based methodology proposed by Žiković and Filer (Czech J Econ Finan 63(4):327–359, 2013), which allows us to rank competing VaR models. Somewhat surprisingly, the obtained results indicate that for a large number of different VaR models there is no statistical difference in their performance, as measured by the Lopez size adjusted score. However, filtered historical simulation (FHS) and the BRW model stand out as robust and consistent approaches that – in most cases – significantly outperform the remaining VaR models.

Saša Žiković, Rafał Weron, Ivana Tomas Žiković
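As a point of reference for the model comparison in the chapter above, the simplest member of this family, plain historical simulation, can be sketched in a few lines: the VaR forecast is the empirical α-quantile of losses over a rolling window. This is only a basic benchmark on synthetic data, not the FHS or BRW models the chapter evaluates; names and parameters are our own.

```python
import numpy as np

def hs_var(returns, alpha=0.99, window=250):
    """Rolling historical-simulation VaR: for each day after the initial
    window, the empirical alpha-quantile of losses over the past `window`
    observed returns."""
    losses = -np.asarray(returns)
    return np.array([np.quantile(losses[t - window:t], alpha)
                     for t in range(window, len(losses))])

rng = np.random.default_rng(0)
r = rng.standard_normal(1500) * 0.02       # synthetic daily returns
v = hs_var(r, alpha=0.99)
exceedances = float((-r[250:] > v).mean()) # should be near 1 - alpha = 1%
print(round(exceedances, 3))
```

Backtests such as the Lopez size-adjusted score used in the chapter then compare the realized exceedance frequency and magnitudes against the nominal 1 − α level.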

Backmatter
