Published in: Mathematics and Financial Economics 2/2016

Open Access 01.03.2016

Strong asymptotic arbitrage in the large fractional binary market

Authors: Fernando Cordero, Lavinia Perez-Ostafe



Abstract

We study, from the perspective of large financial markets, the asymptotic arbitrage (AA) opportunities in a sequence of binary markets approximating the fractional Black–Scholes model. This approximating sequence was introduced by Sottinen and named fractional binary market. The large financial market under consideration does not satisfy the standard assumptions of the theory of AA. For this reason, we follow a constructive approach to show first that a strong AA (SAA) exists in the frictionless case. Indeed, with the help of an appropriate version of the law of large numbers and a stopping time procedure, we construct a sequence of self-financing trading strategies leading to the desired result. Next, we introduce, in each small market, proportional transaction costs, and we show that a slight modification of the previous trading strategies leads to a SAA when the transaction costs converge fast enough to 0.

1 Introduction

Empirical studies of financial time series indicate that the statistical dependence of the log-return increments decays slowly with the passage of time (see [5, 22]). This property is known as long-range dependence. A good example of a market model exhibiting this behaviour is the fractional Black–Scholes model, where the randomness of the risky asset is described by a fractional Brownian motion with Hurst parameter \(H>1/2.\) Since the fractional Brownian motion fails to be a semimartingale (see [4, 18, 20, 21]), this model allows for a free lunch with vanishing risk (see [10]). This problem can be solved either by regularizing the paths of the fractional Brownian motion (see [3]), or by introducing transaction costs in the model (see [11]).
A sequence of binary markets approximating the fractional Black–Scholes model was introduced by Sottinen [23] and called fractional binary markets. According to [23, Theorem 5.3], the markets in this sequence also allow for arbitrage opportunities, which persist even under sufficiently small transaction costs (see [7]). Moreover, in [6] it is shown that the smallest transaction cost, \(\lambda _{c}^{N},\) needed to eliminate the arbitrage in the N-period fractional binary market (called N-fractional binary market) is asymptotically close to 1. The latter result contrasts with the fact that the fractional Black–Scholes model is free of arbitrage under arbitrarily small transaction costs. This is not a true contradiction, since the arbitrage strategies constructed in [6] provide profits with probabilities vanishing in the limit. As explained in [6], a more appropriate way to compare the arbitrage opportunities in the sequence of fractional binary markets with the arbitrage opportunities in the fractional Black–Scholes market is to study the problem for the former from the perspective of the large financial markets.
The notion of large financial market was introduced by Kabanov and Kramkov [12] as a sequence of ordinary security market models. A suitable property for such kind of markets is the absence of asymptotic arbitrage (AA) opportunities. In the frictionless case, a standard assumption is that each small market is free of arbitrage. If, in addition, the small markets are complete, then the absence of AA is related to some contiguity properties of the sequence of equivalent martingale measures (see [12]). These results are extended to incomplete markets by Klein and Schachermayer [15, 16] and by Kabanov and Kramkov [13]. When frictions are introduced, the standard assumption is that each small market is free of arbitrage under arbitrarily small transaction costs. In this context, characterizations of the absence of AA, similar to those in the frictionless case, can be found in [17].
In this paper, we consider the large financial market given by the sequence of N-fractional binary markets, and we call it large fractional binary market. We point out that this is a non-standard large financial market, since the markets in the sequence admit arbitrage under transaction costs. For this reason, in order to study its AA opportunities, we follow a constructive approach. A first step in this direction was taken in [6], where the authors study the existence of AA of the first kind (AA1) and of the second kind (AA2) under the restriction of using only 1-step self-financing strategies. In this respect, the existence of a 1-step AA1 in the large fractional binary market has been shown when the transaction costs satisfy \(\lambda _{N}=o(1/N^{H}).\) If, instead, \(\lambda _{N}\sqrt{N}\) converges to infinity, then no 1-step AA of any kind appears in the model. Moreover, when the Hurst parameter H is chosen close enough to 1/2, there is no 1-step AA2 even in the frictionless case.
In the present work, using more general self-financing trading strategies, we aim to construct, for an appropriate sequence of transaction costs, a strong AA (SAA), i.e., the possibility of getting arbitrarily rich with probability arbitrarily close to one while taking a vanishing risk. This problem can be viewed as a continuation of the study of AA initiated in [6], in the sense that our trading strategies are chosen beyond the 1-step setting of [6]. Moreover, this form of AA is stronger than both AA1 and AA2, and it is obtained for any Hurst parameter \(H>1/2.\)
First, in the frictionless case, we construct a candidate sequence of self-financing strategies, and we express the value process of the portfolio as a sum of dependent random variables. Due to this dependency, special versions of the law of large numbers are needed in order to conclude on the asymptotic behaviour of the value process at maturity. More precisely, with the help of a law of large numbers for mixingales (see [1]), we prove that our strategies provide a strictly positive profit with probability arbitrarily close to one. Next, we stop the self-financing strategies at the first time the admissibility condition fails to hold. The resulting sequence of trading strategies paves the way to a SAA. When transaction costs are taken into account, we show, following a similar argument, the existence of a SAA when the transaction costs are of order \(o(\sqrt{\ln {N}}/N^{(2H-1/4)\wedge (H+1/2)}).\) In direct comparison with the results of [6], one can observe that, although 1-step self-financing trading strategies allow a better rate of convergence of the transaction costs leading to an AA1, such strategies do not permit us to obtain an AA2.
We emphasize that the methods presented in this work are not restricted to the chosen large financial market. On the contrary, since in a discrete-time setting the value process can be written as a sum of random variables, we believe that these techniques may also be applicable to other examples of discrete large financial markets. This is indeed the case whenever an appropriate law of large numbers and a maximal inequality for the value process are available, in a similar manner as in our results.
The paper is structured as follows. In Sect. 2 we introduce the framework of our results, starting with the definition of a fractional binary market. We end this part with a short presentation of the concept of SAA. In Sect. 3 we state the main results: Theorem 3.1 for the frictionless case and Theorem 3.2 for the case with frictions. Sections 4 and 5 are devoted to the proofs of Theorems 3.1 and 3.2, respectively. We end the paper with Appendices 1–3 providing some technical results and definitions used throughout the paper.

2 Preliminaries

2.1 Fractional binary markets

In this section, we briefly recall the so-called fractional binary markets, which were defined by Sottinen [23] as a sequence of discrete markets approximating the fractional Black–Scholes model.
First, we introduce the fractional Black–Scholes model. This continuous market takes the same form as the classical Black–Scholes model with the difference that the randomness of the risky asset is described by a fractional Brownian motion and not by a standard Brownian one. More precisely, the dynamics of the bond and of the stock are given by:
$$\begin{aligned} B_{t}=1\,\quad \text {and}\quad dS_{t}=\sigma S_{t}dZ_{t}, \end{aligned}$$
(2.1)
where \(\sigma >0\) is a constant representing the volatility and Z is a fractional Brownian motion of Hurst parameter \(H>1/2.\) We assume in (2.1) that the interest rate and the drift of the stock are both identically zero.
It is well known that the fractional Black–Scholes model in (2.1) is not free of arbitrage (see [2, 4, 20, 21]). One can, however, circumvent this problem either by regularizing the paths of the fractional Brownian motion (see [3, 20]), or by introducing transaction costs in the model (see [11]). By the former is meant the construction of a family of stochastic processes which are similar to the fractional Brownian motion but carry a unique equivalent martingale measure.
Motivated by the construction of an easy example of arbitrage related to the fractional Black–Scholes model, Sottinen came up with the idea of expressing this special type of Black–Scholes model as the limit of a sequence of binary markets. To this end, he proves a Donsker-type theorem, in which the fractional Brownian motion is approximated by an inhomogeneous random walk. Building on this, he constructs a discrete model, called the “fractional binary market”, approximating (2.1). Based on the results in [8], we provide here a simplified, but equivalent, presentation of these binary models.
Let \((\Omega ,\,{\mathscr {F}},\,P)\) be a finite probability space and consider a sequence of i.i.d. random variables \((\xi _{i})_{i\ge 1}\) such that \(P(\xi _{1}=-1)=P(\xi _{1}=1)=1/2.\) We denote by \(({\mathscr {F}}_{i})_{i\ge 0}\) the induced filtration, i.e., \({\mathscr {F}}_{i}:=\sigma (\xi _{1},\ldots ,\xi _{i}),\) for \(i\ge 1,\) and \({\mathscr {F}}_{0}:=\{\emptyset ,\,\Omega \}.\)
For each \(N>1,\) the N-fractional binary market is the discrete market in which the bond and stock are traded at the times \(\{0,\, \frac{1}{N},\ldots ,\frac{N-1}{N},\,1\}\) under the dynamics:
$$\begin{aligned} B_{n}^{N}=1\quad \text {and}\quad S_{n}^{N}=\left( 1+\frac{X_{n}}{N^{H}}\right) S_{n-1}^{N}. \end{aligned}$$
(2.2)
We assume that the value of \(S^{N}\) at time 0 is constant, i.e., \(S_{0}^{N}=s_{0}.\) The process \((X_{n})_{n\ge 1}\) can be expressed as
$$\begin{aligned} X_{n}:=\sum \limits _{i=1}^{n-1}j_{n}(i)\xi _{i}+g_{n}\xi _{n}, \end{aligned}$$
(2.3)
where
$$\begin{aligned} j_{n}(i):= & {} \sigma c_{H}\left( H-\frac{1}{2}\right) \int \limits _{i-1}^{i}x^{\frac{1}{2}-H}\left( \int \limits _{0}^{1} (v+n-1)^{H-\frac{1}{2}} (v+n-1-x)^{H-\frac{3}{2}}dv\right) dx,\\ g_{n}:= & {} \sigma c_{H}\left( H-\frac{1}{2}\right) \int \limits _{n-1}^{n}x^{\frac{1}{2}-H}(n-x)^{H-\frac{1}{2}}\left( \int \limits _{0}^{1} (y(n-x)+x)^{H-\frac{1}{2}}y^{H-\frac{3}{2}}dy\right) dx, \end{aligned}$$
and \(c_{H}:=\sqrt{\frac{2H\Gamma \left( \frac{3}{2}-H\right) }{\Gamma \left( H+\frac{1}{2}\right) \Gamma (2-2H)}}\) is a normalizing constant. From (2.3), we see that \(X_{n}\) is the sum of a process depending only on the information until time \(n-1\) and a process depending only on the present. More precisely, \(X_{n}={\mathscr {Y}}_{n}+g_{n}\xi _{n},\) where
$$\begin{aligned} {\mathscr {Y}}_{n}:=\sum \limits _{i=1}^{n-1}j_{n}(i)\xi _{i}. \end{aligned}$$
Therefore, given the history up to time \(n-1,\) which fixes the values of \({\mathscr {Y}}_{n}\) and \(S_{n-1}^{N},\) the price process can take only two possible values at the next step:
$$\begin{aligned} \left( 1+\frac{{\mathscr {Y}}_{n}-g_{n}}{N^{H}}\right) S_{n-1}^{N}\quad \text {or} \quad \left( 1+\frac{{\mathscr {Y}}_{n}+g_{n}}{N^H}\right) S_{n-1}^{N}. \end{aligned}$$
This brings to light the binary structure of these markets.
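To make the construction concrete, the following sketch simulates one path of the N-fractional binary market. This is our illustration, not code from the paper: the coefficient integrals \(j_n(i)\) and \(g_n\) are approximated with a coarse midpoint rule (which conveniently avoids their integrable endpoint singularities), and all function names are ours.

```python
import math
import random

def c_H(H):
    # normalizing constant c_H from the text
    return math.sqrt(2 * H * math.gamma(1.5 - H)
                     / (math.gamma(H + 0.5) * math.gamma(2 - 2 * H)))

def midpoint(f, a, b, m=50):
    # coarse midpoint rule; midpoints avoid the endpoint singularities
    step = (b - a) / m
    return step * sum(f(a + (k + 0.5) * step) for k in range(m))

def j_coeff(n, i, H, sigma=1.0):
    # j_n(i): nested midpoint approximation of the double integral
    pref = sigma * c_H(H) * (H - 0.5)
    inner = lambda x: midpoint(
        lambda v: (v + n - 1) ** (H - 0.5) * (v + n - 1 - x) ** (H - 1.5),
        0.0, 1.0)
    return pref * midpoint(lambda x: x ** (0.5 - H) * inner(x), i - 1, i)

def g_coeff(n, H, sigma=1.0):
    # g_n: nested midpoint approximation of the double integral
    pref = sigma * c_H(H) * (H - 0.5)
    inner = lambda x: midpoint(
        lambda y: (y * (n - x) + x) ** (H - 0.5) * y ** (H - 1.5), 0.0, 1.0)
    return pref * midpoint(
        lambda x: x ** (0.5 - H) * (n - x) ** (H - 0.5) * inner(x), n - 1, n)

def simulate_path(N, H=0.75, s0=1.0, seed=0):
    # one path of (X_n, S_n^N) following (2.2)-(2.3)
    rng = random.Random(seed)
    xi = [rng.choice((-1, 1)) for _ in range(N)]
    X, S = [], [s0]
    for n in range(1, N + 1):
        Yn = sum(j_coeff(n, i, H) * xi[i - 1] for i in range(1, n))
        X.append(Yn + g_coeff(n, H) * xi[n - 1])
        S.append((1 + X[-1] / N ** H) * S[-1])
    return X, S
```

For moderate N the perturbations \(X_n/N^{H}\) stay well below 1 in absolute value, so the simulated price remains strictly positive, in line with the binary structure described above.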

2.2 Strong asymptotic arbitrage under transaction costs

The arbitrage appearing in the fractional Black–Scholes model is also reflected in the approximating sequence of fractional binary markets. More precisely, as shown by Sottinen [23], the N-fractional binary market admits, for N sufficiently large, arbitrage opportunities. However, a pathological situation occurs when one introduces transaction costs. On the one hand, the fractional Black–Scholes model is free of arbitrage under arbitrarily small transaction costs. On the other hand, one can choose transaction costs \(\lambda _{N}\) converging to 1 such that the N-fractional binary market, for N large enough, admits arbitrage under transaction costs \(\lambda _{N}\) (see [6]). Despite this, the corresponding arbitrage opportunities disappear in the limit, in the sense that the explicit strategies behind this counterintuitive behaviour provide strictly positive profits with probabilities vanishing in the limit. In order to avoid this kind of situation, we look here at the whole sequence of fractional binary markets as a large financial market, the large fractional binary market, and we study its AA opportunities, as introduced by Kabanov and Kramkov [12, 13].
Definition 2.1
(Large fractional binary market) The sequence of markets given by \(\{(\Omega ,\,{\mathscr {F}},\,({\mathscr {F}}_{n})_{n=0}^{N},\,P,\,S^{N})\}_{N\ge 1},\) where \(S^{N}\) is the price process defined in (2.2), is called large fractional binary market.
We assume that the N-fractional binary market is subject to \(\lambda _{N}\ge 0\) transaction costs (\(\lambda _{N}=0\) corresponds to the frictionless case). We assume, without loss of generality, that we pay \(\lambda _{N}\) transaction costs only when we sell and not when we buy. This means that the bid and ask prices of the stock \(S^{N}\) are modelled by the processes \({((1-\lambda _{N})S^{N}_{n})}_{n=0}^{N}\) and \({(S^{N}_{n})}_{n=0}^{N},\) respectively.
Definition 2.2
(\(\lambda _{N}\)-self-financing strategy) Given \(\lambda _{N}\in [0,\,1],\) a \(\lambda _{N}\)-self-financing strategy for the process \(S^{N}\) is an adapted process \(\varphi ^{N}={(\varphi _{n}^{0,N},\,\varphi _{n}^{1,N})}_{n=-1}^{N}\) satisfying, for all \(n\in \{0,\ldots ,N\},\) the following condition:
$$\begin{aligned} \varphi _{n}^{0,N}-\varphi _{n-1}^{0,N}\le -{\left( \varphi _{n}^{1,N}-\varphi _{n-1}^{1,N}\right) }^{+}S_{n}^{N} + \left( 1-\lambda _{N}\right) {\left( \varphi _{n}^{1,N}-\varphi _{n-1}^{1,N}\right) }^{-}S_{n}^{N}. \end{aligned}$$
(2.4)
Here \(\varphi ^{0,N}\) denotes the number of units we hold in the bond and \(\varphi ^{1,N}\) denotes the number of units in the stock. For such a \(\lambda _{N}\)-self-financing strategy, the liquidated value of the portfolio at each time n is given by
$$\begin{aligned} V_{n}^{\lambda _{N}}\left( \varphi ^{N}\right) :=\varphi _{n}^{0,N}+\left( 1-\lambda _{N}\right) \left( \varphi _{n}^{1,N}\right) ^{+}S_{n}^{N}-\left( \varphi _{n}^{1,N}\right) ^{-}S^{N}_{n}. \end{aligned}$$
If \(\lambda _{N}=0,\) we simply write \(V_{n}(\varphi ^{N})\) instead of \(V_{n}^{0}(\varphi ^{N}).\)
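The liquidation formula translates line by line into code; the following sketch is ours (the function name is not from the paper): a long position is sold at the bid price \((1-\lambda_N)S^N_n,\) while a short position is closed at the ask price \(S^N_n.\)

```python
def liquidation_value(phi0, phi1, S, lam=0.0):
    # V_n^{lambda_N}: bond holdings, plus a long stock position
    # liquidated at the bid (1 - lam) * S, minus the cost of
    # closing a short position at the ask S
    return phi0 + (1 - lam) * max(phi1, 0.0) * S - max(-phi1, 0.0) * S
```

With `lam=0.0` this reduces to the frictionless value \(V_{n}(\varphi ^{N}).\)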
Remark 2.3
Throughout this work, we restrict our attention to self-financing strategies satisfying (2.4) with equality and having \(\varphi _{N}^{1,N}=0.\) In other words, we avoid throwing away money and, at maturity, we liquidate the position in stock. For this kind of self-financing strategies, the values of \(\varphi _{n}^{0,N},\,n\in \{0,\ldots ,N\},\) can be expressed in terms of the values of \(\lambda _{N},\,\varphi _{-1}^{0,N}\) and \((\varphi _{k}^{1,N})_{k=-1}^{n}\) as follows:
$$\begin{aligned} \varphi _{n}^{0,N}=\varphi _{-1}^{0,N}-\sum \limits _{k=0}^{n}{\left( \varDelta _{k}\varphi ^{1,N}\right) }^{+}S^{N}_{k} +\left( 1-\lambda _{N}\right) \sum \limits _{k=0}^{n}{\left( \varDelta _{k}\varphi ^{1,N}\right) }^{-}S^{N}_{k}. \end{aligned}$$
(2.5)
In the previous identity, we use the notation \(\varDelta _{n}h:=h_{n}-h_{n-1}.\)
Equation (2.5) gives us a way to construct self-financing strategies. More precisely, given \(\lambda _{N}\ge 0,\) a constant \(\varphi _{-1}^{0,N}\) and an adapted process \((\varphi _{k}^{1,N})_{k=-1}^{N},\) we can use (2.5) to define \((\varphi _{k}^{0,N})_{k=0}^{N}.\) The resulting adapted process \({(\varphi _{n}^{0,N},\,\varphi _{n}^{1,N})}_{n=-1}^{N}\) is by construction a \(\lambda _{N}\)-self-financing strategy, satisfying (2.4) with equality.
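This recipe can be sketched in a few lines of code (our names; the list `phi1` is shifted so that `phi1[0]` holds \(\varphi^{1,N}_{-1}\)):

```python
def bond_positions(phi1, S, lam=0.0, phi0_init=0.0):
    # phi1 = [phi^1_{-1}, phi^1_0, ..., phi^1_N], S = [S_0, ..., S_N];
    # returns [phi^0_0, ..., phi^0_N] via (2.5) taken with equality:
    # buys are paid at S_k, sales credited at (1 - lam) * S_k
    phi0, acc = [], phi0_init
    for k in range(len(S)):
        d = phi1[k + 1] - phi1[k]            # Delta_k phi^{1,N}
        acc += -max(d, 0.0) * S[k] + (1 - lam) * max(-d, 0.0) * S[k]
        phi0.append(acc)
    return phi0
```

For instance, buying one share and selling it again at a constant price nets zero in the frictionless case, while under transaction costs the round trip loses \(\lambda_N S.\)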
In their work, Kabanov and Kramkov [13] distinguished between two kinds of AA: of the first kind and of the second kind. An AA1 gives the possibility of getting arbitrarily rich with strictly positive probability by taking an arbitrarily small risk, whereas an AA2 is an opportunity to obtain a strictly positive profit with probability arbitrarily close to 1 by taking the risk of losing a uniformly bounded amount of money. The authors also considered a stronger version, called “strong asymptotic arbitrage”, which inherits the strong properties of the two mentioned kinds. More precisely, it can be seen as the possibility of getting arbitrarily rich with probability arbitrarily close to 1 while taking a vanishing risk. We will work from now on with the latter concept.
We introduce now the definition of SAA. For a detailed presentation on this topic, we refer the reader to [13] for frictionless markets and to [17] for markets with transaction costs.
Definition 2.4
There exists a SAA with transaction costs \(\{\lambda _{N}\}_{N\ge 1}\) if there exists a subsequence of markets (again denoted by N) and self-financing trading strategies \(\varphi ^{N}=(\varphi ^{0,N},\,\varphi ^{1,N})\) with zero endowment for \(S^{N}\) such that
(1)
(\(c_{N}\)-admissibility condition) \(V^{\lambda _{N}}_{i}(\varphi ^{N})\ge -c_{N},\) for all \(i\in \{0,\ldots ,N\},\)
 
(2)
\(\lim \nolimits _{N\rightarrow \infty }P^{N}(V_{N}^{\lambda _{N}}(\varphi ^{N})\ge C_{N})=1,\)
 
where \(c_{N}\) and \(C_{N}\) are sequences of positive real numbers with \(c_{N}\rightarrow 0\) and \(C_{N}\rightarrow \infty .\)
Remark 2.5
For self-financing strategies with zero endowment, and satisfying (2.4) with equality, the value process takes the following form:
$$\begin{aligned} V_{n}^{\lambda _{N}}\left( \varphi ^{N}\right)&=V_{0}^{\lambda _{N}}\left( \varphi ^{N}\right) +\sum \limits _{k=1}^{n}\varphi _{k-1}^{1,N}\varDelta _{k} S^{N}-\lambda _{N}\sum \limits _{k=1}^{n}\mathbb {I}_{\{\varDelta _{k}\varphi ^{1,N}\ge 0\}}\varDelta _{k}\left[ {\left( \varphi ^{1,N}\right) }^{+} S^{N}\right] \nonumber \\&\quad -\lambda _{N}\sum \limits _{k=1}^{n}\mathbb {I}_{\{\varDelta _{k}\varphi ^{1,N}<0\}}\left\{ \varphi _{k-1}^{1,N}\varDelta _{k}S^{N}+\varDelta _{k}\left[ {\left( \varphi ^{1,N}\right) }^{-} S^{N}\right] \right\} , \end{aligned}$$
(2.6)
where
$$\begin{aligned} V^{\lambda _{N}}_{0}\left( \varphi ^{N}\right) =-\lambda _{N}\left| \varphi ^{1,N}_0\right| s_{0}. \end{aligned}$$
(2.7)

3 Main results

As pointed out in Sect. 1, the large fractional binary market does not fulfil the standard conditions used in the theory of AA for large financial markets. For this reason, we use a constructive approach to study the existence of a SAA with and without transaction costs. This section is dedicated to presenting the main results of the paper, whereas their proofs are given in the following sections.
We proceed first with the frictionless case. Our goal is to show the existence of a SAA. To do so, we first construct a sequence of self-financing strategies, which allows, with probability arbitrarily close to one, for a strictly positive profit. Next, we modify the strategies to ensure that the required admissibility condition is fulfilled. Finally, after an appropriate normalization, we show that the resulting sequence of strategies provides a SAA.
For each \(N\ge 1,\) we start with a trading strategy \(\varphi ^{N}:=(\varphi ^{0,N},\,\varphi ^{1,N})\) similar to the one provided in [2] for the continuous framework. We have seen in Remark 2.3 that it is enough to specify the position in stock \(\varphi ^{1,N},\) as the position in bond \(\varphi ^{0,N}\) can be derived from (2.5), setting \(\lambda _{N}=0\) and \(\varphi _{-1}^{0,N}:=0\) (the same procedure is implicit in the statement of Theorem 3.1). The position \(\varphi ^{1,N}\) is given by
$$\begin{aligned} \varphi _{-1}^{1,N}:=\varphi _{0}^{1,N}:=0\qquad \text {and}\qquad \varphi _{k}^{1,N}:=N^{H-1}\frac{X_{k}}{S_{k}^{N}},\quad k\in \{1,\ldots ,N\}. \end{aligned}$$
We formulate now the main results of the paper.
Theorem 3.1
The sequence of self-financing strategies \((\psi ^{0,N},\,\psi ^{1,N})_{ N\ge 1},\) defined, for \(N\ge 1,\) by:
$$\begin{aligned} \psi _{k}^{1,N}:=\frac{1}{\sqrt{\hat{c}_{N}}}\varphi _{k}^{1,N}1_{\{k<T_{N}\}},\quad k\in \{-1,\,0,\ldots ,N\}, \end{aligned}$$
(3.1)
where \(T_{N}\) is a well chosen stopping time and \(\hat{c}_{N}\) an appropriate constant, provides a SAA in the large fractional binary market.
Now, we let each N-fractional binary market be subject to \(\lambda _{N}\) transaction costs, and we show that there exists a SAA if the sequence of transaction costs \((\lambda _{N})_{N\ge 1}\) converges to zero fast enough. The corresponding sequence of self-financing strategies \((\psi ^{N}(\lambda _{N}))_{N\ge 1}\) is constructed as follows. The position in stock is given by \(\psi ^{1,N}\) in (3.1). The position in bond, \(\psi ^{0,N}(\lambda _{N}),\) is constructed from \(\psi ^{1,N}\) through the \(\lambda _{N}\)-self-financing conditions (2.5), setting \(\psi _{-1}^{0,N}(\lambda _{N}):=0.\)
Theorem 3.2
The self-financing strategies \((\psi ^{N}(\lambda _{N}))_{N\ge 1},\) where
$$\begin{aligned} \lambda _{N}=o\left( \frac{\sqrt{\ln {N}}}{N^{\left( 2H-\frac{1}{4}\right) \wedge \left( H+\frac{1}{2}\right) }}\right) , \end{aligned}$$
provide a SAA in the large fractional binary markets with \((\lambda _{N})_{N\ge 1}\) transaction costs.

4 Proof of Theorem 3.1

4.1 The value process at maturity and the law of large numbers

We aim to characterize the asymptotic behaviour of the value process at maturity, \(V_{N}(\varphi ^{N}).\) First, using (2.6) and (2.7) with \(\lambda _{N}=0,\) we deduce that \(V_{n}(\varphi ^{N})\) is given by
$$\begin{aligned} V_{n}\left( \varphi ^{N}\right) =\frac{1}{N}\sum \limits _{k=1}^{n}X_{k-1}X_{k},\quad n\in \{0,\ldots ,N\}. \end{aligned}$$
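This value process is a plain running sum, as the following sketch (ours) makes explicit; we use the convention \(X_{0}:=0,\) so the \(k=1\) summand vanishes, consistent with \(\varphi _{0}^{1,N}=0.\)

```python
def value_process(X):
    # V_n = (1/N) * sum_{k<=n} X_{k-1} X_k for X = [X_1, ..., X_N],
    # with the convention X_0 := 0 (the first summand is zero)
    N = len(X)
    V, acc, prev = [0.0], 0.0, 0.0   # V_0 = 0
    for x in X:
        acc += prev * x
        V.append(acc / N)
        prev = x
    return V
```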
Note that the terms in the sum can be expressed as
$$\begin{aligned} X_{k-1}X_{k}=\theta _{k}^{(1)}+\theta _{k}^{(2)}+\theta _{k}^{(3)}+\theta _{k}^{(4)}, \end{aligned}$$
(4.1)
where
$$\begin{aligned} \theta _{k}^{(1)}:=g_{k-1}g_{k}\xi _{k-1}\xi _{k},\quad \theta _{k}^{(2)}:=g_{k}\xi _{k}{\mathscr {Y}}_{k-1},\quad \theta _{k}^{(3)}:=g_{k-1}\xi _{k-1}{\mathscr {Y}}_{k},\quad \theta _{k}^{(4)}:={\mathscr {Y}}_{k-1}{\mathscr {Y}}_{k}. \end{aligned}$$
Defining, for \(i\in \{1,\,2,\,3,\,4\}\) and \(n\in \{1,\ldots ,N\},\, {\mathscr {S}}_{n}^{(i)}:=\sum \nolimits _{k=1}^{n}\theta _{k}^{(i)},\) we see that
$$\begin{aligned} V_{n}\left( \varphi ^{N}\right) =\frac{1}{N}\left( {\mathscr {S}}_{n}^{(1)}+{\mathscr {S}}_{n}^{(2)}+{\mathscr {S}}_{n}^{(3)}+{\mathscr {S}}_{n}^{(4)}\right) . \end{aligned}$$
(4.2)
We will see that the first term in (4.2) is a sum of pairwise independent random variables and hence, an appropriate extension of the law of large numbers to this situation can be applied (see [9]). The asymptotic behaviour of the second and third terms in (4.2) will be deduced by studying their variances. For the last term, we show in Appendix 2 that the random variables \((\theta _{k}^{(4)}-E[\theta _{k}^{(4)}])_{k\ge 1}\) satisfy an asymptotically weak dependence property known as mixingale property. Based on this, we determine the behaviour of this term using a law of large numbers for uniformly integrable \(L^{1}\)-mixingales.
Let \(\rho \) denote the autocovariance function of a fractional Brownian motion of Hurst parameter \(h=\frac{H}{2}+\frac{1}{4}\in \left( \frac{1}{2},\,\frac{3}{4}\right) ,\) i.e., \(\rho (n):=\frac{1}{2}[(n+1)^{2h}+(n-1)^{2h}- 2n^{2h}]>0.\) The next result gives the asymptotic behaviour of \(V_{N}(\varphi ^{N})\) based on the convergence properties of each term appearing in (4.2).
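For reference, \(\rho\) is cheap to evaluate; a short sketch (ours) implements it directly from the formula above.

```python
def rho(n, H):
    # autocovariance at lag n of the increments of a fractional
    # Brownian motion with Hurst index h = H/2 + 1/4
    h = H / 2 + 0.25
    return 0.5 * ((n + 1) ** (2 * h) + (n - 1) ** (2 * h) - 2 * n ** (2 * h))
```

Since \(H>1/2\) gives \(h>1/2,\) the map \(x\mapsto x^{2h}\) is strictly convex, which is exactly why every \(\rho(n)\) is strictly positive.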
Theorem 4.1
(Law of large numbers) The following statements hold:
(1)
\(\frac{1}{N}{\mathscr {S}}_{N}^{(1)}\xrightarrow [N\rightarrow \infty ]{a.s.}0,\)
 
(2)
\(\frac{1}{N}{\mathscr {S}}_{N}^{(2)}\xrightarrow [N\rightarrow \infty ]{L^{2}(P)}0,\)
 
(3)
\(\frac{1}{N}{\mathscr {S}}_{N}^{(3)}\xrightarrow [N\rightarrow \infty ]{L^{2}(P)}g^{2}\left( 2^{H+\frac{1}{2}}-2\right) >0,\)
 
(4)
\(\frac{1}{N}{\mathscr {S}}_{N}^{(4)}\xrightarrow [N\rightarrow \infty ]{L^{1}(P)}4g^{2}\sum \nolimits _{k=2}^{\infty }\rho (k)\rho (k-1)>0,\)
 
where \(g:=\frac{\sigma c_{H}}{H+\frac{1}{2}}.\) In particular
$$\begin{aligned} V_{N}\left( \varphi ^{N}\right) \xrightarrow [N\rightarrow \infty ]{P}\vartheta :=4g^{2}\sum \limits _{k=2}^{\infty }\rho (k)\rho (k-1)+g^{2}\left( 2^{H+\frac{1}{2}}-2\right) >0. \end{aligned}$$
Proof
(Proof of Theorem 4.1) (1) Note first that, for all \(j\ne k\) and \(x,\,y\in \{-1,\,1\},\) we have
$$\begin{aligned} P\left( \xi _{k}\xi _{k-1}=x|\xi _{j}\xi _{j-1}=y\right) =\frac{1}{2}. \end{aligned}$$
Therefore, the random variables \((\theta _{k}^{(1)})_{k\ge 1}\) are pairwise independent. In addition, since \({{\mathrm{Var}}}[\xi _{k}\xi _{k-1}]=1,\) we deduce from the inequalities given in Lemma 5.2 that \(\sum \nolimits _{k=1}^{N}\frac{1}{k^{2}}{{\mathrm{Var}}}[\theta _{k}^{(1)}]<\infty .\) Hence, the result follows as an application of the law of large numbers for pairwise independent random variables (see [9, Theorem 1]).
(2) Note that \(\xi _{k}\) is independent of \({\mathscr {Y}}_{k-1},\) and in particular \(E[\xi _{k}{\mathscr {Y}}_{k-1}]=0.\) Consequently, the convergence in \(L^{2}(P)\) of \({\mathscr {S}}^{(2)}_{N}/N\) to 0 is equivalent to the convergence of the variance to 0. In addition, for any \(k<j,\) we have
$$\begin{aligned} E\left[ \xi _{k}{\mathscr {Y}}_{k-1}\xi _{j}{\mathscr {Y}}_{j-1}\right] =E\left[ \xi _{k}{\mathscr {Y}}_{k-1}{\mathscr {Y}}_{j-1}\right] E\left[ \xi _{j}\right] =0. \end{aligned}$$
It follows that
$$\begin{aligned} {{\mathrm{Var}}}\left[ \frac{1}{N}{\mathscr {S}}^{(2)}_{N}\right]&=\frac{1}{N^{2}}\sum \limits _{k=1}^{N}{g_{k}}^{2}{{\mathrm{Var}}}\left[ \xi _{k}{\mathscr {Y}}_{k-1}\right] =\frac{1}{N^{2}}\sum \limits _{k=1}^{N}{g_{k}}^{2}E\left[ \xi _{k}^{2}\left( {\mathscr {Y}}_{k-1}\right) ^{2}\right] \\&=\frac{1}{N^{2}}\sum \limits _{k=1}^{N}{g_{k}}^{2}E\left[ \left( {\mathscr {Y}}_{k-1}\right) ^{2}\right] =\frac{1}{N^{2}}\sum \limits _{k=1}^{N} {g_{k}}^{2}{{\mathrm{Var}}}\left[ {\mathscr {Y}}_{k-1}\right] . \end{aligned}$$
We know from [6] that \({{\mathrm{Var}}}[{\mathscr {Y}}_{n}]\le \sigma ^{2}\) (see the proof of [6, Lemma 6.2]), and hence
$$\begin{aligned} \sup \limits _{k}{{\mathrm{Var}}}\left[ {\mathscr {Y}}_{k-1}\right] <\infty . \end{aligned}$$
(4.3)
This together with Lemma 5.2 leads to
$$\begin{aligned} {{\mathrm{Var}}}\left[ \frac{1}{N}{\mathscr {S}}^{(2)}_{N}\right]&\le \frac{M}{N}\xrightarrow [N\rightarrow \infty ]{}0, \end{aligned}$$
for some constant \(M>0.\) This gives us the convergence of \({\mathscr {S}}_{N}^{(2)}/N\) to 0 in \(L^{2}(P).\)
(3) We write \({\mathscr {S}}_{N}^{(3)}\) as a sum of a random term and a deterministic one. We prove that the variance of the random term converges to 0 and the deterministic term converges to \(g^{2}\left( 2^{H+\frac{1}{2}}-2\right) >0.\) Indeed, setting \(\tilde{{\mathscr {Y}}}_{k-1}:=\sum \nolimits _{l=1}^{k-2}j_{k}(l)\xi _{l},\) we get
$$\begin{aligned} \frac{1}{N}{\mathscr {S}}_{N}^{(3)}&=\frac{1}{N}\sum \limits _{k=1}^{N}g_{k-1}\xi _{k-1}{\mathscr {Y}}_{k}=\frac{1}{N}\sum \limits _{k=1}^{N}g_{k-1}\xi _{k-1}\tilde{{\mathscr {Y}}}_{k-1}+\frac{1}{N}\sum \limits _{k=1}^{N}g_{k-1}j_{k}(k-1). \end{aligned}$$
From Lemma 5.2 and [8, Sect. 5], we see that \(g_{k-1}j_{k}(k-1)\xrightarrow [k\rightarrow \infty ]{}g^{2}\left( 2^{H+\frac{1}{2}}-2\right) .\) As a consequence, we deduce that
$$\begin{aligned} \frac{1}{N}\sum \limits _{k=1}^{N}g_{k-1}j_{k}(k-1)\xrightarrow [N\rightarrow \infty ]{}g^{2}\left( 2^{H+\frac{1}{2}}-2\right) . \end{aligned}$$
For the random term, using that \(\tilde{{\mathscr {Y}}}_{k-1}\) is independent of \(\xi _{k-1}\) and a similar argument like in the previous part, we obtain
$$\begin{aligned} \frac{1}{N}\sum \limits _{k=1}^{N}g_{k-1}\xi _{k-1}\tilde{{\mathscr {Y}}}_{k-1}\xrightarrow [N\rightarrow \infty ]{L^{2}(P)}0, \end{aligned}$$
and hence the desired result.
(4) We define, for \(k\ge 1,\,{\mathscr {Y}}_{k}^{*}:={\mathscr {Y}}_{k-1}{\mathscr {Y}}_{k}-E[{\mathscr {Y}}_{k-1}{\mathscr {Y}}_{k}].\) We know from Proposition 5.4 and Remark 5.5 that \(({\mathscr {Y}}_{k}^{*})_{k\ge 1}\) satisfy the conditions of the law of large numbers for uniformly integrable \(L^{1}\)-mixingales (see [1, Theorem 1]). Thus,
$$\begin{aligned} \frac{1}{N}\sum \limits _{k=1}^{N}{\mathscr {Y}}_{k}^{*}\xrightarrow [N\rightarrow \infty ]{L^{1}(P)}0. \end{aligned}$$
(4.4)
In addition, for \(n\ge 4,\) we have
$$\begin{aligned} E\left[ {\mathscr {Y}}_{n-1}{\mathscr {Y}}_{n}\right] =\sum \limits _{i=1}^{\frac{n}{4}}j_{n}(i)j_{n-1}(i) +\sum \limits _{i=\frac{n}{4}+1}^{n-2}j_{n}(i)j_{n-1}(i). \end{aligned}$$
Using the inequalities given in Lemma 5.1, we deduce that the first sum on the right-hand side converges to zero. For the second sum, following the lines of the proof of [8, Lemma 5.2], we obtain
$$\begin{aligned} \sum \limits _{i=\frac{n}{4}+1}^{n-2}j_{n}(i)j_{n-1}(i) =\sum \limits _{k=2}^{\frac{3n}{4}-1}j_{n}(n-k)j_{n-1}(n-k)\xrightarrow [n\rightarrow \infty ]{}4g^{2}\sum \limits _{k=2}^\infty \rho (k)\rho (k-1):={\mathscr {V}}>0. \end{aligned}$$
Consequently, we conclude that \(E[{\mathscr {Y}}_{n-1}{\mathscr {Y}}_{n}]\xrightarrow [n\rightarrow \infty ]{}{\mathscr {V}},\) and therefore
$$\begin{aligned} \frac{1}{N}\sum \limits _{k=1}^{N}E\left[ {\mathscr {Y}}_{k-1}{\mathscr {Y}}_{k}\right] \xrightarrow [N\rightarrow \infty ]{}{\mathscr {V}}. \end{aligned}$$
(4.5)
Thus, we have
$$\begin{aligned} E\left[ \left| \frac{1}{N}\sum \limits _{k=1}^{N}{\mathscr {Y}}_{k-1}{\mathscr {Y}}_{k}-{\mathscr {V}}\right| \right]&\le E\left[ \left| \frac{1}{N}\sum \limits _{k=1}^{N}{\mathscr {Y}}_{k}^{*}\right| \right] +\left| \frac{1}{N}\sum \limits _{k=1}^{N}E\left[ {\mathscr {Y}}_{k-1}{\mathscr {Y}}_{k}\right] -{\mathscr {V}}\right| . \end{aligned}$$
(4.6)
By (4.4)–(4.6), it follows immediately that
$$\begin{aligned} E\left[ \left| \frac{1}{N}\sum \limits _{k=1}^{N}{\mathscr {Y}}_{k-1}{\mathscr {Y}}_{k}-{\mathscr {V}}\right| \right] \xrightarrow [N\rightarrow \infty ]{}0, \end{aligned}$$
and hence
$$\begin{aligned} \frac{1}{N}{\mathscr {S}}_{N}^{(4)}\xrightarrow [N\rightarrow \infty ]{L^{1}(P)}{\mathscr {V}}. \end{aligned}$$
The proof of (4) is now complete. \(\square \)
Corollary 4.2
For all \(\varepsilon >0,\)
$$\begin{aligned} P\left( V_{N}\left( \varphi ^{N}\right) >\vartheta (1-\varepsilon )\right) \xrightarrow [N\rightarrow \infty ]{}1. \end{aligned}$$
Proof
The result follows using Theorem 4.1 and the definition of the convergence in probability. \(\square \)
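As a numerical illustration (ours, not from the paper), the limit \(\vartheta\) of Theorem 4.1 can be evaluated by truncating the series: since \(\rho (k)\) decays like \(k^{2h-2},\) the summands \(\rho (k)\rho (k-1)\) decay like \(k^{4h-4}=k^{2H-3}\) with \(2H-3<-1,\) so the truncation error is small.

```python
import math

def theta(H, sigma=1.0, terms=20000):
    # numerical value of the limit in Theorem 4.1 (series truncated)
    cH = math.sqrt(2 * H * math.gamma(1.5 - H)
                   / (math.gamma(H + 0.5) * math.gamma(2 - 2 * H)))
    g = sigma * cH / (H + 0.5)
    h = H / 2 + 0.25
    rho = lambda n: 0.5 * ((n + 1) ** (2 * h)
                           + (n - 1) ** (2 * h) - 2 * n ** (2 * h))
    series = sum(rho(k) * rho(k - 1) for k in range(2, terms))
    return 4 * g ** 2 * series + g ** 2 * (2 ** (H + 0.5) - 2)
```

Both contributions to \(\vartheta\) are strictly positive for any \(H\in (1/2,\,1),\) in agreement with the theorem.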

4.2 Admissibility condition through stopping procedure

The sequence of self-financing strategies \((\varphi ^{N})_{N\ge 1}\) constructed in Sect. 3 gives the possibility of making a strictly positive profit with probability arbitrarily close to one. Now, we proceed to modify our strategies in such a way that the admissibility conditions are satisfied. More precisely, we stop our self-financing strategies at the first time they fail the admissibility condition. To do so, we split the value process as in (4.2), and we study the stopping times corresponding to each part.
For each \(i\in \{1,\,2,\,3,\,4\}\) and any sequence of strictly positive numbers \((\varepsilon _{N})_{N\ge 1},\) we define the stopping time
$$\begin{aligned} T_{\varepsilon _{N}}^{(N,i)}:=\inf \left\{ k\in \{1,\ldots ,N\}{\text {:}}\, \frac{1}{N}{\mathscr {S}}_{k}^{(i)}< {-}\varepsilon _{N}\right\} , \end{aligned}$$
with the convention that \(\inf \emptyset =\infty .\) Note that these stopping times take values in \(\{1,\ldots ,N\}\cup \{\infty \}.\)
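The stopping rule above can be sketched numerically. The snippet below is an illustration only: the Rademacher increments and the threshold `eps` are illustrative stand-ins for the increments of \({\mathscr {S}}^{(i)}\) and for \(\varepsilon _{N},\) and `None` plays the role of \(\inf \emptyset =\infty .\)

```python
import random

def first_breach(increments, eps):
    """Return the first index k (1-based) with (1/N) * S_k < -eps,
    or None (standing in for +infinity) if no breach occurs."""
    N = len(increments)
    s = 0.0
    for k, x in enumerate(increments, start=1):
        s += x
        if s / N < -eps:
            return k
    return None

random.seed(0)
N = 1000
# Illustrative stand-in for the increments theta_k^{(i)}: Rademacher signs.
xi = [random.choice([-1.0, 1.0]) for _ in range(N)]
T = first_breach(xi, eps=0.05)
print(T)  # either None or an index in {1, ..., N}
```

The stopped strategy keeps trading only while `first_breach` has not yet fired, which is exactly how \(T_{\varepsilon _{N}}^{(N,i)}\) is used below.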
The next result studies the behaviour of the first three stopping times. The proof uses the extension of Kolmogorov's maximal inequality given in Lemma 5.6.
Lemma 4.3
For each \(i\in \{1,\,2,\,3\},\) there is a constant \(C^{(i)}>0,\) such that
$$\begin{aligned} P\left( T_{\varepsilon _{N}}^{(N,i)}\le N\right) \le \frac{C^{(i)}}{N\varepsilon _{N}^{2}}. \end{aligned}$$
Proof
(1) Define the stopping time
$$\begin{aligned} \widetilde{T}_{\varepsilon _{N}}^{(N,1)}:=T_{\varepsilon _{N}}\left( \frac{1}{N}{\mathscr {S}}^{(1)}\right) , \end{aligned}$$
and note that \(T_{\varepsilon _{N}}^{(N,1)}\ge \widetilde{T}_{\varepsilon _{N}}^{(N,1)}.\) Therefore, it is enough to prove the result for \(\widetilde{T}_{\varepsilon _{N}}^{(N,1)}.\) Since \(g_{k}\le 2g\) and the random variables \(\{\xi _{k-1} \xi _{k}\}_{k\ge 1}\) are pairwise independent, we conclude that
$$\begin{aligned} {{\mathrm{Var}}}\left( \frac{1}{N}{\mathscr {S}}_{N}^{(1)}\right) =\frac{1}{N^{2}}\sum \limits _{\ell =1}^{N}g_{\ell -1}^{2}g_{\ell }^{2}\le \frac{(2g)^{4}}{N}. \end{aligned}$$
Note that \({\mathscr {S}}_{k}^{(1)}1_{\{\widetilde{T}_{\varepsilon _{N}}^{(N,1)}=k\}}\) is \(\sigma (\xi _{1},\ldots ,\xi _{k})\)-measurable. In addition, we have
$$\begin{aligned} {\mathscr {S}}_{N}^{(1)}-{\mathscr {S}}_{k}^{(1)}=\sum \limits _{\ell =k+2}^{N}g_{\ell -1} g_{\ell } \xi _{\ell -1} \xi _{\ell }+g_{k} g_{k+1} \xi _{k} \xi _{k+1}. \end{aligned}$$
Moreover, \(\sum \nolimits _{\ell =k+2}^{N}g_{\ell -1} g_{\ell } \xi _{\ell -1} \xi _{\ell }\) is \(\sigma (\xi _{k+1},\ldots ,\xi _{N})\)-measurable and
$$\begin{aligned} E\left[ \xi _{k}\xi _{k+1}{\mathscr {S}}_{k}^{(1)}1_{\{\widetilde{T}_{\varepsilon _{N}}^{(N,1)}=k\}}\right] =E\left[ \xi _{k+1}\right] E\left[ \xi _{k}{\mathscr {S}}_{k}^{(1)}1_{\{\widetilde{T}_{\varepsilon _{N}}^{(N,1)}=k\}}\right] =0. \end{aligned}$$
Thus, the condition (5.13) is satisfied and the result follows from Lemma 5.6.
(2) Define the stopping time
$$\begin{aligned} \widetilde{T}_{\varepsilon _{N}}^{(N,2)}:=T_{\varepsilon _{N}}\left( \frac{1}{N}{\mathscr {S}}^{(2)}\right) , \end{aligned}$$
and note that \(T_{\varepsilon _N}^{(N,2)}\ge \widetilde{T}_{\varepsilon _{N}}^{(N,2)}.\) As before, it is enough to prove the result for \(\widetilde{T}_{\varepsilon _{N}}^{(N,2)}.\)
Since, for \(k>j,\,E[\xi _{k}{\mathscr {Y}}_{k-1}\xi _{j}{\mathscr {Y}}_{j-1}]=E[\xi _{k}]E[{\mathscr {Y}}_{k-1}\xi _{j}{\mathscr {Y}}_{j-1}]=0,\) we have
$$\begin{aligned} {{\mathrm{Var}}}\left( \frac{1}{N}{\mathscr {S}}_{N}^{(2)}\right) =\frac{1}{N^{2}}\sum \limits _{\ell =1}^{N}\sum \limits _{i=1}^{\ell -2}g_{\ell }^{2} j_{\ell -1}^{2}(i)\le \frac{(2g)^{2}}{N^{2}}\sum \limits _{\ell =1}^{N}\sum \limits _{i=1}^{\ell -2} j_{\ell -1}^{2}(i). \end{aligned}$$
Using \(\sum \nolimits _{i=1}^{\ell -2} j_{\ell -1}^{2}(i)={{\mathrm{Var}}}({\mathscr {Y}}_{\ell -1}),\) we infer from (4.3) that there is \(C^{(2)}>0\) such that
$$\begin{aligned} {{\mathrm{Var}}}\left( \frac{1}{N}{\mathscr {S}}_{N}^{(2)}\right) \le \frac{C^{(2)}}{N}. \end{aligned}$$
Moreover, we have
$$\begin{aligned} {\mathscr {S}}_{N}^{(2)}-{\mathscr {S}}_{k}^{(2)}=\sum \limits _{\ell =k+1}^{N} g_{\ell }\xi _{\ell }{\mathscr {Y}}_{\ell -1}. \end{aligned}$$
On the other hand, for all \(\ell \in \{k+1,\ldots ,N\},\)
$$\begin{aligned} E\left[ \xi _{\ell }{\mathscr {Y}}_{\ell -1}{\mathscr {S}}_{k}^{(2)}1_{\{\widetilde{T}_{\varepsilon _N}^{(N,2)}=k\}}\right] =E\left[ \xi _{\ell }\right] E\left[ {\mathscr {Y}}_{\ell -1}{\mathscr {S}}_{k}^{(2)}1_{\{\widetilde{T}_{\varepsilon _N}^{(N,2)}=k\}}\right] =0. \end{aligned}$$
The condition (5.13) is verified and the result follows.
(3) Define \(\tilde{{\mathscr {Y}}}_{k-1}:=\sum \nolimits _{l=1}^{k-2}j_{k}(l)\xi _{l}\) and note that:
$$\begin{aligned} \theta _{k}^{(3)}=g_{k-1} j_{k}(k-1) +\underbrace{g_{k-1}\xi _{k-1}\tilde{{\mathscr {Y}}}_{k-1}}_{=:\tilde{\theta }_{k}^{(3)}}>\tilde{\theta }_{k}^{(3)}. \end{aligned}$$
As a consequence, for each \(n\in \{1,\ldots ,N\},\) we have
$$\begin{aligned} {\mathscr {S}}_{n}^{(3)}>\sum \limits _{k=1}^{n}\tilde{\theta }_{k}^{(3)}=:\tilde{{\mathscr {S}}}_{n}^{(3)}. \end{aligned}$$
Moreover, if we define
$$\begin{aligned} \widetilde{T}_{\varepsilon _{N}}^{(N,3)}:=T_{\varepsilon _{N}}\left( \frac{1}{N}\tilde{{\mathscr {S}}}^{(3)}\right) , \end{aligned}$$
it follows that \(T_{\varepsilon _N}^{(N,3)}\ge \widetilde{T}_{\varepsilon _{N}}^{(N,3)}.\) Thus, it is enough to prove the result for \(\widetilde{T}_{\varepsilon _{N}}^{(N,3)}.\)
Since, for \(k>j,\,E[\xi _{k-1}\tilde{{\mathscr {Y}}}_{k-1}\xi _{j-1}\tilde{{\mathscr {Y}}}_{j-1}]=E[\xi _{k-1}]E[\tilde{{\mathscr {Y}}}_{k-1}\xi _{j-1}\tilde{{\mathscr {Y}}}_{j-1}]=0,\) we get
$$\begin{aligned} {{\mathrm{Var}}}\left( \frac{1}{N}\tilde{{\mathscr {S}}}_{N}^{(3)}\right) =\frac{1}{N^{2}}\sum \limits _{\ell =1}^{N}\sum \limits _{i=1}^{\ell -2}g_{\ell -1}^{2} j_{\ell }^{2}(i)\le \frac{(2g)^{2}}{N^{2}}\sum \limits _{\ell =1}^{N}\sum \limits _{i=1}^{\ell -2} j_{\ell }^{2}(i). \end{aligned}$$
We know from (4.3) that the quantities \(\sum \nolimits _{i=1}^{\ell -2} j_{\ell }^{2}(i)\) are uniformly bounded. We conclude that there is \(C^{(3)}>0\) such that
$$\begin{aligned} {{\mathrm{Var}}}\left( \frac{1}{N}\tilde{{\mathscr {S}}}_{N}^{(3)}\right) \le \frac{C^{(3)}}{N}. \end{aligned}$$
In addition, we have
$$\begin{aligned} \tilde{{\mathscr {S}}}_{N}^{(3)}-\tilde{{\mathscr {S}}}_{k}^{(3)}=\sum \limits _{\ell =k+1}^{N} g_{\ell -1}\xi _{\ell -1}\tilde{{\mathscr {Y}}}_{\ell -1}. \end{aligned}$$
Moreover, for all \(\ell \in \{k+1,\ldots ,N\},\) we obtain
$$\begin{aligned} E\left[ \xi _{\ell -1}\tilde{{\mathscr {Y}}}_{\ell -1}\tilde{{\mathscr {S}}}_{k}^{(3)}1_{\{\widetilde{T}_{\varepsilon _{N}}^{(N,3)}=k\}}\right] =E\left[ \xi _{\ell -1}\right] E\left[ \tilde{{\mathscr {Y}}}_{\ell -1}\tilde{{\mathscr {S}}}_{k}^{(3)}1_{\{\widetilde{T}_{\varepsilon _{N}}^{(N,3)}=k\}}\right] =0. \end{aligned}$$
The condition in Lemma 5.6 is verified and the result follows. \(\square \)
In the previous proof, we consider the process \(W^{(i)}={\mathscr {S}}^{(i)}-E[{\mathscr {S}}^{(i)}],\,i\in \{1,\,2,\,3\},\) and we use either a pairwise independence argument or the orthogonality of certain random variables to prove that condition (5.13) is satisfied. The desired results are obtained with the help of Lemma 5.6. For the stopping time \(T_{\varepsilon _{N}}^{(N,4)}\) we cannot proceed in the same way, because the random variables \(({\mathscr {Y}}_{k}^{*})_{k\ge 1}\) are pairwise correlated. Nevertheless, the key ingredient is again a maximal inequality for the process \(({\mathscr {Y}}_{k}^{*})_{k\ge 1},\) which is given in Lemma 5.9. As a consequence of the latter, we obtain the following analogue of Lemma 4.3 for \(T_{\varepsilon _{N}}^{(N,4)}.\)
Lemma 4.4
There is a constant \(C^{(4)}>0\) such that
$$\begin{aligned} P\left( T_{\varepsilon _{N}}^{(N,4)}\le N\right) \le \frac{C^{(4)}\ln (N)}{N^{4-4H}\varepsilon _{N}^{2}}. \end{aligned}$$
Proof
First, note that
$$\begin{aligned} E\left[ {\mathscr {S}}_{n}^{(4)}\right] =\sum \limits _{k=3}^{n}\sum \limits _{i=1}^{k-2}j_{k}(i)j_{k-1}(i)\ge 0. \end{aligned}$$
Therefore, we have
$$\begin{aligned} {\mathscr {S}}_{n}^{(4)}={\mathscr {S}}_{n}^{*}+E\left[ {\mathscr {S}}_{n}^{(4)}\right] \ge {\mathscr {S}}_{n}^{*}. \end{aligned}$$
Consequently, if we define the stopping time \(T_{\varepsilon _{N}}^{(N,*)}\) by
$$\begin{aligned} T_{\varepsilon _N}^{(N,*)}:=\inf \left\{ k\in \{1,\ldots ,N\}{\text {:}}\, \frac{1}{N} \left| {\mathscr {S}}_{k}^{*}\right| > \varepsilon _{N}\right\} , \end{aligned}$$
then \(T_{\varepsilon _{N}}^{(N,4)}\ge T_{\varepsilon _{N}}^{(N,*)}.\) In particular, we deduce that
$$\begin{aligned} P\left( T_{\varepsilon _{N}}^{(N,4)}\le N\right) \le P\left( T_{\varepsilon _{N}}^{(N,*)}\le N\right) =P\left( \sup \limits _{n\le N}\frac{1}{N}\left| {\mathscr {S}}_{n}^{*}\right| > \varepsilon _{N}\right) . \end{aligned}$$
The result follows as an application of Chebyshev's inequality and Lemma 5.9. \(\square \)

4.3 The strong asymptotic arbitrage strategy

In this section, using the results of Sect. 4.2, we modify the sequence \((\varphi ^{N})_{N\ge 1}\) constructed in Sect. 4.1, in order to construct an explicit SAA. A first modification will lead to a sequence of self-financing strategies \((\hat{\varphi }^{N})_{N\ge 1}\) providing a strictly positive profit with probability arbitrarily close to one and satisfying the admissibility conditions. Finally, after a second modification, we will obtain a new sequence of self-financing strategies \((\psi ^{N})_{N\ge 1}\) leading to the desired SAA.
The sequence \((\hat{\varphi }^{N})_{N\ge 1}\) is defined as follows. The position in stock is given by
$$\begin{aligned} \hat{\varphi }_{k}^{1,N}:=1_{\{k<T_{N}\}}\varphi _{k}^{1,N},\quad k\in \{-1,\,0,\ldots ,N\}, \end{aligned}$$
where
$$\begin{aligned} T_{N}:=T_{\varepsilon _{N}}^{(N,1)}\wedge T_{\varepsilon _{N}}^{(N,2)}\wedge \left( T_{\varepsilon _{N}}^{(N,3)}-1\right) \wedge \left( T_{\varepsilon _{N}}^{(N,4)}-1\right) , \end{aligned}$$
and the position in bond is derived from (2.5) setting \(\lambda _{N}=0\) and \(\hat{\varphi }_{-1}^{0,N}=0.\) Note that, since the random variables \({\mathscr {S}}_{n}^{(3)}\) and \({\mathscr {S}}_{n}^{(4)}\) are \({\mathscr {F}}_{n-1}\)-measurable, \(T_{\varepsilon _{N}}^{(N,3)}-1\) and \(T_{\varepsilon _{N}}^{(N,4)}-1\) are stopping times with respect to \(({\mathscr {F}}_{n})_{n=0}^{N}.\) Clearly, \(T_{\varepsilon _{N}}^{(N,1)}\) and \(T_{\varepsilon _{N}}^{(N,2)}\) are also stopping times with respect to \(({\mathscr {F}}_{n})_{n=0}^{N},\) and consequently, \(T_{N}\) as well.
By construction, the corresponding value process is given by
$$\begin{aligned} V_{n}\left( \hat{\varphi }^{N}\right) =\frac{1}{N}\sum \limits _{i=1}^{4}{\mathscr {S}}_{n\wedge T_{N}}^{(i)}. \end{aligned}$$
In particular, we have
$$\begin{aligned} V_{n}\left( \hat{\varphi }^{N}\right) =V_{n\wedge T_{N}}\left( \varphi ^{N}\right) \ge -4\varepsilon _{N} +\frac{1}{N}\left( \theta _{T_{N}}^{(1)}+\theta _{T_{N}}^{(2)}\right) 1_{\{n\ge T_{N}\}}. \end{aligned}$$
(4.7)
The next lemma provides a uniform control for the second term on the right-hand side of (4.7).
Lemma 4.5
For \(i\in \{1,\,2,\,3\},\) there exists a constant \(C_{i}^{\theta }>0\) such that
$$\begin{aligned} \sup \limits _{1\le n\le N}\left| \theta _{n}^{(i)}\right| \le C_{i}^\theta N^{H-\frac{1}{2}}. \end{aligned}$$
Proof
It follows from the definition of the random variables \(\theta _{n}^{(i)}\) and Lemma 5.1. \(\square \)
Now, motivated by our previous results, we choose
$$\begin{aligned} \varepsilon _{N}:=\frac{\ln (N)}{N^{\frac{1}{2}\wedge (2-2H)}}\quad \text { and}\quad \hat{c}_{N}:=4\varepsilon _{N}+\frac{C_{1,2}}{N^{\frac{3}{2}-H}}, \end{aligned}$$
where \(C_{1,2}=C_{1}^{\theta }+C_{2}^{\theta }\).
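As a quick numerical sanity check (not part of the argument), the rates \(\varepsilon _{N}\) and \(\hat{c}_{N}\) can be evaluated for a sample Hurst parameter; the constant `C12` below is an illustrative placeholder for \(C_{1,2}.\)

```python
import math

def eps_N(N, H):
    # eps_N = ln(N) / N^{(1/2) ∧ (2 - 2H)}
    return math.log(N) / N ** min(0.5, 2 - 2 * H)

def c_hat_N(N, H, C12=1.0):
    # hat c_N = 4 * eps_N + C12 / N^{3/2 - H}; C12 is an illustrative constant
    return 4 * eps_N(N, H) + C12 / N ** (1.5 - H)

H = 0.75  # any H in (1/2, 1) works
for N in (10**2, 10**4, 10**6):
    print(N, eps_N(N, H), c_hat_N(N, H))
```

Both quantities decrease to \(0\) as \(N\) grows, which is what the admissibility argument below relies on.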
Finally, for each \(N\ge 1,\) we define \(\psi ^{N}=(\psi ^{0,N},\,\psi ^{1,N})\) as follows. The position in stock, \(\psi ^{1,N},\) is given by:
$$\begin{aligned} \psi _{k}^{1,N}:=\frac{1}{\sqrt{\hat{c}_{N}}}\hat{\varphi }_{k}^{1,N},\quad k\in \{-1,\,0,\ldots ,N\}, \end{aligned}$$
and the position in bond, \(\psi ^{0,N},\) is constructed as before, through the self-financing conditions (2.5), setting \(\lambda _{N}=0\) and \(\psi _{-1}^{0,N}=0.\)
Proof
(Proof of Theorem 3.1) In order to have a SAA, we need to show that the two conditions of Definition 2.4 are satisfied. More precisely, we prove that these two conditions are verified for
$$\begin{aligned} c_{N}:=\sqrt{\hat{c}_{N}}\xrightarrow [N\rightarrow \infty ]{} 0\quad \text {and}\quad C_{N}:=\frac{\vartheta }{2\sqrt{\hat{c}_{N}}}\xrightarrow [N\rightarrow \infty ]{} \infty . \end{aligned}$$
Note that, from Lemma 4.5 and Eq. (4.7), the self-financing strategy \(\hat{\varphi }^{N}\) is \(\hat{c}_{N}\)-admissible. Since, in addition
$$\begin{aligned} V_{k}\left( \psi ^{N}\right) =\frac{1}{c_{N}}V_{k}\left( \hat{\varphi }^{N}\right) ,\quad k\in \{0,\ldots ,N\}, \end{aligned}$$
we deduce that \({\psi }^{N}\) is \(c_{N}\)-admissible. Regarding the second condition, we use the convergence behaviour of \(V_{N}(\varphi ^{N})\) given in Corollary 4.2. First, note that
$$\begin{aligned} \left\{ T_{\varepsilon _{N}}^{(N,i)}-1\le N\right\} =\left\{ T_{\varepsilon _{N}}^{(N,i)}\le N\right\} , \end{aligned}$$
and then, from the choice of \(\varepsilon _{N}\) and Lemmas 4.3 and 4.4, we obtain
$$\begin{aligned} P\left( T_{N}\le N\right) \le \sum \limits _{i=1}^{4} P\left( T_{\varepsilon _{N}}^{(N,i)}\le N\right) \xrightarrow [N\rightarrow \infty ]{} 0. \end{aligned}$$
(4.8)
On the other hand, over the set \(\{N<T_{N}\},\) we have
$$\begin{aligned} V_{N}\left( \psi ^{N}\right) =\frac{1}{c_{N}}V_{N}\left( \varphi ^{N}\right) . \end{aligned}$$
In particular, we get
$$\begin{aligned} P\left( V_{N}\left( \varphi ^{N}\right) >\vartheta /2\right)&=P\left( \left\{ V_{N}\left( \varphi ^{N}\right) >\vartheta /2\right\} \cap \left\{ T_{N}\le N\right\} \right) \\&\quad +P\left( \left\{ V_{N}\left( \varphi ^{N}\right) >\vartheta /2\right\} \cap \left\{ T_{N}> N\right\} \right) \\&\le P\left( T_{N}\le N\right) +P\left( V_{N}\left( \psi ^{N}\right) >C_{N}\right) . \end{aligned}$$
Letting now \(N\rightarrow \infty \) and applying the results of Corollary 4.2 and (4.8), we get
$$\begin{aligned} \lim \limits _{N\rightarrow \infty }P\left( V_{N}\left( \psi ^{N}\right) >C_{N}\right) =1. \end{aligned}$$
The desired result is then proven. \(\square \)

5 Proof of Theorem 3.2

At this stage, we have all the ingredients needed to prove the main result under transaction costs.
Proof
(Proof of Theorem 3.2) In order to show the first condition of Definition 2.4, we have to make sure that the admissibility condition in the presence of transaction costs is fulfilled. Since \(\psi _{k}^{1,N}=\frac{1}{c_{N}}\hat{\varphi }_{k}^{1,N},\) we have
$$\begin{aligned} V_{n}^{\lambda _{N}}\left( \psi ^{N}\left( \lambda _{N}\right) \right) =\frac{1}{c_{N}}V_{n}^{\lambda _{N}}\left( \hat{\varphi }^{N}\left( \lambda _{N}\right) \right) , \end{aligned}$$
(5.1)
where \(\hat{\varphi }^{N}(\lambda _{N})=(\hat{\varphi }^{0,N}(\lambda _{N}),\,\hat{\varphi }^{1,N})\) and \(\hat{\varphi }^{0,N}(\lambda _{N})\) is determined from \(\hat{\varphi }^{1,N}\) by means of the \(\lambda _{N}\)-self-financing conditions (2.5). Additionally, from (2.6) we deduce that
$$\begin{aligned} V_{n}^{\lambda _{N}}\left( \hat{\varphi }^{N}\left( \lambda _{N}\right) \right) =V_{0}^{\lambda _{N}}\left( \hat{\varphi }^{N}\left( \lambda _{N}\right) \right) +V_{n}\left( \hat{\varphi }^{N}\right) -\lambda _{N}\left( {\mathscr {V}}_{n}^{1}+{\mathscr {V}}_{n}^{2}+{\mathscr {V}}_{n}^{3}\right) , \end{aligned}$$
(5.2)
where
$$\begin{aligned} {\mathscr {V}}_{n}^{1}&:=\sum \limits _{k=1}^{n}\mathbb {I}_{\{\varDelta _{k}\hat{\varphi }^{1,N}\ge 0\}}\varDelta _{k}\left[ {\left( \hat{\varphi }^{1,N}\right) }^{+} S^{N}\right] ,\\ {\mathscr {V}}_{n}^{2}&:=\sum \limits _{k=1}^{n}\mathbb {I}_{\{\varDelta _{k}\hat{\varphi }^{1,N}< 0\}}\varDelta _{k}\left[ {\left( \hat{\varphi }^{1,N}\right) }^{-} S^{N}\right] ,\\ {\mathscr {V}}_{n}^{3}&:=\sum \limits _{k=1}^{n}\mathbb {I}_{\{\varDelta _{k}\hat{\varphi }^{1,N}< 0\}}\hat{\varphi }_{k-1}^{1,N}\varDelta _{k}S^{N}. \end{aligned}$$
Using (2.7) and that \(\varphi _{0}^{1,N}=0,\) we see that \(V_{0}^{\lambda _{N}}(\hat{\varphi }^{N}(\lambda _{N}))=0.\) The second term in (5.2) is exactly the value process of the trading strategy \(\hat{\varphi }^{N}\) without transaction costs, and hence, by the results of the previous section, we have
$$\begin{aligned} V_{n}\left( \hat{\varphi }^{N}\right) \ge - \hat{c}_{N}. \end{aligned}$$
For the third term, we proceed as follows. Using that \(|\hat{\varphi }_{k}^{1,N}|\le |\varphi _{k}^{1,N}|,\) we obtain
$$\begin{aligned} \left| {\mathscr {V}}_{n}^{1}\right|&\le \sum \limits _{k=1}^{n}\left| \varphi ^{1,N}_{k}\right| S_{k}^{N}+\sum \limits _{k=1}^{n}\left| \varphi ^{1,N}_{k-1}\right| S_{k-1}^{N}\nonumber \\&\le \frac{1}{N^{1-H}}\left( \sum \limits _{k=1}^{n}\left| X_{k}\right| +\sum \limits _{k=2}^{n}\left| X_{k-1}\right| \right) . \end{aligned}$$
For the latter sums, we use the upper bounds in Lemmas 5.1 and 5.2 to obtain
$$\begin{aligned} \sum \limits _{k=1}^{n}\left| X_{k}\right|&\le \sum \limits _{k=1}^{n}\sum \limits _{l=1}^{k-1}j_{k}(l)+\sum \limits _{k=1}^{n}g_{k}\nonumber \\&\le \sum _{k=1}^{n}\left( \frac{C_{1}}{k^{2-2H}}\sum \limits _{l=1}^{\frac{k}{4}} \frac{1}{l^{H-\frac{1}{2}}}+C_{2}\sum \limits _{l=1}^{\frac{3k}{4}}\frac{1}{l^{\frac{3}{2}-H}} \right) +2^{H-\frac{1}{2}}gn\nonumber \\&\le \tilde{C}\sum \limits _{k=1}^{n} k^{H-\frac{1}{2}}+2^{H-\frac{1}{2}}g n\nonumber \\&\le \hat{C}_{1} n^{H+\frac{1}{2}}, \end{aligned}$$
(5.3)
where \(\tilde{C}\) and \(\hat{C}_{1}\) are appropriate strictly positive constants. Similarly, we have
$$\begin{aligned} \sum \limits _{k=2}^{n}\left| X_{k-1}\right| \le \hat{C}_{1}n^{H+\frac{1}{2}}. \end{aligned}$$
Hence, we deduce that
$$\begin{aligned} \left| {\mathscr {V}}_{n}^{1}\right| \le \hat{C}_{1}\frac{n^{H+\frac{1}{2}}}{N^{1-H}}\le \hat{C}_{1}N^{2H-\frac{1}{2}}. \end{aligned}$$
For the term \({\mathscr {V}}_{n}^{2}\) in (5.2), we proceed in a similar way and we deduce that there is a constant \(\hat{C}_{2}>0\) such that
$$\begin{aligned} \left| {\mathscr {V}}_{n}^{2}\right| \le \hat{C}_{2} N^{2H-\frac{1}{2}}. \end{aligned}$$
(5.4)
It is left to find an upper bound for \(|{\mathscr {V}}_{n}^{3}|.\) Using (4.1) we write
$$\begin{aligned} \left| {\mathscr {V}}_{n}^{3}\right|&\le \sum \limits _{k=1}^{n}\left| \varphi _{k-1}^{1,N}\varDelta _{k}S^{N}\right| =\frac{1}{N}\sum \limits _{k=1}^{n}\left| X_{k-1}X_{k}\right| \nonumber \\&\le \frac{1}{N}\sum _{k=1}^{n}\left| \theta _{k}^{(1)}+\theta _{k}^{(2)}+\theta _{k}^{(3)}\right| +\frac{1}{N}\sum \limits _{k=1}^{n}\left| \theta _{k}^{(4)}\right| . \end{aligned}$$
(5.5)
From Lemma 4.5, defining \(C_{1,2,3}=C_{1}^{\theta }+C_{2}^{\theta }+C_{3}^{\theta }>0,\) we get
$$\begin{aligned} \sup \limits _{1\le k\le N}\left| \theta _{k}^{(1)}+\theta _{k}^{(2)}+\theta _{k}^{(3)}\right| \le C_{1,2,3}N^{H-\frac{1}{2}}. \end{aligned}$$
We conclude that
$$\begin{aligned} \frac{1}{N}\sum _{k=1}^{n}\left| \theta _{k}^{(1)}+\theta _{k}^{(2)}+\theta _{k}^{(3)}\right| \le C_{1,2,3} N^{H-\frac{1}{2}}. \end{aligned}$$
For the last term in (5.5), we first note that, using Lemma 5.1 and a calculation similar to that in (5.3), one gets \(\sum \nolimits _{\ell =1}^{k-1}j_{k}(\ell )\le \bar{C} k^{H-\frac{1}{2}}\) for some constant \(\bar{C}>0.\) Using this and the definition of \(\theta _{k}^{(4)},\) we obtain
$$\begin{aligned} \frac{1}{N}\sum \limits _{k=1}^{n}\left| \theta _{k}^{(4)}\right| \le \frac{\bar{C}^{2}}{N}\sum \limits _{k=1}^{n} k^{2H-1}\le \bar{C}^{2} N^{2H-1}. \end{aligned}$$
Hence, for an appropriate constant \(\hat{C}_{3}>0,\) we have
$$\begin{aligned} \left| {\mathscr {V}}_{n}^{3}\right| \le \hat{C}_{3} N^{2H-1}. \end{aligned}$$
From (5.2) we deduce, for some constant \(c_{*}>0,\) that:
$$\begin{aligned} V_{n}^{\lambda _{N}}\left( \hat{\varphi }^{N}\left( \lambda _{N}\right) \right) \ge V_{n}\left( \hat{\varphi }^{N}\right) -c_{*}\lambda _{N}N^{2H-\frac{1}{2}}. \end{aligned}$$
We return to the self-financing trading strategy \(\psi ^{N}.\) Thanks to (5.1), we get
$$\begin{aligned} V_{n}^{\lambda _{N}}\left( \psi ^{N}\left( \lambda _{N}\right) \right) \ge V_{n}\left( \psi ^{N}\right) -c_{*}\frac{\lambda _{N}}{c_{N}}N^{2H-\frac{1}{2}}. \end{aligned}$$
(5.6)
Since \(\psi ^{N}\) is \(c_{N}\)-admissible, we deduce that
$$\begin{aligned} V_{n}^{\lambda _{N}}\left( \psi ^{N}\left( \lambda _{N}\right) \right) \ge -c_{N} -c_{*}\frac{\lambda _{N}}{c_{N}}N^{2H-\frac{1}{2}}=:-c_{N}\left( \lambda _{N}\right) , \end{aligned}$$
or equivalently, that \(\psi ^{N}(\lambda _{N})\) is \(c_{N}(\lambda _{N})\)-admissible. Note that it is enough to choose
$$\begin{aligned} \lambda _{N}=o\left( \frac{c_{N}}{N^{2H-\frac{1}{2}}}\right) =o \left( \frac{\sqrt{\ln {N}}}{N^{\left( 2H-\frac{1}{4}\right) \wedge (H+\frac{1}{2})}}\right) , \end{aligned}$$
to have \(c_{N}(\lambda _{N})\xrightarrow [N\rightarrow \infty ]{}0.\)
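This rate condition can be illustrated numerically under illustrative constants (`c_star` stands for \(c_{*},\) and the constant \(C_{1,2}\)-term in \(\hat{c}_{N}\) is dropped). One admissible choice is \(\lambda _{N}=c_{N}/(N^{2H-\frac{1}{2}}\ln N),\) which is \(o(c_{N}/N^{2H-\frac{1}{2}})\) and gives \(c_{N}(\lambda _{N})=c_{N}+c_{*}/\ln N\rightarrow 0{\text {:}}\)

```python
import math

H = 0.75
c_star = 1.0  # illustrative stand-in for the constant c_*

def c_N(N):
    # c_N = sqrt(hat c_N), keeping only the leading 4*eps_N term of hat c_N
    return math.sqrt(4 * math.log(N) / N ** min(0.5, 2 - 2 * H))

def lam(N):
    # an admissible choice: lambda_N = c_N / (N^{2H - 1/2} * ln N)
    return c_N(N) / (N ** (2 * H - 0.5) * math.log(N))

def c_N_lambda(N):
    # c_N(lambda_N) = c_N + c_* * lambda_N * N^{2H - 1/2} / c_N
    return c_N(N) + c_star * lam(N) / c_N(N) * N ** (2 * H - 0.5)

for N in (10**2, 10**4, 10**6):
    print(N, c_N_lambda(N))
```

The printed values decrease towards \(0,\) consistent with the admissibility requirement.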
The second condition of Definition 2.4 follows immediately. Indeed, defining
$$\begin{aligned} C_{N}\left( \lambda _{N}\right) :=C_{N}-c_{*}\frac{\lambda _{N}}{c_{N}} \,N^{2H-\frac{1}{2}}\xrightarrow [N\rightarrow \infty ]{}\infty , \end{aligned}$$
and using (5.6), we obtain
$$\begin{aligned} P\left( V_{N}^{\lambda _{N}}\left( \psi ^{N}\left( \lambda _{N}\right) \right) \ge C_{N}\left( \lambda _{N}\right) \right)&\ge P\left( V_N\left( \psi ^{N}\right) \ge C_{N}\right) . \end{aligned}$$
The second condition follows from the properties of \((\psi ^{N})_{N\ge 1},\) and the desired result is proven. \(\square \)

Acknowledgments

The second author gratefully acknowledges financial support from the Austrian Science Fund (FWF): J3453-N25.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Appendix 1: Some useful results

We recall some results obtained in or easily derived from [8] for the quantities involved in the definition of the fractional binary markets, i.e., \(j_{n}\) and \(g_{n}.\)
Lemma 5.1
There exist constants \(C_{1},\,C_{2}>0\) such that for all \(i\ge 2\) we have
  • For \(1\le \ell \le i/4{\text {:}}\)
    $$\begin{aligned} j_{i}(\ell )\le \frac{C_{1}}{i^{2-2H}\ell ^{H-\frac{1}{2}}}. \end{aligned}$$
  • For all \(1\le k\le 3i/4{\text {:}}\)
    $$\begin{aligned} j_{i}(i-k)\le \frac{C_{2}}{k^{\frac{3}{2}-H}}. \end{aligned}$$
Proof
The proof of the first inequality follows using similar arguments to those used in the proof of [8, Proposition 5.1]. The second inequality uses analogous upper bounds to those obtained in the proof of [8, Lemma 5.2]. \(\square \)
The next result corresponds to [8, Lemma 4.2 and Theorem 5.4].
Lemma 5.2
For all \(1< n\le N,\) we have
$$\begin{aligned} g\le g_{n}\le g\left( 1+\frac{1}{n-1}\right) ^{H-\frac{1}{2}}\le g 2^{H-\frac{1}{2}}, \end{aligned}$$
where \(g=\frac{\sigma c_{H}}{H+\frac{1}{2}}.\) This implies that \(\lim _{n\rightarrow \infty }g_{n}=g.\) Moreover,
$$\begin{aligned} {\mathscr {Y}}_{n}\xrightarrow [n\rightarrow \infty ]{(d)}{\mathscr {Y}}:= 2g\sum \limits _{k=1}^{\infty }\rho (k)\xi _{k}. \end{aligned}$$
(5.7)

Appendix 2: A related \(L^{2}\)-mixingale

In this section we are interested in the properties of the process \(({\mathscr {Y}}_{k}^{*})_{k\ge 1}\) defined in the proof of Theorem 4.1 as \({\mathscr {Y}}_{k}^{*}:={\mathscr {Y}}_{k-1}{\mathscr {Y}}_{k}-E[{\mathscr {Y}}_{k-1}{\mathscr {Y}}_{k}].\) In this respect, the notion of mixingale plays a crucial role.
Definition 5.3
(\(L^{p}\)-Mixingale) A sequence \(\{X_{k}\}_{k\ge 1}\) of random variables is an \(L^{p}\)-mixingale with respect to a given filtration \(({\mathscr {F}}_{k})_{k\in {\mathbb Z}},\) if there exist non-negative constants \(\{c_{k}\}_{k\ge 1}\) and \(\{\psi _{m}\}_{m\ge 0}\) such that \(\psi _{m}\rightarrow 0\) as \(m\rightarrow \infty \) and for all \(k\ge 1\) and \(m\ge 0\) the following hold:
(a)
\({\parallel E(X_{k}|\mathscr {F}_{k-m})\parallel }_{p}\le c_{k}\psi _{m},\)
 
(b)
\({\parallel X_{k}-E(X_{k}|\mathscr {F}_{k+m})\parallel }_{p}\le c_{k}\psi _{m+1}.\)
 
We associate to \(({\mathscr {Y}}_{k}^{*})_{k\ge 1}\) the filtration \(\mathbb {F}^{*}:=({\mathscr {F}}_{i}^{*})_{i\in {\mathbb Z}}\) given by \({\mathscr {F}}_{i}^{*}:={\mathscr {F}}_{i-1},\) for \(i\ge 2,\) and \({\mathscr {F}}_{i}^{*}:=\{\emptyset ,\,\Omega \},\) for \(i\le 1.\)
Proposition 5.4
The process \(({\mathscr {Y}}_{k}^{*})_{k\ge 1}\) is an \(L^{2}\)-bounded \(L^{2}\)-mixingale with respect to \(\mathbb {F}^{*}.\)
Proof
We first prove that the process \(({\mathscr {Y}}_{k}^{*})_{k\ge 1}\) is \(L^{2}\)-bounded. Note that
$$\begin{aligned} E\left[ \left| {\mathscr {Y}}_{k-1}{\mathscr {Y}}_{k}\right| ^{2}\right] \le E\left[ \left| {\mathscr {Y}}_{k-1}\right| ^{4}\right] +E\left[ \left| {\mathscr {Y}}_{k}\right| ^{4}\right] , \end{aligned}$$
and using Khintchine’s inequality (see [14, (1)]) for both terms on the right side, we obtain
$$\begin{aligned} E\left[ \left| {\mathscr {Y}}_{k-1}{\mathscr {Y}}_{k}\right| ^{2}\right] \le 3\left( E\left[ \left| {\mathscr {Y}}_{k-1}\right| ^{2}\right] ^{2}+E\left[ \left| {\mathscr {Y}}_{k}\right| ^{2}\right] ^{2}\right) . \end{aligned}$$
From (4.3), we conclude that \(E[|{\mathscr {Y}}_{k-1}{\mathscr {Y}}_{k}|^{2}]\) is uniformly bounded, and, therefore, \(({\mathscr {Y}}_{k}^{*})_{k\ge 1}\) is \(L^{2}\)-bounded.
Now, we show that \({\mathscr {Y}}_{k}^{*}\) is an \(L^{2}\)-mixingale with respect to \(\mathbb {F}^{*},\) i.e., that the two conditions of Definition 5.3 are satisfied. Note that, since \({\mathscr {Y}}_{k}^{*}\) is \({\mathscr {F}}_{k}^{*}\)-measurable, condition (b) is automatically satisfied. Hence, it remains to prove condition (a) of Definition 5.3, i.e.,
$$\begin{aligned} {\parallel E\left[ {\mathscr {Y}}_{k}^{*}|{\mathscr {F}}_{k-m}^{*}\right] \parallel }_{2}\le c_{k}\psi _{m},\quad k\ge 1,~ m\ge 0, \end{aligned}$$
(5.8)
for some non-negative constants \(c_{k}\) and \(\psi _{m}\) such that \(\psi _{m}\rightarrow 0\) as \(m\rightarrow \infty .\)
Note that, for \(k\le m+1,\) the left-hand side of (5.8) is equal to zero, and then, (5.8) holds for any choice of \(c_{k}\) and \(\psi _{m}.\) The case \(m=0\) can be easily treated using that \(({\mathscr {Y}}_{k}^{*})_{k\ge 1}\) is \(L^{2}\)-bounded. Now, we assume that \(k-1>m\ge 1,\) and we write \({\mathscr {Y}}_{k-1}{\mathscr {Y}}_{k}\) as follows:
$$\begin{aligned} {\mathscr {Y}}_{k}{\mathscr {Y}}_{k-1}&=\left( \sum _{l=1}^{k-1}j_{k}(l)\xi _{l}\right) \left( \sum _{l=1}^{k-2}j_{k-1}(l)\xi _{l}\right) \nonumber \\&=\left( \underbrace{\sum _{l=1}^{k-m-1}j_{k}(l)\xi _{l}}_{P_{k-m}^{(1)}}+\underbrace{\sum _{l=k-m}^{k-1}j_{k}(l)\xi _{l}}_{F_{k-m}^{(1)}}\right) \left( \underbrace{\sum _{l=1}^{k-m-1}j_{k-1}(l)\xi _{l}}_{P_{k-m}^{(2)}}+\underbrace{\sum _{l=k-m}^{k-2}j_{k-1}(l)\xi _{l}}_{F_{k-m}^{(2)}}\right) \nonumber \\&=P_{k-m}^{(1)}P_{k-m}^{(2)}+P_{k-m}^{(2)}F_{k-m}^{(1)}+P_{k-m}^{(1)}F_{k-m}^{(2)}+F_{k-m}^{(1)}F_{k-m}^{(2)}. \end{aligned}$$
(5.9)
Using that \(P_{k-m}^{(i)}\) is independent of \(F_{k-m}^{(j)},\) that \(P_{k-m}^{(i)}\) is measurable with respect to \({\mathscr {F}}_{k-m}^{*}\) and that \(F_{k-m}^{(i)}\) is independent of \({\mathscr {F}}_{k-m}^{*}\) for all \(i,\,j\in \{1,\,2\},\) we deduce from (5.9) that
$$\begin{aligned} E\left[ {\mathscr {Y}}_{k}{\mathscr {Y}}_{k-1}\right] =E\left[ P_{k-m}^{(1)}P_{k-m}^{(2)}\right] +E\left[ F_{k-m}^{(1)}F_{k-m}^{(2)}\right] , \end{aligned}$$
(5.10)
and
$$\begin{aligned} E\left[ {\mathscr {Y}}_{k}{\mathscr {Y}}_{k-1}|{\mathscr {F}}_{k-m}^{*}\right] =P_{k-m}^{(1)}P_{k-m}^{(2)}+E\left[ F_{k-m}^{(1)}F_{k-m}^{(2)}\right] . \end{aligned}$$
(5.11)
From (5.10) and (5.11), we have
$$\begin{aligned} E\left[ {\mathscr {Y}}_{k}^{*}|{\mathscr {F}}_{k-m}^{*}\right] =P_{k-m}^{(1)}P_{k-m}^{(2)} -E\left[ P_{k-m}^{(1)}P_{k-m}^{(2)}\right] . \end{aligned}$$
Now, we write
$$\begin{aligned} P_{k-m}^{(1)}P_{k-m}^{(2)}=\sum _{l\ne p}^{k-m-1}j_{k}(l)j_{k-1}(p)\xi _{l}\xi _{p}+\sum _{l=1}^{k-m-1}j_{k}(l)j_{k-1}(l), \end{aligned}$$
which implies, using the independence of \(\xi _{l}\) and \(\xi _{p}\) for \(l\ne p,\) that
$$\begin{aligned} E\left[ P_{k-m}^{(1)}P_{k-m}^{(2)}\right] =\sum _{l=1}^{k-m-1} j_{k}(l)j_{k-1}(l), \end{aligned}$$
and hence
$$\begin{aligned} E\left[ {\mathscr {Y}}_{k}^{*}|{\mathscr {F}}_{k-m}^{*}\right] =P_{k-m}^{(1)}P_{k-m}^{(2)} -E\left[ P_{k-m}^{(1)}P_{k-m}^{(2)}\right] =\sum _{l\ne p}^{k-m-1}j_{k}(l)j_{k-1}(p)\xi _{l}\xi _{p}=:P_{k-m}^{*}. \end{aligned}$$
Note first that
$$\begin{aligned} E\left[ \left| P_{k-m}^{*}\right| ^{2}\right] \le 2\sum _{l\ne p}^{k-m-1}\left( j_{k}(l)j_{k-1}(p)\right) ^{2}\le 2\sum _{l=1}^{k-m-1}\left( j_{k}(l)\right) ^{2}\sum _{l=1}^{k-m-1}\left( j_{k-1}(l)\right) ^{2}. \end{aligned}$$
(5.12)
Additionally, using Lemma 5.1, we see that
$$\begin{aligned} \sum _{l=1}^{k-m-1}\left( j_{k}(l)\right) ^{2}&\le \sum _{l=1}^{\frac{k}{4}}\left( j_{k}(l)\right) ^{2}+\sum _{l=m+1}^{\frac{3k}{4}-1}\left( j_{k}(k-l)\right) ^{2}\le \sum _{l=1}^{\frac{k}{4}}\left( j_{k}(l)\right) ^{2} +\sum _{l=1}^{\frac{3k}{4}}\left( j_{k}(k-l)\right) ^{2}\\&\le \frac{C_{1}}{k^{4-4H}}\sum _{l=1}^{\frac{k}{4}}\frac{1}{l^{2H-1}} +C_{2}\sum _{l=1}^{\frac{3k}{4}}\frac{1}{l^{3-2H}}\le \frac{C^0}{k^{2-2H}}\le \frac{C^0}{m^{2-2H}}, \end{aligned}$$
where \(C^{0}>0\) is a well-chosen constant. In the previous inequality, the term \(\sum \nolimits _{l=m+1}^{\frac{3k}{4}-1}(j_{k}(k-l))^{2}\) has to be understood as equal to zero if \(k-m-1\le k/4.\) A similar argument shows that there is a constant \(C^{*}>0\) such that
$$\begin{aligned} \sum _{l=1}^{k-m-1}\left( j_{k-1}(l)\right) ^{2}\le \frac{C^{*}}{{m}^{2-2H}}. \end{aligned}$$
Consequently, Eq. (5.12) leads to
$$\begin{aligned} E\left[ \left| P_{k-m}^{*}\right| ^{2}\right] \le \frac{C}{m^{4-4H}}, \end{aligned}$$
where \(C>0\) is an appropriate constant. We therefore obtain that, for an appropriate constant \(c>0,\) the following holds:
$$\begin{aligned} \sqrt{ E\left[ \left| E\left[ {\mathscr {Y}}_{k}^{*}|{\mathscr {F}}_{k-m}^{*}\right] \right| ^{2}\right] }=\sqrt{E\left[ \left| P_{k-m}^{*}\right| ^{2}\right] }\le c\frac{1}{m^{2-2H}}. \end{aligned}$$
The result follows by choosing \(c_{k}:=c\) and \(\psi _{m}:=m^{2H-2}.\) \(\square \)
Remark 5.5
We have proved that \(({\mathscr {Y}}_{k}^{*})_{k\ge 1}\) is an \(L^{2}\)-bounded \(L^{2}\)-mixingale. In particular, \(({\mathscr {Y}}_{k}^{*})_{k\ge 1}\) is a uniformly integrable \(L^{1}\)-mixingale (with the same \(c_{k}\) and \(\psi _{m}\)). Since, in addition, \(\sum \nolimits _{k=1}^{n} c_{k}/n=c<\infty ,\) the conditions of the law of large numbers for mixingales given in [1, Theorem 1] are satisfied.
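The law of large numbers invoked here can be illustrated by simulation. The moving-average sequence below is only an illustrative example of a correlated, mixingale-type sequence (it is not the process \(({\mathscr {Y}}_{k}^{*})\) itself, and the polynomial weight decay is an arbitrary choice mimicking \(\psi _{m}=m^{2H-2}\)); its sample mean nevertheless concentrates around the expectation \(0,\) as the mixingale LLN predicts.

```python
import random

random.seed(1)
n, m = 50_000, 20
# Moving average of Rademacher signs: each Y_k depends on xi_k, ..., xi_{k+m-1},
# so the sequence is centred but correlated at lags up to m.
w = [(j + 1) ** -0.5 for j in range(m)]  # illustrative polynomially decaying weights
xi = [random.choice([-1.0, 1.0]) for _ in range(n + m)]
Y = [sum(w[j] * xi[k + j] for j in range(m)) for k in range(n)]
mean = sum(Y) / n
print(abs(mean))  # small: the sample mean converges to E[Y_k] = 0
```

Despite the dependence between nearby terms, the average over \(n\) terms is close to the expectation, which is the qualitative content of [1, Theorem 1].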

Appendix 3: Some maximal inequalities

We start with the following generalization of Kolmogorov's maximal inequality. Let \(c\) be a strictly positive constant and \(W=(W_{k})_{k=1}^{N}\) a sequence of centred random variables. We define the stopping time
$$\begin{aligned} T_{c}(W):=\inf \left\{ k\in \{1,\ldots ,N\}{\text {:}}\,\left| W_{k}\right| > c\right\} . \end{aligned}$$
Lemma 5.6
(Maximal inequality) Assume that, for all \(k\in \{1,\ldots ,N\},\) we have
$$\begin{aligned} E\left[ \left( W_{N}-W_{k}\right) W_{k} 1_{\{T_{c}(W)=k\}}\right] =0. \end{aligned}$$
(5.13)
Then
$$\begin{aligned} P\left( \sup \limits _{1\le i\le N}\left| W_{i}\right| > c\right) =P\left( T_{c}(W)\le N\right) \le \frac{{{\mathrm{Var}}}\left( W_{N}\right) }{c^{2}}. \end{aligned}$$
Proof
We may assume that \({{\mathrm{Var}}}(W_N)<\infty .\) Note that
$$\begin{aligned} {{\mathrm{Var}}}\left( W_{N}\right)&=E\left[ W_{N}^{2}\right] \ge \sum \limits _{k=1}^{N} E\left[ \left( W_{k} +W_{N}-W_{k}\right) ^{2} 1_{\{T_{c}(W)=k\}}\right] \\&\ge \sum \limits _{k=1}^{N} \left( E\left[ W_{k}^{2} 1_{\{T_{c}(W)=k\}}\right] + 2E\left[ \left( W_{N}-W_{k}\right) W_{k} 1_{\{T_{c}(W)=k\}}\right] \right) \\&=\sum \limits _{k=1}^{N} E\left[ W_{k}^{2} 1_{\{T_{c}(W)=k\}}\right] \ge c^{2}P\left( T_{c}(W)\le N\right) . \end{aligned}$$
The result follows. \(\square \)
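The inequality can be checked by simulation for a simple example: a centred random walk with independent Rademacher increments, for which condition (5.13) holds because \(W_{N}-W_{k}\) is independent of the past of the walk. The parameters below are arbitrary.

```python
import random

random.seed(2)
N, c, trials = 100, 15.0, 2000
exceed = 0
for _ in range(trials):
    s, hit = 0.0, False
    for _ in range(N):
        s += random.choice([-1.0, 1.0])  # independent centred increments
        if abs(s) > c:  # the event {T_c(W) <= N}
            hit = True
            break
    exceed += hit
estimate = exceed / trials  # Monte Carlo estimate of P(sup_k |W_k| > c)
bound = N / c ** 2          # Var(W_N) / c^2 = N / c^2 for +-1 steps
print(estimate, "<=", bound)
```

The empirical exceedance probability stays below the variance bound, as Lemma 5.6 guarantees.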
The previous maximal inequality is used in the study of the stopping times \(T_{\varepsilon _N}^{(N,i)},\,i\in \{1,\,2,\,3\},\) defined in Sect. 4.2. For the stopping time \(T_{\varepsilon _N}^{(N,4)},\) we need a maximal inequality fitting the properties of \(({\mathscr {Y}}^{*}_{n})_{n\ge 1}.\)
We define the random variables \(X_{i,k}:=E[{\mathscr {Y}}_{i}^{*}|{\mathscr {F}}_{i-k}^{*}]-E[{\mathscr {Y}}_{i}^{*}|{\mathscr {F}}_{i-k-1}^{*}],\,i\in {\mathbb N},\,k\in {\mathbb Z}.\) Note that \(X_{i,k}=0\) if \(k<0\) or \(i\le k+1.\) As a consequence, we have
$$\begin{aligned} Y_{n,k}:=\sum \limits _{i=1}^{n} X_{i,k}=\sum \limits _{i=k+2}^{n} X_{i,k}. \end{aligned}$$
We also define \({\mathscr {S}}_{n}^{*}:=\sum \nolimits _{k=1}^{n} {\mathscr {Y}}_{k}^{*}= {\mathscr {S}}_{n}^{(4)}-E[{\mathscr {S}}_{n}^{(4)}].\) The following result provides the desired maximal inequality for \(({\mathscr {S}}_{n}^{*})_{n\ge 1}.\)
Lemma 5.7
For all \(n\ge 1,\) we have
$$\begin{aligned} {\mathscr {S}}_{n}^{*}=\sum _{k=-\infty }^{\infty } Y_{n,k}=\sum \limits _{k=0}^{n-2}Y_{n,k}\,\text {a.s.}, \end{aligned}$$
and for any sequence of strictly positive numbers \((a_{k})_{k\in {\mathbb Z}},\) we have
$$\begin{aligned} E\left[ \sup _{n\le N}\left| {\mathscr {S}}_n^*\right| ^2\right] \le 4\left( \sum \limits _{k=0}^{N-2} a_k\right) \left( \sum \limits _{k=0}^{N-2} a_k^{-1}{{\mathrm{Var}}}\left( Y_{N,k}\right) \right) . \end{aligned}$$
(5.14)
Proof
From Proposition 5.4, we know that \(({\mathscr {Y}}^*_n)_{n\ge 1}\) is a sequence of centred square integrable random variables. It is also straightforward to see that
$$\begin{aligned} E\left[ {\mathscr {Y}}_n^*|{\mathscr {F}}^*_{-\infty }\right] ={\mathscr {Y}}_n^*-E\left[ {\mathscr {Y}}_n^*|{\mathscr {F}}^*_{\infty }\right] =0\,\text {a.s.}, \end{aligned}$$
where \({\mathscr {F}}^*_{-\infty }:=\{\emptyset ,\,\Omega \}\) and \({\mathscr {F}}^*_{\infty }:=\sigma (\xi _{i}{\text {:}}\,i\ge 1).\) Therefore, the first statement follows as a direct application of [19, Lemma 1.5] and the fact that \(Y_{n,k}=0\) for \(k<0\) and \(k>n-2.\) For the remaining part, we need to slightly modify the arguments of [19, Lemma 1.5]. First note that, for \(k\ge 1,\, (Y_{n,k})_{n\ge 1}\) is a square integrable \(({\mathscr {F}}^*_{n-k})_{n\ge 1}\)-martingale. On the other hand, using the Cauchy–Schwarz inequality, we obtain
$$\begin{aligned} \left( {\mathscr {S}}_n^*\right) ^2=\left( \sum \limits _{k=0}^{n-2} \sqrt{a_k}\frac{Y_{n,k}}{\sqrt{a_k}}\right) ^2\le \left( \sum \limits _{k=0}^{n-2}a_k\right) \left( \sum \limits _{k=0}^{n-2} \frac{Y_{n,k}^2}{a_k}\right) \le \left( \sum \limits _{k=0}^{N-2}a_k\right) \left( \sum \limits _{k=0}^{N-2} \frac{Y_{n,k}^2}{a_k}\right) . \end{aligned}$$
Taking \(\sup _{n\le N}\) and then expectations on both sides of the above inequality, we apply Doob’s \(L^2\) inequality to each martingale \((Y_{n,k})_{n\ge 1}\) to bound the right-hand side. The result follows. \(\square \)
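For the reader’s convenience, the Doob step at the end of the proof can be written out: since each column \((Y_{n,k})_{n\ge 1}\) is a square integrable martingale, Doob’s \(L^2\) inequality gives
$$\begin{aligned} E\left[ \sup _{n\le N} Y_{n,k}^2\right] \le 4E\left[ Y_{N,k}^2\right] =4{{\mathrm{Var}}}\left( Y_{N,k}\right) , \end{aligned}$$
and hence
$$\begin{aligned} E\left[ \sup _{n\le N}\left| {\mathscr {S}}_n^*\right| ^2\right] \le \left( \sum \limits _{k=0}^{N-2}a_k\right) \sum \limits _{k=0}^{N-2}\frac{E\left[ \sup _{n\le N}Y_{n,k}^2\right] }{a_k}\le 4\left( \sum \limits _{k=0}^{N-2}a_k\right) \left( \sum \limits _{k=0}^{N-2}a_k^{-1}{{\mathrm{Var}}}\left( Y_{N,k}\right) \right) . \end{aligned}$$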
In order to obtain an explicit upper bound for the left-hand side in (5.14), we start by studying the variance of \(Y_{N,k}.\)
Lemma 5.8
For all \(0\le k< i-1,\) we have
$$\begin{aligned} X_{i,k}=\xi _{i-k-1}\sum \limits _{\ell =1}^{i-k-2}j_{i}(\ell ,\,i-k-1)\xi _\ell , \end{aligned}$$
where \(j_i(\ell ,p):=j_i(\ell )j_{i-1}(p)+j_i(p)j_{i-1}(\ell ).\) In particular, for each \(k\ge 0,\) the random variables \((X_{i,k})_{ i> k+1}\) are centred and pairwise uncorrelated. Moreover, we have
$$\begin{aligned} {{\mathrm{Var}}}\left( Y_{N,k}\right) =\sum \limits _{i=k+2}^N\sum \limits _{\ell =1}^{i-k-2}j_i(\ell ,\,i-k-1)^2. \end{aligned}$$
Proof
This follows directly from the expression of \(E[{\mathscr {Y}}_i^*|{\mathscr {F}}_{i-k}^*]\) obtained in the proof of Proposition 5.4. \(\square \)
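The structural claims of Lemma 5.8 (centred, pairwise uncorrelated, variance given by the sum of squared weights) can be checked exactly on a toy analogue. The sketch below (not from the paper) enumerates all sign configurations of i.i.d. symmetric \(\pm 1\) variables and uses arbitrary placeholder weights \(w(i,\ell )\) in place of the actual \(j_i(\ell ,\,i-k-1)\):

```python
from itertools import product

def lemma58_toy_check(n=6, k=1):
    """Exact check, over all 2^n sign configurations, that variables of
    the form X_{i,k} = xi_{i-k-1} * sum_{l=1}^{i-k-2} w(i,l) * xi_l are
    centred, pairwise uncorrelated, and that Var(sum_i X_{i,k}) equals
    the sum of the squared weights."""
    w = lambda i, l: 1.0 / (i + l)                  # hypothetical weights
    configs = list(product([-1.0, 1.0], repeat=n))  # exact law of the signs
    p = 1.0 / len(configs)
    idx = list(range(k + 2, n + 1))                 # indices with X_{i,k} defined

    def X(i, xi):  # xi[m-1] plays the role of xi_m
        return xi[i - k - 2] * sum(w(i, l) * xi[l - 1]
                                   for l in range(1, i - k - 1))

    max_mean = max(abs(sum(p * X(i, xi) for xi in configs)) for i in idx)
    max_cross = max(abs(sum(p * X(i, xi) * X(m, xi) for xi in configs))
                    for i in idx for m in idx if i != m)
    var_of_sum = sum(p * sum(X(i, xi) for i in idx) ** 2 for xi in configs)
    sum_of_squares = sum(w(i, l) ** 2
                         for i in idx for l in range(1, i - k - 1))
    return max_mean, max_cross, var_of_sum, sum_of_squares
```

The cross-moments vanish for any choice of weights because the leading sign \(\xi _{i-k-1}\) remains unpaired in every mixed product, which is exactly the mechanism behind the uncorrelatedness in Lemma 5.8.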
The next result gives an explicit upper bound for the left-hand side in (5.14).
Lemma 5.9
There is a constant \(C^*>0\) such that
$$\begin{aligned} E\left[ \sup _{n\le N}\left| {\mathscr {S}}_n^*\right| ^2\right] \le C^* \ln (N)N^{4H-2}. \end{aligned}$$
Proof
We note first that
$$\begin{aligned} j_i(\ell ,\,p)^2\le 2\left( j_i(\ell )^2j_{i-1}(p)^2+j_i(p)^2j_{i-1}(\ell )^2\right) . \end{aligned}$$
Using Lemma 5.8, we get
$$\begin{aligned} \frac{{{\mathrm{Var}}}\left( Y_{N,k}\right) }{2}\le \underbrace{\sum \limits _{i=k+2}^N\sum \limits _{\ell =1}^{i-k-2} j_i(i-k-1)^2j_{i-1}(\ell )^2}_{:=V_{N,k}} +\underbrace{\sum \limits _{i=k+2}^N\sum \limits _{\ell =1}^{i-k-2} j_i(\ell )^2j_{i-1}(i-k-1)^2}_{:=W_{N,k}}. \end{aligned}$$
Now, we write \(V_{N,k}=V_{N,k}^1+V_{N,k}^2+V_{N,k}^3,\) where
$$\begin{aligned} V_{N,k}^1&:=\sum \limits _{i=k+2}^{\frac{4(k+1)}{3}\wedge N}j_i(i-k-1)^2\sum \limits _{\ell =1}^{i-k-2}j_{i-1}(\ell )^2,\\ V_{N,k}^2&:=\sum \limits _{i=\frac{4(k+1)}{3}\wedge N+1}^{N}j_i(i-k-1)^2\sum \limits _{\ell =1}^{\frac{i-1}{4}}j_{i-1}(\ell )^2,\\ V_{N,k}^3&:=\sum \limits _{i=\frac{4(k+1)}{3}\wedge N+1}^{N}j_i(i-k-1)^2\sum \limits _{\ell =\frac{i+3}{4}}^{i-k-2}j_{i-1}(\ell )^2. \end{aligned}$$
For \(V_{N,k}^1,\) we use Lemma 5.1 to obtain
$$\begin{aligned} V_{N,k}^1&\le C_1^4\sum \limits _{i=k+2}^{\frac{4(k+1)}{3}}\frac{1}{(i-1)^{8-8H}(i-k-1)^{2H-1}}\sum \limits _{\ell =1}^{i-k-2}\frac{1}{\ell ^{2H-1}}\\&\le \frac{{C_1^4}}{(2-2H)}\sum \limits _{i=k+2}^{\frac{4(k+1)}{3}}\frac{(i-k-2)^{2-2H}}{(i-1)^{8-8H}(i-k-1)^{2H-1}}\\&\le \frac{{C_1^4}}{(2-2H)(k+1)^{6-6H}}\sum \limits _{i=k+2}^{\frac{4(k+1)}{3}}\frac{1}{(i-k-1)^{2H-1}}\le \frac{\widehat{C}_1}{(k+1)^{4-4H}}, \end{aligned}$$
where \(\widehat{C}_1>0\) is an appropriate constant. For the other terms, we assume that \(\frac{4(k+1)}{3}\le N,\) otherwise they are trivially equal to zero. Thus, for \(V_{N,k}^2,\) we have
$$\begin{aligned} V_{N,k}^2&\le \frac{(C_1C_2)^2}{(k+1)^{3-2H}}\sum \limits _{i=\frac{4(k+1)}{3}+1}^{N}\frac{1}{(i-1)^{4-4H}}\sum \limits _{\ell =1}^{\frac{i-1}{4}}\frac{1}{\ell ^{2H-1}}\\&\le \frac{(C_1 C_2)^2}{(2-2H)(k+1)^{3-2H}}\sum \limits _{i=2}^{N}\frac{1}{(i-1)^{2-2H}}\le \frac{\widehat{C}_2 N^{2H-1}}{(k+1)^{3-2H}},\\ \end{aligned}$$
where \(\widehat{C}_2>0\) is a well-chosen constant. Similarly, for the last term we have
$$\begin{aligned} V_{N,k}^3&\le \frac{C_2^2}{(k+1)^{3-2H}}\sum \limits _{i=\frac{4(k+1)}{3}+1}^{N}\sum \limits _{\ell =k+1}^{\frac{3(i-1)}{4}}j_{i-1}(i-1-\ell )^2\le \frac{\widehat{C}_3 N}{(k+1)^{5-4H}}, \end{aligned}$$
where \(\widehat{C}_3>0\) is a well-chosen constant. Therefore, there exists \(C_0>0\) such that
$$\begin{aligned} V_{N,k}\le \frac{C_0 N}{(k+1)^{5-4H}}, \end{aligned}$$
for all \(k\le N-2.\) An upper bound of the same order for \(W_{N,k}\) can be obtained using similar arguments. Consequently, there is \(C^*>0\) such that
$$\begin{aligned} {{\mathrm{Var}}}(Y_{N,k})\le \frac{C^* N}{(k+1)^{5-4H}}. \end{aligned}$$
The result follows by plugging this upper bound in (5.14) with \(a_k:=(k+1)^{-1}.\) \(\square \)
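In more detail, the plug-in at the end of the proof reads as follows; the last step assumes \(4H-3>0\) (i.e. \(H>3/4\)), the regime in which the stated rate is obtained:
$$\begin{aligned} E\left[ \sup _{n\le N}\left| {\mathscr {S}}_n^*\right| ^2\right]&\le 4\left( \sum \limits _{k=0}^{N-2}\frac{1}{k+1}\right) \left( \sum \limits _{k=0}^{N-2}(k+1)\,\frac{C^* N}{(k+1)^{5-4H}}\right) \\&\le 4\left( 1+\ln (N)\right) C^* N\sum \limits _{k=0}^{N-2}(k+1)^{4H-4}\le \widetilde{C}\ln (N)N\cdot N^{4H-3}=\widetilde{C}\ln (N)N^{4H-2}, \end{aligned}$$
for a suitable constant \(\widetilde{C}>0,\) since \(\sum \nolimits _{k=0}^{N-2}(k+1)^{4H-4}\le 1+\int _1^N x^{4H-4}\,dx\le C N^{4H-3}\) when \(4H-3>0.\)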
References
2. Bender, C., Sottinen, T., Valkeila, E.: Fractional processes as models in stochastic finance. In: Advanced Mathematical Methods for Finance, pp. 75–103. Springer, Heidelberg (2011). doi:10.1007/978-3-642-18412-3_3
3. Cheridito, P.: Regularizing fractional Brownian motion with a view towards stock price modelling. PhD Thesis, ETH Zurich (2001)
5. Cont, R.: Long range dependence in financial markets. In: Fractals in Engineering, pp. 159–180. Springer, Berlin (2005)
9. Csörgő, S., Tandori, K., Totik, V.: On the strong law of large numbers for pairwise independent random variables. Acta Math. Hungar. 42(3–4), 319–330 (1983). doi:10.1007/BF01956779
15. Klein, I., Schachermayer, W.: Asymptotic arbitrage in non-complete large financial markets. Teor. Veroyatnost. i Primenen. 41(4), 927–934 (1996). doi:10.4213/tvp3284
18. Mandelbrot, B., Van Ness, J.: Fractional Brownian motions, fractional noises and applications. SIAM Rev. 10, 422–437 (1968)
21. Shiryaev, A.: On Arbitrage and Replication for Fractal Models. Research Report 30. MaPhySto, Department of Mathematical Sciences, University of Aarhus (1998)
22. Shiryaev, A.: Essentials of Stochastic Finance: Facts, Models, Theory. Advanced Series on Statistical Science and Applied Probability, vol. 3. Translated from the Russian manuscript by N. Kruzhilin. World Scientific Publishing Co., Inc., River Edge (1999). doi:10.1142/9789812385192
Metadata
Title: Strong asymptotic arbitrage in the large fractional binary market
Authors: Fernando Cordero, Lavinia Perez-Ostafe
Publication date: 01.03.2016
Publisher: Springer Berlin Heidelberg
Published in: Mathematics and Financial Economics, Issue 2/2016
Print ISSN: 1862-9679
Electronic ISSN: 1862-9660
DOI: https://doi.org/10.1007/s11579-015-0155-3
