In this paper, the dynamically consistent nonlinear evaluations introduced by Peng are considered in the probability space \(L^{2} (\Omega,{\mathcal{F}}, ({\mathcal {F}}_{t} )_{t\geq0},P )\). We investigate the n-dimensional (\(n\geq1\)) Jensen inequality, Hölder inequality, and Minkowski inequality for dynamically consistent nonlinear evaluations in \(L^{1} (\Omega,{\mathcal{F}}, ({\mathcal {F}}_{t} )_{t\geq0},P )\). Furthermore, we give four equivalent conditions on the n-dimensional Jensen inequality for g-evaluations induced by backward stochastic differential equations with non-uniform Lipschitz coefficients in \(L^{p} (\Omega,{\mathcal{F}}, ({\mathcal {F}}_{t} )_{0\leq t\leq T},P )\) (\(1< p\leq2\)). Finally, for g satisfying the non-uniform Lipschitz condition, we give a sufficient condition under which Hölder's inequality and Minkowski's inequality for the corresponding g-evaluation hold true. These results include and extend some existing results.
1 Introduction
It is well known that (see Peng [1, 2]) a dynamically consistent nonlinear evaluation in the probability space \(L^{2} (\Omega,{\mathcal {F}}, ({\mathcal{F}}_{t} )_{t\geq0},P )\), where \(\{{\mathcal {F}}_{t}\}_{t\geq0}\) is a given filtration, is a system of operators
$${\mathcal{E}}_{s,t}[X]:X\in L^{2}({\mathcal{F}}_{t})\mapsto L^{2}({\mathcal{F}}_{s}), \quad 0\leq s\leq t< \infty, $$
satisfying suitable axioms (cf. Definition 2.1 below).
Of course, we can define this notion in \(L^{1} (\Omega,{\mathcal {F}}, ({\mathcal{F}}_{t} )_{t\geq0},P )\).
In a financial market, the evaluation of the discounted value of a derivative is often treated as a dynamically consistent nonlinear evaluation (expectation). The well-known g-evaluation (g-expectation) induced by backward stochastic differential equations (BSDEs for short), which was put forward by Peng, is a special case of a dynamically consistent nonlinear evaluation (expectation). Nonlinear BSDEs were first introduced by Pardoux and Peng [3], who proved the existence and uniqueness of adapted solutions when the coefficient g is Lipschitz in \((y,z)\) uniformly in \((t,\omega)\), under square-integrability assumptions on the coefficient \(g(t,\omega,y,z)\) and the terminal condition ξ. Later many researchers developed the theory of BSDEs and their applications in a series of papers (see, for example, Hu and Peng [4], Lepeltier and San Martin [5], El Karoui et al. [6], Pardoux [7, 8], Briand et al. [9] and the references therein) under other assumptions on the coefficients, but for a fixed terminal time \(T>0\). In 2000, Chen and Wang [10] obtained the existence and uniqueness theorem for \(L^{2}\) solutions of infinite time interval BSDEs (\(T=\infty\)) by means of the martingale representation theorem and a fixed point theorem. Recently, Zong [11] has obtained a result on \(L^{p}\) (\(1< p<2\)) solutions of infinite time interval BSDEs; one of its special cases is an existence and uniqueness theorem for BSDEs with non-uniformly Lipschitz coefficients.
The original motivation for studying nonlinear evaluation (expectation) and g-evaluation (g-expectation) comes from the theory of expected utility, which is the foundation of modern mathematical economics. Chen and Epstein [12] gave an application of dynamically consistent nonlinear evaluation (expectation) to recursive utility; Peng [1, 2, 13‐15] and Rosazza Gianin [16] investigated some applications of dynamically consistent nonlinear evaluations (expectations) and g-evaluations (g-expectations) to static and dynamic pricing mechanisms and risk measures.
Since the notions of nonlinear evaluation (expectation) and g-evaluation (g-expectation) were introduced, many properties of the nonlinear evaluation (expectation) and g-evaluation (g-expectation) have been studied in [1, 2, 6, 10‐31]. In [1, 2], Peng obtained an important result: he proved that if a dynamically consistent nonlinear evaluation \({\mathcal {E}}_{s,t}[\cdot]\) can be dominated by a kind of g-evaluation, then \({\mathcal{E}}_{s,t}[\cdot]\) must be a g-evaluation. Thus, in this case, many problems on dynamically consistent nonlinear evaluations \({\mathcal{E}}_{s,t}[\cdot]\) can be solved through the theory of BSDEs.
It is well known that Jensen's inequality for classical mathematical expectations holds in general; this is a very important property with many important applications. But for nonlinear expectations, even for the special case of g-expectations, we know from Briand et al. [17] that Jensen's inequality usually does not hold in general. So, under the assumption that g is continuous with respect to t, several papers, such as [18, 19, 25, 27, 28], have been devoted to Jensen's inequality for g-expectations; with the help of the theory of BSDEs, they obtained necessary and sufficient conditions under which Jensen's inequality for g-expectations holds in general. Under the assumptions that g does not depend on y and is convex, Chen et al. [18, 19] studied Jensen's inequality for g-expectations and gave a necessary and sufficient condition on g under which Jensen's inequality holds for convex functions. Assuming only that g does not depend on y, Jiang and Chen [28] gave another necessary and sufficient condition on g under which Jensen's inequality holds for convex functions, which improved the result of Chen et al. Later, this result was improved by Hu [25] and Jiang [27]; in fact, Jiang [27] showed that g must be independent of y. In addition, Fan [22] studied Jensen's inequality for filtration-consistent nonlinear expectations without a domination condition. Jia [26] studied the n-dimensional (\(n>1\)) Jensen inequality for g-expectations and showed that it holds if and only if g is independent of y and linear with respect to z; in other words, the corresponding g-expectation must be linear. A natural question then arises:
For a more general dynamically consistent nonlinear evaluation \({\mathcal{E}}_{s,t}[\cdot]\), what are the necessary and sufficient conditions under which Jensen's inequality for \({\mathcal{E}}_{s,t}[\cdot]\) holds in general? Roughly speaking, what conditions on \({\mathcal{E}}_{s,t}[\cdot]\) are equivalent to the inequality
$$ {\mathcal{E}}_{s,t}\bigl[\varphi(\xi)\bigr]\geq\varphi \bigl({\mathcal {E}}_{s,t}[\xi] \bigr) \quad\mbox{a.s.} $$
holding for any convex function \(\varphi: \mathcal{R}\mapsto\mathcal{R}\)?
One of the objectives of this paper is to investigate this problem. At the same time, this paper also investigates the necessary and sufficient conditions on \({\mathcal {E}}_{s,t}[\cdot]\) under which the n-dimensional (\(n>1\)) Jensen inequality holds. As applications of these two results, we give four equivalent conditions on the 1-dimensional Jensen inequality and the n-dimensional (\(n>1\)) Jensen inequality for g-evaluations induced by BSDEs with non-uniform Lipschitz coefficients in \(L^{p} (\Omega,{\mathcal{F}}, ({\mathcal {F}}_{t} )_{0\leq t\leq T},P )\) (\(1< p\leq2\)), respectively.
The remainder of this paper is organized as follows: In Section 2, we study the n-dimensional (\(n\geq1\)) Jensen inequality, Hölder inequality, and Minkowski inequality for dynamically consistent nonlinear evaluations in \(L^{1} (\Omega,{\mathcal{F}}, ({\mathcal{F}}_{t} )_{t\geq0},P )\). In Section 3, we give four equivalent conditions on the 1-dimensional Jensen inequality and the n-dimensional (\(n>1\)) Jensen inequality for g-evaluations induced by BSDEs with non-uniform Lipschitz coefficients in \(L^{p} (\Omega,{\mathcal{F}}, ({\mathcal {F}}_{t} )_{0\leq t\leq T},P )\) (\(1< p\leq2\)), respectively. These results generalize the known results on Jensen’s inequality for g-expectation in [18, 19, 22, 25‐28, 31]. In Section 4, we give a sufficient condition on g that satisfies the non-uniform Lipschitz condition under which Hölder’s inequality and Minkowski’s inequality for the corresponding g-evaluation hold true.
2 Jensen’s inequality, Hölder’s inequality, and Minkowski’s inequality for dynamically consistent nonlinear evaluations
Let \((\Omega,{\mathcal{F}},P)\) be a probability space carrying a standard d-dimensional Brownian motion \((B_{t})_{t\geq0}\), and let \(({\mathcal{F}}_{t} )_{t\geq0}\) be the filtration generated by \((B_{t} )_{t\geq0}\). We always assume that \(({\mathcal{F}}_{t} )_{t\geq0}\) is complete. Let \(T > 0\) be a given real number. In this paper, we always work in the probability space \((\Omega,{\mathcal{F}}_{T},P)\) and only consider processes indexed by \(t\in[0, T ]\). We denote by \(L^{p}(\Omega,{\mathcal{F}}_{t} ,P)\) (\(p\geq1\)) the space of \({\mathcal {F}}_{t}\)-measurable random variables X satisfying \(E_{P}[|X|^{p}]<\infty\), and by \(L^{p}_{+}(\Omega,{\mathcal{F}}_{t} ,P)\) the space of non-negative random variables in \(L^{p}(\Omega,{\mathcal{F}}_{t} ,P)\). Let \(1_{A}\) denote the indicator of the event A. For notational simplicity, we write \(L^{p}({\mathcal{F}}_{t}):= L^{p}(\Omega,{\mathcal{F}}_{t} ,P)\) and \(L^{p}_{+}({\mathcal{F}}_{t}):=L^{p}_{+}(\Omega,{\mathcal{F}}_{t} ,P)\). For the convenience of the reader, we recall the notion of a dynamically consistent nonlinear evaluation, defined in \(L^{2}({\mathcal{F}}_{T})\) in Peng [1, 2] but in \(L^{1}({\mathcal{F}}_{T})\) in this section.
Definition 2.1
An \({\mathcal{F}}_{t}\)-consistent nonlinear evaluation in \(L^{1}({\mathcal{F}}_{T})\) is a system of operators
$${\mathcal{E}}_{s,t}[X]:X\in L^{1}({\mathcal{F}}_{t})\mapsto L^{1}({\mathcal{F}}_{s}), \quad 0\leq s\leq t\leq T, $$
satisfying the following properties:
(A.1)
monotonicity: \({\mathcal {E}}_{s,t}[X_{1}]\geq{\mathcal{E}}_{s,t}[X_{2}]\), if \(X_{1}\geq X_{2}\);
(A.2)
\({\mathcal{E}}_{t,t}[X]=X\);
(A.3)
dynamic consistency: \({\mathcal {E}}_{r,s}[{\mathcal{E}}_{s,t}[X]]={\mathcal{E}}_{r,t}[X]\), if \(0\leq r\leq s\leq t\leq T\);
(A.4)
zero-one law: \(1_{A}{\mathcal {E}}_{s,t}[X]=1_{A}{\mathcal{E}}_{s,t}[1_{A}X]\), \(\forall A\in{\mathcal{F}}_{s}\).
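The classical conditional expectation is the basic linear example of such a system. The sketch below checks properties (A.2)-(A.4) for it on a hypothetical two-period binary model (the sample space, filtration, and random variables are illustrative choices, not taken from the paper):

```python
import itertools

# Sample space: two fair coin tosses; P is uniform.
OMEGA = list(itertools.product([0, 1], repeat=2))
P = {w: 0.25 for w in OMEGA}

# Partitions generating F_0 (no information), F_1 (first toss), F_2 (everything).
PARTITIONS = {
    0: [set(OMEGA)],
    1: [{w for w in OMEGA if w[0] == a} for a in (0, 1)],
    2: [{w} for w in OMEGA],
}

def evaluate(s, X):
    """E_{s,t}[X]: classical conditional expectation of X given F_s."""
    Y = {}
    for block in PARTITIONS[s]:
        mass = sum(P[w] for w in block)
        avg = sum(P[w] * X[w] for w in block) / mass
        for w in block:
            Y[w] = avg
    return Y

X = {w: w[0] + 2 * w[1] for w in OMEGA}   # an F_2-measurable random variable

# (A.2): E_{t,t}[X] = X.
assert evaluate(2, X) == X
# (A.3) dynamic consistency: E_{0,1}[E_{1,2}[X]] = E_{0,2}[X].
assert evaluate(0, evaluate(1, X)) == evaluate(0, X)
# (A.4) zero-one law with A = {first toss equals 1}, an F_1-measurable event.
ind_A = {w: 1.0 if w[0] == 1 else 0.0 for w in OMEGA}
lhs = {w: ind_A[w] * evaluate(1, X)[w] for w in OMEGA}
rhs = evaluate(1, {w: ind_A[w] * X[w] for w in OMEGA})
assert lhs == rhs
```

The interesting content of the definition is of course that monotone, consistent systems need not be linear; the linear case merely shows the axioms are consistent.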
First, we consider Jensen’s inequality for \({\mathcal{F}}_{t}\)-consistent nonlinear evaluations. We have the following results.
Theorem 2.1
Suppose that \({\mathcal {E}}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\), is an \({\mathcal {F}}_{t}\)-consistent nonlinear evaluation in \(L^{1}({\mathcal{F}}_{T})\). Then the following two statements are equivalent:
(i)
Jensen's inequality for the \({\mathcal{F}}_{t}\)-consistent evaluation \({\mathcal{E}}_{s,t}[\cdot]\) holds in general, i.e., for each convex function \(\varphi: \mathcal{R}\mapsto\mathcal{R}\) and \(\xi\in L^{1}({\mathcal{F}}_{t})\), if \(\varphi(\xi)\in L^{1}({\mathcal{F}}_{t})\), then we have
$$ {\mathcal{E}}_{s,t}\bigl[\varphi(\xi)\bigr]\geq\varphi \bigl({\mathcal {E}}_{s,t}[\xi] \bigr) \quad\mbox{a.s.} $$
First, we prove that (i) implies (ii). Suppose (i) holds. For each \((\xi,a, b)\in L^{1}({\mathcal {F}}_{t} )\times\mathcal{R} \times\mathcal{R}\), let \(\varphi(x):=ax + b\). Obviously, \(\varphi(x)\) is a convex function and \(\varphi(\xi)\in L^{1}({\mathcal {F}}_{t} )\); then we have
In the following, we prove that (ii) implies (i). Suppose (ii) holds. For each \((\xi,a, b)\in L^{1}({\mathcal{F}}_{t} )\times \mathcal{R} \times\mathcal{R}\), we have
But, for any convex function \(\varphi: \mathcal{R}\mapsto\mathcal{R}\), there exists a countable set \(\mathcal{D}\subseteq\mathcal{R}^{2}\) such that
$$\varphi(x)=\sup_{(a,b)\in\mathcal{D}}(ax+b), \quad \forall x\in\mathcal{R}, $$
which implies (i), taking (2.2) into consideration. □
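A standard choice of the countable family \(\mathcal{D}\) is the set of tangent lines of φ at rational points. The sketch below illustrates this numerically for the hypothetical choice \(\varphi(x)=x^{2}\) with tangents taken on a finite grid:

```python
# Represent a convex function as the supremum of countably many affine
# minorants: phi(x) = sup_{(a, b) in D} (a*x + b).
phi = lambda x: x * x

# Tangent line of phi at c has slope a = 2c and intercept b = -c^2; taking
# c over the rationals gives a countable family D (here: a finite grid).
D = [(2 * c, -c * c) for c in [k / 10 for k in range(-50, 51)]]

def sup_affine(x):
    return max(a * x + b for a, b in D)

for x in [-3.0, -0.55, 0.0, 1.234, 4.0]:
    approx = sup_affine(x)
    assert approx <= phi(x) + 1e-12     # every affine member minorizes phi
    assert phi(x) - approx <= 0.0026    # the sup over the grid is already tight
```

Monotonicity of \({\mathcal{E}}_{s,t}\) then lets one pass the supremum through the evaluation, which is exactly how (2.2) is used in the proof.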
Theorem 2.2
Suppose that \({\mathcal {E}}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\), is an \({\mathcal {F}}_{t}\)-consistent nonlinear evaluation in \(L^{1}({\mathcal{F}}_{T})\) and \(n>1\). Then the following two statements are equivalent:
(i)
the n-dimensional Jensen inequality for an \({\mathcal{F}}_{t}\)-consistent evaluation \({\mathcal{E}}_{s,t}[\cdot]\) holds in general, i.e., for each convex function \(\varphi: \mathcal{R}^{n}\mapsto\mathcal{R}\) and \(\xi_{i}\in L^{1}({\mathcal{F}}_{t})\) (\(i=1,2,\ldots,n\)), if \(\varphi(\xi_{1},\xi_{2},\ldots,\xi_{n})\in L^{1}({\mathcal{F}}_{t})\), then we have
$$ {\mathcal{E}}_{s,t}\bigl[\varphi(\xi_{1},\xi_{2},\ldots,\xi_{n})\bigr]\geq \varphi \bigl({\mathcal{E}}_{s,t}[\xi_{1}],{\mathcal{E}}_{s,t}[\xi_{2}],\ldots ,{\mathcal{E}}_{s,t}[\xi_{n}] \bigr) \quad\mbox{a.s.} $$
First, we prove (i) implies (ii)(a). For each \((X,\lambda)\in L^{1}({\mathcal{F}}_{t})\times\mathcal{R}\), let \(\varphi(x_{1},x_{2},\ldots,x_{n}):=\lambda x_{1}\) and \(\xi_{1}:=X\). Obviously, \(\varphi(x_{1},x_{2},\ldots,x_{n})\) is a convex function and \(\varphi(\xi_{1},\xi_{2},\ldots,\xi_{n})\in L^{1}({\mathcal{F}}_{t} )\), then we have
On the other hand, let \(\varphi(x_{1},x_{2},\ldots,x_{n}):=x_{1}-(\lambda-1)x_{2}\), \(\xi_{1}:=\lambda X\), and \(\xi_{2}:=X\). By (i), we can deduce that
It follows from (2.3) and (2.4) that (ii)(a) holds true.
Next we prove (ii)(b) holds. For each \((X,Y)\in L^{1}({\mathcal {F}}_{t})\times L^{1}({\mathcal{F}}_{t})\), let \(\varphi(x_{1},x_{2},\ldots,x_{n}):=x_{1}+x_{2}\), \(\xi_{1}:=X\), and \(\xi_{2}:=Y\), then we have
It follows from (2.7) and (2.8) that (ii)(c) holds true.
In the following, we prove that (ii) implies (i). Suppose (ii) holds. For any \((a_{1},a_{2},\ldots,a_{n},b)\in\mathcal{R}^{n+1}\) and \(\xi_{i}\in L^{1}({\mathcal{F}}_{t})\) (\(i=1,2,\ldots,n\)), we have
But, for any convex function \(\varphi: \mathcal{R}^{n}\mapsto\mathcal{R}\), there exists a countable set \(\mathcal{D}\subseteq\mathcal{R}^{n+1}\) such that
$$\varphi(x_{1},x_{2},\ldots,x_{n})=\sup_{(a_{1},a_{2},\ldots,a_{n},b)\in \mathcal{D}} \Biggl(\sum_{i=1}^{n}a_{i}x_{i}+b \Biggr), \quad \forall(x_{1},x_{2},\ldots,x_{n})\in\mathcal{R}^{n}, $$
where X, Y are non-negative random variables on \((\Omega,{\mathcal {F}}_{T},P)\) and \(1< p, q<\infty\) are a pair of conjugate exponents, i.e., \(\frac{1}{p}+\frac{1}{q}=1\). One may proceed in the following way (cf., e.g., Krein et al. [32], p.43). By elementary calculus, one verifies Young's inequality
$$ab\leq\frac{a^{p}}{p}+\frac{b^{q}}{q} $$
for any constants \(a, b\geq0\). Applied with \(a=rX\) and \(b=Y/r\), this yields \(XY\leq\frac{r^{p}}{p}X^{p}+\frac{r^{-q}}{q}Y^{q}\) a.s. for any \(r>0\). Taking the expectation yields \(E_{P}[XY]\leq\frac{r^{p}}{p}E_{P}[X^{p}]+\frac{r^{-q}}{q}E_{P}[Y^{q}]\) for any \(r>0\), and taking the infimum over \(r>0\) we arrive at (2.11).
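The optimization over r can be reproduced numerically: minimizing the right-hand side at \(r=(E_{P}[Y^{q}]/E_{P}[X^{p}])^{1/(p+q)}\) recovers exactly the Hölder bound. A sketch with \(p=3\), \(q=3/2\) and lognormal samples (all choices are arbitrary and illustrative):

```python
import math, random

random.seed(0)
p, q = 3.0, 1.5                       # conjugate exponents: 1/p + 1/q = 1
N = 200_000
X = [math.exp(random.gauss(0, 0.5)) for _ in range(N)]
Y = [math.exp(random.gauss(0, 0.5)) for _ in range(N)]

E = lambda V: sum(V) / len(V)         # expectation under the empirical measure
E_XY = E([x * y for x, y in zip(X, Y)])
E_Xp = E([x ** p for x in X])
E_Yq = E([y ** q for y in Y])

# Young's inequality with a = r*X, b = Y/r gives, for every r > 0,
#   E[XY] <= (r^p/p) E[X^p] + (r^(-q)/q) E[Y^q].
# Setting the derivative in r to zero gives r^(p+q) = E[Y^q]/E[X^p].
r_opt = (E_Yq / E_Xp) ** (1 / (p + q))
bound = (r_opt ** p / p) * E_Xp + (r_opt ** (-q) / q) * E_Yq
holder = E_Xp ** (1 / p) * E_Yq ** (1 / q)   # Hölder's bound (2.11)

assert E_XY <= bound + 1e-9            # Young's bound holds for every r
assert abs(bound - holder) < 1e-6      # the optimized bound IS Hölder's bound
```

Since Young's inequality holds pointwise, the assertions hold exactly under the empirical measure, with no Monte Carlo error entering the inequalities themselves.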
By the above argument, we have the following Hölder inequality for \({\mathcal{F}}_{t}\)-consistent nonlinear evaluations.
Theorem 2.3
Suppose that \({\mathcal {E}}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\), is an \({\mathcal {F}}_{t}\)-consistent nonlinear evaluation in \(L^{1}({\mathcal{F}}_{T})\). If \({\mathcal{E}}_{s,t}[\cdot]\) satisfies the following conditions:
there exist two non-negative deterministic functions \(\alpha(t)\) and \(\beta(t)\) such that for all \(y_{1},y_{2}\in\mathcal{R}\), \(z_{1},z_{2}\in\mathcal{R}^{d}\),
It is well known that (see Zong [11]) if we suppose that the function g satisfies (B.1) and (B.2), then for each given \(X\in{\mathcal {L}}({\mathcal{F}}_{t})\), there exists a unique solution \((Y^{X},Z^{X})\in {\mathcal{S}}(0,t;P;\mathcal{R})\times{\mathcal{L}}(0,t;P;\mathcal {R}^{d})\) of BSDE (3.1).
Example 3.1
For each given \(\xi\in{\mathcal {L}}({\mathcal{F}}_{T})\), the BSDE
has a unique solution in \({\mathcal{S}}(0,T;P;\mathcal{R})\times{\mathcal {L}}(0,T;P;\mathcal{R}^{d})\).
We denote \({\mathcal{E}}^{g} _{s,t}[X] :=Y_{s}^{X}\). We thus define a system of operators:
$${\mathcal{E}}^{g}_{s,t}[X]:X\in{\mathcal{L}}({ \mathcal{F}}_{t})\mapsto {\mathcal{L}}({\mathcal{F}}_{s}), \quad 0\leq s\leq t\leq T. $$
This system is completely determined by the above given function g. We have the following.
Proposition 3.1
We assume that the function g satisfies (B.1) and (B.2). Then the system of operators \({\mathcal {E}}^{g}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\), is an \({\mathcal {F}}_{t}\)-consistent nonlinear evaluation defined in \({\mathcal {L}}({\mathcal{F}}_{T})\).
The proof of Proposition 3.1 is very similar to that of Corollary 2.9 in [13], so we omit it.
Remark 3.1
From Proposition 3.1, we know that the dynamically consistent nonlinear evaluation \({\mathcal {E}}^{g}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\) is completely determined by the given function g. Thus, we call \({\mathcal {E}}^{g}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\) a g-evaluation.
Suppose that the function g satisfies (B.1) and (B.3). The g-expectation \({\mathcal{E}}_{g}[\cdot]:{\mathcal{L}}({\mathcal {F}}_{T})\mapsto\mathcal{R}\) is defined by \({\mathcal {E}}_{g}[\xi]=Y_{0}^{\xi}\).
Suppose that the function g satisfies (B.1) and (B.3). The conditional g-expectation of ξ with respect to \({\mathcal{F}}_{t}\) is defined by \({\mathcal{E}}_{g}[\xi|{\mathcal{F}}_{t}]=Y_{t}^{\xi}\).
The proof of Proposition 3.3 is very similar to that of Theorem 3.1 in Hu and Chen [24], so we omit it.
In the following, we study Jensen’s inequality for g-evaluations. First, we introduce some notions on g.
Definition 3.3
Let \(g: \Omega\times[0,T]\times \mathcal{R}\times\mathcal{R}^{d}\mapsto\mathcal{R}\). The function g is said to be super-homogeneous if, for each \((y,z)\in\mathcal{R}\times\mathcal {R}^{d}\) and \(\lambda\in\mathcal{R}\), \(g(t,\lambda y,\lambda z)\geq\lambda g(t,y,z)\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s. The function g is said to be positively homogeneous if, for each \((y,z)\in\mathcal{R}\times \mathcal{R}^{d}\) and \(\lambda\geq0\), \(g(t,\lambda y,\lambda z)=\lambda g(t,y,z)\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s. The function g is said to be sub-additive if, for any \((y,z), (\overline{y},\overline{z})\in\mathcal{R}\times\mathcal{R}^{d}\), \(g(t,y+\overline{y},z+\overline{z})\leq g(t,y,z) +g(t,\overline{y},\overline{z})\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s. The function g is said to be super-additive if, for any \((y,z), (\overline{y},\overline{z})\in\mathcal{R}\times\mathcal{R}^{d}\), \(g(t,y+\overline{y},z+\overline{z})\geq g(t,y,z) +g(t,\overline{y},\overline{z})\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s.
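For instance, the driver \(g(t,y,z)=\beta|z|\) with \(\beta\geq0\) is positively homogeneous, super-homogeneous, and sub-additive in z, while a convex but nonlinear driver such as \(g(t,y,z)=|z|^{2}\) fails positive homogeneity. A small numerical check of these properties (sample points are arbitrary; \(d=1\) for simplicity):

```python
import random

random.seed(1)
beta = 0.7
g_abs = lambda z: beta * abs(z)   # driver g(t, y, z) = beta*|z|  (d = 1)
g_sq  = lambda z: z * z           # driver g(t, y, z) = |z|^2

zs   = [random.uniform(-5, 5) for _ in range(200)]
lams = [random.uniform(-5, 5) for _ in range(50)]

for z in zs:
    for lam in lams:
        if lam >= 0:
            # positively homogeneous: g(lam*z) = lam*g(z) for lam >= 0
            assert abs(g_abs(lam * z) - lam * g_abs(z)) < 1e-9
        # super-homogeneous: g(lam*z) >= lam*g(z) for every real lam
        assert g_abs(lam * z) >= lam * g_abs(z) - 1e-9

for z1 in zs[:30]:
    for z2 in zs[:30]:
        # sub-additive in z: g(z1 + z2) <= g(z1) + g(z2)
        assert g_abs(z1 + z2) <= g_abs(z1) + g_abs(z2) + 1e-9

# |z|^2 is convex but NOT positively homogeneous: g(2z) = 4*g(z).
assert g_sq(2.0) == 4 * g_sq(1.0)
assert g_sq(2.0) != 2 * g_sq(1.0)
```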
Theorem 3.1
Suppose that \({\mathcal {E}}^{g}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\), is a g-evaluation. Then the following three statements are equivalent:
(i)
Jensen's inequality for the g-evaluation \({\mathcal{E}}^{g}_{s,t}[\cdot]\) holds in general, i.e., for each convex function \(\varphi: \mathcal{R}\mapsto\mathcal{R}\) and each \(\xi\in{\mathcal{L}}({\mathcal{F}}_{t})\), if \(\varphi(\xi)\in{\mathcal {L}}({\mathcal{F}}_{t})\), then we have
$$ {\mathcal{E}}^{g}_{s,t}\bigl[\varphi(\xi)\bigr]\geq\varphi \bigl({\mathcal {E}}^{g}_{s,t}[\xi] \bigr) \quad\mbox{a.s.} $$
g is independent of y and super-homogeneous with respect to z.
Theorem 3.2
Suppose that \({\mathcal {E}}^{g}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\), is a g-evaluation. Then the following three statements are equivalent:
(i)
the n-dimensional (\(n>1\)) Jensen inequality for the g-evaluation \({\mathcal{E}}^{g}_{s,t}[\cdot]\) holds in general, i.e., for each convex function \(\varphi: \mathcal{R}^{n}\mapsto\mathcal{R}\) and \(\xi_{i}\in{\mathcal{L}}({\mathcal{F}}_{t})\) (\(i=1,2,\ldots,n\)), if \(\varphi(\xi_{1},\xi_{2},\ldots,\xi_{n})\in{\mathcal{L}}({\mathcal{F}}_{t})\), then we have
$$ {\mathcal{E}}^{g}_{s,t}\bigl[\varphi(\xi_{1},\xi_{2},\ldots,\xi _{n})\bigr]\geq\varphi \bigl({\mathcal{E}}^{g}_{s,t}[\xi_{1}],\ldots ,{\mathcal{E}}^{g}_{s,t}[\xi_{n}] \bigr) \quad\mbox{a.s.} $$
\({\mathcal{E}}^{g}_{s,t}\) is linear in \({\mathcal {L}}({\mathcal{F}}_{t})\);
(iii)
g is independent of y and linear with respect to z, i.e., g is of the form \(g(t,y,z)=g(t,z)=\alpha_{t}\cdot z\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., \(\forall(y,z)\in\mathcal{R}\times\mathcal{R}^{d}\), where α is an \(\mathcal{R}^{d}\)-valued progressively measurable process.
In order to prove Theorems 3.1 and 3.2, we need the following lemmas. These lemmas can be found in Zong and Hu [33].
Lemma 3.1
Suppose that the function g satisfies (B.1) and (B.2). Then the following three conditions are equivalent:
(i)
The function g is independent of y.
(ii)
The corresponding dynamically consistent nonlinear evaluation \({\mathcal{E}}^{g}[\cdot]\) satisfies: for each \(0\leq s\leq t\leq T\), \({\mathcal{F}}_{t}\)-measurable simple function X and \(y\in\mathcal{R}\),
From Theorem 2.1, we only need to prove (ii) ⇔ (iii). (iii) ⇒ (ii) is obvious.
In the following, we prove (ii) ⇒ (iii). First, we prove that g is independent of y. Suppose (ii) holds, then we have, for any \((\xi,y)\in{\mathcal{L}}({\mathcal {F}}_{t})\times \mathcal{R}\),
For each \((s,z)\in[0,t]\times\mathcal{R}^{d}\), let \(Y_{\cdot}^{s,z}\) be the solution of the following stochastic differential equation (SDE for short) defined on \([s,t]\):
Thus, \((\lambda Y_{r}^{s,z})_{r\in[s,t]}\) is an \({\mathcal{E}}_{g}\)-submartingale. From the decomposition theorem of an \({\mathcal {E}}_{g}\)-supermartingale (see Zong and Hu [33]), it follows that there exists an increasing process \((A_{r})_{r\in[s,t]}\) such that
This with \(\lambda Y_{t}^{s,z}=-\int_{s}^{t}\lambda g(r,z)\, \mathrm{d}r+\int_{s}^{t}\lambda z\cdot\mathrm{d}B_{r}\) yields \(Z_{r}\equiv\lambda z\) and
The condition that g is super-homogeneous with respect to z implies that g is positively homogeneous with respect to z. Indeed, for each fixed \(\lambda>0\), by (3.5), we have \(\frac{1}{\lambda}g(t,\lambda z)\leq g(t,z)\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., i.e.,
In particular, choosing \(\lambda=2\), we have \(2 g(t,0)=g(t,0)\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s. Hence \(g(t,0)=0\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s. Thus, for \(\lambda=0\) (3.7) still holds.
From Theorem 2.2, we only need to prove (ii) ⇔ (iii). (iii) ⇒ (ii) is obvious.
In the following, we prove (ii) ⇒ (iii). From the proof of Theorem 3.1, we can obtain, for any \(\lambda\in\mathcal{R}\) and \((y,z)\in\mathcal{R}\times\mathcal {R}^{d}\), \(g(t,y,\lambda z)=g(t,\lambda z)\geq\lambda g(t,z)\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s. Using the same method, we have \(g(t,y,\lambda z)=g(t,\lambda z)\leq\lambda g(t,z)\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., \(\forall\lambda\in\mathcal{R}\), \((y,z)\in\mathcal{R}\times \mathcal{R}^{d}\). The above arguments imply that, for any \(\lambda\in\mathcal{R}\) and \((y,z)\in \mathcal{R}\times\mathcal{R}^{d}\),
It follows from (3.8) and (3.9) that (iii) holds true. The proof of Theorem 3.2 is complete. □
From Theorem 3.1(iii), we know that, for any \(y\in\mathcal{R}\), \(g(t,y,0)=g(t,0)=0\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s. Hence, \({\mathcal{E}}^{g}_{s,t}[\cdot]={\mathcal{E}}_{g}[\cdot|{\mathcal {F}}_{s}]\). Thus, Theorem 3.1 can be rewritten as follows.
Corollary 3.1
Suppose that \({\mathcal {E}}^{g}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\), is a g-evaluation. Then the following four statements are equivalent:
(i)
Jensen's inequality for the g-evaluation \({\mathcal{E}}^{g}_{s,t}[\cdot]\) holds in general, i.e., for each convex function \(\varphi: \mathcal{R}\mapsto\mathcal{R}\) and each \(\xi\in{\mathcal{L}}({\mathcal{F}}_{t})\), if \(\varphi(\xi)\in{\mathcal {L}}({\mathcal{F}}_{t})\), then we have
$$ {\mathcal{E}}^{g}_{s,t}\bigl[\varphi(\xi)\bigr]\geq\varphi \bigl({\mathcal {E}}^{g}_{s,t}[\xi] \bigr) \quad\mbox{a.s.} $$
g is independent of y and super-homogeneous with respect to z.
Similarly, Theorem 3.2 can be rewritten as follows.
Corollary 3.2
Suppose that \({\mathcal {E}}^{g}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\), is a g-evaluation. Then the following four statements are equivalent:
(i)
the n-dimensional (\(n>1\)) Jensen inequality for the g-evaluation \({\mathcal{E}}^{g}_{s,t}[\cdot]\) holds in general, i.e., for each convex function \(\varphi: \mathcal {R}^{n}\mapsto\mathcal{R}\) and \(\xi_{i}\in{\mathcal{L}}({\mathcal{F}}_{t})\) (\(i=1,2,\ldots,n\)), if \(\varphi(\xi_{1},\xi_{2},\ldots,\xi_{n})\in{\mathcal{L}}({\mathcal{F}}_{t})\), then we have
$$ {\mathcal{E}}^{g}_{s,t}\bigl[\varphi(\xi_{1},\xi_{2},\ldots,\xi _{n})\bigr]\geq\varphi \bigl({\mathcal{E}}^{g}_{s,t}[\xi_{1}],\ldots ,{\mathcal{E}}^{g}_{s,t}[\xi_{n}] \bigr) \quad\mbox{a.s.} $$
\({\mathcal{E}}^{g}_{0,T}\) is linear in \(L^{2}({\mathcal{F}}_{T})\) and, for any \(y\in\mathcal{R}\), \(g(t,y,0)=0\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s.;
(iii)
\({\mathcal{E}}^{g}_{s,t}\) is linear in \(L^{2}({\mathcal{F}}_{t})\);
(iv)
for each\((y,z)\in\mathcal{R}\times\mathcal{R}^{d}\), \(g(t,y,z)=g(t,z)=\alpha_{t}\cdot z\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., whereαis a\(\mathcal{R}^{d}\)-valued progressively measurable process.
From Proposition 3.3 and Theorem 3.1, we only need to prove (ii) ⇔ (iii). It is obvious that (iii) implies (ii).
In the following, we prove that (ii) implies (iii). Suppose (ii) holds. For each \((X,t,k)\in L^{2}({\mathcal{F}}_{T} )\times[0, T]\times\mathcal{R}\), by (ii), we know that for each \(A\in{\mathcal{F}}_{t}\),
For each \(\lambda\neq0\), define \({\mathcal {E}}^{\lambda}_{t,T}[\cdot]:=\frac{{\mathcal {E}}^{g}_{t,T}[\lambda\cdot]}{\lambda}\), \(\forall t\in[0,T]\). It is easy to check that \({\mathcal{E}}^{g}_{t,T}[\cdot]\) and \({\mathcal {E}}^{\lambda}_{t,T}[\cdot]\) are two \({\mathcal{F}}\)-expectations in \(L^{2}({\mathcal{F}}_{T})\) (the notion of an \({\mathcal{F}}\)-expectation can be found in Coquet et al. [20]). If \(\lambda>0\), for each \(\xi \in L^{2}({\mathcal{F}}_{T})\), \({\mathcal{E}}^{\lambda}_{0,T}[\xi]\geq{\mathcal {E}}^{g}_{0,T}[\xi]\). In a similar manner to Lemma 4.5 in Coquet et al. [20], we can obtain
$$ {\mathcal{E}}^{\lambda}_{t,T}[\xi]\geq{\mathcal {E}}^{g}_{t,T}[\xi] \quad \mbox{a.s.}, \forall t \in[0,T]. $$
(3.11)
If \(\lambda<0\), for each \(\xi\in L^{2}({\mathcal{F}}_{T})\), \({\mathcal {E}}^{\lambda}_{0,T}[\xi]\leq{\mathcal {E}}^{g}_{0,T}[\xi]\). In a similar manner to Lemma 4.5 in Coquet et al. [20] again, we have
From Proposition 3.3 and Theorem 3.2, we only need to prove (ii) ⇔ (iii). It is obvious that (iii) implies (ii).
In the following, we prove that (ii) implies (iii). Suppose (ii) holds. By Proposition 3.3, we know that for each sequence \(\{X_{n}\}_{n=1}^{\infty}\subset L^{2}({\mathcal{F}}_{T})\) such that \(X_{n}(\omega)\downarrow0\) for all ω, we have \({\mathcal {E}}^{g}_{0,T}[X_{n}]\downarrow0\). By the well-known Daniell-Stone theorem (cf., e.g., Yan [34], Theorem 3.6.8, p.83), there exists a unique probability measure \(P_{\alpha}\) defined on \((\Omega,{\mathcal {F}}_{T})\) such that
holds. Indeed, from (iv), we know that \(\frac{\mathrm{d}P_{\alpha}}{\mathrm{d}P}={ \exp} (\int_{0}^{T}\alpha_{t}\cdot\mathrm{d}B_{t}-\frac{1}{2}\int_{0}^{T}|\alpha_{t}|^{2}\, \mathrm{d}t )\).
On the other hand, since, for any \(y\in\mathcal{R}\), \(g(t,y,0)=0\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., we can obtain
Therefore, \({\mathcal {E}}^{g}_{s,t}\) is linear in \(L^{2}({\mathcal{F}}_{t})\). The proof of Corollary 3.2 is complete. □
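For a constant α (and \(d=1\)), the density \(\frac{\mathrm{d}P_{\alpha}}{\mathrm{d}P}\) appearing in the proof above reduces to \(\exp (\alpha B_{T}-\frac{1}{2}\alpha^{2}T )\). A Monte Carlo sanity check that it integrates to one and shifts the drift as the Girsanov theorem predicts (the sample size and the value of α are arbitrary illustrative choices):

```python
import math, random

random.seed(42)
alpha, T, N = 0.5, 1.0, 200_000

# Sample B_T ~ N(0, T) under P.
B_T = [random.gauss(0.0, math.sqrt(T)) for _ in range(N)]

# Girsanov density dP_alpha/dP for a constant alpha (d = 1):
#   exp(alpha*B_T - 0.5*alpha^2*T).
dens = [math.exp(alpha * b - 0.5 * alpha ** 2 * T) for b in B_T]

mean_dens = sum(dens) / N                            # ~ 1: P_alpha is a probability measure
mean_B = sum(d * b for d, b in zip(dens, B_T)) / N   # ~ alpha*T: drift under P_alpha

assert abs(mean_dens - 1.0) < 0.02
assert abs(mean_B - alpha * T) < 0.02
```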
From Corollary 3.2, we can immediately obtain the following.
Theorem 3.3
Suppose that \({\mathcal {E}}^{g}_{s,t}[\cdot]\), \(0\leq s\leq t\leq T\), is a g-evaluation. Then the following two statements are equivalent:
(i)
\({\mathcal{E}}^{g}_{s,t}\) is linear in \({\mathcal {L}}({\mathcal{F}}_{t})\);
(ii)
there exists a unique probability measure \(P_{\alpha}\) defined on \((\Omega,{\mathcal{F}}_{T})\) such that, for any \(\xi\in{\mathcal{L}}({\mathcal{F}}_{t})\),
\({\mathcal{E}}^{g}_{s,t}[\lambda X]=\lambda{\mathcal {E}}^{g}_{s,t}[X]\) a.s., for any \(X\in{\mathcal{L}}({\mathcal{F}}_{t})\) and \(\lambda\geq0\);
(g)
\({\mathcal{E}}^{g}_{s,t}[X+Y]\leq{\mathcal {E}}^{g}_{s,t}[X]+{\mathcal{E}}^{g}_{s,t}[Y]\) a.s., for any \((X,Y)\in {\mathcal{L}}({\mathcal{F}}_{t})\times{\mathcal{L}}({\mathcal{F}}_{t})\);
(h)
\({\mathcal{E}}^{g}_{s,t}[\mu]=\mu\) a.s., for any \(\mu\in\mathcal{R}\);
(ii)
for any \(\xi\in{\mathcal{L}}({\mathcal{F}}_{t})\),
In the following, we prove that (i) implies (ii). Suppose (i) holds. Since \({\mathcal{E}}^{g}_{0,T}[\cdot]\) is a sublinear expectation in \({\mathcal{L}}({\mathcal{F}}_{T})\), by Lemma 2.4 in Peng [35], we know that there exists a family of linear expectations \(\{E_{\theta}:\theta\in\Theta\}\) on \((\Omega,{\mathcal {F}}_{T})\) such that, for any \(\xi\in{\mathcal{L}}({\mathcal{F}}_{T})\),
On the other hand, by Proposition 3.3, we know that for each sequence \(\{X_{n}\}_{n=1}^{\infty}\subset{\mathcal{L}}({\mathcal{F}}_{T})\) such that \(X_{n}(\omega)\downarrow0\) for all ω, we have \({\mathcal {E}}^{g}_{0,T}[X_{n}]\downarrow0\). By the well-known Daniell-Stone theorem, we can deduce that for each \(\theta\in\Theta\), there exists a unique probability measure \(Q_{\theta}\) defined on \((\Omega,{\mathcal{F}}_{T})\) such that, for any \(\xi\in{\mathcal{L}}({\mathcal{F}}_{T})\),
$$ E_{\theta}[\xi]=E_{Q_{\theta}}[\xi]. $$
(3.17)
It follows from (3.16) and (3.17) that, for any \(\xi\in{\mathcal{L}}({\mathcal{F}}_{T})\),
where \(\Theta^{g}\) := {\((\alpha_{t})_{t\in[0,T]}:\alpha\) is \(\mathcal{R}^{d}\)-valued, progressively measurable and, for any \((y,z)\in\mathcal{R}\times \mathcal{R}^{d}\), \(\alpha_{t}\cdot z\leq g(t,y,z)\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s.}. In order to prove (ii), we now show that \(\Pi=\Lambda\).
For any \(\alpha\in\Theta^{g}\), we define \(g^{\alpha}(t,y,z):=\alpha_{t}\cdot z\), \(\forall t\in[0,T]\), \((y,z)\in \mathcal{R}\times\mathcal{R}^{d}\). Then, for any \(\xi\in{\mathcal {L}}({\mathcal{F}}_{T})\), by the well-known Girsanov theorem, we can deduce that
Since, for any \((y,z)\in\mathcal{R}\times\mathcal{R}^{d}\), \(\alpha_{t}\cdot z=g^{\alpha}(t,y,z)\leq g(t,y,z)\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., it follows from the well-known comparison theorem for BSDEs that \(E_{P_{\alpha}}[\xi]={\mathcal {E}}^{g^{\alpha}}_{0,T}[\xi]\leq{\mathcal{E}}^{g}_{0,T}[\xi]\). Hence \(\Pi\subseteq\Lambda\).
Next let us prove that \(\Lambda\subseteq\Pi\). For each \(Q_{\theta}\in\Lambda\), since \(E_{Q_{\theta}}[\cdot]\leq{\mathcal {E}}^{g}_{0,T}[\cdot]\), \(\forall\xi, \eta\in L^{2}({\mathcal{F}}_{T})\), we have
Denote \(g^{\beta}(t,y,z):=\beta(t)|z|\), \(\forall t\in[0,T]\), \((y,z)\in \mathcal{R}\times\mathcal{R}^{d}\). From Lemmas 3.1 and 3.2 and applying the well-known comparison theorem for BSDEs again, we have
From (3.19) and (3.20), we can deduce that \(E_{Q_{\theta}}[\xi+\eta]-E_{Q_{\theta}}[\eta]\leq{\mathcal {E}}_{g^{\beta}}[\xi]\). Then, in a similar manner to Theorem 7.1 in Coquet et al. [20], we know that there exists a unique function \(g^{\theta}\) defined on \(\Omega\times[0,T]\times\mathcal{R}\times \mathcal{R}^{d}\) satisfying the following three conditions:
\(|g^{\theta}(t,y_{1},z_{1})-g^{\theta}(t,y_{2},z_{2})|\leq\beta(t)|z_{1}-z_{2}|\), \(\forall(y_{1},z_{1}), (y_{2},z_{2})\in\mathcal{R}\times\mathcal{R}^{d}\), where \(\beta(t)\) is a non-negative deterministic function satisfying that \(\int_{0}^{T}\beta^{2}(t)\, \mathrm{d}t<\infty\);
It follows from the linearity of \(({\mathcal {E}}_{g^{\theta}}[\cdot|{\mathcal{F}}_{t}] )_{t\in[0,T]}\) and Theorem 3.2 that \(g^{\theta}\) is linear with respect to z. Therefore, there exists a \(\mathcal{R}^{d}\)-valued progressively measurable process \((\theta_{t})_{t\in[0,T]}\) such that \(g^{\theta}(t,y,z)=\theta_{t}\cdot z\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., \(\forall(y,z)\in\mathcal{R}\times \mathcal{R}^{d}\). In view of \(Q_{\theta}\in\Lambda\) and (H.3), we have for each \(\xi\in L^{2}({\mathcal{F}}_{T})\), \({\mathcal {E}}_{g^{\theta}}[\xi]=E_{Q_{\theta}}[\xi]\leq{\mathcal {E}}^{g}_{0,T}[\xi]\). Then in a similar manner to Lemma 4.5 in Coquet et al. [20] and by Lemma 3.4, we can obtain \(g^{\theta}(t,y,z)=\theta_{t}\cdot z\leq g(t,y,z)\), \(\mathrm{d}P\times\mathrm{d}t\)-a.s., \(\forall(y,z)\in\mathcal{R}\times\mathcal {R}^{d}\). For θ, we define the probability measure \(P_{\theta}\) satisfying \(\frac{\mathrm{d}P_{\theta}}{\mathrm{d}P}={ \exp} (\int_{0}^{T}\theta_{t}\cdot\mathrm{d}B_{t}-\frac{1}{2}\int_{0}^{T}|\theta_{t}|^{2}\, \mathrm{d}t )\), then \(P_{\theta}\in\Pi\) and \(E_{P_{\theta}}[\xi]={\mathcal {E}}_{g^{\theta}}[\xi]=E_{Q_{\theta}}[\xi]\), \(\forall\xi\in L^{2}({\mathcal {F}}_{T})\). Hence, \(Q_{\theta}=P_{\theta}\in\Pi\). Thus, \(\Lambda\subseteq\Pi\). Therefore, we have \(\Pi=\Lambda\).
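The identity \(\Pi=\Lambda\) expresses the sublinear evaluation as an upper expectation over a family of probability measures. On a finite sample space this structure is elementary to illustrate; in the toy sketch below the two measures are arbitrary choices, not derived from any particular g:

```python
# A sublinear expectation as the upper expectation over finitely many measures:
#   E[X] = max_theta E_theta[X].
OMEGA = [0, 1, 2]
MEASURES = [
    [0.2, 0.3, 0.5],
    [0.5, 0.3, 0.2],
]

def E_sub(X):
    return max(sum(p * x for p, x in zip(theta, X)) for theta in MEASURES)

X = [1.0, -2.0, 3.0]
Y = [0.5, 4.0, -1.0]

# Sub-additivity: a max of sums never exceeds the sum of maxes.
assert E_sub([x + y for x, y in zip(X, Y)]) <= E_sub(X) + E_sub(Y) + 1e-12
# Positive homogeneity: E[lambda*X] = lambda*E[X] for lambda >= 0.
assert abs(E_sub([2.5 * x for x in X]) - 2.5 * E_sub(X)) < 1e-12
# Constants are preserved, as an expectation should satisfy.
assert abs(E_sub([1.0, 1.0, 1.0]) - 1.0) < 1e-12
```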
Finally, we prove that, for any \(s, t\in[0,T]\) satisfying \(s\leq t\) and \(\xi\in{\mathcal{L}}({\mathcal{F}}_{t})\), \({\mathcal {E}}^{g}_{s,t}[\xi]=\sup_{Q_{\theta}\in\Lambda}E_{Q_{\theta}}[\xi |{\mathcal{F}}_{s}]\) a.s. It follows from (H.3), the well-known comparison theorem for BSDEs, and Proposition 3.3 that
On the other hand, by Lemmas 3.1, 3.2, and 3.3, we can deduce that g is independent of y and positively homogeneous, sub-additive with respect to z. For any \(\xi\in{\mathcal{L}}({\mathcal{F}}_{T})\), let \((Y^{\xi}_{t},Z^{\xi}_{t} )_{t\in[0,T]}\) denote the solution of the following BSDE:
By a measurable selection theorem (cf., e.g., El Karoui and Quenez [21], p.215), we can deduce that there exists a progressively measurable process \(\alpha^{\xi}\in\Theta^{g}\) such that
From (3.22) and applying the well-known Girsanov theorem, we have \({\mathcal{E}}^{g}_{s,t}[\xi]={\mathcal {E}}^{g}_{s,T}[\xi]=E_{P_{\alpha^{\xi}}}[\xi|{\mathcal{F}}_{s}]\) a.s. Hence, for any \(\xi\in{\mathcal{L}}({\mathcal{F}}_{t})\),
then \({\mathcal{E}}^{g}_{s,t}[\cdot]\) satisfies the following conditions:
(j)
\({\mathcal{E}}^{g}_{s,t}[\xi+\eta]\leq{\mathcal {E}}^{g}_{s,t}[\xi]+{\mathcal{E}}^{g}_{s,t}[\eta]\) a.s., for any \((\xi,\eta)\in{\mathcal{L}}_{+}({\mathcal{F}}_{t})\times{\mathcal {L}}_{+}({\mathcal{F}}_{t})\);
(k)
\({\mathcal{E}}^{g}_{s,t}[\lambda\xi]=\lambda{\mathcal {E}}^{g}_{s,t}[\xi]\) a.s., for any \(\xi\in{\mathcal{L}}_{+}({\mathcal{F}}_{t})\) and \(\lambda\geq0\).
The key idea of the proof of Lemma 4.1 is the well-known comparison theorem for BSDEs. The proof is very similar to that of Proposition 4.2 in Jia [26]. So we omit it.
Applying Lemma 4.1 and Theorems 2.3 and 2.4, we immediately have the following Hölder inequality and Minkowski inequality for g-evaluations.
Theorem 4.1
Let g satisfy the conditions of Lemma 4.1. Then, for any \(X,Y\in{\mathcal{L}}({\mathcal{F}}_{t})\) with \(|X|^{p}, |Y|^{q}\in{\mathcal{L}}({\mathcal{F}}_{t})\) (\(p, q>1\) and \(1/p+1/q=1\)), we have
$$ {\mathcal{E}}^{g}_{s,t}\bigl[|XY|\bigr]\leq \bigl({\mathcal {E}}^{g}_{s,t}\bigl[|X|^{p}\bigr] \bigr)^{1/p} \bigl({\mathcal {E}}^{g}_{s,t}\bigl[|Y|^{q}\bigr] \bigr)^{1/q} \quad\mbox{a.s.} $$
Theorem 4.2
Let g satisfy the conditions of Lemma 4.1. Then, for any \(X, Y\in{\mathcal{L}}({\mathcal{F}}_{t})\) with \(|X|^{p},|Y|^{p}\in{\mathcal{L}}({\mathcal{F}}_{t})\) (\(p>1\)), we have
$$ \bigl({\mathcal{E}}^{g}_{s,t}\bigl[|X+Y|^{p}\bigr] \bigr)^{1/p}\leq \bigl({\mathcal{E}}^{g}_{s,t}\bigl[|X|^{p}\bigr] \bigr)^{1/p}+ \bigl({\mathcal{E}}^{g}_{s,t}\bigl[|Y|^{p}\bigr] \bigr)^{1/p} \quad\mbox{a.s.} $$
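In the special case where the evaluation is the classical linear expectation, Minkowski's inequality is just the triangle inequality for the \(L^{p}\) norm, and it can be checked empirically. The sketch below uses \(p=3\) and lognormal samples (arbitrary illustrative choices):

```python
import math, random

random.seed(7)
p, N = 3.0, 100_000
X = [math.exp(random.gauss(0, 1)) for _ in range(N)]
Y = [math.exp(random.gauss(0, 1)) for _ in range(N)]

# Empirical L^p "norm": (E[|V|^p])^(1/p) under the empirical measure.
Lp = lambda V: (sum(abs(v) ** p for v in V) / len(V)) ** (1 / p)

lhs = Lp([x + y for x, y in zip(X, Y)])   # ||X + Y||_p
rhs = Lp(X) + Lp(Y)                       # ||X||_p + ||Y||_p

# Minkowski's inequality holds exactly for the empirical measure,
# so no Monte Carlo error enters the comparison.
assert lhs <= rhs + 1e-9
```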
The authors would like to thank the anonymous referees for their careful reading of this paper, correction of errors, and valuable suggestions. The work of Zhaojun Zong, Feng Hu and Chuancun Yin is supported by the National Natural Science Foundation of China (Nos. 11301295 and 11171179), the Doctoral Program Foundation of Ministry of Education of China (Nos. 20123705120005 and 20133705110002), the Program for Scientific Research Innovation Team in Colleges and Universities of Shandong Province of China and the Program for Scientific Research Innovation Team in Applied Probability and Statistics of Qufu Normal University (No. 0230518). The work of Helin Wu is supported by the Scientific and Technological Research Program of Chongqing Municipal Education Commission (No. KJ1400922).
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors read and approved the final manuscript.