Published in: Finance and Stochastics 1/2021

Open Access 01.01.2021

Nonlinear expectations of random sets

Authors: Ilya Molchanov, Anja Mühlemann

Abstract

Sublinear functionals of random variables are known as sublinear expectations; they are convex homogeneous functionals on infinite-dimensional linear spaces. We extend this concept to set-valued functionals defined on measurable set-valued functions (which form a nonlinear space) or, equivalently, on random closed sets. This calls for a separate study of sublinear and superlinear expectations, since a change of sign does not alter the direction of the inclusion in the set-valued setting.
We identify the extremal expectations as those arising from the primal and dual representations of nonlinear expectations. Several general construction methods for nonlinear expectations are presented and the corresponding duality representation results are obtained. On the application side, sublinear expectations are naturally related to depth trimming of multivariate samples, while superlinear ones can be used to assess utilities of multiasset portfolios.

1 Introduction

Fix a probability space \((\Omega ,\mathfrak{F},\mathbb{P})\). A sublinear expectation is a real-valued function \(\mathtt{e}\) defined on the space \(L^{p}(\mathbb{R})\) of \(p\)-integrable random variables (with \(p\in [1,\infty ]\)) such that
$$ \mathtt{e}(\xi +a)=\mathtt{e}(\xi )+a $$
(1.1)
for each deterministic \(a\), the function \(\mathtt{e}\) is monotone, i.e.,
$$ \mathtt{e}(\xi ) \leq \mathtt{e}(\eta )\qquad \text{if $\xi \leq \eta $ a.s.}, $$
homogeneous, i.e.,
$$ \mathtt{e}(c\xi )=c\mathtt{e}(\xi ),\qquad c\geq 0, $$
and subadditive, i.e.,
$$ \mathtt{e}(\xi +\eta )\leq \mathtt{e}(\xi )+\mathtt{e}(\eta ); $$
(1.2)
see Peng [29, 30], who brought sublinear expectations to the realm of probability theory and established their close relationship to solutions of backward stochastic differential equations. A superlinear expectation \(\mathtt{u}\) satisfies the same properties with (1.2) replaced by
$$ \mathtt{u}(\xi +\eta )\geq \mathtt{u}(\xi )+\mathtt{u}(\eta ). $$
(1.3)
In many studies, the homogeneity property together with sub-(super-)additivity is replaced by convexity of \(\mathtt{e}\) and concavity of \(\mathtt{u}\). The range of values may be extended to \((-\infty ,\infty ]\) for the sublinear expectation and to \([-\infty ,\infty )\) for the superlinear one.
Abstract sublinear functionals have been studied by Fuglede [11], Schmeidler [32] and in many further papers in relation to capacities and the Choquet integral, and in view of applications to game theory and optimisation. While the notation \(\mathtt{e}\) reflects the expectation meaning, the choice of notation \(\mathtt{u}\) is explained by the fact that the superlinear expectation can be viewed as a utility function that assigns to the sum of two random variables a utility at least as high as the sum of their individual utilities; see Delbaen [7, Chap. 4]. If the random variable \(\xi \) models a financial gain, then \(r(\xi )=-\mathtt{u}(\xi )\) is called a coherent risk measure. The property (1.1) is then termed cash-invariance, and the superadditivity property turns into subadditivity due to the change of sign. The subadditivity of a risk measure means that the sum of two random variables bears at most the same risk as the sum of their individual risks; this is justified by the economic principle of diversification.
It is easy to see that \(\mathtt{e}\) is a sublinear expectation if and only if
$$ \mathtt{u}(\xi )=-\mathtt{e}(-\xi ) $$
(1.4)
is a superlinear one, and in this case \(\mathtt{e}\) and \(\mathtt{u}\) are said to form an exact dual pair. The sublinearity property yields \(\mathtt{e}(\xi )+\mathtt{e}(-\xi )\geq \mathtt{e}(0)=0\), so that \(-\mathtt{e}(-\xi )\leq \mathtt{e}(\xi )\). The interval \([\mathtt{u}(\xi ),\mathtt{e}(\xi )]\) generated by an exact dual pair of nonlinear expectations characterises the uncertainty in the determination of the expectation of \(\xi \). In finance, such intervals determine price ranges in illiquid markets; see Madan [24].
We equip the space \(L^{p}\) with the \(\sigma (L^{p},L^{q})\)-topology based on the standard pairing of \(L^{p}\) and \(L^{q}\) with \(1/p+1/q=1\). It is usually assumed that \(\mathtt{e}\) is lower semicontinuous and \(\mathtt{u}\) is upper semicontinuous in the \(\sigma (L^{p},L^{q})\)-topology. Given that \(\mathtt{e}\) and \(\mathtt{u}\) take finite values, general results of functional analysis concerning convex functions on linear spaces imply the semicontinuity property if \(p\in [1,\infty )\) (see Kaina and Rüschendorf [20]); it is additionally imposed if \(p=\infty \). A nonlinear expectation is said to be law-invariant (more exactly, law-determined) if it takes the same value on identically distributed random variables; see Föllmer and Schied [10, Sect. 4.5].
A rich source of sublinear expectations is provided by suprema of conventional (linear) expectations taken with respect to several probability measures. Assuming the \(\sigma (L^{p},L^{q})\)-lower semicontinuity, the bipolar theorem yields that this is the only possible case; see Delbaen [7, Sect. 4.5] and Kaina and Rüschendorf [20]. Then
$$ \mathtt{e}(\xi )=\sup _{\gamma \in \mathcal{M},\mathbb{E}[\gamma ]=1} \mathbb{E}[\gamma \xi ] $$
(1.5)
is the supremum of expectations \(\mathbb{E}[\gamma \xi ]\) over a convex \(\sigma (L^{q},L^{p})\)-closed cone ℳ in \(L^{q}(\mathbb{R}_{+})\); the superlinear expectation is obtained by replacing the supremum with the infimum. In the following, we assume that (1.5) holds and that the representing set ℳ is chosen in such a way that the corresponding sublinear and superlinear expectations are law-invariant, that is, with each \(\gamma \), ℳ contains all random variables identically distributed as \(\gamma \).
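As a purely illustrative numerical sketch (not part of the original paper), the representation (1.5) can be checked on a finite probability space: the array of densities below is an arbitrary choice of densities \(\gamma \geq 0\) with \(\mathbb{E}[\gamma ]=1\), and the functions e and u realise a sublinear expectation and the superlinear member of the exact dual pair (1.4).

```python
import numpy as np

# Finite sample space with reference probabilities (arbitrary illustrative data).
p = np.array([0.25, 0.25, 0.25, 0.25])

# A finite family of densities gamma >= 0 with E[gamma] = 1, standing in for
# the representing set in (1.5).
densities = np.array([
    [1.0, 1.0, 1.0, 1.0],   # the reference measure itself
    [2.0, 1.0, 0.5, 0.5],
    [0.5, 0.5, 1.0, 2.0],
])
assert np.allclose(densities @ p, 1.0)   # E[gamma] = 1 for every density

def e(xi):
    """Sublinear expectation (1.5): largest expectation over the family."""
    return max(float(np.sum(p * g * xi)) for g in densities)

def u(xi):
    """Superlinear member of the exact dual pair (1.4)."""
    return -e(-xi)

xi = np.array([1.0, -2.0, 0.5, 3.0])
eta = np.array([0.0, 1.0, -1.0, 2.0])

assert e(xi + eta) <= e(xi) + e(eta) + 1e-12   # subadditivity (1.2)
assert u(xi) <= e(xi) + 1e-12                  # u is dominated by e
print("price interval [u(xi), e(xi)] =", (u(xi), e(xi)))
```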
A random closed set \(X\) in Euclidean space is a random element with values in the family ℱ of closed sets in \(\mathbb{R}^{d}\) such that \(\{X\cap K\neq \varnothing \}\) is in \(\mathfrak{F}\) for all compact sets \(K\) in \(\mathbb{R}^{d}\); see Molchanov [25, Sect. 1.1.1]. In other words, a random closed set is a measurable set-valued function. A random closed set \(X\) is said to be convex if \(X\) almost surely belongs to the family \(\operatorname{co}\mathcal{F}\) of closed convex sets in \(\mathbb{R}^{d}\). For convex random sets in Euclidean space, the measurability condition is equivalent to the condition that the support function of \(X\) (see (2.2) below) is a random function on \(\mathbb{R}^{d}\) with values in \((-\infty ,\infty ]\).
In the set-valued setting, it is natural to replace the inequalities (1.2) and (1.3) with inclusions. For sets, the minus sign corresponds to the reflection with respect to the origin; it does not alter the direction of the inclusion, and so there is no direct link between set-valued sublinear and superlinear expectations.
This paper aims to systematically explore nonlinear set-valued expectations. Section 2 recalls the classical concept of the (linear) selection expectation for random closed sets, introduced by Aumann [4] and Artstein and Vitale [3]; see also Molchanov [25, Sect. 2.1]. The selection expectation \(\mathbb{E}[X]\) is defined as the closure of the set of expectations \(\mathbb{E}[\xi ]\) of all integrable random vectors \(\xi \) such that \(\xi \in X\) almost surely (selections of \(X\)). In Sect. 2.3, we introduce a suitable convergence concept for (possibly unbounded) random convex sets based on linear functionals applied to the support function.
Nonlinear expectations of random convex sets are introduced in Sect. 3. We refine the properties of nonlinear expectations stated in Molchanov [25, Sect. 2.2.7]. Basic examples of such expectations and more involved constructions are considered with a particular attention to the expectations of random singletons. It is also explained how the set-valued expectation applies to random convex functions and how it is possible to get rid of the homogeneity property and extend the setting to convex/concave functionals.
Among the rather vast variety of nonlinear expectations, it is possible to identify extremal ones: the minimal sublinear expectation of \(X\) is the convex hull of nonlinear expectations of all sets from some family that yields \(X\) as their union. In the case of selections, this becomes a direct generalisation of the representation of the selection expectation as the set of expectations for all random points almost surely belonging to a random set. The maximal superlinear extension is the intersection of nonlinear expectations of all half-spaces containing the random set. While the two coincide in the linear case and provide two equivalent definitions of the selection expectation, the two constructions differ in general. Similar set-valued functions on linear spaces have been studied by Hamel [12] and Hamel and Heyde [13], and the dual representation in [12, 13] appears to be the representation of maximal superlinear expectations in our setting restricted to special random closed sets.
Nonlinear maps restricted to the family \(L^{p}(\mathbb{R}^{d})\) of \(p\)-integrable random vectors and sets having the form of a random vector plus a cone have been studied by Cascos and Molchanov [6] and Hamel and Heyde [12, 13]; comprehensive duality results have been proved by Drapeau et al. [9]. In our terminology, these studies concern the case when the argument of a superlinear expectation is the sum of a random vector and a convex cone, which in Hamel et al. [14] is allowed to be random, but is the same for all random vectors involved. However, for general set-valued arguments, it does not seem possible to rely on the approach of [9, 12, 13], since the known techniques of set-valued optimisation theory (see e.g. Khan and Tammer [21]) do not suffice to handle functions whose arguments belong to a nonlinear space.
The key technique suitable to handle nonlinear expectations relies on the bipolar theorem. A direct generalisation of this theorem for functionals of random convex sets is not feasible, since random convex sets do not form a linear space. Section 5 provides duality results for sublinear expectations and Sect. 6 for superlinear ones. Specifically, the constant-preserving minimal sublinear expectations are identified. For the superlinear case, the family of random closed convex sets such that a superlinear expectation contains the origin is a convex cone. However, it is rather tricky to use separation results since linear functions (such as the selection expectation) may have trivial values on unbounded integrable random sets. For instance, the selection expectation of a random half-space with a nondeterministic normal is the whole space; in this case, superlinear expectations are not dominated by any nontrivial linear expectation. In order to handle such situations, the duality results for superlinear expectations are proved for the maximal superlinear expectation. It is shown that the superlinear expectation of a singleton is usually empty; in order to come up with a nontrivial minimal extension, singletons in the definition of the minimal extension are replaced by translated cones. For arguments being the sum of a point and a cone in \(\mathbb{R}^{d}\), we recover the results of Hamel and Heyde [12, 13].
Some applications are presented in Sect. 7. Sublinear expectations are useful in order to identify outliers in samples of random sets. Such samples often appear in partially identified models in econometrics, e.g. as intervals giving the salary range (see Molchanov and Molinari [27]), or as interval-valued price ranges in finance. The superlinear expectation can be used to assess multivariate risk in finance and to measure multivariate utilities. The superlinearity property is essential, since the utility of the sum of two portfolios described by random sets “dominates” the sum of their individual utilities. We show that the minimal extension of a superlinear expectation is closely related to the selection risk measure of lower random sets considered by Molchanov and Cascos [26]. Allowing the arguments of multiasset utilities to be general convex random sets makes it possible to use iteration-based constructions in the dynamic framework (see Lépinette and Molchanov [23]) and so consider nonlinear extensions of multivariate martingales. The case of random sets having the form of a vector plus a cone is the standard setting in the theory of markets with proportional transaction costs; see Kabanov and Safarian [19]. Superlinear expectations make it possible to assess utilities (and risks) of such portfolios and so develop dynamic hedging strategies; see [23]. Allowing general arguments of superlinear expectations makes it possible to include models of general convex transaction costs (see Pennanen and Penner [31]), most importantly, the setting of limit order books.
The appendix presents a self-contained proof of the fact that vector-valued sublinear expectations of random vectors necessarily split into sublinear expectations applied to each component of the vector. This fact reiterates the point that the set-valued setting is essential for defining multivariate nonlinear expectations.
We use the following notational conventions: \(X,Y\) denote random closed convex sets, \(F\) is a deterministic closed convex set, \(\xi \) and \(\beta \) are \(p\)-integrable random vectors and random variables, \(\zeta \) and \(\gamma \) are \(q\)-integrable vectors and variables with \(1/p+1/q=1\), \(\eta \) is usually a random vector with values in the unit sphere \(\mathbb{S}^{d-1}\), \(u\) and \(v\) are deterministic points from \(\mathbb{S}^{d-1}\).

2 Selection expectation

2.1 Integrable random sets and selection expectation

Let \(X\) be a random closed set in \(\mathbb{R}^{d}\), always assumed to be almost surely nonempty. A random vector \(\xi \) is called a selection of \(X\) if \(\xi \in X\) almost surely. Let \(L^{p}(X)\) denote the family of (equivalence classes of) \(p\)-integrable selections of \(X\) for \(p\in [1,\infty )\), essentially bounded ones if \(p=\infty \), and all selections if \(p=0\). If \(L^{p}(X)\) is not empty, then \(X\) is called \(p\)-integrable, shortly integrable if \(p=1\). This is the case if \(X\) is \(p\)-integrably bounded, that is, if \(| X |=\sup \{ | x | : x\in X\}\) is \(p\)-integrable (essentially bounded if \(p=\infty \)).
If \(X\) is integrable, then its selection expectation is defined by
$$ \mathbb{E}[X]:=\operatorname{cl}\{\mathbb{E}[\xi ]: \xi \in L^{1}(X) \}, $$
(2.1)
which is the closure of the set of expectations of all integrable selections of \(X\); see Molchanov [25, Sect. 2.1.2]. In (2.1), the same expectation is applied to all selections of \(X\). If \(X\) is integrably bounded, then the closure on the right-hand side is not needed and \(\mathbb{E}[X]\) is compact. The set \(\mathbb{E}[X]\) is convex if \(X\) is convex or if the underlying probability space is non-atomic. From now on, we assume that all random closed sets we consider are almost surely convex.
The support function of any nonempty set \(F\) in \(\mathbb{R}^{d}\) is defined by
$$ h(F,u)=\sup \{\langle x,u\rangle : x\in F \},\qquad u\in \mathbb{R}^{d}, $$
(2.2)
allowing possibly infinite values if \(F\) is not bounded, where \(\langle u,x\rangle \) denotes the scalar product. Due to homogeneity, the support function is determined by its values on the unit sphere \(\mathbb{S}^{d-1}\).
If \(X\) is an integrable random closed set, then its expected support function is the support function of \(\mathbb{E}[X]\), that is,
$$ \mathbb{E}[h(X,u) ]=h (\mathbb{E}[X],u ),\qquad u\in \mathbb{R}^{d}; $$
(2.3)
see [25, Theorem 2.1.38]. Thus
$$ \mathbb{E}[X]=\bigcap _{u\in \mathbb{S}^{d-1}} \{x : \langle x,u \rangle \leq \mathbb{E}[h(X,u) ] \}, $$
which may be seen as the dual representation of the selection expectation with (2.1) being its primal representation. Ararat and Rudloff [2] provide an axiomatic Daniell–Stone type characterisation of the selection expectation. The property (2.3) can also be expressed as
$$ \mathbb{E}\Big[\sup _{\xi \in X} \langle \xi ,u\rangle \Big] =\sup _{ \xi \in L^{1}(X)} \mathbb{E}[\langle \xi ,u\rangle ], $$
(2.4)
meaning that in this case, it is possible to interchange expectation and supremum. If \(X\) is an integrable random closed set and ℌ is a sub-\(\sigma \)-algebra of \(\mathfrak{F}\), the conditional expectation \(\mathbb{E}[X|\mathfrak{H}]\) is identified by its support function, being the conditional expectation of the support function of \(X\); see Hiai and Umegaki [16] and [25, Sect. 2.1.6].
The dilation (scaling) of a closed set \(F\) is defined as \(cF=\{cx : x\in F\}\) for \(c\in \mathbb{R}\). For two closed sets \(F_{1}\) and \(F_{2}\), their closed Minkowski sum is defined by
$$ F_{1}+F_{2}=\operatorname{cl}\{x+y : x\in F_{1},\,y\in F_{2}\}, $$
and the sum is empty if at least one summand is empty. If at least one of \(F_{1}\) and \(F_{2}\) is compact, the closure on the right-hand side is not needed. We write shortly \(F+a\) instead of \(F+\{a\}\) for \(a\in \mathbb{R}^{d}\).
If \(X\) and \(Y\) are random closed convex sets, then \(X+Y\) is a random closed convex set; see [25, Theorem 1.3.25]. The selection expectation is linear on integrable random closed sets, that is, \(\mathbb{E}[X+Y]=\mathbb{E}[X]+ \mathbb{E}[Y]\); see e.g. [25, Proposition 2.1.32].
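On a finite probability space with convex realisations, the selection expectation reduces to the weighted Minkowski sum of the realisations, so (2.3) can be checked directly. The following sketch is only illustrative (the vertex lists and probabilities are arbitrary choices, not taken from the paper); compact convex sets are encoded by finite vertex lists.

```python
import numpy as np

# Compact convex sets are encoded by finite vertex lists; the support function
# depends only on the convex hull, so this encoding suffices here.
def support(vertices, u):
    return max(float(np.dot(v, u)) for v in vertices)

def minkowski(F, G):
    return [f + g for f in F for g in G]

def scale(c, F):
    return [c * f for f in F]

# A random compact convex set X with three equally likely realisations
# (arbitrary illustrative data).
prob = [1 / 3, 1 / 3, 1 / 3]
X = [
    [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])],  # triangle
    [np.array([2.0, 1.0])],                                              # singleton
    [np.array([-1.0, 0.0]), np.array([-1.0, 2.0])],                      # segment
]

# On a finite probability space with convex realisations, the selection
# expectation (2.1) is the weighted Minkowski sum of the realisations.
EX = [np.zeros(2)]
for p_i, F_i in zip(prob, X):
    EX = minkowski(EX, scale(p_i, F_i))

# Check (2.3): E[h(X,u)] = h(E[X],u) in a few directions.
for u in [np.array([1.0, 0.0]), np.array([0.3, -0.7]), np.array([-1.0, 1.0])]:
    lhs = sum(p_i * support(F_i, u) for p_i, F_i in zip(prob, X))
    assert abs(lhs - support(EX, u)) < 1e-12
```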
In the following, the letter \(C\) always refers to a deterministic closed convex cone in \(\mathbb{R}^{d}\) which is distinct from the whole space. If \(F=F+C\), then \(F\) is said to be \(C\)-closed. Due to the closed Minkowski sum on the right-hand side, \(F\) is also topologically closed. Let \(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)\) denote the family of all \(C\)-closed convex sets in \(\mathbb{R}^{d}\) (including the empty set), and let \(L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))\) be the family of all \(p\)-integrable random sets with values in \(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)\). Any such random set is necessarily a.s. nonempty. By
$$ C^{o}= \{u\in \mathbb{R}^{d} : h(C,u)\leq 0 \}, $$
we denote the polar cone of \(C\).
Example 2.1
If \(C=\{0\}\), then \(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},\{0\})\) is the family \(\operatorname{co}\mathcal{F}\) of all convex closed sets in \(\mathbb{R}^{d}\). If \(C=\mathbb{R}_{-}^{d}\), then \(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},\mathbb{R}_{-}^{d})\) is the family of lower convex closed sets, and a random closed convex set with realisations in this family is called a random lower set.
Example 2.2
Let \(C\) be a convex closed cone in \(\mathbb{R}^{d}\) which does not coincide with the whole space. If \(X=\xi +C\) for \(\xi \in L^{p}(\mathbb{R}^{d})\), then \(X\) belongs to the space \(L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))\). For each \(\zeta \in L^{q}( C^{o} )\), we have \(h(X,\zeta )=\langle \xi ,\zeta \rangle \).

2.2 Support function at random directions

For \(t\in \mathbb{R}\), let
$$ H_{u}(t)= \{x\in \mathbb{R}^{d} : \langle x,u\rangle \leq t \}, \qquad u\neq 0, $$
denote a half-space in \(\mathbb{R}^{d}\), and set \(H_{u}(\infty )=\mathbb{R}^{d}\). Particular difficulties when dealing with unbounded random closed sets are caused by the fact that the support function in any deterministic direction may be infinite with probability one.
Example 2.3
Let \(X=H_{\eta }(0)\) be the random half-space with the normal vector \(\eta \) having a non-atomic distribution. Then \(\mathbb{E}[X]\) is the whole space. The support function of \(X\) is finite only on the random ray \(\{c\eta : c\geq 0\}\).
It is shown by Lépinette and Molchanov [22, Corollary 3.5] that each random closed convex set satisfies
$$ X=\bigcap _{\eta \in L^{0}(\mathbb{S}^{d-1})} H_{\eta }(X), $$
(2.5)
where
$$ H_{\eta }(X)=H_{\eta }\big(h(X,\eta )\big) $$
is the smallest half-space with outer normal \(\eta \) that contains \(X\). If \(X\) is a.s. \(C\)-closed, then (2.5) holds with \(\eta \) running through the family of selections of \(\mathbb{S}^{d-1}\cap C^{o} \).
For each \(\zeta \in L^{q}(\mathbb{R}^{d})\), the support function \(h(X,\zeta )\) is a random variable with values in \((-\infty ,\infty ]\); see [22, Lemma 3.1]. While \(h(X,\zeta )\) is not necessarily integrable, its negative part is always integrable if \(X\) is \(p\)-integrable. Indeed, choose any \(\xi \in L^{p}(X)\) and write \(h(X,\zeta )=h(X-\xi ,\zeta )+\langle \xi ,\zeta \rangle \). The second summand on the right-hand side is integrable, while the first one is nonnegative.
Lemma 2.4
Let \(X,Y\in L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))\). If we have \(\mathbb{E}[h(Y,\zeta )]\leq \mathbb{E}[h(X,\zeta )]\) for all \(\zeta \in L^{q}( C^{o} )\), then \(Y\subseteq X\) a.s.
Proof
For each \(A\in \mathfrak{F}\), replacing \(\zeta \) with \(\zeta \mathbf{1}_{A}\) yields
$$ \mathbb{E}[h(Y,\zeta )\mathbf{1}_{A} ]\leq \mathbb{E}[h(X,\zeta ) \mathbf{1}_{A} ], $$
whence \(h(Y,\zeta )\leq h(X,\zeta )\) a.s. The same holds for a general \(\zeta \in L^{q}(\mathbb{R}^{d})\) by splitting it into the cases when \(\zeta \in C^{o} \) and \(\zeta \notin C^{o} \). For a general \(\zeta \in L^{0}(\mathbb{R}^{d})\), we have \(h(Y,\zeta _{n})\leq h(X,\zeta _{n})\) a.s. with \(\zeta _{n}=\zeta \mathbf{1}_{\{|\zeta |\leq n\}}\) for \(n\in \mathbb{N}\). Thus \(h(Y,\zeta ) \leq h(X,\zeta )\) almost surely for all \(\zeta \in L^{0}(\mathbb{R}^{d})\), and the statement follows from [22, Corollary 3.6]. □
Corollary 2.5
The distribution of \(X\in L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))\) is uniquely determined by \(\mathbb{E}[h(X,\zeta )]\) for \(\zeta \in L^{q}( C^{o} )\).
Proof
Apply Lemma 2.4 with \(Y=\{\xi \}\), so that the values of \(\mathbb{E}[h(X,\zeta )]\) identify all \(p\)-integrable selections of \(X\), and note that \(X\) equals the closure of the family of its \(p\)-integrable selections; see [25, Proposition 2.1.4]. □
A random closed set \(X\) is called Hausdorff-approximable if it appears as the almost sure limit in the Hausdorff metric of random closed sets with at most a finite number of values. It is known [25, Theorem 1.3.18] that all random compact sets are Hausdorff-approximable, as well as those that appear as the sum of a random compact set and a random closed set with at most a finite number of possible values. The random closed set \(X\) from Example 2.3 is not Hausdorff-approximable.
The distribution of a Hausdorff-approximable \(p\)-integrable random closed convex set \(X\) is uniquely determined by the selection expectations \(\mathbb{E}[\gamma X]\) for all \(\gamma \in L^{q}(\mathbb{R}_{+})\), and it actually suffices to let \(\gamma \) run through all measurable indicators; see Hess [15] and [25, Proposition 2.1.33]. If \(X\) is Hausdorff-approximable, then its selections \(\xi \) are identified by the condition \(\mathbb{E}[\xi \mathbf{1}_{A}]\in \mathbb{E}[X\mathbf{1}_{A}]\) for all \(A\in \mathfrak{F}\). By passing to the support functions, we arrive at a variant of Lemma 2.4 with \(\zeta =u\mathbf{1}_{A}\) for all \(u\in \mathbb{S}^{d-1}\) and \(A\in \mathfrak{F}\).

2.3 Convergence of random closed convex sets

Convergence of random closed sets is typically considered in probability, almost surely or in distribution; see Molchanov [25, Sect. 1.7]. In the following, we define \(L^{p}\)-type convergence concepts. The space \(L^{p}(\mathbb{R}^{d})\) is equipped with the \(\sigma (L^{p},L^{q})\)-topology, that is, \(\xi _{n}\to \xi \) means that \(\mathbb{E}[\langle \xi _{n} ,\zeta \rangle ]\to \mathbb{E}[\langle \xi , \zeta \rangle ]\) for all \(\zeta \in L^{q}(\mathbb{R}^{d})\).
Lemma 2.6
Recall that \(C\) denotes a generic convex cone in \(\mathbb{R}^{d}\) which differs from the whole space. If \(X\) is a \(p\)-integrable random \(C\)-closed convex set, then \(L^{p}(X)\) is a nonempty convex \(\sigma (L^{p},L^{q})\)-closed and \(L^{p}(C)\)-closed subset of \(L^{p}(\mathbb{R}^{d})\).
Proof
If \(\xi _{n}\in L^{p}(X)\) and \(\xi _{n}\to \xi \in L^{p}(\mathbb{R}^{d})\) in \(\sigma (L^{p},L^{q})\), then
$$ \mathbb{E}[\langle \xi ,\zeta \rangle ] =\lim _{n\to \infty } \mathbb{E}[\langle \xi _{n},\zeta \rangle ] \leq \mathbb{E}[h(X, \zeta ) ] $$
for all \(\zeta \in L^{q}(\mathbb{R}^{d})\). Thus \(\xi \) is a selection of \(X\) by Lemma 2.4. The statement concerning \(C\)-closedness is obvious. □
A sequence \((X_{n})_{n\in \mathbb{N}}\) in \(L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))\) is said to converge to a random set \(X\in L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))\) scalarly in \(\sigma (L^{p},L^{q})\) (shortly, scalarly) if
$$ \mathbb{E}[h(X_{n},\zeta )]\longrightarrow \mathbb{E}[h(X,\zeta )] \qquad \text{as $n\to \infty $, for all $\zeta \in L^{q}( C^{o} )$}, $$
where the convergence is understood in the extended line \((-\infty ,\infty ]\). Since \(\mathbb{E}[h(X_{n},\zeta )]\) equals the support function of \(L^{p}(X_{n})\) in the direction \(\zeta \), this convergence is the scalar convergence \(L^{p}(X_{n})\to L^{p}(X)\) as convex sets in \(L^{p}(\mathbb{R}^{d})\); see Sonntag and Zǎlinescu [34].

3 General nonlinear set-valued expectations

3.1 Definitions

Fix \(p\in [1,\infty ]\) and a convex closed cone \(C\) distinct from the whole space \(\mathbb{R}^{d}\).
Definition 3.1
A sublinear set-valued expectation is a function
$$ \mathcal{E}:L^{p}\big(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)\big)\to \operatorname{co}\mathcal{F} $$
such that
i) \(\mathcal{E}(X+a)=\mathcal{E}(X)+a\) for each deterministic \(a\in \mathbb{R}^{d}\) (additivity on deterministic singletons);
ii) \(\mathcal{E}(F)\supseteq F\) for all deterministic \(F\in \operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)\);
iii) \(\mathcal{E}(X)\subseteq \mathcal{E}(Y)\) if \(X\subseteq Y\) almost surely (monotonicity);
iv) \(\mathcal{E}(cX)=c\mathcal{E}(X)\) for all \(c>0\) (homogeneity);
v) \(\mathcal{E}\) is subadditive, that is,
$$ \mathcal{E}(X+Y)\subseteq \mathcal{E}(X)+\mathcal{E}(Y) $$
(3.1)
for all \(p\)-integrable random closed convex sets \(X\) and \(Y\). A superlinear set-valued expectation \(\mathcal{U}\) satisfies the same properties with the exception of ii) replaced by \(\mathcal{U}(F)\subseteq F\) and (3.1) replaced by the superadditivity property
$$ \mathcal{U}(X+Y)\supseteq \mathcal{U}(X)+\mathcal{U}(Y). $$
(3.2)
The nonlinear expectations \(\mathcal{E}\) and \(\mathcal{U}\) are said to be law-invariant if they take the same value on identically distributed random closed convex sets.
Proposition 3.2
All nonlinear expectations on \(L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))\) take their values in \(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)\).
Proof
If \(a\in C\), then \(X+a\subseteq X\) a.s., whence \(\mathcal{E}(X)+a=\mathcal{E}(X+a)\subseteq \mathcal{E}(X)\). Therefore, \(\mathcal{E}(X)=\mathcal{E}(X)+C\), and the same argument applies to \(\mathcal{U}\). □
While the argument \(X\) of nonlinear expectations is a.s. nonempty, \(\mathcal{U}(X)\) may be empty and then the right-hand side of (3.2) is also empty. However, if \(\mathcal{E}(X)\) is empty for some \(X\), then \(\mathcal{E}(\xi +C)=\varnothing \) for \(\xi \in L^{p}(X)\); hence
$$ \mathcal{E}(Y)=\mathcal{E}\big((Y-\xi )+(\xi +C)\big) \subseteq \mathcal{E}(Y-\xi )+\mathcal{E}(\xi +C) $$
is empty for all \(p\)-integrable random sets \(Y\). Thus each sublinear expectation is either always empty or always nonempty. In view of this, we assume that sublinear expectations take nonempty values. We always exclude the trivial cases when \(\mathcal{E}(X)=\mathbb{R}^{d}\) for all \(X\) or \(\mathcal{U}(X)=\varnothing \) for all \(X\).
Note that \(\mathcal{E}(C)\) is a closed convex cone, which may be strictly larger than \(C\). By Proposition 3.2, \(\mathcal{U}(C)\) is either \(C\) or empty. The sublinear (respectively, superlinear) expectation is said to be normalised if \(\mathcal{E}(C)=C\) (respectively, \(\mathcal{U}(C)=C\)). We always have \(C\subseteq \mathcal{E}(C)\) by property ii), and we also have \(\mathcal{U}(C)=C\), since \(\mathcal{U}(a+C)=\mathcal{U}(C)+a\) for all \(a\in \mathbb{R}^{d}\) and \(\mathcal{U}(\cdot +C)\) is not identically empty.
The properties of nonlinear expectations do not imply that they preserve deterministic convex closed sets. A deterministic set \(F\) from \(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)\) is called invariant if \(\mathcal{E}(F)=F\). The family of invariant sets is closed under translations, under dilations by positive reals and for Minkowski sums, e.g. if \(\mathcal{E}(F_{1})=F_{1}\) and \(\mathcal{E}(F_{2})=F_{2}\), then
$$ F_{1}+F_{2}\subseteq \mathcal{E}(F_{1}+F_{2})\subseteq \mathcal{E}(F_{1})+\mathcal{E}(F_{2})=F_{1}+F_{2}. $$
A nonlinear expectation is said to be constant-preserving if all nonempty deterministic sets from \(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)\) are invariant.
The superlinear and sublinear expectations \(\mathcal{U}\) and \(\mathcal{E}\) form a dual pair if \(\mathcal{U}(X)\) is a subset of \(\mathcal{E}(X)\) for each \(p\)-integrable random closed convex set \(X\). In contrast to the univariate setting, the reflection \(-X=\{-x : x\in X\}\) of \(X\) with respect to the origin does not alter the direction of set inclusions, so that the exact duality relation (1.4) is useless; if \(C=\{0\}\), then \(-\mathcal{E}(-X)\) is also a sublinear expectation.
For a sequence \((F_{n})_{n\in \mathbb{N}}\) of closed sets, its lower limit \(\liminf _{n\to \infty } F_{n}\) is the set of limits for all convergent sequences \(x_{n}\in F_{n}\), \(n\in \mathbb{N}\), and its upper limit \(\limsup _{n\to \infty } F_{n}\) is the set of limits for all convergent subsequences \(x_{n_{k}}\in F_{n_{k}}\), \(k\in \mathbb{N}\).
The sublinear expectation \(\mathcal{E}\) is called lower semicontinuous if
$$ h\big(\mathcal{E}(X),u\big)\leq \liminf _{n\to \infty } h\big(\mathcal{E}(X_{n}),u\big) \qquad \text{for all $u\in \mathbb{R}^{d}$,} $$
(3.3)
and \(\mathcal{U}\) is upper semicontinuous if
$$ \limsup _{n\to \infty }\mathcal{U}(X_{n})\subseteq \mathcal{U}(X) $$
for any sequence \((X_{n})_{n\in \mathbb{N}}\) of random closed convex sets converging to \(X\) in the chosen topology, e.g. scalarly lower semicontinuous if \((X_{n})\) scalarly converges to \(X\). Note that our lower semicontinuity definition is weaker than its standard variant for set-valued functions which would require that \(\mathcal{E}(X)\) is a subset of \(\liminf _{n\to \infty }\mathcal{E}(X_{n})\); see Hu and Papageorgiou [18, Proposition 2.35].
Proposition 3.3
If \(X+X'=\mathbb{R}^{d}\) a.s. with \(X'\) being an independent copy of \(X\), then \(\mathcal{E}(X)=\mathbb{R}^{d}\) for each law-invariant sublinear expectation \(\mathcal{E}\).
Proof
By subadditivity and law-invariance,
$$ \mathbb{R}^{d}=\mathcal{E}(X+X')\subseteq \mathcal{E}(X)+\mathcal{E}(X')=2\mathcal{E}(X), $$
whence \(\mathcal{E}(X)=\mathbb{R}^{d}\). □
Proposition 3.3 applies if \(X=H_{\eta }(0)\) is a half-space with a non-atomic \(\eta \), so that each law-invariant sublinear expectation on such random sets takes trivial values.
Example 3.4
Let \(C=\mathbb{R}_{-}^{d}\). If \(\mathcal{U}(\xi +C)=\overline{\mathtt{e}}(\xi )+C\) for a vector-valued function \(\overline{\mathtt{e}}:L^{p}(\mathbb{R}^{d})\to \mathbb{R}^{d}\), then \(\overline{\mathtt{e}}(\xi )\) splits into the vector of superlinear expectations applied to the components of \(\xi =(\xi _{1},\dots ,\xi _{d})\); see Theorem A.1.
Remark 3.5
It is possible to consider nonlinear expectations defined only on some special random sets, e.g. singletons or half-spaces. It is then only required that the family of such sets is closed under translations, under dilations by positive reals and for Minkowski sums.
Remark 3.6
Utility functions of random variables are usually assumed to be superadditive. Risk measures of random variables are defined by inverting the sign and so become subadditive. In order to resemble the terminology common for risk measures, the family \(\operatorname{co}\mathcal{F}\) could be ordered by the reverse inclusion ordering; then the terminology is correspondingly adjusted, e.g. a superlinear expectation becomes sublinear and monotonically decreasing. The use of the reverse inclusion order promoted by Hamel et al. [13, 14] is largely motivated by financial terminology, where risk measures are traditionally assumed to be antimonotonic and subadditive; see e.g. Föllmer and Schied [10, Chap. 4]. In the reverse inclusion order, set-valued risk measures become subadditive, exactly as conventional risk measures of random variables are. We, however, systematically consider the conventional inclusion order, and so our set-valued setting extends the setup advocated by Delbaen [7] in the numerical case. He considers utility functions instead of risk measures: utility functions are superlinear and increasing, corresponding to the properties of the superlinear set-valued expectation \(\mathcal{U}\). Thus up to a change of terminology, our superlinear expectation corresponds to the sublinear set-valued risk measure of Hamel et al. [13, 14]. On the other hand, our sublinear expectation is a different object, which requires a separate treatment. Indeed, in the set-valued framework, a change of sign (that is, the central symmetry) does not alter the direction of the inclusion, and so it is not possible to convert a superlinear function to a sublinear one.
Remark 3.7
Motivated by financial applications, it is possible to replace the homogeneity and sub-(super-)additivity properties with convexity or concavity, e.g.
$$ \mathcal{U}\big(\lambda X+(1-\lambda )Y\big)\supseteq \lambda \mathcal{U}(X)+(1-\lambda )\mathcal{U}(Y),\qquad \lambda \in [0,1]. $$
But then \(\mathcal{U}\) can be turned into a superlinear expectation \(\tilde{\mathcal{U}}\) for random sets in the space \(\mathbb{R}^{d+1}\) by letting
$$ \tilde{\mathcal{U}}\big(\{t\}\times X\big)=\{t\}\times t\,\mathcal{U}(t^{-1}X),\qquad t>0. $$
The arguments of \(\tilde{\mathcal{U}}\) are random closed convex sets \(Y=\{t\}\times X\); they form a family closed for dilations, Minkowski sums and translations by singletons from \(\mathbb{R}_{+}\times \mathbb{R}^{d}\). Note that selections of \(\{t\}\times X\) are given by \((t,\xi )\) with \(\xi \) being a selection of \(X\). In view of this, all results in the homogeneous case apply to the convex case if the dimension is increased by one.
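To see that the lifted functional is indeed superadditive, one can argue directly from the concavity displayed above (a short check of the construction as written here): for \(t,s>0\) and \(\lambda =t/(t+s)\),
$$ \tilde{\mathcal{U}}\big(\{t\}\times X+\{s\}\times Y\big) =\{t+s\}\times (t+s)\,\mathcal{U}\Big(\lambda \tfrac{X}{t}+(1-\lambda )\tfrac{Y}{s}\Big) \supseteq \{t+s\}\times \big(t\,\mathcal{U}(t^{-1}X)+s\,\mathcal{U}(s^{-1}Y)\big) =\tilde{\mathcal{U}}\big(\{t\}\times X\big)+\tilde{\mathcal{U}}\big(\{s\}\times Y\big). $$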

3.2 Examples

The simplest example is provided by the selection expectation, which is linear and law-invariant on all integrable random convex sets.
Example 3.8
Let
$$ F_{X}= \{x : \mathbb{P}[x\in X]=1 \} $$
denote the set of fixed points of a random closed set \(X\). If \(X\) is almost surely convex, then \(F_{X}\) is also convex, and if \(X\) is compact with positive probability, then \(F_{X}\) is compact. It is easy to see that \(F_{X+Y}\) contains \(F_{X}+F_{Y}\), whence \(\mathcal{U}(X)=F_{X}\) is a law-invariant superlinear expectation. With a similar idea, it is possible to define the sublinear expectation \(\mathcal{E}(X)\) as the support of \(X\), which is the set of points \(x \in \mathbb{R}^{d}\) such that \(X\) hits any open neighbourhood of \(x\) with positive probability. By the monotonicity property, \(x+C=\mathcal{U}(x+C)\subseteq \mathcal{U}(X)\) for any \(x\in F_{X}\), whence \(F_{X}\) is a subset of any other normalised superlinear expectation of \(X\). By a similar argument, the support of \(X\) dominates any other constant-preserving sublinear expectation.
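For a random set with finitely many realisations, each assumed to occur with positive probability, the two expectations of Example 3.8 reduce to an intersection and a union, respectively; the following toy sketch (with arbitrary illustrative intervals, not taken from the paper) makes this explicit.

```python
# Two expectations from Example 3.8 for a random interval with finitely many
# realisations, each assumed to occur with positive probability
# (arbitrary illustrative data).
realisations = [(-1.0, 2.0), (0.0, 3.0), (-0.5, 2.5)]   # closed intervals [a, b]

# Fixed points F_X: points lying in every realisation (an intersection).
F_X = (max(a for a, _ in realisations), min(b for _, b in realisations))

# Support of X: points lying in at least one realisation; here the union of
# the overlapping intervals is again an interval.
supp_X = (min(a for a, _ in realisations), max(b for _, b in realisations))

print("F_X =", F_X)        # (0.0, 2.0)
print("supp X =", supp_X)  # (-1.0, 3.0)
```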
Example 3.9
Fix \(C=\{0\}\) and let \(X=[\beta ,\infty )\subseteq \mathbb{R}\) be a half-line. The functional \(\mathcal{U}(X)=[\mathtt{e}(\beta ),\infty )\) is superlinear if and only if \(\beta \mapsto \mathtt{e}(\beta )\) is sublinear in the usual sense of (1.2). For random sets of the type \(Y=(-\infty ,\beta ]\), the superlinearity of \(\mathcal{U}(Y)=(-\infty ,\mathtt{u}(\beta )]\) corresponds to the univariate superlinearity of \(\beta \mapsto \mathtt{u}(\beta )\). This example shows that numerical sublinear expectations may be converted to both sublinear and superlinear set-valued ones depending on the choice of relevant random sets.
Example 3.10
Let \(X=[\beta ',\beta ]\) be a random interval on the line with \(\beta ,\beta '\in L^{p}(\mathbb{R})\) and let \(C=\{0\}\). Then the sublinear expectation \(\mathcal{E}(X)=[\mathtt{u}(\beta '),\mathtt{e}(\beta )]\) is the interval formed by a numerical superlinear expectation of \(\beta '\) and a numerical sublinear expectation of \(\beta \) such that \(\mathtt{u}\) is dominated by \(\mathtt{e}\), e.g. if \(\mathtt{u}\) and \(\mathtt{e}\) form an exact dual pair. The superlinear expectation \(\mathcal{U}(X)=[\mathtt{e}(\beta '),\mathtt{u}(\beta )]\) may be empty.
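The following sketch (an illustration only; the measures and payoffs are arbitrary choices) builds the interval-valued sublinear expectation of Example 3.10 from an exact dual pair on a finite probability space and checks the subadditivity inclusion (3.1) numerically.

```python
import numpy as np

# An exact dual pair (u, e) built from a finite family of densities
# (arbitrary illustrative data).
p = np.array([0.5, 0.3, 0.2])
densities = np.array([[1.0, 1.0, 1.0], [1.6, 0.6, 0.1], [0.2, 1.0, 3.0]])
assert np.allclose(densities @ p, 1.0)

def e(beta):          # numerical sublinear expectation
    return max(float(np.sum(p * g * beta)) for g in densities)

def u(beta):          # numerical superlinear expectation, u = -e(-.)
    return -e(-beta)

def E_interval(lower, upper):
    """Interval-valued sublinear expectation of [lower, upper] as in Example 3.10."""
    return (u(lower), e(upper))

x_lo, x_up = np.array([-1.0, 0.0, 2.0]), np.array([1.0, 2.0, 3.0])
y_lo, y_up = np.array([0.0, -1.0, 1.0]), np.array([0.5, 0.0, 4.0])

EX, EY = E_interval(x_lo, x_up), E_interval(y_lo, y_up)
EXY = E_interval(x_lo + y_lo, x_up + y_up)

# Subadditivity (3.1): E(X+Y) is contained in E(X) + E(Y).
assert EX[0] + EY[0] <= EXY[0] + 1e-12 and EXY[1] <= EX[1] + EY[1] + 1e-12
```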

3.3 Expectations of singletons

The additivity property on deterministic singletons immediately yields the following useful fact.
Lemma 3.11
We have \(\mathcal{E}(X+F)\supseteq \mathcal{E}(X)+F\) for each deterministic \(F\in \operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)\), and the same holds for the superlinear expectation.
Fix \(C=\{0\}\). Restricted to singletons, the sublinear expectation is a homogeneous map \(\xi \mapsto \mathcal{E}(\{\xi \})\) from \(L^{p}(\mathbb{R}^{d})\) to \(\operatorname{co}\mathcal{F}\) that satisfies
$$ \mathcal{E}(\{\xi +\xi '\})\subseteq \mathcal{E}(\{\xi \})+\mathcal{E}(\{\xi '\}),\qquad \xi ,\xi '\in L^{p}(\mathbb{R}^{d}). $$
Note that \(\mathcal{E}(\{\xi \})\) is not necessarily a singleton. If \(\mathcal{E}(\{\xi \})\) is a singleton for each \(\xi \in L^{p}( \mathbb{R}^{d})\), then \(\xi \mapsto \mathcal{E}(\{\xi \})\) is linear on \(L^{p}(\mathbb{R}^{d})\). Assuming in addition lower semicontinuity, the sublinear expectation then becomes the usual (linear) expectation.
The following result concerns the superlinear expectation of singletons. For a general cone \(C\), a similar result holds with singletons replaced by sets \(\xi +C\).
Proposition 3.12
Let \(C=\{0\}\). For each \(\xi \in L^{p}(\mathbb{R}^{d})\) and any normalised superlinear expectation \(\mathcal{U}\), the set \(\mathcal{U}(\{\xi \})\) is either empty or a singleton, and \(\mathcal{U}\) is additive on the family of singletons \(\{\xi \}\) with \(\mathcal{U}(\{\xi \})\neq \varnothing \).
Proof
By (3.2) applied to \(X=\{\xi \}\) and \(Y=\{-\xi \}\), we have
$$ \mathcal{U}(\{\xi \})+\mathcal{U}(\{-\xi \})\subseteq \mathcal{U}(\{0\})=\{0\}, $$
whence \(\mathcal{U}(\{\xi \})\) is either empty or a singleton, and then \(\mathcal{U}(\{-\xi \})=-\mathcal{U}(\{\xi \})\). If \(\mathcal{U}(\{\xi \})\) and \(\mathcal{U}(\{\xi '\})\) are singletons (and so are nonempty) for \(\xi ,\xi '\in L^{p}(\mathbb{R}^{d})\), then
$$ \mathcal{U}(\{\xi \})+\mathcal{U}(\{\xi '\})\subseteq \mathcal{U}(\{\xi +\xi '\}) =\mathcal{U}(\{\xi +\xi '\})+\mathcal{U}(\{-\xi '\})+\mathcal{U}(\{\xi '\}) \subseteq \mathcal{U}(\{\xi \})+\mathcal{U}(\{\xi '\}), $$
whence the first inclusion turns into an equality. □
In view of Proposition 3.12 and if we impose in addition upper semicontinuity on the superlinear expectation, \(\mathcal{U}(\{\xi \})\) equals \(\{\mathbb{E}[\xi ]\}\) or is empty for each \(p\)-integrable \(\xi \). The family of \(\xi \in L^{p}(\mathbb{R}^{d})\) such that \(\mathcal{U}(\{\xi \})=\{\mathbb{E}[\xi ]\}\) is then a convex cone in \(L^{p}(\mathbb{R}^{d})\).

3.4 Nonlinear expectations of random convex functions

A lower semicontinuous convex function \(f:\mathbb{R}^{d}\to [0,\infty ]\) yields a convex set \(T_{f}\) in \(\mathbb{R}^{d+1}\) uniquely identified by its support function
$$ h\big(T_{f},(t,x)\big)= \textstyle\begin{cases} t f(x/t), &\quad t>0, \\ 0, & \quad \text{otherwise}. \end{cases} $$
This support function is called the perspective transform of \(f\); see Hiriart-Urruty and Lemaréchal [17, Sect. IV.2.2]. Note that \(f\) can be recovered by letting \(t=1\) in the support function of \(T_{f}\).
If \(x \mapsto \xi (x)\) is a random nonnegative lower semicontinuous convex function on \(\mathbb{R}^{d}\), then its sublinear expectation can be defined as \(\mathcal{E}(T_{\xi })\), the sublinear expectation of the random closed convex set \(T_{\xi }\subseteq \mathbb{R}^{d+1}\), and the superlinear one is defined similarly. With this definition, all constructions from this paper apply to random functions.

4 Extensions of nonlinear expectations

4.1 Minimal extension

The minimal extension of a sublinear set-valued expectation \(\mathcal{E}\) on random sets from \(L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))\) is defined by
$$ \check{\mathcal{E}}(X)=\overline{\mathrm{co}}\Big(\bigcup _{\xi \in L^{p}(X)}\mathcal{E}(\xi +C)\Big), $$
(4.1)
where \(\overline{\mathrm{co}}\) denotes the closed convex hull operation. The extension is called minimal since it is the smallest sublinear expectation compatible with the values of the original expectation on sets \(\xi +C\). It extends a sublinear expectation defined on sets \(\xi +C\) to all \(p\)-integrable random closed sets \(X\) such that \(X=X+C\) a.s. In terms of support functions, the minimal extension is given by
$$ h\big(\check{\mathcal{E}}(X),u\big)=\sup _{\xi \in L^{p}(X)} h\big(\mathcal{E}(\xi +C),u\big),\qquad u\in \mathbb{R}^{d}. $$
(4.2)
Proposition 4.1
If \(\mathcal{E}\) is a sublinear expectation defined on random sets \(\xi +C\) for \(\xi \in L^{p}(\mathbb{R}^{d})\), then its minimal extension (4.1) is a sublinear expectation.
Proof
The additivity of \(\check{\mathcal{E}}\) on deterministic singletons follows from this property of \(\mathcal{E}\). For a deterministic \(F\in \operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)\),
$$ \check{\mathcal{E}}(F)=\overline{\mathrm{co}}\Big(\bigcup _{\xi \in L^{p}(F)}\mathcal{E}(\xi +C)\Big) \supseteq \bigcup _{x\in F}\mathcal{E}(x+C)\supseteq \bigcup _{x\in F}(x+C)\supseteq F. $$
The homogeneity and monotonicity properties of \(\check{\mathcal{E}}\) are obvious. The subadditivity follows from the fact that \(L^{p}(X+Y)\) is the \(L^{p}\)-closure of the sum \(L^{p}(X)+L^{p}(Y)\); see [25, Proposition 2.1.6]. □
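The support function formula (4.2) can be illustrated on a finite probability space. In the sketch below (all data are arbitrary illustrative choices, and \(C=\{0\}\) is assumed), the sublinear expectation of a singleton is taken to be the convex hull of its expectations under a finite family of measures; for a fixed direction, the supremum over selections of a polytope-valued \(X\) is attained at selections picking a vertex in every scenario, which makes both sides of (4.2) computable.

```python
import itertools
import numpy as np

# Finite family of measures on a two-point probability space
# (arbitrary illustrative data); C = {0} is assumed.
P = [np.array([0.5, 0.5]), np.array([0.8, 0.2]), np.array([0.3, 0.7])]

# Random polytope X: one vertex list per scenario.
X = [
    [np.array([0.0, 0.0]), np.array([2.0, 0.0]), np.array([0.0, 1.0])],
    [np.array([1.0, 1.0]), np.array([1.0, 3.0])],
]

def support(points, u):
    return max(float(np.dot(x, u)) for x in points)

def h_singleton(selection, u):
    """h(E({xi}), u) when E({xi}) is the convex hull of {E_Q[xi] : Q in P}."""
    return max(float(np.dot(sum(q_i * x_i for q_i, x_i in zip(Q, selection)), u))
               for Q in P)

u = np.array([0.6, -0.8])

# Left-hand side of (4.2): supremum over selections of X.  For each measure the
# selection can be optimised scenario-wise over the vertices, so running over
# vertex selections already attains the supremum.
lhs = max(h_singleton(sel, u) for sel in itertools.product(*X))

# The same value obtained by interchanging the two suprema:
# max over Q of E_Q[h(X, u)].
rhs = max(sum(q_i * support(F_i, u) for q_i, F_i in zip(Q, X)) for Q in P)

assert abs(lhs - rhs) < 1e-12
```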

4.2 Maximal extension

Extending a superlinear expectation \(\mathcal{U}\) from its values on half-spaces yields its maximal extension
$$ \hat{\mathcal{U}}(X)=\bigcap _{\eta \in L^{0}(\mathbb{S}^{d-1}\cap C^{o} )}\mathcal{U}\big(H_{\eta }(X)\big), $$
(4.3)
being the intersection of superlinear expectations of random half-spaces
$$ H_{\eta }(X)=H_{\eta }\big(h(X,\eta )\big) $$
almost surely containing \(X\in L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))\). The maximal extension is the largest superlinear expectation consistent with the values of the original one on half-spaces. Since \(H_{\eta }(h(X,\eta ))=H_{t\eta }(h(X,t\eta ))\) for all \(t>0\), it is possible to take the intersection in (4.3) over \(\eta \in L^{q}( C^{o} )\).
Proposition 4.2
If \(\mathcal{U}\) is superlinear on half-spaces with the same normal, that is,
$$ \mathcal{U}\big(H_{\eta }(\beta +\beta ')\big)\supseteq \mathcal{U}\big(H_{\eta }(\beta )\big)+\mathcal{U}\big(H_{\eta }(\beta ')\big) $$
(4.4)
for all \(\beta ,\beta '\in L^{p}(\mathbb{R})\) and \(\eta \in L^{0}(\mathbb{S}^{d-1}\cap C^{o} )\), and is scalarly upper semicontinuous on half-spaces with the same normal, that is,
$$ \limsup _{n\to \infty }\mathcal{U}\big(H_{\eta }(\beta _{n})\big)\subseteq \mathcal{U}\big(H_{\eta }(\beta )\big) $$
if \(\beta _{n}\to \beta \) in \(\sigma (L^{p},L^{q})\), then its maximal extension \(\hat{\mathcal{U}}\) given by (4.3) is superlinear and upper semicontinuous with respect to the scalar convergence of random closed convex sets. If \(\mathcal{U}\) is law-invariant on half-spaces, then \(\hat{\mathcal{U}}\) is law-invariant.
Proof
The additivity on deterministic singletons follows from the fact that we have \(H_{\eta }(X+a)=H_{\eta }(X)+a\) for all \(a\in \mathbb{R}^{d}\). If \(F\in \operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)\) is deterministic, then
$$ \hat{\mathcal{U}}(F)\subseteq \bigcap _{u\in \mathbb{S}^{d-1}\cap C^{o} }\mathcal{U}\big(H_{u}(F)\big) \subseteq \bigcap _{u\in \mathbb{S}^{d-1}\cap C^{o} } H_{u}\big(h(F,u)\big)=F. $$
The homogeneity and monotonicity properties of the extension are obvious. For two \(p\)-integrable random closed convex sets \(X\) and \(Y\), (4.4) yields that
$$ \hat{\mathcal{U}}(X+Y)=\bigcap _{\eta \in L^{0}(\mathbb{S}^{d-1}\cap C^{o} )}\mathcal{U}\big(H_{\eta }(X)+H_{\eta }(Y)\big) \supseteq \bigcap _{\eta }\Big(\mathcal{U}\big(H_{\eta }(X)\big)+\mathcal{U}\big(H_{\eta }(Y)\big)\Big) \supseteq \hat{\mathcal{U}}(X)+\hat{\mathcal{U}}(Y). $$
Assume that \((X_{n})\) scalarly converges to \(X\). Let \(x\in \limsup _{n\to \infty }\hat{\mathcal{U}}(X_{n})\) and let \((x_{n_{k}})\) converge to \(x\). Then \(x_{n_{k}}\in \hat{\mathcal{U}}(X_{n_{k}})\subseteq \mathcal{U}\big(H_{\eta }(X_{n_{k}})\big)\) for all \(\eta \in L^{0}(\mathbb{S}^{d-1}\cap C^{o} )\). Since \(h(X_{n_{k}}, \eta )\to h(X,\eta )\) in \(\sigma (L^{p},L^{q})\), scalar upper semicontinuity of \(\mathcal{U}\) on half-spaces yields that we have \(\limsup _{k\to \infty }\mathcal{U}\big(H_{\eta }(X_{n_{k}})\big)\subseteq \mathcal{U}\big(H_{\eta }(X)\big)\), whence \(x\in \mathcal{U}\big(H_{\eta }(X)\big)\) for all \(\eta \). Therefore \(x\in \hat{\mathcal{U}}(X)\), confirming the upper semicontinuity of the maximal extension. The law-invariance property is straightforward. □
It is possible to let \(\eta \) in (4.3) be deterministic and define
$$ \hat{\mathcal{U}}_{0}(X)=\bigcap _{u\in \mathbb{S}^{d-1}\cap C^{o} }\mathcal{U}\big(H_{u}(X)\big). $$
(4.5)
With this reduced maximal extension, the superlinear expectation is extended from its values on half-spaces with deterministic normal vectors. Note that the reduced maximal extension may be equal to the whole space, e.g. for \(X\) being a half-space \(H_{\eta }(0)\) with a nondeterministic normal. It is obvious that \(\hat{\mathcal{U}}(X)\subseteq \hat{\mathcal{U}}_{0}(X)\) and that \(\hat{\mathcal{U}}_{0}\) is constant-preserving. The reduced maximal extension is particularly useful for Hausdorff-approximable random closed sets.
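The reduced maximal extension (4.5) can be made concrete on a finite probability space with \(C=\{0\}\): take the superlinear expectation of a half-space \(H_{u}(\beta )\) to be \(H_{u}(\min _{Q}\mathbb{E}_{Q}[\beta ])\) for a finite family of measures (an illustrative choice that satisfies (4.4)); then a point belongs to the reduced maximal extension if and only if it satisfies the corresponding support bound in every direction, which the following sketch tests on a grid of directions (all numerical data are arbitrary).

```python
import numpy as np

# Finite family of measures and a random compact convex set X with two
# equally likely realisations (arbitrary illustrative data); C = {0}.
P = [np.array([0.5, 0.5]), np.array([0.9, 0.1])]
X = [
    [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])],  # triangle
    [np.array([0.5, 0.5]), np.array([1.5, 0.5])],                        # segment
]

def h(points, u):
    return max(float(np.dot(x, u)) for x in points)

def bound(u):
    """Support bound of the reduced maximal extension in direction u,
    using U(H_u(beta)) = H_u(min_Q E_Q[beta]) on half-spaces."""
    return min(sum(q_i * h(F_i, u) for q_i, F_i in zip(Q, X)) for Q in P)

# x belongs to the reduced maximal extension iff <x,u> <= bound(u) for every
# unit vector u; this is tested on a grid of directions.
angles = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
directions = [np.array([np.cos(t), np.sin(t)]) for t in angles]

def contained(x):
    return all(float(np.dot(x, u)) <= bound(u) + 1e-9 for u in directions)

print(contained(np.array([0.3, 0.3])), contained(np.array([2.0, 2.0])))  # True False
```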

4.3 Exact nonlinear expectations

It is possible to apply the maximal extension to the sublinear expectation and the minimal extension to the superlinear one, resulting in \(\hat{\mathcal{E}}\) and \(\check{\mathcal{U}}\). The monotonicity property yields that for each \(p\)-integrable random closed set \(X\),
$$ \check{\mathcal{E}}(X)\subseteq \mathcal{E}(X)\subseteq \hat{\mathcal{E}}(X), \qquad \check{\mathcal{U}}(X)\subseteq \mathcal{U}(X)\subseteq \hat{\mathcal{U}}(X). $$
(4.6)
It is easy to see that each extension is an idempotent operation, e.g. the minimal extension of \(\check{\mathcal{E}}\) coincides with \(\check{\mathcal{E}}\).
A sublinear expectation is said to be minimal (respectively, maximal) if it coincides with its minimal (respectively, maximal) extension. A superlinear expectation is said to be reduced maximal if \(\mathcal{U}=\hat{\mathcal{U}}_{0}\).
If (4.6) holds with the first two inclusions being equalities (that is, \(\mathcal{E}\) coincides with its minimal and maximal extensions), then \(\mathcal{E}\) is called exact. The same applies to superlinear expectations. Note that the selection expectation is exact on all integrable random closed convex sets, its minimality corresponds to (2.1) and maximality is (2.3).
Since random convex closed sets can be represented either as families of their selections or as intersections of half-spaces, the minimal representation of an exact nonlinear expectation may be considered its primal representation, while the maximal representation becomes the dual one.

5 Sublinear set-valued expectations

5.1 Duality for minimal sublinear expectations

The minimal sublinear expectation is determined by its restriction to random sets \(\xi +C\); the following result characterises such a restriction.
Lemma 5.1
A map \(\xi \mapsto \mathcal{E}(\xi +C)\) defined for \(\xi \in L^{p}(\mathbb{R}^{d})\) is a \(\sigma (L^{p},L^{q})\)-lower semicontinuous normalised sublinear expectation if and only if \(h(\mathcal{E}(\xi +C),u)=\infty \) for \(u\notin C^{o}\) and
$$ h\big(\mathcal{E}(\xi +C),u\big)=\sup _{\zeta \in \mathcal{Z}_{u},\,\mathbb{E}[\zeta ]=u}\mathbb{E}[\langle \zeta ,\xi \rangle ],\qquad u\in C^{o} , $$
where \(\mathcal{Z}_{u}\), \(u\in C^{o} \), are convex \(\sigma (L^{q},L^{p})\)-closed cones in \(L^{q}( C^{o} )\) such that
$$ \{\mathbb{E}[\zeta ]: \zeta \in \mathcal{Z}_{u}\}=\{tu : t\geq 0\} $$
for all \(u\neq 0\), \(\mathcal{Z}_{cu}=\mathcal{Z}_{u}\) for all \(c>0\), \(\mathcal{Z}_{0}=\{0\}\) and
$$ \mathcal{Z}_{u+v}\subseteq \mathcal{Z}_{u}+\mathcal{Z}_{v},\qquad u,v \in C^{o} . $$
(5.1)
Proof
(Sufficiency) For linearly independent \(u\) and \(v\) in \(\mathbb{R}^{d}\), each \(\zeta \in \mathcal{Z}_{u+v}\) satisfies \(\zeta =\zeta _{1}+\zeta _{2}\) with \(\mathbb{E}[\zeta _{1}]=t_{1}u\) and \(\mathbb{E}[\zeta _{2}]=t_{2}v\). Thus \(\mathbb{E}[\zeta ]=t(u+v)\) only if \(t_{1}=t_{2}=t\). Therefore,
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equad_HTML.png
Since \(\mathcal{Z}_{cu}=\mathcal{Z}_{u}=c\mathcal{Z}_{u}\) for any \(c>0\),
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equae_HTML.png
so the function https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq521_HTML.gif is sublinear in \(u\) and hence a support function. The additivity property on singletons follows from the construction since
$$ \sup _{\zeta \in \mathcal{Z}_{u}, \mathbb{E}[\zeta ]=u}\mathbb{E}[ \langle \zeta ,\xi +a\rangle ] =\sup _{\zeta \in \mathcal{Z}_{u}, \mathbb{E}[\zeta ]=u}\mathbb{E}[\langle \zeta ,\xi \rangle ]+\langle a,u \rangle $$
for each deterministic \(a\in \mathbb{R}^{d}\). Furthermore, https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq524_HTML.gif which implies that https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq525_HTML.gif . The homogeneity property is obvious. The function https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq526_HTML.gif is subadditive since
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equag_HTML.png
Finally, for \(u\in C^{o} \), the set \(\{\zeta \in \mathcal{Z}_{u} : \mathbb{E}[\zeta ]=u\}\) is closed in \(\sigma (L^{q},L^{p})\). Since https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq530_HTML.gif is the support function of the closed set \(\{\zeta \in \mathcal{Z}_{u} : \mathbb{E}[\zeta ]=u\}\) in the direction \(\xi \), it is lower semicontinuous as a function of \(\xi \) so that (3.3) holds.
(Necessity) By Proposition 3.2, the support function is infinite for \(u\notin C^{o} \). For \(u\in C^{o} \), let
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equah_HTML.png
The map https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq536_HTML.gif is sublinear from \(L^{p}(\mathbb{R}^{d})\) to \((-\infty ,\infty ]\). By sublinearity, \(\mathcal{A}_{u}\) is a convex cone in \(L^{p}(\mathbb{R}^{d})\), and \(\mathcal{A}_{cu}=\mathcal{A}_{u}\) for all \(c>0\). Furthermore, \(\mathcal{A}_{u}\) is closed with respect to the scalar convergence \(\xi _{n}+C\to \xi +C\) by the assumed lower semicontinuity of https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq545_HTML.gif . Hence it is closed with respect to the convergence \(\xi _{n}\to \xi \) in \(\sigma (L^{p},L^{q})\).
Note that \(0\in \mathcal{A}_{u}\) and let
$$ \mathcal{Z}_{u}= \{\zeta \in L^{q}(\mathbb{R}^{d}) : \mathbb{E}[ \langle \zeta ,\xi \rangle ]\leq 0\; \text{for all} \; \xi \in \mathcal{A}_{u} \} $$
be the polar cone of \(\mathcal{A}_{u}\). For \(u=0\), we have \(\mathcal{A}_{0}=L^{p}(\mathbb{R}^{d})\) and \(\mathcal{Z}_{0}=\{0\}\). Consider \(u\neq 0\). Letting \(\xi =a\mathbf{1}_{A}\) for an event \(A\) and a deterministic \(a\) such that \(\langle a,u\rangle \leq 0\), we obtain a member of \(\mathcal{A}_{u}\), whence each \(\zeta \in \mathcal{Z}_{u}\) satisfies \(\langle \mathbb{E}[\zeta ],a\mathbf{1}_{A}\rangle \leq 0\) whenever \(\langle a,u\rangle \leq 0\). Thus \(\zeta \in C^{o} \) a.s., and letting \(A=\Omega \) yields that \(\mathbb{E}[\zeta ]=tu\) for some \(t\geq 0\) and all \(\zeta \in \mathcal{Z}_{u}\). The subadditivity property of the support function of https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq567_HTML.gif yields that \(\mathcal{A}_{u+v}\supseteq (\mathcal{A}_{u}\cap \mathcal{A}_{v})\) for \(u,v\in C^{o} \). By a Banach space analogue of Schneider [33, Theorem 1.6.9], the polar of \(\mathcal{A}_{u}\cap \mathcal{A}_{v}\) is the closed sum \(\mathcal{Z}_{u}+\mathcal{Z}_{v}\) of the polars, whence (5.1) holds.
By the definition of \(\mathcal{A}_{u}\),
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equaj_HTML.png
Since \(\mathcal{A}_{u}\) is convex and \(\sigma (L^{p},L^{q})\)-closed, the bipolar theorem yields that
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equak_HTML.png
 □
Theorem 5.2
A function https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq575_HTML.gif is a scalarly lower semicontinuous minimal normalised sublinear expectation if and only if https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq576_HTML.gif admits the representation
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equ21_HTML.png
(5.2)
and https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq577_HTML.gif for \(u\notin C^{o} \), where \(C^{o} \) and the sets \(\mathcal{Z}_{u}\), \(u\in \mathbb{R}^{d}\), satisfy the conditions of Lemma 5.1.
Proof
(Necessity) Lemma 5.1 applies to the restriction of https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq582_HTML.gif onto random sets \(\xi +C\). By the minimality assumption, https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq584_HTML.gif coincides with its minimal extension (4.2). By Lemma 5.1, (4.2) and (2.4), for \(u\in C^{o} \),
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equal_HTML.png
(Sufficiency) The right-hand side of (5.2) is sublinear in \(u\) and so is a support function. The additivity on singletons, monotonicity, subadditivity and homogeneity properties of https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq587_HTML.gif are obvious. For a deterministic \(F\in \operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)\), the sublinearity of the support function yields that
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equam_HTML.png
whence https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq589_HTML.gif . The minimality of https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq590_HTML.gif follows from
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equan_HTML.png
Since the support function of https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq591_HTML.gif given by (5.2) is the supremum of scalarly continuous functions of \(X\), the minimal sublinear expectation is scalarly lower semicontinuous. □
Remark 5.3
The sets \(\mathcal{Z}_{u}\), \(u\in \mathbb{R}^{d}\), constructed in the proof of necessity in Lemma 5.1 are maximal sets representing the sublinear expectation.
Corollary 5.4
If \(u\in \mathcal{Z}_{u}\) for all \(u\in \mathbb{R}^{d}\), then https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq597_HTML.gif for all \(p\)-integrable \(X\) and any scalarly lower semicontinuous normalised minimal sublinear expectation https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq600_HTML.gif .
Proof
By (5.2), https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq601_HTML.gif for all \(u\in C^{o} \). □
Remark 5.5
The sublinear expectation given by (5.2) is law-invariant if and only if the sets \(\mathcal{Z}_{u}\) are law-complete, that is, with each \(\zeta \in \mathcal{Z}_{u}\), the set \(\mathcal{Z}_{u}\) contains all random vectors that have the same distribution as \(\zeta \). If \(p=\infty \), then the elements of \(\mathcal{Z}_{u}\) can be represented as vectors composed of probability measures absolutely continuous with respect to ℙ. This is also possible for \(p\in [1,\infty )\) using measures with \(p\)-integrable densities.
Example 5.6
Let \(Z\) be a random matrix with \(\mathbb{E}[Z]\) being the identity matrix, and let \(\mathcal{Z}_{u}=\{tZu^{\top }: t\geq 0\}\) for \(u\in C^{o} =\mathbb{R}^{d}\), where \(C=\{0\}\). Then (5.2) turns into the condition https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq616_HTML.gif , whence https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq617_HTML.gif . In this example, https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq618_HTML.gif is not solely determined by \(h(X,u)\). This sublinear expectation is not necessarily constant-preserving.
Example 5.7
Let \(X=H_{\eta }(\beta )\) with \(\beta \in L^{p}(\mathbb{R})\) and \(\eta \in L^{0}(\mathbb{S}^{d-1}\cap C^{o} )\) be a random half-space. By (5.2), https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq623_HTML.gif is finite for \(u\in \mathbb{S}^{d-1}\cap C^{o} \) only if each \(\zeta \in \mathcal{Z}_{u}\) with \(\mathbb{E}[\zeta ]=u\) satisfies \(\zeta =\gamma \eta \) a.s. with \(\gamma \in L^{q}(\mathbb{R}_{+})\). For such \(u\),
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equao_HTML.png
If the normal \(\eta =u\) is deterministic and
$$ \mathcal{Z}_{u}\subseteq \{\gamma u : \gamma \in L^{q}(\mathbb{R}_{+}) \}, $$
(5.3)
then https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq631_HTML.gif with
$$ t=\sup _{\gamma u\in \mathcal{Z}_{u},\mathbb{E}[\gamma ]=1} \mathbb{E}[ \gamma \beta ]. $$
Otherwise, https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq632_HTML.gif . Thus the sublinear expectation of a random half-space with a deterministic normal is either a half-space with the same normal or the whole space.

5.2 Exact sublinear expectation

Consider now the situation when for each \(u\), the value of https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq634_HTML.gif is solely determined by the distribution of \(h(X,u)\). This is the case if the supremum in (5.2) involves only \(\zeta \) such that \(\zeta =\gamma u\) for some \(\gamma \in L^{q}(\mathbb{R}_{+})\). The following result shows that this condition characterises constant-preserving minimal sublinear expectations, which then necessarily become exact ones.
Theorem 5.8
A mapping https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq639_HTML.gif is a scalarly lower semicontinuous constant-preserving minimal sublinear expectation if and only if https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq640_HTML.gif for \(u\notin C^{o} \) and
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equ23_HTML.png
(5.4)
where \(\mathcal{M}_{u}\), \(u\in C^{o} \), are convex \(\sigma (L^{q},L^{p})\)-closed cones in \(L^{q}(\mathbb{R}_{+})\) with \(\mathcal{M}_{cu}=\mathcal{M}_{u}\) for all \(c>0\) and \(\mathcal{M}_{u+v}\subseteq \mathcal{M}_{u}\cap \mathcal{M}_{v}\) for all \(u,v\in C^{o} \).
Proof
(Sufficiency) If \(\mathcal{M}_{u}\), \(u\in C^{o} \), satisfy the imposed conditions, then \(\mathcal{Z}_{u}\) given by \(\{\gamma u : \gamma \in \mathcal{M}_{u}\}\) for \(u\in C^{o} \) satisfy the conditions of Lemma 5.1. Indeed, \(\mathcal{Z}_{cu}=\mathcal{Z}_{u}\) for all \(c>0\) and
$$ \mathcal{Z}_{u+v}= \{\gamma (u+v) : \gamma \in \mathcal{M}_{u+v} \} \subseteq \{\gamma (u+v) : \gamma \in \mathcal{M}_{u}\cap \mathcal{M}_{v} \} \subseteq \mathcal{Z}_{u}+\mathcal{Z}_{v} $$
for all \(u,v\in C^{o} \). If \(F\in \operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)\) is deterministic, then
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equar_HTML.png
whence https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq659_HTML.gif is constant-preserving.
(Necessity) Since https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq660_HTML.gif is minimal, the support function of https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq661_HTML.gif is given by (5.2). The constant-preserving property yields that https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq662_HTML.gif for all half-spaces \(H_{u}(t)\) with \(u\in C^{o} \). By the argument from Example 5.7, the minimal sublinear expectation of a half-space \(H_{u}(t)\) is distinct from the whole space only if (5.3) holds.
The properties of \(\mathcal{Z}_{u}\) imply those of \(\mathcal{M}_{u}=\{\gamma : \gamma u\in \mathcal{Z}_{u}\}\). Indeed, assume that \(\gamma \in \mathcal{M}_{u+v}\) so that \(\gamma (u+v)\in \mathcal{Z}_{u+v}\). Hence \(\gamma (u+v)\in (\mathcal{Z}_{u}+\mathcal{Z}_{v})\), meaning that \(\gamma (u+v)\) is the limit of \(\gamma _{1n}u+\gamma _{2n}v\) for \(\gamma _{1n}u\in \mathcal{Z}_{u}\) and \(\gamma _{2n}v\in \mathcal{Z}_{v}\), \(n \in \mathbb{N}\). The linear independence of \(u\) and \(v\) yields \(\gamma _{1n}\to \gamma \) and \(\gamma _{2n}\to \gamma \), whence \(\gamma \in (\mathcal{M}_{u}\cap \mathcal{M}_{v})\). □
It is possible to rephrase (5.4) as
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equ24_HTML.png
(5.5)
for numerical sublinear expectations
$$ \mathtt{e}_{u}(\beta ) =\sup _{\gamma \in \mathcal{M}_{u},\mathbb{E}[ \gamma ]=1}\mathbb{E}[\gamma \beta ], \qquad \beta \in L^{p}(\mathbb{R}), $$
defined by an analogue of (1.5). Since the negative part of \(h(X,u)\) is \(p\)-integrable, it is possible to consistently let \(\mathtt{e}_{u}(h(X,u))=\infty \) in (5.5) if \(h(X,u)\) is not \(p\)-integrable.
Corollary 5.9
Each scalarly lower semicontinuous constant-preserving minimal sublinear expectation is exact.
Proof
Since (5.4) yields that https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq686_HTML.gif if \(\eta \) is random, the maximal extension of https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq688_HTML.gif by an analogue of (4.3) reduces to deterministic \(\eta \), and so https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq690_HTML.gif is the reduced maximal extension. For \(u\in \mathbb{S}^{d-1}\cap C^{o} \) and \(\beta \in L^{p}(\mathbb{R})\), we have https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq693_HTML.gif ; cf. Example 5.7. Thus the reduced maximal extension of https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq694_HTML.gif is given by
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equat_HTML.png
Comparing with (5.5), we see that https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq695_HTML.gif . The opposite inclusion is obvious, whence https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq696_HTML.gif . □
Corollary 5.10
If https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq697_HTML.gif is a scalarly lower semicontinuous constant-preserving minimal normalised sublinear expectation, then https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq698_HTML.gif for each deterministic \(F\in \operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)\).
Corollary 5.11
Let https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq700_HTML.gif be a scalarly lower semicontinuous constant-preserving minimal law-invariant sublinear expectation. Then we have https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq701_HTML.gif for all \(X\in L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))\) and any \(\sigma \)-algebra \(\mathfrak{H}\subseteq \mathfrak{F}\). In particular, https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq705_HTML.gif .
Proof
The law-invariance of https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq706_HTML.gif implies that \(\mathtt{e}_{u}\) is law-invariant. The sublinear expectation \(\mathtt{e}_{u}\) is dilatation-monotonic, meaning that \(\mathtt{e}_{u}(\mathbb{E}[\beta |\mathfrak{H}])\leq \mathtt{e}_{u}( \beta )\) for all \(\beta \in L^{p}(\mathbb{R})\); see Föllmer and Schied [10, Corollary 4.59] for this fact derived for risk measures. The statement follows from (5.5). □
The following result identifies the particularly important case when the families \(\mathcal{M}_{u}=\mathcal{M}\) do not depend on \(u\). This essentially means that the sublinear expectation preserves centred balls. Let \(B_{r}\) denote the ball of radius \(r\) centred at the origin.
Theorem 5.12
A scalarly lower semicontinuous constant-preserving minimal sublinear expectation https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq715_HTML.gif satisfies https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq716_HTML.gif for all \(\beta \in L^{p}(\mathbb{R}_{+})\) and some \(r\geq 0\) if and only if (5.4) holds with \(\mathcal{M}_{u}=\mathcal{M}\) for all \(u\neq 0\). Then
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equ25_HTML.png
(5.6)
where \(\mathtt{e}\) admits the representation (1.5). Furthermore,
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equ26_HTML.png
(5.7)
Proof
Assume that the \(\mathcal{M}_{u}\) are constructed as in the proof of Theorem 5.8 so that \(\mathcal{M}_{u}\) is maximal for each \(u\in C^{o} \). The right-hand side of
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equau_HTML.png
does not depend on \(u\in \mathbb{S}^{d-1}\cap C^{o} \) if and only if \(\mathcal{M}_{u}=\mathcal{M}\) for all \(u\in C^{o} \). The representation (5.6) follows from (5.5) with \(\mathcal{M}_{u}=\mathcal{M}\). In view of (1.5),
$$ \sup _{\gamma \in \mathcal{M},\mathbb{E}[\gamma ]=1} \mathbb{E}[h(\gamma X,u) ] =\sup _{\gamma \in \mathcal{M},\mathbb{E}[\gamma ]=1} \mathbb{E}[\gamma h(X,u) ] =\mathtt{e}\big(h(X,u)\big). $$
By (5.6), the support functions of both sides of (5.7) are identical. □
If \(X=\{\xi \}\) is a singleton, there is no need to take the convex hull on the right-hand side of (5.7).
Remark 5.13
Equality (5.6) can be viewed as a scalarisation of the sublinear expectation. Indeed, it represents the convex set https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq730_HTML.gif as an intersection of half-spaces \(H_{u}(\mathtt{e}(h(X,u)))\) and so provides a dual representation of https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq732_HTML.gif . Such scalarisations have been considered by Hamel and Heyde [13] and Hamel et al. [14] for set-valued risk measures, which are sublinear for the reverse inclusion. In that case, the exact equality may be violated, see (6.5) below, and the scalarisation is defined as the support function of the superlinear expectation.
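To illustrate the scalarisation (5.6) numerically, the following sketch computes the support-function values \(\mathtt{e}(h(X,u))\) on a grid of directions for a simulated random set; the choice \(X=\xi +[0,1]^{2}\) with Gaussian \(\xi \), the average-quantile expectation of Example 5.15 (with level \(\alpha =0.25\)) and the empirical estimates are assumptions made only for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def e_upper(sample, alpha=0.25, m=500):
    # average of the empirical quantiles at levels in (1 - alpha, 1),
    # cf. the average-quantile expectation of Example 5.15
    levels = 1 - alpha + alpha * (np.arange(m) + 0.5) / m
    return np.quantile(sample, levels).mean()

# X = xi + [0, 1]^2, so that h(X, u) = <xi, u> + max(0, u_1) + max(0, u_2)
xi = rng.normal(size=(100_000, 2))

def scalarisation(u):
    # e(h(X, u)): by (5.6), this is the support function of the sublinear
    # expectation of X in the direction u
    h_X = xi @ u + max(0.0, u[0]) + max(0.0, u[1])
    return e_upper(h_X)

# the set-valued sublinear expectation is the intersection of the half-spaces
# H_u(e(h(X, u))); print the bounding values for a few directions
for t in np.linspace(0.0, 2 * np.pi, 8, endpoint=False):
    u = np.array([np.cos(t), np.sin(t)])
    print(np.round(u, 3), round(scalarisation(u), 3))
```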
Example 5.14
For an integrable \(X\) and \(n\in \mathbb{N}\), consider the sublinear expectation
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equaw_HTML.png
where \(X_{1},\dots ,X_{n}\) are independent copies of \(X\). It is easy to see that https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq737_HTML.gif is a minimal constant-preserving sublinear expectation; it is given by (5.6) with the corresponding numerical sublinear expectation \(\mathtt{e}(\beta )\) being the expected maximum of \(n\) i.i.d. copies of \(\beta \in L^{1}(\mathbb{R})\). By Corollary 5.9, this sublinear expectation is exact.
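The numerical sublinear expectation of Example 5.14 is easy to simulate; the following Monte Carlo sketch (the Gaussian distribution of \(\beta \) and the sample size are assumptions) estimates \(\mathtt{e}(\beta )\) as the expected maximum of \(n\) i.i.d. copies.

```python
import numpy as np

rng = np.random.default_rng(1)

def e_max(copies):
    # e(beta) = E[max(beta_1, ..., beta_n)] estimated from i.i.d. columns
    return copies.max(axis=1).mean()

n, trials = 3, 200_000
beta_copies = rng.normal(size=(trials, n))   # n i.i.d. copies of beta ~ N(0, 1)
print(e_max(beta_copies))                    # about 0.846 = 3 / (2 * sqrt(pi))
```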
Example 5.15
For \(\alpha \in (0,1)\), let \(\mathcal{P}_{\alpha }\) be the family of random variables \(\gamma \) with values in \([0,\alpha ^{-1}]\) and such that \(\mathbb{E}[\gamma ]=1\). Furthermore, let ℳ be the cone generated by \(\mathcal{P}_{\alpha }\), that is, \(\mathcal{M}=\{t\gamma :\gamma \in \mathcal{P}_{\alpha }, t\geq 0\}\). In finance, the set \(\mathcal{P}_{\alpha }\) generates the average value-at-risk, which is the risk measure obtained by averaging quantiles; see Föllmer and Schied [10, Definition 4.43]. Similarly, the numerical sublinear expectation \(\mathtt{e}\) and superlinear expectation \(\mathtt{u}\) generated by this set ℳ are represented as average quantiles. Namely, \(\mathtt{e}(\beta )\) is the average of the quantiles of \(\beta \) at levels \(t\in (1-\alpha ,1)\), and \(\mathtt{u}(\beta )\) is the average of the quantiles at levels \(t\in (0,\alpha )\).
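The dual pair generated by \(\mathcal{P}_{\alpha }\) can be estimated from a sample by averaging empirical quantiles; in the sketch below the discretisation of the quantile levels and the standard normal test distribution are assumptions.

```python
import numpy as np

def e_alpha(sample, alpha, m=1000):
    # average of the quantiles at levels in (1 - alpha, 1)
    levels = 1 - alpha + alpha * (np.arange(m) + 0.5) / m
    return np.quantile(sample, levels).mean()

def u_alpha(sample, alpha, m=1000):
    # average of the quantiles at levels in (0, alpha)
    levels = alpha * (np.arange(m) + 0.5) / m
    return np.quantile(sample, levels).mean()

rng = np.random.default_rng(2)
beta = rng.normal(size=100_000)
print(e_alpha(beta, 0.1), u_alpha(beta, 0.1))   # roughly +1.75 and -1.75 for N(0, 1)
```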

6 Superlinear set-valued expectations

6.1 Duality for maximal superlinear expectations

Consider a superlinear expectation defined on \(L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))\). If \(C=\{0\}\), we deal with all \(p\)-integrable random closed convex sets. Recall that \(C^{o}\) is the polar cone of \(C\).
Theorem 6.1
A map https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq761_HTML.gif is a scalarly upper semicontinuous normalised maximal superlinear expectation if and only if
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equ27_HTML.png
(6.1)
for a collection of convex \(\sigma (L^{q},L^{p})\)-closed cones \(\mathcal{M}_{\eta }\subseteq L^{q}(\mathbb{R}_{+})\) parametrised by \(\eta \in L^{0}(\mathbb{S}^{d-1}\cap C^{o} )\) and such that \(\mathcal{M}_{u}\) is strictly larger than \(\{0\}\) for each deterministic \(\eta =u\in \mathbb{S}^{d-1}\cap C^{o} \).
Proof
(Necessity) Fix \(\eta \in L^{0}(\mathbb{S}^{d-1}\cap C^{o} )\) and let \(\mathcal{A}_{\eta }\) be the set of all \(\beta \in L^{p}(\mathbb{R})\) such that https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq771_HTML.gif contains the origin. Since https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq772_HTML.gif , we have \(0\in \mathcal{A}_{\eta }\). Since https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq774_HTML.gif , the family \(\mathcal{A}_{u}\) does not contain \(\beta =t\) for \(t<0\) and \(u\in \mathbb{S}^{d-1}\cap C^{o} \).
If \(\beta _{n}\to \beta \) in \(\sigma (L^{p},L^{q})\), then \(\mathbb{E}[h(H_{\eta }(\beta _{n}),\gamma \eta )]\to \mathbb{E}[h(H_{\eta }(\beta ),\gamma \eta )]\) for all \(\gamma \in L^{q}(\mathbb{R})\), whence \(H_{\eta }(\beta _{n})\to H_{\eta }(\beta )\) scalarly in \(\sigma (L^{p},L^{q})\). Therefore,
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equax_HTML.png
by the assumed upper semicontinuity of https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq785_HTML.gif . Thus \(\mathcal{A}_{\eta }\) is a convex \(\sigma (L^{p},L^{q})\)-closed cone in \(L^{p}(\mathbb{R})\). Consider its positive dual cone
$$ \mathcal{M}_{\eta }= \{\gamma \in L^{q}(\mathbb{R}) : \mathbb{E}[\gamma \beta ]\geq 0\; \text{for all}\; \beta \in \mathcal{A}_{\eta }\}. $$
Since https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq789_HTML.gif , we have https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq790_HTML.gif whenever \(C\subseteq X\) a.s. In view of this, if \(\beta \) is a.s. nonnegative, then \(H_{\eta }(\beta )\) a.s. contains zero and so \(\beta \in \mathcal{A}_{\eta }\). Thus each \(\gamma \) from \(\mathcal{M}_{\eta }\) is a.s. nonnegative. The bipolar theorem yields that
$$ \mathcal{A}_{\eta }= \{\beta \in L^{p}(\mathbb{R}) : \mathbb{E}[\gamma \beta ]\geq 0\; \text{for all}\; \gamma \in \mathcal{M}_{\eta }\}. $$
(6.2)
Since \((-t)\notin \mathcal{A}_{u}\), (6.2) implies that the cone \(\mathcal{M}_{u}\) is strictly larger than \(\{0\}\). Since https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq800_HTML.gif is assumed to be maximal, (4.3) implies that
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equaz_HTML.png
(Sufficiency) It is easy to check that https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq801_HTML.gif given by (6.1) is additive on deterministic singletons, homogeneous and monotonic. If \(F\in \operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)\) is deterministic, then letting \(\eta =u\) in (6.1) be deterministic and using the nontriviality of \(\mathcal{M}_{u}\) yields that https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq805_HTML.gif . Furthermore, https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq806_HTML.gif , since https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq807_HTML.gif contains the origin and so is not empty. The superadditivity of https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq808_HTML.gif follows from the fact that
$$\begin{aligned} & \{x: \langle x,\mathbb{E}[\gamma \eta ] \rangle \leq \mathbb{E}[h(X, \gamma \eta ) ] +\mathbb{E}[h(Y,\gamma \eta ) ] \} \\ & \supseteq \{x: \langle x,\mathbb{E}[\gamma \eta ] \rangle \leq \mathbb{E}[h(X,\gamma \eta ) ] \} + \{x: \langle x,\mathbb{E}[\gamma \eta ] \rangle \leq \mathbb{E}[h(Y,\gamma \eta ) ] \}. \end{aligned}$$
It is easy to see that https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq809_HTML.gif coincides with its maximal extension.
Note that (6.1) is equivalently written as
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equbb_HTML.png
If \((X_{n})\) scalarly converges to \(X\) and \(x_{n_{k}}\to x\) for https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq813_HTML.gif , \(k\in \mathbb{N}\), then \(\mathbb{E}[h(X_{n}-x_{n},\gamma \eta )]\) converges to \(\mathbb{E}[h(X-x,\gamma \eta )]\) for all \(\gamma \in L^{q}(\mathbb{R}_{+})\) and \(\eta \) from \(L^{0}(\mathbb{S}^{d-1}\cap C^{o} )\). Thus \(\mathbb{E}[h(X-x,\gamma \eta )]\geq 0\), whence https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq821_HTML.gif , and the upper semicontinuity of https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq822_HTML.gif follows. □
In contrast to the sublinear case (see Theorem 5.2), the cones \(\mathcal{M}_{\eta }\) from Theorem 6.1 need not satisfy additional conditions like those imposed in Lemma 5.1. However, if the intersection in (6.1) is taken over all \(\eta \in L^{q}( C^{o} )\), then one must require that \(\mathcal{M}_{\beta \eta }=\{\gamma /\beta : \gamma \in \mathcal{M}_{\eta }\}\) for all \(\beta \in L^{p}((0,\infty ))\).
Corollary 6.2
If \(1\in \mathcal{M}_{\eta }\) for all \(\eta \), then https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq829_HTML.gif for all \(p\)-integrable \(X\) and any scalarly upper semicontinuous maximal normalised superlinear expectation https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq832_HTML.gif .
Proof
Restrict the intersection in (6.1) to deterministic \(\eta =u\) and \(\gamma =1\), so that the right-hand side of (6.1) becomes \(\mathbb{E}[X]\). □
Example 6.3
Let \(X=H_{\eta }(\beta )\) be the half-space with normal \(\eta \in L^{0}(\mathbb{S}^{d-1})\) and \(\beta \in L^{p}(\mathbb{R})\). If \(C=\{0\}\), the maximal superlinear expectation of \(X\) is given by
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equbc_HTML.png
Assume that \(d=2\) and let \(\eta =(1,\pi )/\sqrt{1+\pi ^{2}}\) with \(\pi \) being an almost surely positive random variable. This example represents the case of two currencies exchangeable at rate \(\pi \) without transaction costs. We then have
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equbd_HTML.png
where \(\mathtt{u}\) is the numerical superlinear expectation with the representing set
$$ \mathcal{M}= \{\gamma /\sqrt{1+\pi ^{2}} : \gamma \in \mathcal{M}_{\eta }\}. $$
In particular, if \(\beta =0\) a.s., then the random set \(H_{\eta }(0)\) describes all portfolios available at price zero for two currencies with the exchange rate \(\pi \), and
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equbf_HTML.png
Hence https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq849_HTML.gif with \(w'=(1,\mathtt{e}(\pi ))\) and \(w''=(1,\mathtt{u}(\pi ))\) for the exact dual pair \(\mathtt{e}\) and \(\mathtt{u}\) of nonlinear expectations with the representing set ℳ.
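As a numerical sketch of Example 6.3, the vectors \(w'=(1,\mathtt{e}(\pi ))\) and \(w''=(1,\mathtt{u}(\pi ))\) can be estimated once a representing set is fixed; here ℳ is taken to be generated by \(\mathcal{P}_{\alpha }\) from Example 5.15 and the exchange rate \(\pi \) is lognormal, both choices being assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def e_alpha(sample, alpha, m=1000):
    levels = 1 - alpha + alpha * (np.arange(m) + 0.5) / m
    return np.quantile(sample, levels).mean()     # numerical sublinear expectation

def u_alpha(sample, alpha, m=1000):
    levels = alpha * (np.arange(m) + 0.5) / m
    return np.quantile(sample, levels).mean()     # numerical superlinear expectation

alpha = 0.25
pi = rng.lognormal(mean=0.0, sigma=0.2, size=100_000)   # a.s. positive exchange rate

w_prime = np.array([1.0, e_alpha(pi, alpha)])    # (1, e(pi))
w_second = np.array([1.0, u_alpha(pi, alpha)])   # (1, u(pi))
print(w_prime, w_second)                         # e(pi) >= E[pi] >= u(pi)
```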

6.2 Reduced maximal extension

The following result can be proved similarly to Theorem 6.1 for the reduced maximal extension from (4.5).
Theorem 6.4
A map https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq854_HTML.gif is a scalarly upper semicontinuous normalised reduced maximal superlinear expectation if and only if
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equ29_HTML.png
(6.3)
for a collection of nontrivial convex \(\sigma (L^{q},L^{p})\)-closed cones \(\mathcal{M}_{v}\subseteq L^{q}(\mathbb{R}_{+})\) parametrised by \(v\in \mathbb{S}^{d-1}\cap C^{o} \).
It is possible to take the intersection in (6.3) over all \(v\in \mathbb{S}^{d-1}\) since \(h(X,v)=\infty \) for \(v\notin C^{o} \). The representation (6.3) can be equivalently written as the intersection of the half-spaces \(\{x : \langle x,v\rangle \leq \mathtt{u}_{v}(h(X,v))\}\), where
$$ \mathtt{u}_{v}(\beta )=\inf _{\gamma \in \mathcal{M}_{v},\mathbb{E}[ \gamma ]=1}\mathbb{E}[\gamma \beta ] $$
(6.4)
is a superlinear univariate expectation of \(\beta \in L^{p}(\mathbb{R})\) for each \(v\in \mathbb{S}^{d-1}\cap C^{o} \). The superlinear expectation (6.3) is law-invariant if and only if the families \(\mathcal{M}_{v}\) are law-complete for all \(v\) or, equivalently, if \(\mathtt{u}_{v}\) is law-invariant for all \(v\).
Corollary 6.5
Let https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq868_HTML.gif be a scalarly upper semicontinuous law-invariant normalised reduced maximal superlinear expectation, and let the probability space be non-atomic. Then https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq869_HTML.gif is dilatation-monotonic, meaning that for each sub-\(\sigma \)-algebra \(\mathfrak{H}\subseteq \mathfrak{F}\) and all \(X\in L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))\),
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equbg_HTML.png
In particular, https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq873_HTML.gif .
Proof
Since \(\mathtt{u}_{v}(\beta )\) given by (6.4) is a law-invariant concave function of \(\beta \in L^{p}(\mathbb{R})\) and the probability space is non-atomic, it is dilatation-monotonic, meaning that \(\mathtt{u}_{v}(\mathbb{E}[\xi |\mathfrak{H}])\geq \mathtt{u}_{v}(\xi )\); see Föllmer and Schied [10, Corollary 4.59]. Hence,
$$ \mathtt{u}_{v}\big(h(X,v)\big)\leq \mathtt{u}_{v} \big(\mathbb{E}[h(X,v) |\mathfrak{H}]\big) =\mathtt{u}_{v}\big(h (\mathbb{E}[X| \mathfrak{H}],v )\big). $$
Thus the infimum on the right-hand side of (6.3) written for https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq877_HTML.gif is dominated by the infimum corresponding to https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq878_HTML.gif . This implies the inclusion of the two sets. □
Example 6.6
If \(\mathcal{M}_{v}=\mathcal{M}\) in (6.3) is nontrivial and does not depend on \(v\), then (6.3) turns into
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equbi_HTML.png
where \(\mathtt{u}\) given by (6.4) is the numerical superlinear expectation with the representing set ℳ. In this case, https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq882_HTML.gif is the largest convex set whose support function is dominated by \(\mathtt{u}(h(X,v))\), that is,
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equ31_HTML.png
(6.5)
Note that \(\mathtt{u}(h(X,\cdot ))\) may fail to be a support function. The left-hand side of (6.5) is the scalarisation of the superlinear expectation https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq885_HTML.gif ; cf. Hamel et al. [13, 14]. Since
$$ \bigcap _{v\in \mathbb{S}^{d-1}\cap C^{o} } \{x : \langle x,v \rangle \leq \mathbb{E}[\gamma h(X,v) ] \}=\mathbb{E}[\gamma X] $$
for \(X\in L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))\), this reduced maximal superlinear expectation admits an equivalent representation as
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equ32_HTML.png
(6.6)
Example 6.7
Let \(X=\xi +C\) for a \(\xi \in L^{p}(\mathbb{R}^{d})\) and a deterministic convex closed cone \(C\) that is different from the whole space. Then
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equbk_HTML.png
If \(\mathcal{M}_{v}=\mathcal{M}\) for all \(v\in \mathbb{S}^{d-1}\cap C^{o} \), then \(\mathtt{u}_{v}=\mathtt{u}\) and
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equbl_HTML.png
A cone \(C\) is said to be a Riesz cone (or lattice cone) if \(\mathbb{R}^{d}\) with the partial order generated by \(C\) is a Riesz space (or a vector lattice), that is, the maximum of any two points from \(\mathbb{R}^{d}\) is well defined. If this is the case, then https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq897_HTML.gif for some \(x\), since an intersection of translations of \(C\) is again a translation of \(C\); see Aliprantis and Tourky [1, Theorem 1.16].
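A minimal numerical illustration of the Riesz-cone property for \(C=\mathbb{R}_{-}^{d}\): the intersection of finitely many translates \(x_{i}+C\) is again a translate of \(C\), with apex given by the coordinatewise minimum. The sample points below are arbitrary.

```python
import numpy as np

# translates x_i + C of C = R_-^d are the lower orthants {y : y <= x_i}
x = np.array([[1.0, 3.0, -2.0],
              [0.5, 4.0,  1.0],
              [2.0, 2.5,  0.0]])

apex = x.min(axis=0)      # the intersection equals apex + C
print(apex)               # [ 0.5  2.5 -2. ]

# sanity check: a point y lies in every translate iff y <= apex coordinatewise
y = apex - 0.1
print(all(np.all(y <= row) for row in x))   # True
```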
Example 6.8
Let https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq901_HTML.gif for \(n\) independent copies of \(X\), noticing that the expectation is empty if the intersection \(X_{1}\cap \cdots \cap X_{n}\) is empty with positive probability. This superlinear expectation is not a reduced maximal one. For instance,
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equbm_HTML.png
so that the reduced maximal extension https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq905_HTML.gif is the largest convex set whose support function is dominated by https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq906_HTML.gif , \(v\in \mathbb{S}^{d-1}\). However, the support function of \(\mathbb{E}[X_{1}\cap \cdots \cap X_{n}]\) is the expectation of the largest sublinear function dominated by \(\min (h(X_{i},v), i=1,\dots ,n)\), and so https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq910_HTML.gif may be a strict subset of https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq911_HTML.gif .
For instance, let \(X=\xi +\mathbb{R}_{-}^{d}\) for \(\xi \in L^{p}(\mathbb{R}^{d})\). Then
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equbn_HTML.png
where the minimum is applied coordinatewise to independent copies of \(\xi \), while https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq915_HTML.gif is the largest convex set whose support function is dominated by the function \(v \mapsto \mathbb{E}[\min (\langle \xi _{i},v\rangle ,i=1,\dots ,n)]\) for \(v\in \mathbb{R}_{+}^{d}\). Obviously,
$$ \min (\langle \xi _{i},v\rangle ,i=1,\dots ,n ) \geq \langle \min ( \xi _{1},\dots ,\xi _{n}),v \rangle $$
with a possibly strict inequality.
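The possibly strict inequality can be seen in a Monte Carlo sketch; the Gaussian distribution of \(\xi \) and the choice \(n=d=2\), \(v=(1,1)\) are assumptions made only for this illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, trials = 2, 2, 200_000
v = np.array([1.0, 1.0])                # a direction in R_+^d

xi = rng.normal(size=(trials, n, d))    # n independent copies of xi in each trial

lhs = np.min(xi @ v, axis=1).mean()     # E[min_i <xi_i, v>]
rhs = v @ xi.min(axis=1).mean(axis=0)   # <E[coordinatewise min of xi_1, ..., xi_n], v>
print(lhs, rhs)                         # about -0.80 versus -1.13, so lhs > rhs
```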

6.3 Minimal extension of a superlinear expectation

In any nontrivial case, the superlinear expectation of a nondeterministic singleton is empty.
Proposition 6.9
Let https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq918_HTML.gif be a normalised superlinear expectation satisfying the conditions of Proposition 4.2. Then https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq919_HTML.gif for \(\xi \in L^{p}(\mathbb{R}^{d})\) only if
$$ \sup _{\gamma \in \mathcal{M}_{-v},\mathbb{E}[\gamma ]=1} \mathbb{E}[ \langle \xi ,\gamma v\rangle ] \leq \inf _{\gamma \in \mathcal{M}_{v}, \mathbb{E}[\gamma ]=1} \mathbb{E}[\langle \xi ,\gamma v\rangle ] $$
(6.7)
for all \(v\in \mathbb{S}^{d-1}\).
Proof
By a variant of Proposition 4.2 for the reduced maximal extension, this extension satisfies the conditions of Theorem 6.4 and hence admits the representation (6.3). If \(\xi \in L^{p}(\mathbb{R}^{d})\), then (6.3) yields that
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equbp_HTML.png
which is not empty only if (6.7) holds. □
In the setting of Example 6.6, https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq923_HTML.gif is empty unless \(\mathtt{u}(\langle \xi ,v\rangle )+\mathtt{u}(-\langle \xi ,v \rangle )\) is nonnegative for all \(v\). The latter means that \(\mathtt{u}(\langle \xi ,v\rangle )=\mathtt{e}(\langle \xi ,v \rangle )\) for the exact dual pair of real-valued nonlinear expectations. Equivalently, https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq927_HTML.gif if \(\mathbb{E}[\gamma \xi ]\neq \mathbb{E}[\gamma '\xi ]\) for some \(\gamma ,\gamma '\in \mathcal{M}\). If this is the case for all \(\xi \in L^{p}(X)\), then the minimal extension of https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq931_HTML.gif is the set \(F_{X}\) of fixed points of \(X\); see Example 3.8. Thus it is not feasible to come up with a nontrivial minimal extension of the superlinear expectation if \(C=\{0\}\).
A possible way to ensure nonemptiness of the minimal extension is to apply it to random sets \(X\) from \(L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))\) with a cone \(C\) having interior points, since then at least one of \(h(X,v)\) and \(h(X,-v)\) is almost surely infinite for all \(v\in \mathbb{S}^{d-1}\). The minimal extension of https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq941_HTML.gif is given by
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equ34_HTML.png
(6.8)
The following result implies in particular that the union on the right-hand side of (6.8) is a convex set; cf. (4.1).
Theorem 6.10
Let https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq942_HTML.gif be a scalarly upper semicontinuous law-invariant normalised reduced maximal superlinear expectation, and let the probability space be non-atomic. Then the minimal extension https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq943_HTML.gif given by (6.8) is a law-invariant superlinear expectation.
Proof
Let \(x\) and \(x'\) belong to the union on the right-hand side of (6.8) (without closure). Then https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq946_HTML.gif and https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq947_HTML.gif for \(\xi ,\xi '\in L^{p}(X)\), and the superlinearity of https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq949_HTML.gif yields that
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equbq_HTML.png
for each \(t\in [0,1]\). Since \(t\xi +(1-t)\xi '\) is a selection of \(X\), the convexity of https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq953_HTML.gif easily follows. Additivity on deterministic singletons, monotonicity and homogeneity are evident from (6.8). If \(F\in \operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)\) is deterministic, the dilatation-monotonicity of https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq955_HTML.gif (see Corollary 6.5) yields that
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equbr_HTML.png
For the superadditivity property, consider \(x\) and \(y\) from the nonclosed right-hand side of (6.8) for \(X\) and \(Y\), respectively. Then https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq960_HTML.gif for some \(\xi \in L^{p}(X)\) and https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq962_HTML.gif for some \(\xi '\in L^{p}(Y)\). Hence,
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equbs_HTML.png
Finally, let \(\mathfrak{F}_{X}\) be the \(\sigma \)-algebra generated by \(X\), that is, \(\mathfrak{F}_{X}\) is generated by the events \(\{X\cap K\neq \varnothing \}\) for all compact sets \(K\) in \(\mathbb{R}^{d}\). The convexity of \(X\) implies that \(\mathbb{E}[\xi |\mathfrak{F}_{X}]\) is a selection of \(X\) for any \(\xi \in L^{p}(X)\). By the dilatation-monotonicity from Corollary 6.5, it is possible to replace \(\xi \in L^{p}(X)\) in (6.8) with an \(\mathfrak{F}_{X}\)-measurable \(p\)-integrable selection of \(X\). The families of \(\mathfrak{F}_{X}\)-measurable selections of \(X\) and \(\mathfrak{F}_{Y}\)-measurable selections of \(Y\) coincide for two identically distributed random sets \(X\) and \(Y\); see Molchanov [25, Proposition 1.4.5]. □
Below we establish the upper semicontinuity of the minimal extension.
Theorem 6.11
Assume that \(p\in (1,\infty ]\), https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq986_HTML.gif satisfies the conditions imposed in Theorem 6.10, and that https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq987_HTML.gif for all nontrivial \(\xi \in L^{p}(C)\). Then the minimal extension https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq989_HTML.gif is scalarly upper semicontinuous.
Proof
It suffices to omit the closure in (6.8) and consider https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq990_HTML.gif with \(x_{n}\to x\) and \(X_{n}\to X\) scalarly in \(\sigma (L^{p},L^{q})\). For each \(n\in \mathbb{N}\), there exists a \(\xi _{n}\in L^{p}(X_{n})\) such that https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq996_HTML.gif .
Assume first that \(p\in (1,\infty )\) and \(\sup _{n\in \mathbb{N}}\mathbb{E}[|\xi _{n}|^{p}]<\infty \). Then \((\xi _{n})_{n\in \mathbb{N}}\) is relatively compact in \(\sigma (L^{p},L^{q})\). Without loss of generality (if necessary, passing to subsequences), assume that \((\xi _{n})\) converges to \(\xi \) in \(\sigma (L^{p},L^{q})\). Since \(\langle \xi _{n},\zeta \rangle \leq h(X_{n},\zeta )\) for all \(\zeta \in L^{q}( C^{o} )\), taking expectations, letting \(n\to \infty \) and using the convergence \(\xi _{n}\to \xi \) and \(X_{n}\to X\) yields that \(\mathbb{E}[h(\xi ,\zeta )]\leq \mathbb{E}[h(X,\zeta )]\). By Lemma 2.4, \(\xi \) is a selection of \(X\). By the upper semicontinuity of https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1012_HTML.gif , the upper limit of https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1013_HTML.gif is a subset of https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1014_HTML.gif . Hence https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1015_HTML.gif for some \(\xi \in L^{p}(X)\) so that https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1017_HTML.gif .
Assume now that \(\|\xi _{n}\|_{p}^{p}=\mathbb{E}[| \xi _{n} |^{p}]\to \infty \). Let \(\xi '_{n}=\xi _{n}/\|\xi _{n}\|_{p}\). This sequence is bounded in the \(L^{p}\)-norm, and so we can assume without loss of generality that \(\xi '_{n}\to \xi '\) in \(\sigma (L^{p},L^{q})\). Since
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equbt_HTML.png
the upper semicontinuity of https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1023_HTML.gif yields that https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1024_HTML.gif . For each \(\zeta \in L^{q}( C^{o} )\), we have \(\langle \xi _{n},\zeta \rangle \leq h(X_{n},\zeta )\). Dividing by \(\|\xi _{n}\|_{p}\), taking expectations and letting \(n\to \infty \) yields that \(\mathbb{E}[\langle \xi ',\zeta \rangle ]\leq 0\). Thus \(\xi '\in C\) almost surely. Given that \(\mathbb{E}[\|\xi '\|]=1\), this contradicts the fact that https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1032_HTML.gif contains the origin.
The proof for \(p=\infty \) follows the exact same steps, splitting the cases when \(\sup _{n\in \mathbb{N}} | \xi _{n} |\) is essentially bounded (in which case the sequence is relatively compact in \(\sigma (L^{\infty },L^{1})\)) and when the essential supremum of \((\xi _{n})\) converges to infinity. □
The case \(p=1\) is excluded in Theorem 6.11 since relative compactness in \(L^{1}\) requires uniform integrability, which is a stronger condition than boundedness in \(L^{1}\).
The exact calculation of https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1040_HTML.gif involves working with all \(p\)-integrable selections of \(X\), which is a very rich family even in simple cases like \(X=\xi +C\). Since
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equ35_HTML.png
(6.9)
the superlinear expectation https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1044_HTML.gif yields a computationally tractable upper bound on https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1045_HTML.gif .
Example 6.12
Consider \(\xi \in L^{p}(\mathbb{R}^{d})\) and a deterministic \(F\in \operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)\). Assume that https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1048_HTML.gif in (6.8) satisfies the conditions of Corollary 6.5. Then
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equ36_HTML.png
(6.10)
where \(L^{p}(F,\sigma (\xi ))\) is the family of selections of \(F\) which are measurable with respect to the \(\sigma \)-algebra \(\sigma (\xi )\) generated by \(\xi \). Indeed, for each \(\xi '\in L^{p}(F)\), the dilatation-monotonicity of https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1055_HTML.gif yields that https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1056_HTML.gif . It remains to note that \(\mathbb{E}[\xi '|\sigma (\xi )]\in L^{p}(F,\sigma (\xi ))\).
Note that the minimal extension https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1058_HTML.gif of a reduced maximal superlinear expectation is not necessarily a maximal superlinear expectation itself. The following result describes its reduced maximal extension.
Theorem 6.13
Assume that https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1059_HTML.gif is defined by (6.8), where https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1060_HTML.gif is a scalarly upper semicontinuous reduced maximal superlinear expectation with representation (6.6). Then https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1061_HTML.gif for all \(v\in \mathbb{S}^{d-1}\cap C^{o} \) and \(\beta \in L^{p}(\mathbb{R})\), and the reduced maximal extension of https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1064_HTML.gif coincides with https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1065_HTML.gif .
Proof
By (6.3), https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1066_HTML.gif . In view of (6.9), it suffices to show that each \(x\in H_{v}(\mathtt{u}(\beta ))\) also belongs to https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1068_HTML.gif . Let \(y\) be the projection of \(x\) onto the subspace orthogonal to \(v\). It suffices to show that https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1072_HTML.gif . Noticing that \(H_{v}(\beta )-y=H_{v}(\beta )\), it is possible to assume that \(x=tv\) for \(t\leq \mathtt{u}(\beta )\). Consider \(\xi =\beta v\). Then
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equbu_HTML.png
Since \(\langle tv,w\rangle \leq \langle v,w\rangle \mathtt{u}(\beta )\), we deduce that https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1078_HTML.gif . Since https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1079_HTML.gif and https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1080_HTML.gif coincide on half-spaces, the reduced maximal extension of https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1081_HTML.gif is
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equbv_HTML.png
 □
In general, https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1082_HTML.gif may be a strict subset of https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1083_HTML.gif as the following example shows; so superlinear expectations are not necessarily exact even on rather simple random sets of the type \(\xi +C\).
Example 6.14
Consider a random vector \(\xi \) in \(\mathbb{R}^{2}\) that takes two values, the origin and \(a=(a_{1},a_{2})\), with equal probabilities. Let \(X=\xi +C\), where \(C\) is the cone containing \(\mathbb{R}_{-}^{2}\) and with points \((1,-\pi )\) and \((-\pi ',1)\) on its boundary such that \(\pi ,\pi '>1\).
Let \(\mathcal{M}_{v}=\mathcal{M}\) be the family from Example 5.15 and let \(\mathtt{u}\) be the superlinear expectation with the representing set ℳ. For each \(\beta \in L^{1}(\mathbb{R})\), \(\mathtt{u}(\beta )\) equals the average of the \(t\)-quantiles of \(\beta \) over \(t\in (0,\alpha )\). If \(\alpha \in (0,1/2]\) and \(\beta \) takes two values with equal probabilities, then \(\mathtt{u}(\beta )\) is the smaller value of \(\beta \). Then https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1104_HTML.gif so that https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1105_HTML.gif coincides with https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1106_HTML.gif in this case.
Now assume that \(\alpha \in (1/2,1)\). If \(\beta \) with equal probabilities takes two values \(t\) and \(s\), then \(\mathtt{u}(\beta )=\max (t,s)-|t-s|/(2\alpha )\) and
$$ \mathtt{u}(\langle \xi ,v\rangle )=\max (\langle a,v\rangle ,0 ) - \frac{1}{2\alpha } |\langle a,v\rangle | $$
for all \(v\) from \(C^{o}\). Since \(C\) is a Riesz cone, https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1115_HTML.gif for some \(x\); see Example 6.7. For \(v\in C^{o} \), the value \(\langle x,v\rangle \) is dominated by \(\mathtt{u}(\langle \xi ,v\rangle )\), which equals \(\frac{1}{2\alpha }\langle a,v\rangle \) if \(\langle a,v\rangle <0\) and \((1-\frac{1}{2\alpha })\langle a,v\rangle \) otherwise. By an elementary calculation,
$$ x=\frac{1}{2\alpha }a+\bigg(\frac{1}{\alpha }-1\bigg) \frac{a_{1}\pi '+a_{2}}{\pi \pi '-1} (-\pi ',1). $$
In view of Example 6.12, for the minimal extension, it suffices to consider selections of \(C\) which are measurable with respect to the \(\sigma \)-algebra \(\sigma (\xi )\) generated by \(\xi \); these selections take two values from the boundary of \(C\) with equal probabilities. The minimal extension https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1127_HTML.gif can be found via (6.10), letting \(\xi '\) with equal probabilities take two values \(y=(y_{1},y_{2})\) and \(z=(z_{1},z_{2})\) on the boundary \(\partial C\) of \(C\). Then
https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_Equby_HTML.png
Figure 1 shows https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1133_HTML.gif and https://static-content.springer.com/image/art%3A10.1007%2Fs00780-020-00442-3/MediaObjects/780_2020_442_IEq1134_HTML.gif for \(\pi =\pi '=2\), \(a=(1,-1)\) and \(\alpha =0.7\); in particular, the minimal extension may indeed be a strict subset of the underlying reduced maximal superlinear expectation.
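The translation vector \(x\) can be checked numerically; the sketch below (an illustration, not part of the original argument) uses the parameters of Fig. 1 and verifies that \(\langle x,v\rangle \) is dominated by \(\mathtt{u}(\langle \xi ,v\rangle )\) on \(C^{o} \), with equality along the extreme rays \((\pi ,1)\) and \((1,\pi ')\) of \(C^{o} \); the parametrisation of \(C^{o} \) is part of the sketch.

```python
import numpy as np

# parameters of Fig. 1
a = np.array([1.0, -1.0])
pi1, pi2 = 2.0, 2.0            # pi and pi'
alpha = 0.7

# translation vector x from Example 6.14
x = a / (2 * alpha) + (1 / alpha - 1) * (a[0] * pi2 + a[1]) / (pi1 * pi2 - 1) \
    * np.array([-pi2, 1.0])
print(x)                       # approximately [0.4286, -0.5714]

def u_scalar(av):
    # u(<xi, v>) for xi taking the values 0 and a with equal probabilities
    return max(av, 0.0) - abs(av) / (2 * alpha)

# directions in C^o, parametrised between its extreme rays (pi, 1) and (1, pi')
for t in np.linspace(0.0, 1.0, 11):
    v = (1 - t) * np.array([pi1, 1.0]) + t * np.array([1.0, pi2])
    print(round(x @ v, 4), "<=", round(u_scalar(a @ v), 4))
# equality holds at t = 0 and t = 1; the inequality is strict in between
```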

7 Applications

7.1 Depth-trimmed regions and outliers

Consider a sublinear expectation restricted to the family of \(p\)-integrable singletons and let \(C=\{0\}\). The map that assigns to a random vector \(\xi \) the sublinear expectation of the singleton \(\{\xi \}\) satisfies the properties of depth-trimmed regions imposed by Cascos [5], which are those from Zuo and Serfling [35] augmented by monotonicity and subadditivity.
Therefore, the sublinear expectation provides a rather generic construction of a depth-trimmed region associated with a random vector \(\xi \in L^{p}(\mathbb{R}^{d})\). In statistical applications, points outside this region or its empirical variant are regarded as outliers. The subadditivity property (3.1) means that if a point is not an outlier for the convolution of two samples, then there is a way to obtain this point as the sum of two non-outliers for the original samples.
Example 7.1
Fix \(\alpha \in (0,1)\). For \(\beta \in L^{1}(\mathbb{R})\), define
$$ \mathtt{e}_{\alpha }(\beta )=\alpha ^{-1} \int _{1-\alpha }^{1} q_{\beta }(s) \, ds, $$
where \(q_{\beta }(s)\) is an \(s\)-quantile of \(\beta \) (in case of nonuniqueness, the choice of a particular quantile does not matter because of the integration). The risk measure \(r(\beta )=\mathtt{e}_{\alpha }(-\beta )\) is called the average value-at-risk. Consider the corresponding minimal sublinear expectation constructed by (5.6); its support function at \(u\) equals \(\mathtt{e}_{\alpha }(\langle \xi ,u\rangle )\) for all \(u\). Applied to the singleton \(\{\xi \}\), it yields the zonoid-trimmed region of \(\xi \) at level \(\alpha \); see Cascos [5] and Mosler [28, Sect. 3.1]. This set can be obtained as
$$ \big\{ \mathbb{E}[\gamma \xi ] : \gamma \in \mathcal{P}_{\alpha }\big\} , $$
where \(\mathcal{P}_{\alpha }\subseteq L^{1}(\mathbb{R}_{+})\) consists of all random variables with values in \([0,\alpha ^{-1}]\) and expectation 1; see Example 5.15. This setting is a special case of Theorem 5.12 with \(\mathcal{M}=\{t\gamma : \gamma \in \mathcal{P}_{\alpha },t\geq 0\}\). The value of \(\alpha \) controls the size of the zonoid-trimmed region; \(\alpha =1\) yields a single point, being the expectation of \(\xi \). The subadditivity property of zonoid-trimmed regions was first noticed by Cascos and Molchanov [6].
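To illustrate, an empirical zonoid-trimmed region can be handled through its support function. Assuming the support-function identity \(h(u)=\mathtt{e}_{\alpha }(\langle \xi ,u\rangle )\) mentioned above, the following Python sketch (the sample, the quantile approximation and the direction grid are illustrative choices, not taken from the paper) evaluates the region for a bivariate sample and flags sample points outside it as outliers in the sense of Sect. 7.1.

import numpy as np

rng = np.random.default_rng(0)
alpha = 0.1
sample = rng.standard_normal((500, 2))        # empirical sample of xi in R^2

def e_alpha(values, alpha):
    # crude approximation of e_alpha: mean of the largest ceil(alpha * n) observations
    k = max(1, int(np.ceil(alpha * len(values))))
    return np.sort(values)[-k:].mean()

# support function of the empirical zonoid-trimmed region: h(u) = e_alpha(<xi, u>)
angles = np.linspace(0.0, 2.0 * np.pi, 180, endpoint=False)
dirs = np.column_stack([np.cos(angles), np.sin(angles)])
h = np.array([e_alpha(sample @ u, alpha) for u in dirs])

# a point x lies in the (convex) region only if <x, u> <= h(u) for every direction u;
# points violating the inequality in some sampled direction are flagged as outliers
violations = sample @ dirs.T > h + 1e-12      # shape (500, 180)
outliers = violations.any(axis=1)
print("flagged outliers:", int(outliers.sum()), "out of", len(sample))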
Example 7.2
Let \(X\) be an integrable random closed convex set. Consider the random set \(Y\) in \(\mathbb{R}^{d+1}\) given by the convex hull of the origin and \(\{1\}\times X\). The selection expectation \(Z_{X}=\mathbb{E}[Y]\) is called the lift expectation of \(X\); see Diaye et al. [8]. If \(X=\{\xi \}\) is a singleton, then \(Z_{X}\) is the lift zonoid of \(\xi \); see Mosler [28, Sect. 2.2]. By the definition of the selection expectation, \(Z_{X}\) is the closure of the set of \((\mathbb{E}[\beta ],\mathbb{E}[\beta \xi ])\), where \(\beta \) runs through the family of random variables with values in \([0,1]\). Equivalently, \((\alpha ,x)\) belongs to \(Z_{X}\) if and only if \(x=\alpha \mathbb{E}[\gamma \xi ]\) for some \(\gamma \) from the family \(\mathcal{P}_{\alpha }\); see Example 7.1. Thus the minimal extension of the sublinear expectation from Example 7.1 is recovered from the lift expectation as \(\{x : (\alpha ,\alpha x)\in Z_{X}\}\).
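A similar empirical computation applies to the lift zonoid. Since \(Z_{X}\) for \(X=\{\xi \}\) is the closure of the set of \((\mathbb{E}[\beta ],\mathbb{E}[\beta \xi ])\) with \(\beta \in [0,1]\), optimising over \(\beta \) pointwise shows that the support function of \(Z_{X}\) in direction \((u_{0},u)\) equals \(\mathbb{E}[\max (0,u_{0}+\langle \xi ,u\rangle )]\). The following sketch (an illustration with an arbitrary simulated sample, not taken from the paper) evaluates this support function empirically.

import numpy as np

rng = np.random.default_rng(1)
xi = rng.standard_normal((2000, 2))           # empirical sample of xi (illustrative)

def lift_zonoid_support(u0, u):
    # support function of the empirical lift zonoid of {xi} in direction (u0, u):
    # sup over beta in [0, 1] of E[beta * (u0 + <xi, u>)] = E[max(0, u0 + <xi, u>)]
    return np.maximum(0.0, u0 + xi @ np.asarray(u)).mean()

print(lift_zonoid_support(1.0, [0.0, 0.0]))   # equals 1, the maximal first coordinate of Z_X
print(lift_zonoid_support(0.0, [1.0, 0.0]))   # mean positive part of the first component of xi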

7.2 Parametric families of nonlinear expectations

Consider a superlinear and a sublinear expectation such that the former is contained in the latter for all random closed sets \(X\in L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))\). Then it is natural to regard observations of \(X\) that do not lie between the superlinear and the sublinear expectation as outliers.
Let \(X_{1},\dots ,X_{n}\) be independent copies of a \(p\)-integrable random closed convex set \(X\). Given a sublinear expectation, its value at the convex hull
$$ \operatorname{conv}(X_{1}\cup \cdots \cup X_{n}) $$
(7.1)
defines another sublinear expectation of \(X\). The only slightly nontrivial property is the subadditivity, which follows from the fact that
$$ (X_{1}+Y_{1})\cup \cdots \cup (X_{n}+Y_{n}) \subseteq (X_{1}\cup \cdots \cup X_{n})+(Y_{1}\cup \cdots \cup Y_{n}). $$
If \(X_{1}\cap \cdots \cap X_{n}\) is a.s. nonempty, then applying a superlinear expectation to the intersection
$$ X_{1}\cap \cdots \cap X_{n} $$
(7.2)
yields a superlinear expectation of \(X\), noticing that
$$ (X_{1}+Y_{1})\cap \cdots \cap (X_{n}+Y_{n}) \supseteq (X_{1}\cap \cdots \cap X_{n})+(Y_{1}\cap \cdots \cap Y_{n}). $$
We let the resulting superlinear expectation be the empty set if \(X_{1}\cap \cdots \cap X_{n}\) is empty with positive probability.
Example 7.3
Choosing the selection expectation in (7.1) and (7.2) yields a family of nonlinear expectations depending on the parameter \(n\), which are also easy to compute.
It is easily seen that the sublinear expectation obtained from (7.1) increases and the superlinear expectation obtained from (7.2) decreases as \(n\) increases. The depth of \(F\in \operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)\) is defined as \(1/n\) for the smallest \(n\) such that \(F\) contains the superlinear expectation (7.2) and is contained in the sublinear expectation (7.1), and as zero if no such \(n\) exists. In particular, \(F\) has depth one if the inclusions hold already for \(n=1\). Note that the superlinear expectation (7.2) decreases to the set of fixed points of \(X\) and the sublinear expectation (7.1) increases to the support of \(X\) as \(n\to \infty \); see Example 3.8. Thus only closed convex sets \(F\) satisfying \(F_{X}\subseteq F\subseteq \operatorname{supp}X\) may have a positive depth.
In order to handle the empirical variant of the preceding concept based on a sample \(X_{1},\dots ,X_{n}\) of independent observations of \(X\), consider a random closed set \(\tilde{X}\) that with equal probabilities takes one of the values \(X_{1},\dots ,X_{n}\). Its distribution can be simulated by sampling one of these sets with possible repetitions. Then it is possible to use the nonlinear expectations of \(\tilde{X}\) in order to assess the depth of any given convex set, including those from the sample.
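The following Python sketch illustrates the construction of this subsection in dimension one, choosing the selection expectation as in Example 7.3 and random intervals \(X=[\xi -1,\xi +1]\) with \(\xi \) uniform on \([0,1]\) (an illustrative model, not from the paper). The selection expectation of an integrable random interval is the interval of the expectations of its endpoints, so (7.1) and (7.2) can be estimated by Monte Carlo; the output also shows for which \(n\) a given interval \(F\) lies between the two expectations.

import numpy as np

rng = np.random.default_rng(2)

def expectations(n, trials=20000):
    # Monte Carlo estimates of (7.1) and (7.2) for X = [xi - 1, xi + 1], xi uniform on [0, 1],
    # with the selection expectation as the underlying expectation (Example 7.3)
    xi = rng.random((trials, n))
    lo, hi = xi.min(axis=1), xi.max(axis=1)
    sub = (lo.mean() - 1.0, hi.mean() + 1.0)   # selection expectation of conv(X_1 u ... u X_n)
    sup = (hi.mean() - 1.0, lo.mean() + 1.0)   # selection expectation of X_1 n ... n X_n (nonempty a.s.)
    return sub, sup

F = (-0.4, 1.4)                                # candidate interval compared with the two expectations
for n in range(1, 6):
    sub, sup = expectations(n)
    between = sub[0] <= F[0] <= sup[0] and sup[1] <= F[1] <= sub[1]
    print(n, np.round(sub, 3), np.round(sup, 3), between)
# the sublinear expectation widens and the superlinear one shrinks as n grows;
# here F fails the inclusions for n = 1 but satisfies them from n = 2 onwards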

7.3 Risk and utility of a set-valued portfolio

For a random variable \(\xi \in L^{p}(\mathbb{R})\) interpreted as a financial outcome or gain, the value \(\mathtt{e}(-\xi )\) (equivalently, \(-\mathtt{u}(\xi )\)) is used in finance to assess the risk of \(\xi \). It may be tempting to extend this to the multivariate setting by assuming that the risk is a \(d\)-dimensional function of a random vector \(\xi \in L^{p}(\mathbb{R}^{d})\), with the conventional properties extended coordinatewise. However, in this case the nonlinear expectations (and so the risk) are marginalised, that is, the risk of \(\xi \) splits into a vector of nonlinear expectations applied to the individual components of \(\xi \); see Theorem A.1.
Moreover, an adequate assessment of the financial risk of a vector \(\xi \) is impossible without taking into account exchange rules that can be applied to its components in order to convert \(\xi \) to another financial position. If no exchanges are allowed and only consumption is possible, one arrives at positions being selections of \(X=\xi +\mathbb{R}_{-}^{d}\). On the other hand, if the components of \(\xi \) are expressed in the same currency with unrestricted exchanges and disposal (consumption) of the assets, each position from the half-space \(X=\{x : \sum x_{i}\leq \sum \xi _{i}\}\) is reachable from \(\xi \). Working with the random set \(X\) also eliminates possible nonuniqueness in the choice of \(\xi \) with identical sums.
In view of this, it is natural to consider multivariate financial positions as lower random closed convex sets or, equivalently, those from \(L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))\) with \(C= \mathbb{R}_{-}^{d}\). The random closed set \(X\) is said to be acceptable if its superlinear expectation contains the origin, and the risk of \(X\) is the set of all deterministic \(x\in \mathbb{R}^{d}\) such that \(X+x\) is acceptable, that is, the origin-reflected superlinear expectation of \(X\). The superadditivity property guarantees that if both \(X\) and \(Y\) are acceptable, then \(X+Y\) is acceptable. This is the classical financial diversification advantage formulated in set-valued terms. The superlinear expectation of \(X\) itself determines the utility of \(X\), exactly corresponding to the classical properties of utility functions being monotone superlinear functions of random variables. In particular, the superadditivity amounts to the fact that the utility of the sum is larger than or equal to the sum of the utilities.
If \(X\in L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))\) and \(C=\mathbb{R}_{-}^{d}\), the minimal extension (6.8) is called the lower set extension of the underlying superlinear expectation. If the underlying superlinear expectation is reduced maximal, (6.6) yields that its value on sets of the form \(\xi +\mathbb{R}_{-}^{d}\) equals
$$ \vec{\mathtt{u}}(\xi )+\mathbb{R}_{-}^{d}, $$
where \(\vec{\mathtt{u}}(\xi )=(\mathtt{u}(\xi _{1}),\dots ,\mathtt{u}( \xi _{d}))\) is defined by applying the same superlinear expectation \(\mathtt{u}\) with representing set ℳ to each component of \(\xi \). Then the lower set extension of \(X\) equals
$$ \operatorname{cl}\bigcup _{\xi \in L^{p}(X)}\big(\vec{\mathtt{u}}(\xi )+\mathbb{R}_{-}^{d}\big). $$
In other words, the lower set extension of \(X\) is the closure of the set of all points dominated coordinatewise by the superlinear expectation of at least one selection of \(X\). In Molchanov and Cascos [26], the origin-reflected lower set extension was called the selection risk measure of \(X\).
For set-valued portfolios \(X=\xi +C\), arising as the sum of a singleton \(\xi \) and a (possibly random) convex cone \(C\), the maximal superlinear expectation (in our terminology), considered a function of \(\xi \) only and not of \(\xi +C\), was studied by Hamel and Heyde [13] and Hamel et al. [14]. However, if \(C\) becomes random, the resulting function of \(\xi \) alone is not necessarily law-invariant. The case of general random set-valued arguments was pursued by Molchanov and Cascos [26].
For the purpose of risk (or utility) assessment, one can use any superlinear expectation. However, the sensible choices are the maximal superlinear expectation in view of its closed form dual representation, and the lower set extension in view of its direct financial interpretation (through its primal representation), meaning the existence of a selection (that is, a financial position) with all components acceptable. Example 6.14 provides the numerical calculation of the reduced maximal and the minimal extension (see Figure 1) using the average quantile utility function. Given that the minimal superlinear expectation may be a strict subset of the maximal one (see Example 6.14), the acceptability of \(X\) under a maximal superlinear expectation may be a weaker requirement than the acceptability under the lower set extension.
From the financial viewpoint, the acceptability of \(X=\xi +C\) (for the payoff \(\xi \in L^{p}(\mathbb{R}^{d})\) and a deterministic cone \(C\) describing the family of portfolios available at price zero) under the lower set extension (that is, the minimal extension) means the existence of an exchange scenario \(\xi '\in L^{p}(C)\) such that \(\xi +\xi '\) has all components acceptable. In other words, by exchanging the components of \(\xi \) and taking into account the transaction costs imposed by the cone \(C\), it is possible to make all components of \(\xi \) individually acceptable. On the other hand, the acceptability of \(X\) under the reduced maximal extension means that \(\langle \xi ,u\rangle \) is acceptable for all \(u\) from the dual cone of \(C\), that is, \(\xi \) is acceptable under all price systems determined by \(C\). For instance, this is the case if \(\xi +\xi '\) has all components acceptable, since
$$ \langle \xi ,u\rangle =\langle \xi +\xi ',u\rangle -\langle \xi ',u \rangle . $$
The first term on the right-hand side is acceptable, since the dual cone of \(C\) is a subset of \(\mathbb{R}_{+}^{d}\) and so \(\langle \xi +\xi ',u\rangle \) is a nonnegative combination of acceptable positions, while the second term \(-\langle \xi ',u\rangle \) is nonnegative, since \(\xi '\in C\) and \(u\) belongs to the dual cone of \(C\).
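As a small numerical illustration of the last criterion (a sketch under illustrative assumptions, not a computation from the paper), acceptability of \(\xi \) under the reduced maximal extension can be probed by sampling directions \(u\) from the dual cone of \(C\) and testing whether \(\mathtt{u}(\langle \xi ,u\rangle )\geq 0\). Here \(\mathtt{u}\) is approximated by the average of the lower \(\alpha \)-quantiles as in Example 5.15, the gains \(\xi \) are simulated, and the exchange cone \(C\) is generated by \(\mathbb{R}_{-}^{2}\) and the vectors \((-1,0.95)\) and \((0.95,-1)\), so that its dual cone is spanned by \((1,0.95)\) and \((0.95,1)\).

import numpy as np

rng = np.random.default_rng(3)
alpha = 0.05
# simulated joint gains of two assets (illustrative distribution)
xi = rng.multivariate_normal([1.0, 0.8], [[0.04, 0.01], [0.01, 0.02]], size=5000)

def u_alpha(values, alpha):
    # approximate superlinear expectation of Example 5.15:
    # mean of the smallest ceil(alpha * n) observations
    k = max(1, int(np.ceil(alpha * len(values))))
    return np.sort(values)[:k].mean()

# extreme rays of the dual cone of C for the exchange vectors (-1, 0.95) and (0.95, -1)
rays = np.array([[1.0, 0.95], [0.95, 1.0]])
dirs = [(1 - t) * rays[0] + t * rays[1] for t in np.linspace(0.0, 1.0, 50)]

acceptable = all(u_alpha(xi @ u, alpha) >= 0.0 for u in dirs)
print("acceptable under the reduced maximal extension (sampled directions):", acceptable)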

Acknowledgements

IM is grateful to Ignacio Cascos for discussions and a collaboration on related works. This work was motivated by the stay of IM at the Universidad Carlos III de Madrid in 2012 supported by the Santander Bank. IM was also supported by the Swiss National Science Foundation grants 200021_153597 and IZ73Z0_152292.
The authors thank the referee for a very careful reading of the manuscript and for suggesting a number of improvements.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


Appendix A: Marginalisation of vector-valued sublinear functions

It may be tempting to consider vector-valued functions \(\overline{\mathtt{e}}:L^{p}(\mathbb{R}^{d})\to \mathbb{R}^{d}\) which are sublinear, that is, \(\overline{\mathtt{e}}(x)=x\) for all \(x\in \mathbb{R}^{d}\), \(\overline{\mathtt{e}}(\xi )\leq \overline{\mathtt{e}}(\eta )\) if \(\xi \leq \eta \) a.s., \(\overline{\mathtt{e}}(c\xi )=c\overline{\mathtt{e}}(\xi )\) for all \(c\geq 0\) and
$$ \overline{\mathtt{e}}(\xi +\eta )\leq \overline{\mathtt{e}}(\xi )+ \overline{\mathtt{e}}(\eta ). $$
Such a function may be viewed as arising from the restriction of a sublinear set-valued expectation to the family of sets \(\xi +\mathbb{R}_{-}^{d}\), letting \(\overline{\mathtt{e}}(\xi )\) be the coordinatewise supremum of the value of this set-valued expectation at \(\xi +\mathbb{R}_{-}^{d}\).
The following result shows that vector-valued sublinear expectations marginalise, that is, they split into sublinear expectations applied to each component of the random vector.
Theorem A.1
If \(\overline{\mathtt{e}}\) is a \(\sigma (L^{p},L^{q})\)-lower semicontinuous vector-valued sublinear expectation for some \(p\in [1,\infty ]\), then
$$ \overline{\mathtt{e}}(\xi )=\big(\mathtt{e}_{1}(\xi _{1}),\dots , \mathtt{e}_{d}(\xi _{d})\big) $$
for a collection of numerical sublinear expectations \(\mathtt{e}_{1},\dots ,\mathtt{e}_{d}\).
Proof
The set \(\mathcal{A}=\{\xi : \overline{\mathtt{e}}(\xi )\leq 0\}\) is a \(\sigma (L^{p},L^{q})\)-closed convex cone in \(L^{p}(\mathbb{R}^{d})\). The polar cone \(\mathcal{A}^{o}\) is the set of all \(\mathbb{R}^{d}\)-valued measures \(\mu =(\mu _{1},\dots ,\mu _{d})\) such that
$$ \int \xi d\mu =\bigg(\int \xi _{1} d\mu _{1},\dots ,\int \xi _{d} d \mu _{d}\bigg)\leq 0 $$
for all \(\xi \in \mathcal{A}\). It is easy to see that each \(\mu \in \mathcal{A}^{o}\) has all components nonnegative. The bipolar theorem yields that
$$ \mathcal{A}=\bigg\{ \xi : \int \xi d\mu \leq 0 \;\text{for all} \; \mu \in \mathcal{A}^{o}\bigg\} . $$
Since \(\overline{\mathtt{e}}\) is constant-preserving,
$$ \overline{\mathtt{e}}(\xi +x)-x\leq \overline{\mathtt{e}}(\xi )= \overline{\mathtt{e}}\big((\xi +x)-x\big)\leq \overline{\mathtt{e}}(\xi +x)-x $$
so that \(\overline{\mathtt{e}}(\xi +x)=\overline{\mathtt{e}}(\xi )+x\) for all deterministic \(x\in \mathbb{R}^{d}\). Hence,
$$ \overline{\mathtt{e}}(\xi )=\inf \bigcap _{\mu \in \mathcal{A}^{o}} \bigg\{ y\in \mathbb{R}^{d} : \int \xi d\mu \leq \int y d\mu \bigg\} , $$
(A.1)
where the infimum is taken coordinatewise.
Consider the set \(C_{\mu }=\{y\in \mathbb{R}^{d} : \int \xi d\mu \leq \int y d\mu \}\) for some \(\mu =(\mu _{1},\dots ,\mu _{d})\) from \(\mathcal{A}^{o}\). Let \(\mathcal{A}_{i}^{o}\) denote the family of all nontrivial \(\mu \in \mathcal{A}^{o}\) such that \(\mu _{j}\) vanishes for all \(j\neq i\). Note that if \(\mu \in \mathcal{A}^{o}\), then \((\mu _{1},0,\dots ,0)\in \mathcal{A}_{1}^{o}\), that is, the projections of \(\mathcal{A}^{o}\) and \(\mathcal{A}_{i}^{o}\) on each of the coordinates coincide. If \(\mu \in \mathcal{A}_{1}^{o}\), then
$$ C_{\mu }=\bigg[\int \xi _{1} d\mu _{1},\infty \bigg)\times \mathbb{R} \times \cdots \times \mathbb{R}. $$
Assume that two components of \(\mu \) do not vanish, say \(\mu _{1}\) and \(\mu _{2}\). Then
$$\begin{aligned} C_{\mu }&=\bigg\{ y : \int \xi _{1} d\mu _{1}+\int \xi _{2} d\mu _{2} \leq \int y_{1} d\mu _{1} + \int y_{2} d\mu _{2} \bigg\} \\ & \supseteq \bigg[\int \xi _{1} d\mu _{1},\infty \bigg)\times \bigg[ \int \xi _{2} d\mu _{2},\infty \bigg) \times \mathbb{R}\times \cdots \times \mathbb{R}. \end{aligned}$$
Thus this latter set \(C_{\mu }\) does not influence the coordinatewise infimum in (A.1) in comparison to the sets obtained by letting \(\mu \in \mathcal{A}_{1}^{o}\cup \mathcal{A}_{2}^{o}\). The same argument applies to \(\mu \in \mathcal{A}^{o}\) with more than two nonvanishing components. Thus the intersection in (A.1) can be taken over \(\mu \in \mathcal{A}_{1}^{o}\cup \cdots \cup \mathcal{A}_{d}^{o}\), whence the result. □
A similar result holds for superlinear vector-valued expectations.
References
1.
Aliprantis, C.D., Tourky, R.: Cones and Duality. Am. Math. Soc., Providence (2007)
2.
Ararat, Ç., Rudloff, B.: A characterization theorem for Aumann integrals. Set-Valued Var. Anal. 23, 305–318 (2015)
3.
Artstein, Z., Vitale, R.A.: A strong law of large numbers for random compact sets. Ann. Probab. 3, 879–882 (1975)
5.
Cascos, I.: Data depth: multivariate statistics and geometry. In: Kendall, W.S., Molchanov, I. (eds.) New Perspectives in Stochastic Geometry, pp. 398–426. Oxford University Press, Oxford (2010)
6.
Cascos, I., Molchanov, I.: Multivariate risks and depth-trimmed regions. Finance Stoch. 11, 373–397 (2007)
7.
Delbaen, F.: Monetary Utility Functions. Osaka University Press, Osaka (2012)
8.
Diaye, M.-A., Koshevoy, G.A., Molchanov, I.: Lift expectations of random sets and their applications. Stat. Probab. Lett. 145, 110–117 (2018)
9.
Drapeau, S., Hamel, A.H., Kupper, M.: Complete duality for quasiconvex and convex set-valued functions. Set-Valued Var. Anal. 24, 253–275 (2016)
10.
Föllmer, H., Schied, A.: Stochastic Finance. An Introduction in Discrete Time, 2nd edn. de Gruyter, Berlin (2004)
11.
Fuglede, B.: Capacity as a sublinear functional generalizing an integral. Mat.-Fys. Medd. Danske Vid. Selsk. 38, 1–44 (1971)
12.
Hamel, A.H.: A duality theory for set-valued functions. I. Fenchel conjugation theory. Set-Valued Var. Anal. 17, 153–182 (2009)
13.
14.
Hamel, A.H., Heyde, F., Rudloff, B.: Set-valued risk measures for conical market models. Math. Financ. Econ. 5, 1–28 (2011)
15.
Hess, C.: Set-valued integration and set-valued probability theory: an overview. In: Pap, E. (ed.) Handbook of Measure Theory, pp. 617–673. Elsevier, Amsterdam (2002). Chapter 14
16.
Hiai, F., Umegaki, H.: Integrals, conditional expectations, and martingales of multivalued functions. J. Multivar. Anal. 7, 149–182 (1977)
17.
Hiriart-Urruty, J.-B., Lemaréchal, C.: Convex Analysis and Minimization Algorithms, vol. 1. Springer, Berlin (1993)
18.
Hu, S., Papageorgiou, N.S.: Handbook of Multivalued Analysis, vol. 1. Kluwer, Dordrecht (1997)
19.
Kabanov, Y.M., Safarian, M.: Markets with Transaction Costs. Mathematical Theory. Springer, Berlin (2009)
20.
Kaina, M., Rüschendorf, L.: On convex risk measures on \(L^{p}\)-spaces. Math. Methods Oper. Res. 69, 475–495 (2009)
21.
Khan, A.A., Tammer, C., Zǎlinescu, C.: Set-Valued Optimization. An Introduction with Applications. Springer, Heidelberg (2015)
22.
Lépinette, E., Molchanov, I.: Conditional cores and conditional convex hulls of random sets. J. Math. Anal. Appl. 478, 368–392 (2019)
23.
Lépinette, E., Molchanov, I.: Risk arbitrage and hedging to acceptability under transaction costs. Finance Stoch. 25, 101–132 (2021)
25.
Molchanov, I.: Theory of Random Sets, 2nd edn. Springer, London (2017)
26.
Molchanov, I., Cascos, I.: Multivariate risk measures: a constructive approach based on selections. Math. Finance 26, 867–900 (2016)
27.
Molchanov, I., Molinari, F.: Random Sets in Econometrics. Cambridge University Press, Cambridge (2018)
28.
Mosler, K.: Multivariate Dispersion, Central Regions and Depth. The Lift Zonoid Approach. Lecture Notes in Statistics, vol. 165. Springer, Berlin (2002)
29.
Peng, S.: Nonlinear expectations, nonlinear evaluations and risk measures. In: Frittelli, M., Runggaldier, W. (eds.) Stochastic Methods in Finance. Lecture Notes in Math., vol. 1856, pp. 165–253. Springer, Berlin (2004)
30.
Peng, S.: Nonlinear Expectations and Stochastic Calculus Under Uncertainty. Springer, Berlin (2019)
31.
Pennanen, T., Penner, I.: Hedging of claims with physical delivery under convex transaction costs. SIAM J. Financ. Math. 1, 158–178 (2010)
32.
33.
Schneider, R.: Convex Bodies. The Brunn–Minkowski Theory, 2nd edn. Cambridge University Press, Cambridge (2014)
34.
35.