Published in: Finance and Stochastics 3/2021

Open Access 14.06.2021

Time-dynamic evaluations under non-monotone information generated by marked point processes


Abstract

The information dynamics in finance and insurance applications is usually modelled by a filtration. This paper looks at situations where information restrictions apply so that the information dynamics may become non-monotone. Martingale representations are a fundamental tool for calculating and managing risks in finance and insurance. We present a general theory that extends classical martingale representations to non-monotone information generated by marked point processes. The central idea is to focus only on those properties that martingales and compensators show on infinitesimally short intervals. While classical martingale representations describe innovations only, our representations have an additional symmetric counterpart that quantifies the effect of information loss. We illustrate the results with examples from life insurance and credit risk.

1 Introduction

The value at time \(t \in [0, T]\) of a financial claim \(\xi \in L^{1}(\Omega , \mathcal{A},P)\) at time \(T \in (0, \infty )\) is commonly calculated by
$$\begin{aligned} B(t) E_{Q}\bigg[ \frac{\xi }{B(T)} \bigg| \mathcal{F}_{t} \bigg], \end{aligned}$$
(1.1)
where \(B\) is the value process of a risk-free asset, \((\mathcal{F}_{t})_{t \geq 0}\) is a filtration that describes the available information at each time \(t\geq 0\), and \(Q\) is some equivalent measure. For studying the time dynamics of the value process, we can exploit the fact that \(t \mapsto E_{Q}[ \xi / B(T) | \mathcal{F}_{t} ]\) is always a martingale.
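For instance, for the constant claim \(\xi \equiv 1\), (1.1) reduces to the familiar time-\(t\) price of a zero-coupon bond maturing at \(T\),
$$\begin{aligned} P(t,T) = B(t) E_{Q}\bigg[ \frac{1}{B(T)} \bigg| \mathcal{F}_{t} \bigg], \end{aligned}$$
and the discounted value \(P(t,T)/B(t)\) is a \(Q\)-martingale.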
In this paper, we suppose that information restrictions apply and replace the filtration \((\mathcal{F}_{t})_{t \geq 0}\) by a family of sub-sigma-algebras \((\mathcal{G}_{t})_{t \geq 0}\) that may be non-monotone, i.e., we do not assume that \((\mathcal{G}_{t})_{t \geq 0}\) is a filtration. We focus on modelling frameworks where \((\mathcal{G}_{t})_{t \geq 0}\) is generated by a marked point process, because this allows us to calculate martingale representations explicitly. Our approach seems to work also in more general settings, but a general theory is left to future research.
Information restrictions can be motivated by legal restrictions, data privacy efforts, information summarisation or model simplifications. An example of a legal information restriction is the General Data Protection Regulation 2016/679 of the European Union, which includes in Article 17 a so-called ‘right to erasure’, causing possible information loss.
Example 1.1
Consider life insurance contracts that are evaluated by using big data. Data from activity trackers, social media, etc., can improve individual forecasts of the mortality and morbidity of insured persons. By exercising the ‘right to erasure’ according to the General Data Protection Regulation of the European Union, the policyholder may ask the insurer to delete parts of the health-related data at discretion. Moreover, data providers might implement self-imposed information restrictions for data privacy reasons. For example, users of Google products can opt for an auto-delete of location history and activity data after a fixed time limit. As a result, the evaluation of an insurance liability \(\xi \) according to (1.1) will be restricted to sub-sigma-algebras \((\mathcal{G}_{t})_{t \geq 0}\) that are non-monotone in \(t\) due to data deletions.
Examples of information summarisation can be found in Norberg [22], where summarised life insurance values (retrospective and prospective reserves) are defined that encompass non-monotone information. A popular model simplification is Markovian modelling even when the empirical data does not fully support the Markov assumption.
Example 1.2
We consider a credit rating process. In the Jarrow–Lando–Turnbull model, the filtration \((\mathcal{F}_{t})_{t \geq 0}\) is generated by a finite-state-space Markov chain \((R_{t})_{t \geq 0}\) that represents credit ratings; cf. Jarrow et al. [15]. The Markov property makes it possible to equivalently replace \(\mathcal{F}_{t}\) in (1.1) by the sub-sigma-algebra \(\mathcal{G}_{t}:= \sigma (R_{t})\). The Markov assumption can be motivated by the theoretical idea that a credit rating should fully describe the current risk profile of a prospective debtor so that historical ratings can be ignored. However, empirical data does not always support the Markov property, so that \(E_{Q}[ \xi / B(T) | \mathcal{G}_{t} ]\) may in fact differ from \(E_{Q}[ \xi / B(T) | \mathcal{F}_{t} ]\); cf. Lando and Skødeberg [17]. The information dynamics of \(\mathcal{G}_{t}=\sigma (R_{t})\) is non-monotone in \(t\).
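To see the discrepancy numerically, here is a minimal Monte Carlo sketch (our own illustrative toy model, not taken from [15] or [17]): a two-period rating chain with ‘rating drift’, where a freshly downgraded firm defaults more often than a firm that has held its rating. All state names and probabilities are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Hypothetical two-period rating model with 'rating drift': the one-period
# default probability depends on whether the firm was just downgraded, so
# the rating process is not Markov. States: 0 = A, 1 = B.
r0 = rng.choice([0, 1], size=n)                          # rating at time 0
r1 = np.where((r0 == 0) & (rng.random(n) < 0.3), 1, r0)  # A may drop to B
p_def = np.where((r0 == 0) & (r1 == 1), 0.20,            # freshly downgraded B
        np.where(r1 == 1, 0.05, 0.01))                   # stable B / rating A
xi = (rng.random(n) < p_def).astype(float)               # default indicator

b = r1 == 1
print("E[xi | G_1]  (R_1 = B):      ", xi[b].mean())
print("E[xi | F_1]  (R_0=A, R_1=B): ", xi[b & (r0 == 0)].mean())
print("E[xi | F_1]  (R_0=B, R_1=B): ", xi[b & (r0 == 1)].mean())
```

The first estimate (about 0.085) lies strictly between the two \(\mathcal{F}_{1}\)-conditional values (about 0.20 and 0.05), so \(E[\xi | \mathcal{G}_{1}]\) and \(E[\xi | \mathcal{F}_{1}]\) indeed differ on \(\{R_{1}=B\}\).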
Non-monotone information structures can also be found in Pardoux and Peng [24] and Tang and Wu [27], but in these papers, specific independence assumptions make it possible to go back to filtrations and work with classical martingale representations.
From now on, we drop the subscript \(Q\) in (1.1) and all related expectations. Depending on the application, we interpret \(P\) either as the real-world measure or as a risk-neutral measure.
When we replace the filtration \((\mathcal{F}_{t})_{t \geq 0}\) in (1.1) by some non-monotone information \((\mathcal{G}_{t})_{t \geq 0}\), all the powerful tools from martingale theory for studying the time dynamics of (1.1) are not available any more. In order to fill that gap, this paper derives general representations of the form
$$\begin{aligned} \begin{aligned} E[ \xi | \mathcal{G}_{t}]-E[ \xi | \mathcal{G}_{0}] &= \sum _{I \in \mathcal{N}} \int _{(0,t ]\times E_{I} } G_{I}(u-,u,e) \,(\mu _{I}- \nu _{I}) \big(\mathrm{d}(u,e)\big) \\ &\phantom{=:} + \sum _{I \in \mathcal{N}}\int _{(0,t]\times E_{I}} G_{I}(u,u,e) \,( \rho _{I}- \mu _{I}) \big(\mathrm{d}(u,e)\big), \qquad t \geq 0, \end{aligned} \end{aligned}$$
(1.2)
where \(\xi \) is any integrable random variable, \((\mathcal{G}_{t})_{t \geq 0}\) is a non-monotone family of sigma-algebras generated by an extended marked point process that involves information deletions, \((\mu _{I})_{I \in \mathcal{N}}\) is a set of counting measures that uniquely corresponds to the extended marked point process, \((\nu _{I})_{I \in \mathcal{N}}\) and \((\rho _{I})_{I \in \mathcal{N}}\) are infinitesimal forward and backward compensators of \((\mu _{I})_{I \in \mathcal{N}}\), and the integrands \(G_{I}(u-,u,e)\) and \(G_{I}(u,u,e)\) are adapted to the information at time \(u-\) and time \(u\), respectively. In case \((\mathcal{G}_{t})_{t \geq 0}\) is increasing, i.e., it is a filtration, the second line in (1.2) is zero and the first line conforms with classical martingale representations. The central idea in this paper is to focus only on those properties that martingales and compensators show on infinitesimally small intervals. We call this the ‘infinitesimal approach’. In principle, the infinitesimal approach is not restricted to point process frameworks, but a fully general theory is beyond the scope of this paper. We further extend our representation results to processes of the form
$$\begin{aligned} t \mapsto E[ X_{t} | \mathcal{G}_{t} ], \qquad t \geq 0, \end{aligned}$$
(1.3)
where \((X_{t})_{t \geq 0}\) is a suitably integrable càdlàg process. In this case, an additional drift term appears on the right-hand side of (1.2).
Martingale representations have various applications in finance and insurance, and this is in particular true for marked point process frameworks:
– If a financial or insurance claim is hedgeable, then explicit hedges can be derived from martingale representations; see e.g. Norberg [23] and Last and Penrose [18].
– Martingale representations are a central tool for constructing and solving backward stochastic differential equations (BSDEs); see e.g. Cohen and Elliott [6], Bandini [1] and Confortola [8]. Many optimal control problems in finance and insurance correspond to a BSDE problem, see e.g. Cohen and Elliott [7] and Delong [11, Chap. 1].
– Martingale representations can serve as additive risk factor decompositions; see Schilling et al. [26]. An insurer needs to additively decompose the surplus from a policy or an insurance portfolio for regulatory reasons; see e.g. Møller and Steffensen [20, Sect. 6]. Additive risk factor decompositions are also used in finance; see e.g. Rosen and Saunders [25].
In all three applications, infinitesimal martingale representations according to (1.2) allow us to include information restrictions into the modelling. Later, we study a hedging application for the model in Example 1.2. We shall see that estimation and calculation of hedging strategies under inappropriate Markov assumptions may unintentionally replace classical martingales by infinitesimal forward martingales (the first line on the right-hand side of (1.2)), and then the implied hedging error is just the corresponding infinitesimal backward martingale part (the second line in (1.2)). The application of infinitesimal martingale representations in BSDE theory is illustrated with Example 1.1. We shall see that the integrands in (1.2) correspond to the so-called sum at risk, which is a central quantity in life insurance risk management. In Example 1.1, we also briefly discuss risk factor decompositions. Information deletions upon request for data privacy reasons can provoke arbitrage opportunities, and these can be split off as infinitesimal backward martingales, which is important for dealing with them.
The representation (1.2) implies that \(t \mapsto E[ \xi | \mathcal{G}_{t} ]\) has a (unique) semimartingale modification. More generally, we show that \(t \mapsto E[ X_{t} | \mathcal{G}_{t} ]\) has a (unique) semimartingale modification whenever \(X\) is a semimartingale with integrable variation on compacts. The uniqueness and the semimartingale property are crucial in applications where the time dynamics need to be studied. For example, in life insurance, the differential \(\mathrm{d}E[ X_{t} | \mathcal{G}_{t}]\) might describe the insurer’s current surplus or loss at time \(t\); cf. Norberg [21, 22].
The study of jump process martingales and their representations largely dates back to the 1970s; see e.g. Jacod [14], Boel et al. [2], Chou and Meyer [3], Davis [10] and Elliott [13]. Since then, extensions have been developed in different directions; see e.g. Last and Penrose [18] and Cohen [5]. All these papers stay within the framework of filtrations, i.e., the information dynamics is monotone. The infinitesimal approach we introduce here allows us to go beyond the framework of filtrations. An elegant way to derive the classical martingale representation is a bare-hands approach that starts with the Chou and Meyer construction of the martingale representation for a single jump process, followed by Elliott’s extension to the case of ordered jumps. In this paper, we also use a bare-hands approach, but the classical stopping time concept is not applicable in our non-monotone information setting, so that we need to depart from the usual lines of argument.
The paper is organised as follows. In Sect. 2, we explain the basic concepts of the infinitesimal approach but avoid technicalities. In Sect. 3, we add technical assumptions and narrow the modelling framework down to pure jump process drivers. Section 4 verifies that (1.2) is indeed a well-defined process. In Sect. 5, we identify infinitesimal compensators for a large class of jump processes. The central result (1.2) is proved in Sect. 6 and extended to processes of the form (1.3) in Sect. 7. In Sect. 8, we take a closer look at Examples 1.1 and 1.2.

2 The infinitesimal approach

The central idea of the infinitesimal approach is to focus only on those properties that martingales and compensators show on infinitesimally short intervals. This section explains the basic ideas under the general assumption that all limits in this section actually exist. Only from the next section on do we narrow the framework down to pure jump process drivers, which is sufficient but not necessary to guarantee the existence of the limits. So in general, the infinitesimal approach is not restricted to jump process frameworks, but it is beyond the scope of this paper to find general conditions for the existence of the limits.
Let \((\Omega , \mathcal{A}, P)\) be a complete probability space and let \(\mathcal{Z}\subseteq \mathcal{A}\) be the family of its nullsets. Let \(\mathbb{F}=(\mathcal{F}_{t})_{t \geq 0 }\) be a complete and right-continuous filtration on this probability space. We interpret \(\mathcal{F}_{t}\) as the observable information on the time interval \([0,t]\). Suppose that certain pieces of information expire after a finite holding time. By subtracting from \(\mathcal{F}_{t}\) all pieces of information that have expired until time \(t\), we obtain the admissible information at time \(t\). We assume that this admissible information is represented by a family \(\mathbb{G}=(\mathcal{G}_{t})_{t \geq 0 }\) of complete sigma-algebras
$$\begin{aligned} \mathcal{G}_{t} \subseteq \mathcal{F}_{t}, \qquad t \geq 0, \end{aligned}$$
which may be non-monotone in \(t\).
A process \(X\) is adapted to the filtration \(\mathbb{F}\) if \(X_{t}\) is \(\mathcal{F}_{t}\)-measurable for each \(t \geq 0\). Likewise we say that a process \(X\) is adapted to the possibly non-monotone information \(\mathbb{G}\) if \(X_{t}\) is \(\mathcal{G}_{t}\)-measurable for each \(t \geq 0\). In addition to this classical concept, we also take an incremental perspective.
Definition 2.1
We call a process \(X\) incrementally adapted to \(\mathbb{G}\) if the increment \(X_{t}-X_{s}\) is \(\sigma ( \mathcal{G}_{u}, u \in (s,t])\)-measurable for any interval \((s,t] \subseteq [0,\infty )\).
In finance and insurance applications, we think of \(X\) as an aggregated cash flow where the aggregated payments \(X_{t} - X_{s}\) on the interval \((s, t]\) should depend only on the admissible information on \((s, t]\). If \(\mathbb{G}\) is a filtration, incremental adaptedness is equivalent to classical adaptedness, but the two concepts differ for non-monotone information.
An integrable process \(X\) is a martingale with respect to \(\mathbb{F}\) if it is \(\mathbb{F}\)-adapted and
$$\begin{aligned} E[ X_{t} - X_{s} | \mathcal{F}_{s} ] =0 \end{aligned}$$
almost surely for each \(0 \leq s \leq t\). Focusing on infinitesimally short intervals, in particular we have
$$\begin{aligned} \lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{{t}}}} E[ X_{t_{k+1}} - X_{t_{k}} | \mathcal{F}_{t_{k}}] =0 \end{aligned}$$
(2.1)
a.s. for each \(t \geq 0\), where \((\mathcal{T}_{n}^{{{t}}})_{n\in \mathbb{N}}\) is any increasing sequence (i.e., \(\mathcal{T}_{n}^{{{t}}} \subseteq \mathcal{T}_{n+1}^{{{t}}}\) for all \(n\)) of partitions \(0=t_{0} < t_{1} < \cdots < t_{n} =t \) of the interval \([0, {t]} \) such that the mesh size \(|\mathcal{T}_{n}^{{{t}}}| := \max \{ t_{k}-t_{k-1}: k=1, \ldots , n \}\) tends to 0 for \(n \rightarrow \infty \). In the literature, we can find for (2.1) the intuitive notation \(E[ \mathrm{d}X_{t} | \mathcal{F}_{t-} ] = 0\).
Definition 2.2
Let \(X\) be incrementally adapted to \(\mathbb{G}\). We say that \(X\) is an infinitesimal forward/backward martingale (IF/IB-martingale) with respect to \(\mathbb{G}\) if for each \(t \geq 0\) and any increasing sequence \((\mathcal{T}_{n}^{{{t}}})_{n\in \mathbb{N}}\) of partitions of \([0, {t]} \) with \(\lim _{n \rightarrow \infty } |\mathcal{T}_{n}^{{{t}}}|=0\), we have
$$\begin{aligned} \lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{{t}}}} E[ X_{t_{k+1}} - X_{t_{k}} | \mathcal{G}_{t_{k}}] = 0 \end{aligned}$$
or
$$\begin{aligned} \lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{{t}}}} E[ X_{t_{k+1}} - X_{t_{k}} | \mathcal{G}_{t_{k+1}}] =0, \end{aligned}$$
respectively, assuming that the expectations and limits exist.
Suppose now that \(X\) is an \(\mathbb{F}\)-adapted and integrable counting process. The so-called compensator \(C\) of \(X\) is the unique \(\mathbb{F}\)-predictable finite-variation process starting from \(C_{0}=0\) such that \(X-C\) is an \(\mathbb{F}\)-martingale. In particular, \(C\) satisfies the equation
$$\begin{aligned} C_{t} = \lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{ {t}}}} E[ X_{t_{k+1}} - X_{t_{k}} | \mathcal{F}_{t_{k}}] \end{aligned}$$
(2.2)
almost surely for each \(t \geq 0\); see Karr [16, Theorem 2.17]. The intuitive notation for (2.2) is \(E[ \mathrm{d}X_{t} | \mathcal{F}_{t-} ] = \mathrm{d}C_{t}\). Furthermore, one can show that the \(\mathbb{F}\)-predictability of \(C\) implies that
$$\begin{aligned} \lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{{t}}}} E[ C_{t_{k+1}} - C_{t_{k}} | \mathcal{F}_{t_{k}}]=C_{t} -C_{0} \end{aligned}$$
(2.3)
almost surely for each \(t \geq 0\), intuitively written as \(E[ \mathrm{d}C_{t} | \mathcal{F}_{t-} ] = \mathrm{d}C_{t}\). The latter fact motivates the following definition.
Definition 2.3
We call \(X\) infinitesimally forward/backward predictable (IF/IB-predictable) with respect to \(\mathbb{G}\) if for each \(t \geq 0\) and any increasing sequence \((\mathcal{T}_{n}^{{{t}}})_{n\in \mathbb{N}}\) of partitions of \([0, {t]}\) with \(\lim _{n \rightarrow \infty } |\mathcal{T}_{n}^{{{t}}}|=0\), we almost surely have
$$\begin{aligned} \lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{{t}}}} E[ X_{t_{k+1}} - X_{t_{k}} | \mathcal{G}_{t_{k}}] = X_{t}-X_{0} \end{aligned}$$
or
$$\begin{aligned} \lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{{t}}}} E[ X_{t_{k+1}} - X_{t_{k}} | \mathcal{G}_{t_{k+1}}] =X_{t}-X_{0}, \end{aligned}$$
respectively, assuming that the expectations and limits exist.
By combining (2.2) and (2.3), we obtain
$$\begin{aligned} \lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{{t}}}} E[ (X_{t_{k+1}}- C_{t_{k+1}}) - (X_{t_{k}}-C_{t_{k}}) | \mathcal{F}_{t_{k}}] =0 \end{aligned}$$
almost surely for each \(t \geq 0\), which means that the process \(X-C\) is an IF-martingale with respect to \(\mathbb{F}\) according to Definition 2.2.
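As a sanity check, consider the classical Poisson case: if \(X=N\) is a Poisson process with intensity \(\lambda \) and \(\mathbb{F}\) its natural filtration, then by the independence and stationarity of increments,
$$\begin{aligned} \sum _{\mathcal{T}_{n}^{{{t}}}} E[ N_{t_{k+1}} - N_{t_{k}} | \mathcal{F}_{t_{k}}] = \sum _{\mathcal{T}_{n}^{{{t}}}} \lambda (t_{k+1}-t_{k}) = \lambda t \end{aligned}$$
for every partition, so (2.2) recovers the familiar compensator \(C_{t}=\lambda t\), and \(N_{t}-\lambda t\) is in particular an IF-martingale with respect to \(\mathbb{F}\).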
Definition 2.4
Let \(X\) be incrementally adapted to \(\mathbb{G}\). We say that a process \(C\) is an infinitesimal forward/backward compensator of \(X\) (IF/IB-compensator) with respect to \(\mathbb{G}\) if \(C\) is incrementally adapted to \(\mathbb{G}\) and IF/IB-predictable and \(X-C\) is an IF/IB-martingale with respect to \(\mathbb{G}\), respectively.
Let \(\mathcal{G}_{[t_{k},t_{k+1}]}:=\sigma ( \mathcal{G}_{u}, u \in [t_{k},t_{k+1}])\) for any \(t_{k+1} \geq t_{k} \geq 0\) and \(\xi \in L^{1}(\Omega , \mathcal{A},P)\). Then the construction
$$\begin{aligned} E[ \xi | \mathcal{G}_{t}]-E[ \xi | \mathcal{G}_{0}] &= \lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{{t}}}} (E[ \xi | \mathcal{G}_{t_{k+1}}] -E[ \xi | \mathcal{G}_{t_{k}}] ) \\ & = \lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{ {t}}}} (E[ \xi | \mathcal{G}_{[t_{k},t_{k+1}]}] -E[ \xi | \mathcal{G}_{t_{k}}] ) \\ & \phantom{=:} - \lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{{t}}}} (E[ \xi | \mathcal{G}_{[t_{k},t_{k+1}]}] -E[ \xi | \mathcal{G}_{t_{k+1}}] ) \end{aligned}$$
may yield a decomposition of the process \(t\mapsto E[ \xi | \mathcal{G}_{t}]\) into the difference of an IF-martingale and an IB-martingale, since by the tower property (note that \(\mathcal{G}_{t_{k}}\) and \(\mathcal{G}_{t_{k+1}}\) are both contained in \(\mathcal{G}_{[t_{k},t_{k+1}]}\)),
$$\begin{aligned} E\big[ E[ \xi | \mathcal{G}_{[t_{k},t_{k+1}]}] -E[ \xi | \mathcal{G}_{t_{k}}] \big| \mathcal{G}_{t_{k}}\big] &=0, \\ E\big[ E[ \xi | \mathcal{G}_{[t_{k},t_{k+1}]}] -E[ \xi | \mathcal{G}_{t_{k+1}}] \big| \mathcal{G}_{t_{k+1}}\big] &=0. \end{aligned}$$
Definition 2.5
We say that \(E[ \xi | \mathcal{G}_{t}]-E[ \xi | \mathcal{G}_{0}] = F_{t} - B_{t}\), \(t \geq 0\), is an infinitesimal martingale representation if \(F\) is an IF-martingale and \(B\) is an IB-martingale with respect to \(\mathbb{G}\).
Suppose now that \(X\) describes a discounted claim process in a finance or insurance application. Then we are typically interested in the process \(t \mapsto E[ X_{t} | \mathcal{F}_{t}]\), which is not necessarily well defined as a process, since each conditional expectation is only unique up to a nullset. If \(X\) is a càdlàg process whose suprema on compacts have finite expectations, then there exists a unique càdlàg process \(X^{\mathbb{F}}\), the so-called optional projection of \(X\) with respect to \(\mathbb{F}\), such that
$$\begin{aligned} X^{\mathbb{F}}_{t} = E[ X_{t} | \mathcal{F}_{t}] \end{aligned}$$
almost surely for each \(t \geq 0\). We say here that a process is unique if it is unique up to evanescence. We now expand the concept of optional projections to non-monotone information.
Definition 2.6
Let \(X\) be an integrable càdlàg process. If there exists a unique càdlàg process \(X^{\mathbb{G}}\) such that
$$\begin{aligned} X^{\mathbb{G}}_{t} = E[ X_{t} | \mathcal{G}_{t}] \end{aligned}$$
almost surely for each \(t \geq 0\), we call \(X^{\mathbb{G}}\) the optional projection of \(X\) with respect to \(\mathbb{G}\).
The optional projection \(X^{\mathbb{G}}\) can be decomposed as
$$\begin{aligned} E[ X_{t} | \mathcal{G}_{t}]-E[ X_{0} | \mathcal{G}_{0}] &= \lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{{t}}}} (E[ X_{t_{k+1}} | \mathcal{G}_{t_{k+1}}] -E[ X_{t_{k}} | \mathcal{G}_{t_{k}}] ) \\ & = \lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{ {t}}}} (E[ X_{t_{k}} | \mathcal{G}_{[t_{k},t_{k+1}]}] -E[ X_{t_{k}} | \mathcal{G}_{t_{k}}] ) \\ & \phantom{=:} - \lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{{t}}}} (E[ X_{t_{k}} | \mathcal{G}_{[t_{k},t_{k+1}]}] -E[ X_{t_{k}} | \mathcal{G}_{t_{k+1}}] ) \\ & \phantom{=:} + \lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{{t}}}} E[ X_{t_{k+1}}- X_{t_{k}} | \mathcal{G}_{t_{k+1}}], \end{aligned}$$
which may represent a sum of an IF-martingale, an IB-martingale and an IB-compensator with respect to \(\mathbb{G}\). By switching the roles of \(t_{k}\) and \(t_{k+1}\), we can obtain a similar decomposition where the IB-compensator is replaced by an IF-compensator.
Definition 2.7
We call \(E[ X_{t} | \mathcal{G}_{t}]-E[ X_{0} | \mathcal{G}_{0}] = F_{t} - B_{t} + C_{t}\), \(t \geq 0\), an infinitesimal representation if \(F\) is an IF-martingale, \(B\) is an IB-martingale and \(C\) is either an IB-compensator or an IF-compensator with respect to \(\mathbb{G}\).
As mentioned at the beginning of this section, we have so far simply assumed that all the limits discussed here indeed exist. In the next section, we focus on a marked point process framework, since this not only guarantees the existence of the limits, but also allows us to calculate them explicitly.

3 Jump process framework

In the literature, we can find different approaches for defining a jump process framework. One way is to start with a marked point process \((\tau _{i},\zeta _{i})_{i \in \mathbb{N}}\) on \((\Omega , \mathcal{A}, P)\) with some measurable mark space \((E, \mathcal{E})\), i.e.,
– the mappings \(\tau _{i}:(\Omega , \mathcal{A}) \rightarrow ([0,\infty ], \mathcal{B}([0,\infty ]))\), \(i \in \mathbb{N}\), are random times,
– the mappings \(\zeta _{i}: (\Omega , \mathcal{A}) \rightarrow (E, \mathcal{E})\), \(i \in \mathbb{N}\), are random marks.
In contrast to the point process literature, we do not assume here that the random times \((\tau _{i})_{i\in \mathbb{N}}\) are increasing or ordered in any specific way. This gives us useful modelling flexibility; see also the comments at the end of this section. Let \(E\) be a Polish space and \(\mathcal{E}:=\mathcal{B}(E)\) its Borel sigma-algebra. For the sake of simple notation, we moreover assume that \(\Omega \) is a Polish space and \(\mathcal{A}\) its Borel sigma-algebra. The latter assumption can actually be dropped by observing that all random activity in our model comes from a marked point process that can be embedded into a Polish space. We interpret each \(\zeta _{i}\) as a piece of information that can be observed from time \(\tau _{i}\) on. As motivated in the introduction, we additionally assume that the information pieces \(\zeta _{i}\) are possibly deleted after a finite holding time. Therefore, we expand the marked point process \((\tau _{i},\zeta _{i})_{i \in \mathbb{N}}\) to \((\tau _{i}, \zeta _{i}, \sigma _{i} )_{i \in \mathbb{N}}\), where
– the mappings \(\sigma _{i}:(\Omega , \mathcal{A}) \rightarrow ([0,\infty ], \mathcal{B}([0,\infty ]))\), \(i \in \mathbb{N}\), are random times such that \(\tau _{i} \leq \sigma _{i} \).
We interpret \(\sigma _{i}\) as the deletion time of information piece \(\zeta _{i}\). Note that the random times \((\sigma _{i})_{i \in \mathbb{N}}\) are in general not ordered. For the sake of a more compact notation, we work in the following with the equivalent sequence \((T_{i},Z_{i})_{i \in \mathbb{N}}\) defined as
$$\begin{aligned} T_{2i-1} := \tau _{i}, \quad T_{2i} := \sigma _{i}, \quad Z_{2i-1}:= \zeta _{i}, \quad Z_{2i}:= \zeta _{i}, \qquad i \in \mathbb{N}, \end{aligned}$$
i.e., the random times \(T_{2i-1}\) with odd indices refer to innovations and the consecutive random times \(T_{2i}\) with even indices are the corresponding deletion times. We generally assume that
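The following minimal code sketch of this reindexing (with hypothetical container and field names) may help to fix ideas:

```python
from dataclasses import dataclass
from math import inf

# Hypothetical container for one piece of information: arrival time tau,
# mark zeta, and deletion time sigma with tau <= sigma (sigma = inf means
# that the piece is never deleted).
@dataclass
class InfoPiece:
    tau: float
    zeta: str
    sigma: float = inf

def interleave(pieces):
    """Build the sequence (T_i, Z_i): odd indices carry the innovations
    (T_{2i-1}, Z_{2i-1}) = (tau_i, zeta_i), and the consecutive even
    indices carry the matching deletions (T_{2i}, Z_{2i}) = (sigma_i, zeta_i)."""
    seq = []
    for p in pieces:
        seq.append((p.tau, p.zeta))    # T_{2i-1}, Z_{2i-1}
        seq.append((p.sigma, p.zeta))  # T_{2i},   Z_{2i}
    return seq

print(interleave([InfoPiece(0.5, "blood pressure", 2.0),
                  InfoPiece(1.0, "location")]))
```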
$$\begin{aligned} E\bigg[ \sum _{i =1}^{\infty } \mathbf{1}_{\{T_{i} \leq t\}} \bigg] < \infty , \qquad t \geq 0, \end{aligned}$$
(3.1)
which will ensure the existence of (infinitesimal) compensators. Condition (3.1) implies that almost surely, there are at most finitely many random times on bounded intervals. Moreover, we assume that
$$\begin{aligned} T_{2i-1}(\omega ) < T_{2i}(\omega ) \qquad \text{for $\omega \in \{T_{2i} < \infty \}, i \in \mathbb{N}$,} \end{aligned}$$
i.e., a new piece of information is not instantaneously deleted but is available for at least a short amount of time. Based on the sequence \((T_{i},Z_{i})_{i \in \mathbb{N}}\), we generate random counting measures \(\mu _{I}\) via
$$\begin{aligned} \mu _{I}([0,t]\times B ) &:= \mathbf{1}_{\{t \geq T_{i}=T_{j} \,:\, i, j \in I\}\cap \{T_{i}\neq T_{j} \,:\, i\in I, j \not \in I\}} \mathbf{1}_{\{Z_{I} \in B\}} \end{aligned}$$
for \(t \geq 0\), \(B \in \mathcal{E}_{I}\) and finite subsets \(I \subseteq \mathbb{N}\), where
$$\begin{aligned} \mathcal{E}_{I}:=\mathcal{B}(E_{I}), \qquad E_{I}:={E}^{|I|}, \qquad Z_{I}:=(Z_{i})_{i \in I}. \end{aligned}$$
If the different random times \(T_{i}\) never coincide, then we just need to consider the counting measures \(\mu _{\{i\}}\), \(i \in \mathbb{N}\), which describe separate arrivals of the random times \(T_{i}\) and their marks \(Z_{i}\). But if random times can occur simultaneously, then we need the full scale of counting measures \(\mu _{I}\), \(I \subseteq \mathbb{N}\), \(\vert I \vert <\infty \), which cover all kinds of separate and joint events. For each \(I \), the mapping \(\mu _{I}\), determined by its values on the sets \([0,t] \times B \), forms a random counting measure on \(([0, \infty ) \times E_{I} , \mathcal{B}([0, \infty ) \times E_{I} ) )\), i.e.,
– for any fixed \(A \in \mathcal{B}([0,\infty ) \times E_{I})\), the mapping \(\omega \mapsto \mu _{I} ( A)(\omega )\) is measurable from \((\Omega , \mathcal{A})\) to \((\overline{\mathbb{N}}_{0}, \mathcal{B}(\overline{\mathbb{N}}_{0}))\) with \(\overline{\mathbb{N}}_{0}:= \mathbb{N}_{0} \cup \{\infty \}\),
– for almost each \(\omega \in \Omega \), the mapping \(A \mapsto \mu _{I} (A)(\omega )\) is a locally finite measure on \(([0, \infty ) \times E_{I} , \mathcal{B}([0, \infty ) \times E_{I}) )\).
The observable information at time \(t \geq 0\) is given by the complete filtration
$$\begin{aligned} \mathcal{F}_{t} &:= \sigma ( \{ T_{2i-1} \leq s < T_{2i}\} \cap \{ Z_{2i} \in B\} : s \in [0,t], B\in \mathcal{E}, i \in \mathbb{N} ) \vee \mathcal{Z}, \end{aligned}$$
which lets the random times \(T_{i}\), \(i \in \mathbb{N}\), be stopping times. Here the symbol ∨ denotes the sigma-algebra that is generated by the union of the involved sets. The admissible information at time \(t\geq 0 \) is given by the family of sub-sigma-algebras
$$\begin{aligned} \mathcal{G}_{t}=\sigma ( \{ T_{2i-1} \leq t < T_{2i}\} \cap \{ Z_{2i} \in B\}: B\in \mathcal{E}, i \in \mathbb{N} ) \vee \mathcal{Z}. \end{aligned}$$
The admissible information immediately before time \(t> 0 \) is given by the family of sub-sigma-algebras
$$\begin{aligned} \mathcal{G}^{-}_{t} =\sigma ( \{ T_{2i-1} < t \leq T_{2i}\} \cap \{ Z_{2i} \in B\}: B\in \mathcal{E}, i \in \mathbb{N} ) \vee \mathcal{Z}. \end{aligned}$$
Analogously to filtrations, we write \(\mathbb{G}= (\mathcal{G}_{t})_{t \geq 0}\) and \(\mathbb{G}^{-}= (\mathcal{G}^{-}_{t})_{t \geq 0}\).
Remark 3.1
Recall that \(T_{2i-1} \leq T_{2i}\), \(i \in \mathbb{N}\), is the only kind of order that we assume to hold between the random times \(T_{i}\), resulting from the natural assumption \(\tau _{i} \leq \sigma _{i} \), \(i \in \mathbb{N}\). This fact is relevant when an ordering unintentionally reveals additional information. For example, if we have a model where the innovation times \(\tau _{i}\) are ordered, i.e., \(T_{1} < T_{3} < T_{5} < \cdots \), then \(\mathcal{G}_{t}\) reveals among other things the exact number of deletions that have happened until \(t\). This can be an unwanted feature if the number of past deletions is itself a non-admissible piece of information. In many situations, we can avoid such an implied information effect by ordering the pairs \((T_{2i-1},T_{2i})\) in a non-informative way.
Remark 3.2
Without loss of generality, suppose here that \(0 \not \in E \). We define an infinite-dimensional process \((\Gamma _{t})_{t \geq 0} \) by
$$\begin{aligned} \Gamma _{t} := (Z_{2i} \mathbf{1}_{\{T_{2i-1} \leq t < T_{2i} \}})_{i \in \mathbb{N}}. \end{aligned}$$
Then, using the fact that the paths of \((\Gamma _{t})_{t \geq 0} \) are componentwise càdlàg, the information \(\mathcal{G}_{t} \) and \(\mathcal{G}^{-}_{t}\) can be alternatively represented as
$$\begin{aligned} \mathcal{G}_{t} = \sigma ( \Gamma _{t}) \vee \mathcal{Z}, \qquad t \geq 0, \\ \mathcal{G}^{-}_{t} = \sigma ( \Gamma _{t-}) \vee \mathcal{Z}, \qquad t > 0, \end{aligned}$$
where the left limit \(\Gamma _{t-}\) is defined componentwise. However, \(\mathcal{G}^{-}_{t}\) is usually different from the left set-limit \(\mathcal{G}_{t-}\), and the latter set-limit might not even exist. For example, consider a model with only two jumps \(T_{1}\), \(T_{2}\) in finite time and a trivial mark \(Z_{2}=\mathrm{{const}}\). It is not difficult to choose \(T_{1}, T_{2}\) in such a way that the events \(\{T_{1} \leq s < T_{2}\}\) and \(\{T_{1} \leq u < T_{2}\}\) are different for each \(s < u \leq t\). In this case, \(\liminf _{s \uparrow t} \mathcal{G}_{s}= \bigvee _{ s < t} \bigcap _{ s < u < t} \mathcal{G}_{u} \) equals the completed trivial sigma-algebra, whereas \(\limsup _{s \uparrow t} \mathcal{G}_{s}= \bigcap _{ s < t} \bigvee _{ s < u < t} \mathcal{G}_{u} \) equals \(\mathcal{G}^{-}_{t}\).
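To make the state \(\Gamma _{t}\) concrete, here is a minimal, self-contained sketch for two hypothetical pieces of information (the first visible on \([0.5, 2.0)\), the second visible from time 1.0 on and never deleted):

```python
from math import inf

# The interleaved sequence (T_i, Z_i) for the two hypothetical pieces.
seq = [(0.5, "blood pressure"), (2.0, "blood pressure"),
       (1.0, "location"), (inf, "location")]

def gamma(t):
    """Gamma_t from Remark 3.2: component i carries the mark Z_{2i} on
    {T_{2i-1} <= t < T_{2i}} and is blank (None in place of 0) otherwise."""
    return tuple(mark if arrival <= t < deletion else None
                 for (arrival, mark), (deletion, _) in zip(seq[0::2], seq[1::2]))

for t in [0.0, 0.7, 1.5, 2.5]:
    print(t, gamma(t))   # sigma(Gamma_t) generates the admissible information G_t
```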

4 Optional projections

In this section, we study existence and path properties of optional projections. Note that this and all following sections generally assume that we are in the marked point process framework of Sect. 3. Recall also our specific definition of \(\mathcal{G}^{-}_{t}\).
Theorem 4.1
Suppose that \(X=(X_{t})_{t\geq 0}\) is a càdlàg process that satisfies
$$\begin{aligned} E\bigg[ \sup _{0 \leq s \leq t} | X_{s}| \bigg] < \infty , \qquad t \geq 0. \end{aligned}$$
(4.1)
Then the optional projection \(X^{\mathbb{G}}\) according to Definition 2.6 exists, and we have \(X^{\mathbb{G}}_{t-}=E[ X_{t-} | \mathcal{G}^{-}_{t}]\) almost surely for each \(t > 0\). If \(X\) has integrable variation on compacts, then \(X^{\mathbb{G}}\) has paths of finite variation on compacts.
It might be surprising here that \(X^{\mathbb{G}}\) is always a càdlàg process, but note that condition (3.1) rules out clusters of jump times in our marked point process framework. Before we turn to the proof of Theorem 4.1, we develop several auxiliary results. Let
$$\begin{aligned} \mathcal{N}&:= \{ M \subseteq \mathbb{N} : \vert M \vert < \infty \}, \\ \mathcal{M}&:= \{ M \subseteq \{1,3,5,\ldots \} : \vert M \vert < \infty \} \end{aligned}$$
be all finite subsets of the natural or the odd natural numbers, and define
$$\begin{aligned} R_{I}&:= \big(Q_{I},(Z_{i})_{i \in I}\big), \qquad I \in \mathcal{N}, \end{aligned}$$
where \(Q_{I}:=\sup \{t \geq 0 : \mu _{I}([0,t] \times E_{I})=0\} \).
Since \(\Omega \) is a Polish space and \(\mathcal{A}\) its Borel sigma-algebra, there exist regular conditional probabilities \(P[ \, \cdot \, | Z_{M}]\) and \(P[ \, \cdot \, | Z_{M},R_{I}]\) on \((\Omega , \mathcal{A})\) for each \(M \in \mathcal{M}\) and \(I \in \mathcal{N}\). As the sets \(\mathcal{M}\) and \(\mathcal{N}\) are countable, all these conditional probabilities are simultaneously unique up to a joint exception nullset. In this paper, the notation
$$\begin{aligned} P_{M,R_{I}}[\,\cdot \, ] = P[\,\cdot \,| Z_{M},R_{I}] \end{aligned}$$
refers to an arbitrary but fixed regular version of the conditional probability on the right-hand side, and for any integrable random variable \(Z\), we set
$$\begin{aligned} E_{M,R_{I}}[Z] := \int Z\,\mathrm{d}P_{M,R_{I}}, \end{aligned}$$
i.e., \(E_{M,R_{I}}[Z ]\) is the specific version of the conditional expectation \(E[ Z | Z_{M},R_{I}]\) that we obtain by integrating \(Z\) with respect to the specific regular versions that we picked for \(P[\,\cdot \,| Z_{M},R_{I}] \). In case of \(I= \emptyset \), we also use the short forms \(P_{M}=P_{M,R_{\emptyset }}\) and \(E_{M}=E_{M,R_{\emptyset }}\) since \(P_{M,R_{\emptyset }}\) is a version of \(P[\,\cdot \, | Z_{M}]\).
Moreover, defining \(I-1:= \{i-1: i \in I\}\), the mappings
$$\begin{aligned} P_{M,R_{I}=r}[\,\cdot \,] &:= P[\,\cdot \,| Z_{M_{I}}=z, R_{I}=r] \big|_{z=Z_{M_{I}}}, \\ M_{I}& := M \setminus \big(I \cup (I-1)\big), \end{aligned}$$
(4.2)
refer to arbitrary but fixed regular versions of the factorised conditional probabilities on the right-hand side, and for any integrable random variable \(Z\), we define
$$\begin{aligned} E_{M,R_{I}=r}[Z] &:= \int Z\, \mathrm{d}P_{M,R_{I}=r}. \end{aligned}$$
By reducing \(M\) down to \(M_{I}\), we leave out exactly those random variables in \(Z_{M}\) that are already covered by \(R_{I}\). Note that the mapping \(P_{M,R_{I}=r}[\,\cdot \, ]|_{r=R_{I}}\) equals \(P_{M,R_{I}} [\,\cdot \,]\). For \(M \in \mathcal{M}\) and \(t \geq 0\), we define the \(\mathcal{G}_{t}\)-measurable sets
$$\begin{aligned} A^{M}_{t}&:= \bigcap _{i \in M} \{ T_{i} \leq t < T_{i+1}\} \cap \bigcap _{i \not \in M} (\Omega \setminus \{ T_{i} \leq t < T_{i+1}\}) \end{aligned}$$
and corresponding \(\mathbb{G}\)-adapted stochastic processes \(\mathbb{I}^{M}=(\mathbb{I}_{t}^{M})_{t\geq 0}\) via
$$\begin{aligned} \mathbb{I}_{t}^{M} &:= \mathbf{1}_{A^{M}_{t}}, \qquad t \geq 0. \end{aligned}$$
Because of the assumption (3.1), the paths of \(\mathbb{I}^{M}\) have finitely many jumps on compacts only, so that they have left and right limits. Moreover, they are right-continuous by construction, so that the processes \(\mathbb{I}^{M}\) are càdlàg. The left limits can be represented as \(\mathbb{I}_{t-}^{M} = \mathbf{1}_{A^{M}_{t-}}\), where
$$\begin{aligned} A^{M}_{t-}&:= \bigcap _{i \in M} \{ T_{i} < t \leq T_{i+1}\} \cap \bigcap _{i \not \in M} (\Omega \setminus \{ T_{i} < t \leq T_{i+1}\}). \end{aligned}$$
Proposition 4.2
For any integrable random variable \(\xi \) and any sets \(M \in \mathcal{M}\) and \(I \in \mathcal{N}\), we almost surely have
$$\begin{aligned} \mathbb{I}^{M}_{t} E[ \xi | \mathcal{G}_{t} \vee \sigma (R_{I}) ]&= \mathbb{I}^{M}_{t} \frac{ E_{M,R_{I}} [ \xi \mathbb{I}^{M}_{t} ]}{E_{M,R_{I}} [ \mathbb{I}^{M}_{t} ]} , \\ \mathbb{I}^{M}_{t-} E[ \xi | \mathcal{G}^{-}_{t} \vee \sigma (R_{I}) ]&= \mathbb{I}^{M}_{t-} \frac{ E_{M,R_{I}}[ \xi \mathbb{I}^{M}_{t-} ]}{E_{M,R_{I}} [ \mathbb{I}^{M}_{t-} ]}, \end{aligned}$$
(4.3)
with the convention that \(0/0:=0\).
Note here that \(\sigma (R_{I})\) equals the trivial sigma-algebra if \(I = \emptyset \). Whenever we have \(E_{M,R_{I}} [ \mathbb{I}^{M}_{t}]=0\) and \(E_{M,R_{I}} [ \mathbb{I}^{M}_{t-} ]=0\), we necessarily have \(E_{M,R_{I}} [ \xi \mathbb{I}^{M}_{t} ] =0\) and \(E_{M,R_{I}} [ \xi \mathbb{I}^{M}_{t-}]=0\), respectively, so that the right-hand sides of (4.3) are well defined.
Proof of Proposition 4.2
The left-hand sides of (4.3) almost surely equal the conditional expectations that one obtains when the families \(\mathbb{G}\) and \(\mathbb{G}^{-}\) of sigma-algebras are replaced by their non-completed versions. Therefore, in the remaining proof, we ignore the extension by \(\mathcal{Z}\) in the definitions of \(\mathbb{G}\) and \(\mathbb{G}^{-}\).
For each \(H \in \sigma (Z_{M}) \), there exists a \(G \in \mathcal{G}_{t} \) such that \(H \cap A_{t}^{M} = G \cap A_{t}^{M} \), and for each \(G \in \mathcal{G}_{t} \), there exists an \(H \in \sigma (Z_{M}) \) such that \(G \cap A_{t}^{M} = H \cap A_{t}^{M}\). Thus
$$\begin{aligned} \big(\sigma (Z_{M}) \vee \sigma (R_{I})\big) \cap A_{t}^{M} = \big( \mathcal{G}_{t} \vee \sigma (R_{I})\big)\cap A_{t}^{M} \subseteq \mathcal{G}_{t} \vee \sigma (R_{I}), \qquad t \geq 0. \end{aligned}$$
(4.4)
This implies that the random variable \(\mathbb{I}^{M}_{t} \frac{E_{M,R_{I}} [ \xi \mathbb{I}^{M}_{t} ]}{E_{M,R_{I}}[ \mathbb{I}^{M}_{t}]}\) is \((\mathcal{G}_{t}\vee \sigma (R_{I}))\)-measurable, and for each \(G \in \mathcal{G}_{t} \vee \sigma (R_{I})\), we obtain
$$\begin{aligned} E\bigg[ \mathbf{1}_{G} \mathbb{I}^{M}_{t} \frac{ E_{M,R_{I}} [ \xi \mathbb{I}^{M}_{t} ]}{E_{M,R_{I}} [ \mathbb{I}^{M}_{t} ]} \bigg] & = E\bigg[ E_{M,R_{I}} \Big[ \mathbf{1}_{H} \mathbb{I}^{M}_{t} \frac{ E_{M,R_{I}} [ \xi \mathbb{I}^{M}_{t} ]}{E_{M,R_{I}} [ \mathbb{I}^{M}_{t} ]} \Big] \bigg] \\ & = E\big[ \mathbf{1}_{H} E_{M,R_{I}} [ \xi \mathbb{I}^{M}_{t} ] \big] \\ & = E[ \mathbf{1}_{G} \mathbb{I}^{M}_{t} \xi ] \\ & = E\big[ \mathbf{1}_{G} \mathbb{I}^{M}_{t} E[ \xi | \mathcal{G}_{t} \vee \sigma (R_{I}) ] \big], \end{aligned}$$
i.e., the first equation in (4.3) holds. By replacing (4.4) by
$$\begin{aligned} \big(\sigma (Z_{M}) \vee \sigma (R_{I})\big) \cap A_{t-}^{M} = \big( \mathcal{G}^{-}_{t} \vee \sigma (R_{I})\big)\cap A_{t-}^{M} \subseteq \mathcal{G}^{-}_{t} \vee \sigma (R_{I}), \quad t \geq 0, \end{aligned}$$
(4.5)
we can analogously show that the second equation in (4.3) holds. □
Lemma 4.3
For each \(M \in \mathcal{M}, I \in \mathcal{N}, r \in [0,\infty) \times E_{I} \) and each càdlàg process \(X\) that satisfies condition (4.1), the stochastic processes
$$\begin{aligned} & t \mapsto E_{M,R_{I}}[X_{t} \mathbb{I}_{t}^{M} ], \\ & t \mapsto E_{M,R_{I}=r}[X_{t} \mathbb{I}_{t}^{M} ] \end{aligned}$$
have càdlàg paths. Moreover, their left limits can be obtained by replacing \(X_{t} \mathbb{I}_{t}^{M}\) by \(X_{t-} \mathbb{I}_{t-}^{M}\).
Proof
Apply the dominated convergence theorem. □
Proposition 4.4
With the conventions \(0/0:=0\) and \(1/0:=\infty \), we have for each \(M\in \mathcal{M} \) almost surely that
$$\begin{aligned} \sup _{t \in [0,\infty )} \frac{\mathbb{I}_{t}^{M}}{E_{M} [ \mathbb{I}^{M}_{t} ]} < \infty . \end{aligned}$$
Proof
Let \(\tau \) and \(\sigma \) be any two nonnegative random times such that \(\tau \leq \sigma \). At first we are going to show that
$$\begin{aligned} Z:= \sup _{t \in [0,\infty )} \frac{\mathbf{1}_{\{\tau \leq t < \sigma \}}}{E[ \mathbf{1}_{\{\tau \leq t < \sigma \}} ]} < \infty \end{aligned}$$
(4.6)
almost surely. For each \((t,s) \in [0,\infty )^{2}\), we define the unbounded rectangles
$$ A_{(t,s)}:= \{ (t',s') : t'\leq t, s< s'\} $$
and the countably generated set
$$\begin{aligned} B&:= \bigcup _{(t,s) \in \beta } A_{(t,s)}, \qquad \beta := \{(t,s) \in \mathbb{Q}_{+}^{2} : t< s, P[(\tau ,\sigma ) \in A_{(t,s)}]=0 \} . \end{aligned}$$
Let \(\partial B\) and \(B^{\circ }\) be the boundary and the interior of \(B\). Any line of the form
$$ L_{x}:= \{ (x,x)+ \lambda \, (1,-1): \lambda \in \mathbb{R}\} $$
intersects \(\partial B\) at most at one point, since for any two points \(y,y' \in L_{x}\) with \(y\neq y'\), we either have \(y \in A_{y'}^{\circ }\) or \(y' \in A_{y}^{\circ }\). Therefore the set
$$\begin{aligned} \gamma :=\bigg(\bigcup _{x \in \mathbb{Q}_{+}} L_{x}\bigg) \cap \partial B \cap \{ (t,s) \in [0,\infty )^{2} : P[(\tau ,\sigma ) \in A_{(t,s)}]=0 \} \end{aligned}$$
is countable, and
$$\begin{aligned} C&:= \bigcup _{(t,s) \in \gamma } A_{(t,s)} \end{aligned}$$
is countably generated. The sets \(N_{B}=\{(\tau ,\sigma ) \in B\}\) and \(N_{C}=\{(\tau ,\sigma ) \in C\}\) are both nullsets since they equal countable unions of nullsets.
Suppose now that \(Z(\omega ) = \infty \) for an arbitrary but fixed \(\omega \in \Omega \). We necessarily have \(\tau (\omega ) < \sigma (\omega )\). Since \(t \mapsto E[ \mathbf{1}_{\{\tau \leq t < \sigma \}} ]\) is a càdlàg function, at least one of the following statements is true:
1. \(E[ \mathbf{1}_{\{\tau \leq u < \sigma \}} ]=0\) for some \(u \in (\tau (\omega ),\sigma (\omega ))\).
2. \(E[ \mathbf{1}_{\{\tau < u \leq \sigma \}} ]=0\) for some \(u \in (\tau (\omega ),\sigma (\omega ))\).
3. \(E[ \mathbf{1}_{\{\tau \leq u < \sigma \}} ]=0\) for \(u = \tau (\omega )\).
4. \(E[ \mathbf{1}_{\{\tau < u \leq \sigma \}} ]=0\) for \(u = \sigma (\omega )\).
In case (1), we have \(P[(\tau ,\sigma )\in A_{(u,u)}] =E[ \mathbf{1}_{\{\tau \leq u < \sigma \}} ] =0\) and \((\tau (\omega ),\sigma (\omega ))\) is in \(A_{(u,u)}^{\circ }\), so that we can conclude that \(\omega \in N_{B}\).
In case (2), we can argue analogously to case (1), but need to replace the definition of \(A_{(t,s)}\) by \(\{ (t',s') : t'< t, s\leq s'\}\) and define a corresponding nullset \(N_{B}'\). We obtain that \(\omega \in N_{B}'\).
In case (3), we have \(P[(\tau ,\sigma )\in A_{(\tau (\omega ),\tau (\omega ))}]=E[ \mathbf{1}_{\{\tau \leq \tau (\omega ) < \sigma \}} ] =0\) as well as \((\tau (\omega ),\sigma (\omega )) \in A_{(\tau (\omega ),\tau ( \omega ))} \subseteq B \cup \partial B\). If \((\tau (\omega ),\sigma (\omega )) \in B\), then \(\omega \in N_{B}\). If \((\tau (\omega ),\sigma (\omega )) \in \partial B \), then the whole line segment
$$ \big\{ \theta \big(\tau (\omega ), \tau (\omega )\big) + (1-\theta ) \big(\tau (\omega ), \sigma (\omega )\big) : \theta \in (0,1)\big\} $$
is in \(\partial B\) because of \((\tau (\omega ), \tau (\omega )) \in \partial B\) and the rectangular shape of the sets \(A_{(t,s)}\). On this line, there is at least one intersection with \(C\), so that we can conclude that \(\omega \in N_{C}\).
In case (4), we can argue similarly to case (3), but need to replace the definition of \(A_{(t,s)}\) by \(\{ (t',s') : t'< t, s\leq s'\}\) and define corresponding nullsets \(N_{B}'\) and \(N_{C}'\).
All in all, we have \(P[Z=\infty ] \leq P[N_{B} \cup N_{C}\cup N_{B}' \cup N_{C}']=0\), i.e., (4.6) holds.
Now let \(M \in \mathcal{M}\) be arbitrary but fixed and choose \(\tau \) and \(\sigma \) as the random times where \(\mathbb{I}_{t}^{M}\) jumps from zero to one and jumps back to zero, respectively. Suppose that \(P_{Z_{M}=z}\) is a regular version of \(P[\, \cdot \,| Z_{M} = z]\) with corresponding expectation \(E_{Z_{M}=z}[\,\cdot \,]\). Then from (4.6), we can conclude that
$$\begin{aligned} P_{Z_{M}=z}\bigg[ \sup _{t \in [0,\infty )} \frac{\mathbb{I}_{t}^{M}}{E_{Z_{M}=z} [ \mathbb{I}_{t}^{M} ]} = \infty \bigg] =0 \end{aligned}$$
for each choice of \(z\). Replacing both occurrences of \(z\) by \(Z_{M}\), where we use the insertion rule for conditional expectations for the inner \(z\), and taking the unconditional expectation on both sides of the equation, we end up with
$$\begin{aligned} P\bigg[ \sup _{t \in [0,\infty )} \frac{\mathbb{I}_{t}^{M}}{E_{Z_{M}} [ \mathbb{I}_{t}^{M} ]} = \infty \bigg] =0. \end{aligned}$$
 □
Proof of Theorem 4.1
Motivated by Proposition 4.2, we set
$$\begin{aligned} Y_{t}:= \sum _{M \in \mathcal{M}} \mathbb{I}^{M}_{t} \frac{ E_{M} [ X_{t} \mathbb{I}^{M}_{t} ]}{E_{M} [ \mathbb{I}^{M}_{t} ]}, \qquad t \geq 0, \end{aligned}$$
since this process almost surely equals \(X^{\mathbb{G}}_{t}\) for each \(t \geq 0\). Note that there are at most a countable number of conditional expectations involved; so the corresponding regular versions are simultaneously unique up to evanescence. For each compact interval \([0,t]\) and almost each \(\omega \in \Omega \), the set
$$\begin{aligned} \mathcal{M}_{t}(\omega ):= \{ M \in \mathcal{M}: \mathbb{I}^{M}_{u}( \omega ) =1 \textrm{ for at least one }u \in [0,t]\} \end{aligned}$$
(4.7)
is finite due to the assumption (3.1). If \(E_{M} [ \mathbb{I}^{M}_{t} ](\omega )\neq 0\), Lemma 4.3 yields that
$$\begin{aligned} \lim _{\varepsilon \downarrow 0} Y_{t+\varepsilon }(\omega ) &= \sum _{M \in \mathcal{M}_{t+1}(\omega )} \lim _{\varepsilon \downarrow 0} \mathbb{I}^{M}_{t+\varepsilon }(\omega ) \frac{ E_{M} [ X_{t+\varepsilon } \mathbb{I}^{M}_{t+\varepsilon } ](\omega )}{E_{M} [ \mathbb{I}^{M}_{t+\varepsilon } ](\omega )} \\ &=\sum _{M \in \mathcal{M}_{t+1}(\omega )} \mathbb{I}^{M}_{t}(\omega ) \frac{ E_{M} [ X_{t} \mathbb{I}^{M}_{t} ](\omega )}{E_{M} [ \mathbb{I}^{M}_{t} ](\omega )} \\ & = Y_{t}(\omega ). \end{aligned}$$
(4.8)
If \(E_{M} [ \mathbb{I}^{M}_{t} ](\omega )=0\), Proposition 4.4 implies that \(\mathbb{I}^{M}_{t}=0\) for almost all \(\omega \in \Omega \), where the exception nullset does not depend on the choice of \(t\). So (4.8) is almost surely true on \([0,\infty )\), since \(\mathbb{I}^{M}_{t}(\omega )=0\) implies that there is a whole interval \([t,t+\epsilon _{\omega })\) where the right-continuous jump path \(s \mapsto \mathbb{I}^{M}_{s}(\omega )\) is constantly zero. Similarly, we can show that the process \(Y\) almost surely has left limits, which are of the form
$$\begin{aligned} Y_{t-}= \sum _{M \in \mathcal{M}} \mathbb{I}^{M}_{t-} \frac{ E_{M} [ X_{t-} \mathbb{I}^{M}_{t-} ]}{E_{M} [ \mathbb{I}^{M}_{t-} ]}, \qquad t > 0. \end{aligned}$$
According to Proposition 4.2, \(Y_{t-}\) almost surely equals \(E[ X_{t-} | \mathcal{G}^{-}_{t}]\). As càdlàg processes are uniquely defined by their values on countable dense subsets of the time line, our choice for \(X^{\mathbb{G}}\) is almost surely the only possible modification of \((E[X_{t} | \mathcal{G}_{t}])_{t\geq 0}\).
The variation of \(Y\) on \([0,t]\) is bounded by
$$\begin{aligned} &\sum _{M \in \mathcal{M}_{t}} \sup _{\mathcal{T}^{t}} \sum _{\mathcal{T}^{t}} \bigg| \mathbb{I}^{M}_{t_{k+1}} \frac{ E_{M} [ X_{t_{k+1}} \mathbb{I}^{M}_{t_{k+1}} ]}{E_{M} [ \mathbb{I}^{M}_{t_{k+1}} ]} -\mathbb{I}^{M}_{t_{k}} \frac{ E_{M} [ X_{t_{k}} \mathbb{I}^{M}_{t_{k}} ]}{E_{M} [ \mathbb{I}^{M}_{t_{k}} ]} \bigg| \\ &\leq \sum _{M \in \mathcal{M}_{t}} \sup _{\mathcal{T}^{t}} \sum _{\mathcal{T}^{t}} \bigg( \bigg| \frac{ \mathbb{I}^{M}_{t_{k+1}}}{E_{M} [ \mathbb{I}^{M}_{t_{k+1}} ]}- \frac{ \mathbb{I}^{M}_{t_{k}}}{E_{M} [ \mathbb{I}^{M}_{t_{k}} ]} \bigg| E_{M} [ |X_{t_{k+1}}| \mathbb{I}^{M}_{t_{k+1}}] \\ & \phantom{\leq \sum _{M \in \mathcal{M}_{t}} \sup _{\mathcal{T}^{t}} \sum _{\mathcal{T}^{t}} \bigg(} + \frac{ \mathbb{I}^{M}_{t_{k}}}{E_{M} [ \mathbb{I}^{M}_{t_{k}} ]}E_{M} [ |X_{t_{k+1}} \mathbb{I}^{M}_{t_{k+1}}- X_{t_{k}} \mathbb{I}^{M}_{t_{k}}| ] \bigg), \end{aligned}$$
where \(\mathcal{T}^{t}\) is any partition of \([0,t]\). As \(C_{M}(\omega ):= \sup _{t} \mathbb{I}^{M}_{t}(\omega )/E_{M} [ \mathbb{I}^{M}_{t} ](\omega )\) is finite for almost each \(\omega \in \Omega \) (see Proposition 4.4) and the variation of \(L_{M}(s):= E_{M}[\mathbb{I}^{M}_{s}] \) is bounded by 2, the latter bound is dominated by
$$\begin{aligned} & \sum _{M \in \mathcal{M}_{t}} \bigg(\Big( 2 C_{M} + \int _{[0,t]} \mathbb{I}^{M}_{s} \frac{1}{L_{M}(s)L_{M}(s-)} \mathrm{d}|L_{M}|(s) \Big) E_{M} \Big[\sup _{0\leq s \leq t} |X_{s}|\Big] \\ & \hphantom{\sum _{M \in \mathcal{M}_{t}} \bigg(}{} + C_{M} E_{M} \bigg[2\, \sup _{0\leq s \leq t} |X_{s}| + \int _{[0,t]} \mathbb{I}^{M}_{s} \mathrm{d}|X|_{s} \bigg] \bigg) \\ & \leq \sum _{M \in \mathcal{M}_{t}} \bigg( ( 2 C_{M} + 2\,t \, C^{2}_{M} ) E_{M} \bigg[ \int _{[0,t]} \mathrm{d}|X|_{s} \bigg] + 3 \, C_{M} E_{M} \bigg[\int _{[0,t]} \mathrm{d}|X|_{s} \bigg]\bigg), \end{aligned}$$
which is finite for almost each \(\omega \in \Omega \) since \(X\) has integrable variation on compacts and \(\mathcal{M}_{t}(\omega )\) is finite. □
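To make the construction of \(Y\) in the proof concrete, here is a minimal Monte Carlo sketch for a toy model with our own illustrative choices: a single information piece with \(\tau \sim \) Exp(1), deletion lag 1, a binary mark \(Z \in \{1,2\}\) (recall from Remark 3.2 that 0 should not be a mark) and \(\xi = Z + \tau \). The visibility pattern is \(M \in \{\emptyset , \{1\}\}\), and the conditional expectations \(E_{M}\) reduce to empirical averages over paths with a matching mark.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400_000
tau = rng.exponential(1.0, n)          # T_1: innovation time
sig = tau + 1.0                        # T_2: deletion time (fixed lag 1)
Z = rng.choice([1.0, 2.0], size=n)     # binary mark, Z_1 = Z_2
xi = Z + tau                           # integrable target xi

def Y(t, tau_obs, z_obs):
    """Y_t = sum_M I^M_t E_M[xi I^M_t] / E_M[I^M_t] for the realised pattern:
    M = {1} while the piece is visible (then condition on the mark), and
    M = empty otherwise (nothing left to condition on)."""
    if tau_obs <= t < tau_obs + 1.0:                 # piece visible at t
        sel = (Z == z_obs) & (tau <= t) & (t < sig)  # term for M = {1}
    else:                                            # nothing visible at t
        sel = (tau > t) | (sig <= t)                 # term for M = empty
    return xi[sel].mean()

# one realised scenario omega: tau = 0.8 with mark Z = 2, deletion at 1.8
for t in [0.5, 1.0, 1.5, 2.0, 3.0]:
    print(t, round(Y(t, 0.8, 2.0), 4))
print("E[xi] =", round(xi.mean(), 4))
```

Note that \(Y_{t}\) differs from \(E[\xi ]\) even after the deletion time: the mere pattern ‘nothing visible at \(t\)’ is itself informative, since it mixes ‘not yet arrived’ with ‘already deleted’.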

5 Infinitesimal compensators

In this section, we derive infinitesimal compensators for a large class of incrementally adapted jump processes, in particular for the counting processes \(t \mapsto \mu _{I}([0,t]\times B)\) for any \(I \in \mathcal{N}\) and \(B\in \mathcal{E}_{I}\). Under the conventions \(0/0:=0\) and (4.2), let
$$\begin{aligned} &\nu _{I} ([0,t] \times B ) := \sum _{M \in \mathcal{M}} \int _{(0,t] \times B} \mathbb{I}^{M}_{u-} \frac{ P_{M,R_{I}=(u,e)}[A_{u-}^{M}]}{P_{M}[A_{u-}^{M}] } P_{M}^{R_{I}} \big(\mathrm{d}(u,e) \big), \\ & \rho _{I}([0,t] \times B ) := \sum _{M \in \mathcal{M}} \int _{(0,t] \times B} \mathbb{I}^{M}_{u} \frac{ P_{M,R_{I}=(u,e)}[A_{u}^{M}]}{P_{M}[A_{u}^{M}] } P_{M}^{R_{I}} \big(\mathrm{d}(u,e) \big) \end{aligned}$$
for \(t \geq 0\), \(B \in \mathcal{E}_{I}\), \(I \in \mathcal{N}\).
Proposition 5.1
For each \(I \in \mathcal{N}\), the mappings \(\nu _{I}\) and \(\rho _{I}\) can be uniquely extended to random measures on \(([0, \infty ) \times E_{I}, \mathcal{B}([0, \infty ) \times E_{I}))\).
The proof of the proposition is given below. In the following, we use the notation
$$\begin{aligned} F \bullet \kappa \,\big((0,t]\times B\big) := \int _{(0,t ]\times B} F(u,e) \,\kappa \big(\mathrm{d}(u,e)\big) \end{aligned}$$
for random measures \(\kappa \) and integrable random functions \(F\).
Theorem 5.2
Suppose that the mappings \((t,e,\omega ) \mapsto F_{I}(t,e)(\omega )\), \(I \in \mathcal{N}\), are jointly measurable and satisfy
$$\begin{aligned} E\bigg[ \int _{(0,t ]\times E_{I} } |F_{I}(u,e)| \, \mu _{I} \big( \mathrm{d}(u,e)\big) \bigg] < \infty . \end{aligned}$$
(5.1)
If \(F_{I}(t,e)\) is \(\mathcal{G}_{t}^{-}\)-measurable for each \((t,e)\), then for each \(B \in \mathcal{E}_{I}\), the jump process
$$\begin{aligned} &t \mapsto F_{I}\bullet \mu _{I}\big((0,t]\times B\big) \end{aligned}$$
has the IF-compensator
$$\begin{aligned} &t \mapsto F_{I}\bullet \nu _{I}\big((0,t]\times B\big). \end{aligned}$$
If \(F_{I}(t,e)\) is \(\mathcal{G}_{t}\)-measurable for each \((t,e)\), then for each \(B \in \mathcal{E}_{I}\), the jump process
$$\begin{aligned} &t \mapsto F_{I}\bullet \mu _{I}\big((0,t]\times B\big) \end{aligned}$$
has the IB-compensator
$$\begin{aligned} &t \mapsto F_{I}\bullet \rho _{I}\big((0,t]\times B\big). \end{aligned}$$
By choosing \(F_{I}\equiv 1\), Theorem 5.2 yields in particular that \(\nu _{I}\) is the IF-compensator and \(\rho _{I}\) is the IB-compensator of the counting process \(\mu _{I}\). In intuitive notation, we write this fact as
$$\begin{aligned} E[ \mu _{I}( \mathrm{d}t \times B) | \mathcal{G}^{-}_{t}] &= \nu _{I}( \mathrm{d}t \times B), \qquad B \in \mathcal{E}_{I}, \\ E[ \mu _{I}( \mathrm{d}t \times B) | \mathcal{G}_{t}] &= \rho _{I}( \mathrm{d}t \times B), \qquad B \in \mathcal{E}_{I} . \end{aligned}$$
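For a toy special case, the formula for \(\nu _{I}\) can be checked by simulation. Take a single information piece with \(\tau \sim \) Exp(\(\lambda \)), deterministic deletion lag \(c\) (so \(\sigma = \tau + c\)) and a trivial mark; these choices are ours, purely for illustration. A short calculation from the definition of \(\nu _{\{1\}}\) under these assumptions gives, on the event that nothing is visible just before \(u\), the IF-intensity \(f(u)/(P[\tau \geq u] + P[\tau < u-c])\), where \(f\) is the density of \(\tau \); the denominator mixes ‘not yet arrived’ with ‘already deleted’.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam, c = 2_000_000, 1.0, 1.0
tau = rng.exponential(1.0 / lam, n)   # innovation time T_1
sig = tau + c                         # deletion time T_2 = T_1 + c

dt = 0.02
for u in [0.5, 1.5, 2.5]:
    invisible = (tau >= u) | (sig < u)      # nothing visible just before u
    arrive = (tau >= u) & (tau < u + dt)    # innovation occurs in [u, u + dt)
    emp = arrive[invisible].mean() / dt     # empirical IF-intensity
    theo = (lam * np.exp(-lam * u)
            / (np.exp(-lam * u) + max(0.0, 1.0 - np.exp(-lam * (u - c)))))
    print(f"u={u}: empirical {emp:.3f} vs formula {theo:.3f}")
```

For \(u \leq c\), no deletion can have happened yet and the formula collapses to the classical hazard rate \(\lambda \); for \(u > c\), the IF-intensity drops below \(\lambda \) because invisibility no longer implies that the innovation is still outstanding.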
The proofs of Proposition 5.1 and Theorem 5.2 now follow in several steps.
Lemma 5.3
For each \(M \in \mathcal{M}\) and \(t \geq 0\), we almost surely have
$$\begin{aligned} \begin{aligned} \sum _{I \in \mathcal{N}} \int _{[0,t]\times E_{I}} P_{M}^{R_{I}} \big(\mathrm{d}(u,e) \big) < \infty . \end{aligned} \end{aligned}$$
(5.2)
Proof
For each \(t \geq 0\) and \(M \in \mathcal{M}\), (3.1) implies that
$$\begin{aligned} E_{M} \bigg[ \sum _{j=1}^{\infty } \mathbf{1}_{\{T_{j} \leq t\}} \bigg] < \infty \end{aligned}$$
almost surely. Therefore, applying the monotone convergence theorem yields
$$\begin{aligned} \infty > E_{M} \bigg[ \sum _{i=1}^{\infty } \mathbf{1}_{\{T_{i} \leq t \}} \bigg] &\geq E_{M} \bigg[ \sum _{I \in \mathcal{N}} \mu _{I}([0,t] \times E_{I}) \bigg] \\ & = \sum _{I \in \mathcal{N}} E_{M} \bigg[ \mu _{I}([0,t]\times E_{I}) \bigg] \\ &= \sum _{I \in \mathcal{N}} \int _{[0,t]\times E_{I} } P_{M}^{R_{I}} \big(\mathrm{d}(u,e) \big) \end{aligned}$$
almost surely for each \(M \in \mathcal{M}\) and \(t \geq 0\). □
Proof of Proposition 5.1
The processes \((\mathbb{I}^{M}_{u-})\) and \((P_{M}[A^{M}_{u-}])\) are jointly measurable with respect to \((u,\omega )\) since they are left-continuous in \(u\); see Lemma 4.3. The mapping \((u,e,\omega ) \mapsto P_{M,R_{I}=(u,e)}[A_{u-}^{M}]\) is jointly measurable with respect to \((u,e,\omega )\) since \(P_{M,R_{I}=(u,e)}[A_{s-}^{M}]\) is left-continuous in \(s\) and jointly measurable with respect to \((u,e,\omega )\in [0,\infty )\times E_{I}\times \Omega \); see Lemma 4.3. Thus for any fixed \(A \in \mathcal{B}([0, \infty ) \times E_{I})\), the mapping \(\omega \mapsto \nu _{I} ( A)(\omega )\) is measurable. Moreover, for almost each \(\omega \in \Omega \), the mapping \(A \mapsto \nu _{I} (A)(\omega )\) is a locally finite measure on \(([0, \infty ) \times E_{I} , \mathcal{B}([0, \infty ) \times E_{I} ))\). This can be seen by combining Proposition 4.4 and (5.2) and using the fact that \(P_{M, R_{I}=(u,e)}[A_{u-}^{M}]\) is bounded by 1. Hence \(\nu _{I}\) has a unique extension to a random measure on \(([0, \infty ) \times E_{I} , \mathcal{B}([0, \infty ) \times E_{I}))\). Similar conclusions hold for the mappings \(\rho _{I}\). □
Proposition 5.4
Suppose that the mappings \((t,e,\omega ) \mapsto F_{I}(t,e)(\omega )\), \(I \in \mathcal{N}\), are jointly measurable and satisfy (5.1). For each \(t >0\) and \(B \in \mathcal{E}_{I} \), we almost surely have
$$\begin{aligned} \lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{{t}}}} E\big[F_{I}\bullet \mu _{I}\big((t_{k},t_{k+1} ]\times B\big) \big| \mathcal{G}_{t_{k}} \big]&=G_{I}\bullet \nu _{I}\big((0,t]\times B\big) , \\ \lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{{t}}}} E\big[F_{I}\bullet \mu _{I}\big((t_{k},t_{k+1} ]\times B\big) \big| \mathcal{G}_{t_{k+1}} \big]&=H_{I} \bullet \rho _{I}\big((0,t]\times B\big) \end{aligned}$$
for any increasing sequence \((\mathcal{T}_{n}^{{{t}}})_{n\in \mathbb{N}}\) of partitions of \([0, {t]}\) with \(\lim _{n \rightarrow \infty } |\mathcal{T}_{n}^{{{t}}}|=0\) and for \(G_{I}\) and \(H_{I}\) defined by
$$\begin{aligned} G_{I}(u,e) &:= \sum _{M \in \mathcal{M}} \mathbb{I}^{M}_{u-} \frac{E_{M,R_{I}=(u,e)}[ \mathbb{I}^{M}_{u-} F_{I}(u,e) ] }{E_{M,R_{I}=(u,e)}[ \mathbb{I}^{M}_{u-} ]}, \\ H_{I}(u,e) &:= \sum _{M \in \mathcal{M}} \mathbb{I}^{M}_{u} \frac{E_{M,R_{I}=(u,e)}[ \mathbb{I}^{M}_{u} F_{I}(u,e) ] }{E_{M,R_{I}=(u,e)}[ \mathbb{I}^{M}_{u} ]}. \end{aligned}$$
Proof
By decomposing \(F\) into a positive part \(F^{+}\) and a negative part \(F^{-}\), it suffices to prove the first equation for the nonnegative mappings \(F^{+}\) and \(F^{-}\) only. Therefore, without loss of generality, we suppose from now on that \(F\) is nonnegative.
Let \(\mathcal{M}_{t}=\mathcal{M}_{t}(\omega )\) be defined as in (4.7). In the following, we use the notation \(J_{k}:=(t_{k},t_{k+1}]\). Since \(\sum _{M \in \mathcal{M}_{t}} \mathbb{I}^{M}_{t_{k}}=1\) for any \(t_{k}\), applying (4.3), the monotone convergence theorem and the law of total probability gives
$$\begin{aligned} & E[ F_{I}\bullet \mu _{I}(J_{k} \times B ) | \mathcal{G}_{t_{k}} ] \\ & = \sum _{M \in \mathcal{M}_{t}} \mathbb{I}^{M}_{t_{k}} \frac{ E_{M} [ \mathbb{I}^{M}_{t_{k}} F_{I}\bullet \mu _{I}(J_{k} \times B ) ]}{E_{M} [ \mathbb{I}^{M}_{t_{k}} ]} \\ & = \sum _{M \in \mathcal{M}_{t}} \int _{J_{k}\times E_{I} } \mathbb{I}^{M}_{t_{k}} \frac{E_{M,R_{I}=(u,e) } [ \mathbb{I}^{M}_{t_{k}} F_{I}\bullet \mu _{I}(J_{k} \times B ) ] }{P_{M}[A_{t_{k}}^{M}] } P^{R_{I}}_{M}\big(\mathrm{d}(u,e) \big) \end{aligned}$$
for almost each \(\omega \in \Omega \). For \(u \in (0,t]\), let \(J^{u}\) be the unique interval \((t_{k},t_{k+1}]\) from \(\mathcal{T}_{n}^{{{t}}}\) such that \(t_{k}< u \leq t_{k+1}\), and let \(t(u)\) be the left end point of \(J^{u}\). Then we can write
$$\begin{aligned} &\sum _{\mathcal{T}_{n}^{{{t}}}} E[ F_{I}\bullet \mu _{I}(J_{k} \times B) | \mathcal{G}_{t_{k}} ] \\ &= \sum _{M \in \mathcal{M}_{t}} \int _{(0,t] \times E_{I}} \mathbb{I}_{t(u)}^{M} \frac{ E_{M,R_{I}=(u,e)} [\mathbb{I}_{t(u)}^{M} F_{I}\bullet \mu _{I}(J^{u} \times B ) ] }{P_{M}[A_{t(u)}^{M}] } P_{M}^{R_{I}}\big(\mathrm{d}(u,e) \big). \end{aligned}$$
Taking the limit for \(n\rightarrow \infty \), we obtain for almost each \(\omega \in \Omega \) that
$$\begin{aligned} \begin{aligned} &\lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{{t}}}} E[ F_{I}\bullet \mu _{I}(J_{k} \times B) | \mathcal{G}_{t_{k}} ] \\ &= \sum _{M \in \mathcal{M}_{t}} \int _{(0,t] \times E_{I}} \lim _{n \rightarrow \infty } \mathbb{I}_{t(u)}^{M} \frac{ E_{M,R_{I}=(u,e)} [\mathbb{I}_{t(u)}^{M} F_{I}\bullet \mu _{I}(J^{u} \times B ) ] }{P_{M}[A_{t(u)}^{M}] } P_{M}^{R_{I}}\big(\mathrm{d}(u,e) \big), \end{aligned} \end{aligned}$$
(5.3)
using that \(\mathcal{M}_{t}\) is finite for almost each \(\omega \) and applying the monotone and dominated convergence theorems. Note that Proposition 4.4, the assumption (5.1) and \(0 \leq \mathbb{I}_{t(u)}^{M} F_{I}\bullet \mu _{I}(J^{u} \times B ) \leq F_{I}\bullet \mu _{I}((0,t] \times B )\) ensure the existence of an integrable majorant. For \(n \rightarrow \infty \), we have \(t(u) \uparrow u\) and \(J^{u} \downarrow \{u\}\); so the dominated convergence theorem implies that
$$\begin{aligned} &\lim _{n \rightarrow \infty } E_{M,R_{I}=(u,e)} [ \mathbb{I}_{t(u)}^{M} F_{I}\bullet \mu _{I}(J^{u} \times B ) ] \\ &= E_{M,R_{I}=(u,e)} [ \mathbb{I}_{u-}^{M} \mathbf{1}_{B}(e) F_{I}(u,e) \mu _{I}(\{u\} \times \{e\} ) ] \\ &= \mathbf{1}_{B}(e) E_{M,R_{I}=(u,e)} [ \mathbb{I}_{u-}^{M} F_{I}(u,e) ]. \end{aligned}$$
In summary, the right-hand side of (5.3) equals the integral \(G_{I}\bullet \nu _{I}((0,t ]\times B)\), and we can conclude that the first equation in Proposition 5.4 holds. The proof of the second is similar. □
Proposition 5.5
Under the assumptions of Proposition 5.4, for each \(t \geq 0\) and \(B \in \mathcal{E}_{I} \), we almost surely have
$$\begin{aligned} \lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{{t}}}} E\big[ G_{I}\bullet \nu _{I}\big((t_{k},t_{k+1}] \times B\big) \big| \mathcal{G}_{t_{k}} \big]& = G_{I} \bullet \nu _{I}\big((0,t ]\times B\big), \\ \lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{{t}}}} E\big[ H_{I}\bullet \rho _{I}\big((t_{k},t_{k+1}] \times B\big) \big| \mathcal{G}_{t_{k+1}} \big] &= H_{I} \bullet \rho _{I}\big((0,t ]\times B\big) \end{aligned}$$
for any increasing sequence \((\mathcal{T}_{n}^{{{t}}})_{n\in \mathbb{N}}\) of partitions of \([0, {t]}\) with \(\lim _{n \rightarrow \infty } | \mathcal{T}_{n}^{{{t}}}|=0\).
Proof
By decomposing \(G\) into a positive part \(G^{+}\) and a negative part \(G^{-}\), it suffices to prove the first equation for the nonnegative mappings \(G^{+}\) and \(G^{-}\) only. Therefore, without loss of generality, we suppose from now on that \(G\) is nonnegative.
From the definition of \(\nu _{I}\) and the monotone convergence theorem, we get
$$\begin{aligned} & E\big[ G_{I}\bullet \nu _{I}\big((0,t]\times E_{I} \big) \big] \\ & = \sum _{M \in \mathcal{M}} E\bigg[ E_{M}\Big[ \int _{(0,t] \times E_{I}} G_{I}(u,e) \mathbb{I}^{M}_{u-} \frac{ P_{M,R_{I}=(u,e)}[A_{u-}^{M}]}{P_{M}[A_{u-}^{M}] } P^{R_{I}}_{M} \big(\mathrm{d}(u,e) \big)\Big] \bigg]. \end{aligned}$$
From Proposition 4.2, we know that \(G_{I}(u,e)\) is \(\mathcal{G}^{-}_{u}\)-measurable for each \((u,e)\). This fact and (4.5) imply that
$$\begin{aligned} G_{I}(u,e) \mathbb{I}^{M}_{u-} P_{M,R_{I}=(u,e)}[A_{u-}^{M}] = \mathbb{I}^{M}_{u-} E_{M,R_{I}=(u,e)}[\mathbb{I}^{M}_{u-} G_{I}(u,e)] . \end{aligned}$$
Applying the Fubini–Tonelli theorem and the monotone convergence theorem gives
$$\begin{aligned} &E[ G_{I}\bullet \nu _{I}((0,t]\times E_{I} ) ] \\ &=\sum _{M \in \mathcal{M}} E\bigg[ \int _{(0,t] \times E_{I}} E_{M}[ \mathbb{I}^{M}_{u-} ] \frac{ E_{M,R_{I}=(u,e)}[\mathbb{I}_{Q_{I}-}^{M} G_{I}(Q_{I},Z_{I})]}{P_{M}[A_{u-}^{M}] } P_{M}^{R_{I}}\big(\mathrm{d}(u,e) \big) \bigg] \\ &=\sum _{M \in \mathcal{M}}E\Big[ E_{M}\big[ \mathbb{I}^{M}_{Q_{I}-} G_{I}(Q_{I},Z_{I}) \mu _{I}\big((0,t]\times E_{I}\big)\big] \Big] \\ &= E\bigg[ G_{I}\bullet \mu _{I}\big((0,t] \times E_{I} \big) \bigg]. \end{aligned}$$
The latter expectation is finite according to (5.1). Hence for each \(M \in \mathcal{M}\), we almost surely have
$$\begin{aligned} E_{M}\big[ G_{I}\bullet \nu _{I}\big((0,t]\times E_{I} \big) \big] &< \infty , \\ G_{I}\bullet \nu _{I}\big((0,t]\times E_{I} \big) &< \infty . \end{aligned}$$
(5.4)
Let \(J_{k}:=(t_{k},t_{k+1}]\). From the dominated convergence theorem, we obtain
$$\begin{aligned} \lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{{t}}}} \mathbb{I}^{M}_{t_{k}} G_{I}\bullet \nu _{I}(J_{k} \times B ) = ( \mathbb{I}^{M}_{\cdot -}G_{I})\bullet \nu _{I}\big((0,t] \times B \big) \end{aligned}$$
since \(\mathbb{I}^{M}\) is bounded by 1 and because of the second line in (5.4). By using the first line in (5.4), the dominated convergence theorem moreover yields
$$\begin{aligned} &\lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{{t}}}} E_{M}[\mathbb{I}^{M}_{t_{k}} G_{I}\bullet \nu _{I}(J_{k} \times B )] = E_{M}\big[(\mathbb{I}^{M}_{\cdot -}G_{I})\bullet \nu _{I}\big((0,t] \times B \big) \big] . \end{aligned}$$
By applying the Fubini–Tonelli theorem, we can show that the last term equals
$$\begin{aligned} \int _{(0,t] \times B} E_{M}[\mathbb{I}^{M}_{u-} G_{I}(u,e)] \frac{P_{M, R_{I}=(u,e)}[A_{u-}^{M}]}{P_{M}[A_{u-}^{M}]} P_{M}^{R_{I}} \big(\mathrm{d}(u,e) \big). \end{aligned}$$
Using Proposition 4.4 and the dominated convergence theorem, we therefore obtain
$$\begin{aligned} & \lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{{t}}}} \mathbb{I}^{M}_{t_{k}} \frac{ E_{M} [ \mathbb{I}^{M}_{t_{k}} G_{I}\bullet \nu _{I}(J_{k} \times B) ]}{E_{M} [ \mathbb{I}^{M}_{t_{k}} ]} \\ &= \int _{(0,t] \times B} \frac{ \mathbb{I}^{M}_{u-} }{E_{M}[\mathbb{I}^{M}_{u-}]} E_{M}[\mathbb{I}^{M}_{u-} G_{I}(u,e)] \frac{P_{M, R_{I}=(u,e)}[A_{u-}^{M}]}{P_{M}[A_{u-}^{M}]} P_{M}^{R_{I}} \big(\mathrm{d}(u,e) \big) \\ & = \int _{(0,t] \times B} \mathbb{I}^{M}_{u-} G_{I}(u,e) \frac{P_{M, R_{I}=(u,e)}[A_{u-}^{M}]}{P_{M}[A_{u-}^{M}]} P_{M}^{R_{I}} \big(\mathrm{d}(u,e) \big), \end{aligned}$$
where the second equality uses that (4.5) and the \(\mathcal{G}^{-}_{u}\)-measurability of \(G_{I}(u,e)\) allow us to pull \(G_{I}(u,e)\) out of the conditional expectation \(E_{M}[\mathbb{I}^{M}_{u-} G_{I}(u,e)]\). Summing the latter equation over \(M \in \mathcal{M}_{t}\) for \(\mathcal{M}_{t}\) as in (4.7) and applying Proposition 4.2, we obtain
$$\begin{aligned} \lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{{t}}}} E\big[ G_{I}\bullet \nu _{I}\big((t_{k},t_{k+1}] \times B\big) \big| \mathcal{G}_{t_{k}} \big] = G_{I} \bullet \nu _{I}\big((0,t ]\times B\big) \end{aligned}$$
almost surely. Thus we can conclude that the first equation in Proposition 5.5 holds. The proof of the second is similar. □
Proof of Theorem 5.2
Let \(G_{I}\) and \(H_{I}\) be defined as in Proposition 5.4. If \(F_{I}(t,e)\) is \(\mathcal{G}_{t}^{-}\)-measurable for each \((t,e)\), then Proposition 4.2 implies that \(G_{I}(t,e)=F_{I}(t,e)\) almost surely. Similarly, if \(F_{I}(t,e)\) is \(\mathcal{G}_{t}\)-measurable for each \((t,e)\), we have almost surely that \(H_{I}(t,e)=F_{I}(t,e)\). With this fact and by subtracting the limit equations in Propositions 5.4 and 5.5, we obtain that
$$\begin{aligned} &G_{I}\bullet \mu _{I} \big((0,t] \times B\big)-G_{I}\bullet \nu _{I} \big((0,t] \times B\big), \\ &H_{I}\bullet \mu _{I} \big((0,t] \times B\big)-H_{I}\bullet \rho _{I} \big((0,t] \times B\big) \end{aligned}$$
satisfy the defining limit equations for IF/IB-martingales. IF/IB-predictability of the compensators follows from Proposition 5.5. Note that all involved processes are incrementally adapted to \(\mathbb{G}\) because of (4.4) and (4.5). □

6 Infinitesimal martingale representations

Suppose that \(\lambda _{I}\) is the compensator of \(\mu _{I}\) with respect to \(\mathbb{F}\). For each integrable random variable \(\xi \), the classical martingale representation theorem yields that the martingale \(X_{t}=E[ \xi | \mathcal{F}_{t} ]\), \(t \geq 0\), can be represented as
$$\begin{aligned} X_{t} = X_{0} + \sum _{I \in \mathcal{N}} \int _{(0,t]\times E_{I}} F_{I}(u,e) \Big(\mu _{I}\big(\mathrm{d}(u,e)\big) - \lambda _{I}\big( \mathrm{d}(u,e)\big)\Big), \end{aligned}$$
(6.1)
where the mapping \((u,e,\omega ) \mapsto F_{I}(u,e)(\omega )\) is jointly measurable and the mapping \(\omega \mapsto F_{I}(u,e)(\omega )\) is \(\mathcal{F}_{u-}\)-measurable for each \((u,e)\); see e.g. Karr [16, Theorem 2.34]. We now extend this result to the non-monotone information \(\mathbb{G}\).
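For instance, if \(N\) is a standard Poisson process with intensity \(\lambda > 0\) generating \(\mathbb{F}\) (a single type of jump, no marks) and \(\xi = N_{T}\), then \(E[\xi | \mathcal{F}_{t}] = N_{t} + \lambda (T-t)\) for \(t \leq T\), and one checks directly that
$$\begin{aligned} N_{t} + \lambda (T-t) = \lambda T + \int _{(0,t]} 1 \, \big( N(\mathrm{d}u) - \lambda \,\mathrm{d}u \big), \end{aligned}$$
so (6.1) holds with the constant integrand \(F \equiv 1\) and initial value \(X_{0} = \lambda T\).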
Theorem 6.1
Let \(\xi \) be an integrable random variable. Then for each \(t \geq 0\), (1.2) holds almost surely for
$$\begin{aligned} G_{I}(s, u,e) &:= \sum _{M \in \mathcal{M}} \mathbb{I}^{M}_{s} \bigg( \frac{E_{M,R_{I}=(u,e)}[ \mathbb{I}^{M}_{s} \xi ] }{E_{M,R_{I}=(u,e)}[ \mathbb{I}^{M}_{s} ]} - \frac{E_{M}[ \mathbb{I}^{M}_{u-}\mathbb{I}^{M}_{u} \xi ] }{E_{M}[ \mathbb{I}^{M}_{u-}\mathbb{I}^{M}_{u}]} \bigg). \end{aligned}$$
(6.2)
For each \(I \in \mathcal{N}\) and \(e \in E_{I}\), the process \(u \mapsto G_{I}(u-,u,e)\) is \(\mathbb{G}^{-}\)-adapted and the process \(u \mapsto G_{I}(u,u,e)\) is \(\mathbb{G}\)-adapted.
If the mappings \(F_{I}(u,e)=G_{I}(u-,u,e)\) and \(F_{I}(u,e)=G_{I}(u,u,e)\) both satisfy the integrability condition in Theorem 5.2, then the representation (1.2) is a sum of IF-martingales and IB-martingales with respect to \(\mathbb{G}\). In the case of \(\mathbb{F}= \mathbb{G}\), we have \(\nu _{I} = \lambda _{I}\), \(\rho _{I} =\mu _{I}\) and (1.2) equals (6.1); so (1.2) is a generalisation of (6.1).
The proof of Theorem 6.1 is given below. Recall that our notation uses the convention (4.2).
Lemma 6.2
Let \(\xi \) be an integrable random variable. Then for each \(t \geq 0\), we have
$$\begin{aligned} &E_{M}[ \mathbb{I}_{t}^{M} \xi ] -E_{M}[ \mathbb{I}^{M}_{0} \xi ] \\ & = \sum _{I \in \mathcal{N}} \int _{(0,t] \times E_{I}} E_{M,R_{I}=(u,e)} [(\mathbb{I}^{M}_{u}-\mathbb{I}^{M}_{u-}) \xi ] P^{R_{I}}_{M}\big( \mathrm{d}(u,e) \big). \end{aligned}$$
(6.3)
Proof
As (6.3) is additive in \(\xi \), it suffices to show the equation for nonnegative and bounded random variables \(\xi \) only. The general case then follows from monotone convergence applied to both parts of the sequence \(\xi _{n} := (\xi \wedge n)^{+} - (-\xi \wedge n)^{+} \), \(n \in \mathbb{N}\). Therefore, in the remaining proof, we suppose that \(0 \leq \xi \leq C \) for a finite real number \(C\).
Let \(U_{t_{k}}(\omega ):= \sup \{ s \in (t_{k} ,\infty ) : T_{j}(\omega ) \not \in (t_{k},s), j \in \mathbb{N}\}\), i.e., \(U_{t_{k}}\) is the time of the first occurrence of a random time strictly after \(t_{k}\). Since \(1=\sum _{I \in \mathcal{N}} \mathbf{1}_{\{U_{t_{k}}=Q_{I}\}} \), we can conclude that
$$\begin{aligned} & E_{M}[ \mathbb{I}_{t}^{M} \xi ] -E_{M}[ \mathbb{I}^{M}_{0} \xi ] \\ &= \lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{ {t}}}} \sum _{I \in \mathcal{N}} E_{M} \big[ \mathbf{1}_{ \{U_{t_{k}}=Q_{I}\}}(\mathbb{I}^{M}_{t_{k+1}}-\mathbb{I}^{M}_{t_{k}})\, \xi \big] \\ & = \lim _{n \rightarrow \infty } \sum _{I \in \mathcal{N}} \sum _{ \mathcal{T}_{n}^{{{t}}}} \int _{B_{I,k}} E_{M, R_{I}=(u,e)} \big[\mathbf{1}_{\{U_{t_{k}}=Q_{I}\}}(\mathbb{I}^{M}_{t_{k+1}}-\mathbb{I}^{M}_{t_{k}})\, \xi \big] P^{R_{I}}_{M}\big(\mathrm{d}(u,e) \big), \end{aligned}$$
(6.4)
for \(B_{I,k}=(t_{k},t_{k+1}] \times E_{I}\), where we use the fact that
$$ \mathbf{1}_{\{U_{t_{k}}=Q_{I}\}}(\mathbb{I}^{M}_{t_{k+1}}-\mathbb{I}^{M}_{t_{k}}) = 0 $$
unless \(t_{k}< Q_{I} \leq t_{k+1}\). Because of (5.2) and
$$ \big| E_{M, R_{I}=(u,e)} \big[\mathbf{1}_{\{U_{t_{k}}=Q_{I}\}}( \mathbb{I}^{M}_{t_{k+1}}-\mathbb{I}^{M}_{t_{k}})\, \xi \big] \big| \leq 2 C, $$
we can apply the dominated convergence theorem on the last line in (6.4), which leads to (6.3). Note here that
$$ \mathbf{1}_{\{U_{t_{k}}=Q_{I}=u\}}(\mathbb{I}^{M}_{t_{k+1}}-\mathbb{I}^{M}_{t_{k}}) \longrightarrow \mathbf{1}_{\{Q_{I}=u\}} (\mathbb{I}_{u}^{M}- \mathbb{I}^{M}_{u-}) $$
for \(t_{k+1} \downarrow u\) and \(t_{k} \uparrow u\) implies that
$$\begin{aligned} &E_{M,R_{I}=(u,e)} \big[\mathbf{1}_{\{U_{t_{k}}=Q_{I}\}}(\mathbb{I}^{M}_{t_{k+1}}-\mathbb{I}^{M}_{t_{k}}) \,\xi \big] \longrightarrow E_{M,R_{I}=(u,e)} [(\mathbb{I}_{u}^{M}- \mathbb{I}^{M}_{u-}) \, \xi ]. \end{aligned}$$
 □
Proof of Theorem 6.1
Fix \(M \in \mathcal{M}\) and define \(M+1:= \{i+1 : i \in M\}\). If \(\mathbb{I}^{M}_{u-}=1\), then only random times from the index set
$$\begin{aligned} M':=(\{1,3,\ldots \} \setminus M) \cup (M+1) \end{aligned}$$
can occur at time \(u\). If \(\mathbb{I}^{M}_{u}=1\), then only random times from the index set
$$\begin{aligned} M'':=M \cup \big(\{2,4,\ldots \} \setminus (M+1)\big) \end{aligned}$$
can occur at time \(u\). Therefore (6.3) can be represented as
$$\begin{aligned} E_{M}[ \mathbb{I}_{t}^{M} \xi ] = K_{t} + L_{t}, \qquad t\geq 0, \end{aligned}$$
where
$$\begin{aligned} K_{t} &:= \sum _{ I \subseteq M'} \int _{(0,t] \times E_{I}} E_{M,R_{I}=(u,e)} [(\mathbb{I}^{M}_{u}-\mathbb{I}^{M}_{u-}) \xi ] P^{R_{I}}_{M}\big( \mathrm{d}(u,e) \big) +E_{M}[ \mathbb{I}_{0}^{M} \xi ] \\ &\phantom{:}=- \sum _{I \subseteq M'} \int _{(0,t] \times E_{I}} E_{M,R_{I}=(u,e)} [\mathbb{I}^{M}_{u-} \xi ] P^{R_{I}}_{M}\big(\mathrm{d}(u,e) \big) +E_{M}[ \mathbb{I}_{0}^{M} \xi ] , \\ L_{t} &:= \sum _{I \subseteq M''} \int _{(0,t] \times E_{I}} E_{M,R_{I}=(u,e)} [(\mathbb{I}^{M}_{u}-\mathbb{I}^{M}_{u-}) \xi ] P^{R_{I}}_{M}\big( \mathrm{d}(u,e) \big) \\ &\phantom{:}= \sum _{I \subseteq M''} \int _{(0,t] \times E_{I}} E_{M,R_{I}=(u,e)} [\mathbb{I}^{M}_{u} \xi ] P^{R_{I}}_{M}\big(\mathrm{d}(u,e) \big). \end{aligned}$$
Furthermore, using \(M'\cap M'' = \emptyset \), we can show that
$$\begin{aligned} E_{M}[ \mathbb{I}_{t}^{M} \xi ] -E_{M}[ \mathbb{I}_{t-}^{M} \mathbb{I}_{t}^{M} \xi ] &= \sum _{ I \subseteq M''} E_{M}[ \mathbb{I}_{t}^{M} \mathbf{1}_{\{Q_{I}=t\}} \xi ] \\ & = \sum _{ I \subseteq M''} \int _{\{t\} \times E_{I}} E_{M,R_{I}=(u,e)} [\mathbb{I}^{M}_{u} \xi ] P^{R_{I}}_{M}\big(\mathrm{d}(u,e) \big) \\ & = \Delta L_{t} \end{aligned}$$
for \(t>0\), which implies that \(E_{M}[ \mathbb{I}_{t-}^{M} \mathbb{I}_{t}^{M} \xi ]= K_{t}+L_{t-}\). Analogously we obtain
$$\begin{aligned} E_{M}[ \mathbb{I}_{t-}^{M} \xi ] -E_{M}[ \mathbb{I}_{t-}^{M} \mathbb{I}_{t}^{M} \xi ] = - \Delta K_{t}. \end{aligned}$$
In the specific case \(\xi =1\), we write \(k_{t}\) and \(\ell _{t}\) instead of \(K_{t}\) and \(L_{t}\). By applying integration by parts pathwise for each \(\omega \in \Omega \), we get that
$$\begin{aligned} & ( K_{t} +L_{t-}) \,\mathrm{d}\mathbb{I}_{t}^{M} + \mathbb{I}_{t-}^{M} \mathrm{d}K_{t} + \mathbb{I}_{t}^{M} \mathrm{d}L_{t} \\ &= \mathrm{d}( \mathbb{I}_{t}^{M} E_{M}[ \mathbb{I}_{t}^{M} \xi ] ) \\ &= \mathrm{d}\bigg( \frac{\mathbb{I}_{t}^{M} E_{M}[ \mathbb{I}_{t}^{M} \xi ]}{E_{M}[ \mathbb{I}_{t}^{M} ]}E_{M}[ \mathbb{I}_{t}^{M} ]\bigg) \\ & = (k_{t} + \ell _{t-})\, \mathrm{d}\bigg( \frac{\mathbb{I}_{t}^{M} E_{M}[ \mathbb{I}_{t}^{M} \xi ]}{E_{M}[ \mathbb{I}_{t}^{M} ]} \bigg) + \frac{\mathbb{I}_{t-}^{M} E_{M}[ \mathbb{I}_{t-}^{M} \xi ] }{E_{M}[ \mathbb{I}_{t-}^{M} ]} \mathrm{d}k_{t}+ \frac{\mathbb{I}_{t}^{M} E_{M}[ \mathbb{I}_{t}^{M} \xi ] }{E_{M}[ \mathbb{I}_{t}^{M} ]} \mathrm{d}\ell _{t} \\ & = (k_{t} + \ell _{t-})\, \mathrm{d}\bigg( \frac{\mathbb{I}_{t}^{M} E_{M}[ \mathbb{I}_{t}^{M} \xi ]}{E_{M}[ \mathbb{I}_{t}^{M} ]} \bigg) + \mathbb{I}_{t-}^{M} \frac{ K_{t-}+L_{t-}}{k_{t-}+\ell _{t-}} \mathrm{d}k_{t}+ \mathbb{I}_{t}^{M} \frac{ K_{t}+L_{t}}{k_{t}+\ell _{t}} \mathrm{d}\ell _{t}. \end{aligned}$$
The equation formed by the first and last line can be rewritten as
$$\begin{aligned} & ( K_{t} +L_{t-}) \,\mathrm{d}\mathbb{I}_{t}^{M} + \frac{k_{t} +\ell _{t-}}{k_{t-} +\ell _{t-}} \mathbb{I}_{t-}^{M} \mathrm{d}K_{t} + \frac{k_{t} +\ell _{t-}}{k_{t} +\ell _{t}} \mathbb{I}_{t}^{M} \mathrm{d}L_{t} \\ & = (k_{t} + \ell _{t-})\, \mathrm{d}\bigg( \frac{\mathbb{I}_{t}^{M} E_{M}[ \mathbb{I}_{t}^{M} \xi ]}{E_{M}[ \mathbb{I}_{t}^{M} ]} \bigg) + \mathbb{I}_{t-}^{M} \frac{ K_{t}+L_{t-}}{k_{t-}+\ell _{t-}} \mathrm{d}k_{t}+ \mathbb{I}_{t}^{M} \frac{ K_{t}+L_{t-}}{k_{t}+\ell _{t}} \mathrm{d}\ell _{t}, \end{aligned}$$
because of
$$\begin{aligned} &\bigg(\frac{k_{t} +\ell _{t-}}{k_{t-} +\ell _{t-}} -1 \bigg) \mathbb{I}_{t-}^{M} \Delta K_{t} + \bigg( \frac{k_{t} +\ell _{t-}}{k_{t} +\ell _{t}} -1 \bigg) \mathbb{I}_{t}^{M} \Delta L_{t} \\ &=\mathbb{I}_{t-}^{M} \bigg( \frac{ K_{t}+L_{t-}}{k_{t-}+\ell _{t-}}- \frac{ K_{t-}+L_{t-}}{k_{t-}+\ell _{t-}}\bigg) \Delta k_{t}+ \mathbb{I}_{t}^{M} \bigg( \frac{ K_{t}+L_{t-}}{k_{t}+\ell _{t}}- \frac{ K_{t}+L_{t}}{k_{t}+\ell _{t}}\bigg) \Delta \ell _{t}. \end{aligned}$$
With the convention \(0/0:=0\) and by using the Radon–Nikodým theorem, we may multiply by \((k_{t}+\ell _{t-})^{-1}\) on both sides, which leads to
$$\begin{aligned} & \frac{E_{M}[ \mathbb{I}_{t-}^{M} \mathbb{I}_{t}^{M} \xi ] }{E_{M}[ \mathbb{I}_{t-}^{M} \mathbb{I}_{t}^{M} ]} \,\mathrm{d}\mathbb{I}^{M}_{t} + \frac{\mathbb{I}_{t-}^{M} }{E_{M}[ \mathbb{I}_{t-}^{M} ]}\mathrm{d}K_{t} + \frac{\mathbb{I}_{t}^{M} }{E_{M}[ \mathbb{I}_{t}^{M} ]}\mathrm{d}L_{t} \\ & = \mathrm{d}\bigg( \frac{\mathbb{I}_{t}^{M} E_{M}[ \mathbb{I}_{t}^{M} \xi ]}{E_{M}[ \mathbb{I}_{t}^{M} ]} \bigg) + \frac{E_{M}[ \mathbb{I}_{t-}^{M} \mathbb{I}_{t}^{M} \xi ] }{E_{M}[ \mathbb{I}_{t-}^{M} \mathbb{I}_{t}^{M} ]} \bigg( \frac{\mathbb{I}_{t-}^{M} }{E_{M}[ \mathbb{I}_{t-}^{M} ]} \mathrm{d}k_{t} + \frac{\mathbb{I}_{t}^{M} }{E_{M}[ \mathbb{I}_{t}^{M} ]}\mathrm{d} \ell _{t} \bigg). \end{aligned}$$
Because of (6.3) and \(\mathrm{d}\mathbb{I}_{t}^{M} = \sum _{I \in \mathcal{N}} ( \mathbb{I}_{t}^{M} -\mathbb{I}_{t-}^{M} ) \mu _{I} (\mathrm{d}t \times E_{I} )\), the last equation can be rewritten to
$$\begin{aligned} & \sum _{I \in \mathcal{N}} \frac{ E_{M}[ \mathbb{I}_{t-}^{M} \mathbb{I}_{t}^{M} \xi ] }{ E_{M}[ \mathbb{I}_{t-}^{M} \mathbb{I}_{t}^{M} ]} (\mathbb{I}_{t}^{M} - \mathbb{I}_{t-}^{M} ) \mu _{I}(\mathrm{d}t \times E_{I} ) \\ & \phantom{=:}- \sum _{I \in \mathcal{N}} \int _{E_{I}} \mathbb{I}_{t-}^{M} \frac{E_{M,R_{I}=(t,e)}[ \mathbb{I}_{t-}^{M} \xi ] }{E_{M,R_{I}=(t,e)}[ \mathbb{I}_{t-}^{M} ]} \nu _{I} (\mathrm{d}t \times \mathrm{d}e ) \\ & \phantom{=:} + \sum _{I \in \mathcal{N}} \int _{E_{I}} \mathbb{I}_{t}^{M} \frac{E_{M,R_{I}=(t,e)}[ \mathbb{I}_{t}^{M} \xi ] }{E_{M,R_{I}=(t,e)}[ \mathbb{I}_{t}^{M} ]} \rho _{I} (\mathrm{d}t \times \mathrm{d}e ) \\ &= \mathrm{d}\bigg( \frac{\mathbb{I}_{t}^{M} E_{M}[ \mathbb{I}_{t}^{M} \xi ]}{E_{M}[ \mathbb{I}_{t}^{M} ]} \bigg) \\ &\phantom{=:} + \sum _{I \in \mathcal{N}} \frac{ E_{M}[ \mathbb{I}_{t-}^{M} \mathbb{I}_{t}^{M} \xi ] }{ E_{M}[ \mathbb{I}_{t-}^{M} \mathbb{I}_{t}^{M} ]} \big(- \mathbb{I}_{t-}^{M} \nu _{I}( \mathrm{d}t \times E_{I} ) + \mathbb{I}_{t}^{M} \rho _{I}( \mathrm{d}t \times E_{I} ) \big). \end{aligned}$$
(6.5)
Let \(I \in \mathcal{N}\) be arbitrary but fixed. Then for each \(M \in \mathcal{M}\), there exists an \(\tilde{M} \in \mathcal{M}\), and for each \(\tilde{M} \in \mathcal{M}\), there exists an \(M \in \mathcal{M}\) such that
$$\begin{aligned} \mathbb{I}_{t-}^{M} \mu _{I}( \mathrm{d}t \times \mathrm{d}e )= \mathbb{I}_{t}^{\tilde{M}} \mu _{I}( \mathrm{d}t \times \mathrm{d}e ). \end{aligned}$$
As a consequence, for almost each \(\omega \in \Omega \), we have
$$\begin{aligned} \begin{aligned} 0 = \sum _{M \in \mathcal{M}} \sum _{I \in \mathcal{N}} \int _{E_{I}} \bigg( &\mathbb{I}_{t}^{M} \frac{E_{M,R_{I}=(t,e)}[ \mathbb{I}_{t}^{M} \xi ] }{E_{M,R_{I}=(t,e)}[ \mathbb{I}_{t}^{M} ]} - \mathbb{I}_{t-}^{M} \frac{E_{M,R_{I}=(t,e)}[ \mathbb{I}_{t-}^{M} \xi ] }{E_{M,R_{I}=(t,e)}[ \mathbb{I}_{t-}^{M} ]} \bigg) \, \mu _{I}( \mathrm{d}t \times \mathrm{d}e ). \end{aligned} \end{aligned}$$
(6.6)
Because of
$$\begin{aligned} \mathrm{d}E[ \xi | \mathcal{G}_{t}] & = \sum _{M \in \mathcal{M}} \mathrm{d}\bigg( \frac{\mathbb{I}_{t}^{M} E_{M}[ \mathbb{I}_{t}^{M} \xi ]}{E_{M}[ \mathbb{I}_{t}^{M} ]} \bigg), \end{aligned}$$
summing (6.5) over \(M \in \mathcal{M}\) and adding (6.6) yields, for almost each \(\omega \in \Omega \), the representation (1.2) with integrands (6.2) after rearranging the addends. By applying Proposition 4.2, we can see that \(G_{I}(u-,u,e)\) is \(\mathcal{G}^{-}_{u}\)-measurable and \(G_{I}(u,u,e)\) is \(\mathcal{G}_{u}\)-measurable for each \((I,e)\). □

7 Infinitesimal representations for optional projections

Suppose that \(X\) is a càdlàg process that satisfies (4.1) and is such that \(X_{t}-X_{0}\) is \(\mathcal{F}_{t}\)-measurable for each \(t \geq 0\). Then the optional projection of \(X\) with respect to \(\mathbb{F}\) can be represented as
$$ \mathrm{d}E[ X_{t} | \mathcal{F}_{t}] = \mathrm{d}X_{t} + \sum _{I \in \mathcal{N}}\int _{E_{I} } F_{I}(t,e) \big( \mu _{I}(\mathrm{d}t \times \mathrm{d}e) - \lambda _{I}(\mathrm{d}t \times \mathrm{d}e ) \big) $$
(7.1)
for random mappings \(F_{I}(t,e)\) that are \(\mathcal{F}_{t-}\)-measurable for each \((t,I,e)\). In order to see this, apply the classical martingale representation theorem on the \(\mathbb{F}\)-martingale
$$\begin{aligned} E[ X_{0} | \mathcal{F}_{t}]-E[ X_{0} | \mathcal{F}_{0}]=E[ X_{t} | \mathcal{F}_{t}]-E[ X_{0} | \mathcal{F}_{0}] - ( X_{t} -X_{0}), \qquad t \geq 0, \end{aligned}$$
and rearrange the addends. The following theorem extends (7.1) to non-monotone information settings.
Theorem 7.1
Let \(X\) be a càdlàg process that satisfies (4.1) and has an IB-compensator with respect to \(\mathbb{G}\), denoted as \(X^{IB}\). Then
$$\begin{aligned} E[ X_{t} | \mathcal{G}_{t}]-E[ X_{0} | \mathcal{G}_{0}] &= X^{IB}_{t} + \sum _{I \in \mathcal{N}} \int _{(0,t ]\times E_{I} } G_{I}(u-,u,e) \,(\mu _{I}-\nu _{I}) \big(\mathrm{d}(u,e)\big) \\ &\quad + \sum _{I \in \mathcal{N}} \int _{(0,t]\times E_{I}} G_{I}(u,u,e) \,(\rho _{I}- \mu _{I}) \big(\mathrm{d}(u,e)\big) \end{aligned}$$
(7.2)
almost surely with
$$\begin{aligned} \begin{aligned} G_{I}(s, u,e) &:= \sum _{M \in \mathcal{M}} \mathbb{I}^{M}_{s} \bigg( \frac{E_{M,R_{I}=(u,e)}[ \mathbb{I}^{M}_{s} X_{u-} ] }{E_{M,R_{I}=(u,e)}[ \mathbb{I}^{M}_{s} ]} - \frac{E_{M}[ \mathbb{I}^{M}_{u-}\mathbb{I}^{M}_{u} X_{u-} ] }{E_{M}[ \mathbb{I}^{M}_{u-}\mathbb{I}^{M}_{u} ]} \bigg). \end{aligned} \end{aligned}$$
(7.3)
If \(X\) has an IF-compensator with respect to \(\mathbb{G}\), denoted as \(X^{IF}\), then (7.2) still holds but with \(X_{t}^{IB}\) replaced by \(X_{t}^{IF}\) and \(X_{u-}\) replaced by \(X_{u}\) in (7.3).
By applying Proposition 4.2, we can see that \(G_{I}(u-,u,e)\) is \(\mathcal{G}^{-}_{u}\)-measurable and \(G_{I}(u,u,e)\) is \(\mathcal{G}_{u}\)-measurable. Hence the integrals in the first and second line of (7.2) describe IF-martingales and IB-martingales with respect to \(\mathbb{G}\) if the mappings \(F_{I}(u,e)=G_{I}(u-,u,e)\) and \(F_{I}(u,e)=G_{I}(u,u,e)\) both satisfy the integrability condition (5.1); see the comments below Theorem 5.2.
In the special case \(\mathbb{G}= \mathbb{F}\), we have \(\nu _{I} = \lambda _{I}\), \(\rho _{I} =\mu _{I}\), \(X=X^{IB}\) and the representations (7.2) and (7.1) are equivalent, i.e., (7.2) is a generalisation of (7.1).
Even if \(\mathbb{G} \neq \mathbb{F}\), we can still have \(X=X^{IF}\) or \(X=X^{IB}\). The following example presents non-trivial processes \(X\) that equal their IB-compensators or their IF-compensators.
Example 7.2
Let \(h(M,t)(\omega ): \mathcal{M} \times [0,\infty ) \times \Omega \rightarrow \mathbb{R}\) be measurable and suppose that \(|h(M,t)| \leq Z\) for an integrable majorant \(Z\). Let \(\gamma \) be the sum of the Lebesgue measure and a countable number of Dirac measures,
$$\begin{aligned} \gamma (B )= \lambda (B) + \sum _{i=1}^{\infty } \delta _{t_{i}}(B), \qquad B \in \mathcal{B}\big([0,\infty )\big), \end{aligned}$$
for deterministic time points \(0\leq t_{1} < t_{2} < \cdots \) that increase to infinity. Then the càdlàg process \(X\) defined by
$$\begin{aligned} X_{t}:= \sum _{M \in \mathcal{M}} \int _{[0,t]} \mathbb{I}_{s}^{M} h(M,s) \,\gamma (\mathrm{d}s) \end{aligned}$$
has the IB-compensator
$$\begin{aligned} X^{IB}_{t}= \int _{(0,t]}\sum _{M \in \mathcal{M}} \mathbb{I}_{s}^{M}E[ h(M,s)| \mathcal{G}_{s}] \, \gamma (\mathrm{d}s). \end{aligned}$$
In order to see this, apply Proposition 4.2, the dominated convergence theorem, Proposition 4.4 and Lemma 4.3 in order to obtain that
$$\begin{aligned} &\lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{{t}}}} E[X_{t_{k+1}}- X_{t_{k}} |\mathcal{G}_{t_{k+1}}] \\ &= \lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{ {t}}}} \int _{(t_{k},t_{k+1}]} \sum _{M \in \mathcal{M}_{t}} \mathbb{I}^{M}_{t_{k+1}} E \bigg[ \sum _{\tilde{M} \in \mathcal{M}} h(\tilde{M},s) \mathbb{I}^{ \tilde{M}}_{s} \bigg|\mathcal{G}_{t_{k+1}}\bigg] \gamma (\mathrm{d}s) \\ &= \sum _{M \in \mathcal{M}_{t}} \lim _{n \rightarrow \infty } \sum _{ \mathcal{T}_{n}^{{{t}}}} \int _{(t_{k},t_{k+1}]}\mathbb{I}^{M}_{t_{k+1}} \frac{E_{M} [ \sum _{\tilde{M} \in \mathcal{M}} \mathbb{I}^{M}_{t_{k+1}} h(\tilde{M},s) \mathbb{I}^{\tilde{M}}_{s} ]}{E_{M}[ \mathbb{I}^{M}_{t_{k+1}} ]} \gamma (\mathrm{d}s) \\ &= \sum _{M \in \mathcal{M}_{t}} \int _{(0,t]}\mathbb{I}^{M}_{s} \frac{E_{M}[ h(M,s) \mathbb{I}^{M}_{s} ]}{E_{M}[ \mathbb{I}^{M}_{s} ]} \gamma (\mathrm{d}s) \\ & = \sum _{M \in \mathcal{M}} \int _{(0,t]}\mathbb{I}^{M}_{s} E[ h(M,s)| \mathcal{G}_{s}] \, \gamma (\mathrm{d}s) \end{aligned}$$
almost surely, where \(\mathcal{M}_{t}\) is defined as in (4.7). If \(s\mapsto h(M,s)\) is \(\mathbb{G}\)-adapted for each \(M\), we have \(X=X^{IB}\). Likewise we can show that the càdlàg process
$$\begin{aligned} Y_{t}:= \sum _{M \in \mathcal{M}} \int _{[0,t]} \mathbb{I}_{s-}^{M} h(M,s) \,\gamma (\mathrm{d}s) \end{aligned}$$
has the IF-compensator
$$\begin{aligned} Y^{IF}_{t}= \int _{(0,t]}\sum _{M \in \mathcal{M}} \mathbb{I}_{s-}^{M}E[ h(M,s)| \mathcal{G}_{s-}] \, \gamma (\mathrm{d}s). \end{aligned}$$
If \(s\mapsto h(M,s)\) is \(\mathbb{G}^{-}\)-adapted for each \(M\), we have \(Y=Y^{IF}\).
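The pathwise construction of \(X\) is straightforward to evaluate numerically. The following is a minimal sketch for one fixed scenario \(M\); the concrete choices of \(h\), of the indicator path \(s \mapsto \mathbb{I}^{M}_{s}\) and of the atoms \(t_{i}\) are toy assumptions, not taken from the example above.

```python
import numpy as np

# Minimal sketch of X_t = int_{[0,t]} I_s^M h(M,s) gamma(ds) from Example 7.2,
# for one fixed scenario M, where gamma = Lebesgue measure + Dirac measures at
# deterministic atoms t_i. All concrete inputs below are toy assumptions.

def X(t, h, indicator, atoms, n_grid=100_000):
    s = np.linspace(0.0, t, n_grid)
    vals = np.array([indicator(u) * h(u) for u in s])
    lebesgue_part = float(np.sum(vals[:-1] * np.diff(s)))   # left Riemann sum
    dirac_part = sum(indicator(ti) * h(ti) for ti in atoms if ti <= t)
    return lebesgue_part + dirac_part

# toy inputs: h(M, s) = exp(-s), scenario active on [0, 2), atoms at integer times
print(X(3.0, h=lambda s: np.exp(-s),
        indicator=lambda s: float(s < 2.0),
        atoms=[1.0, 2.0, 3.0]))
```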
Proof of Theorem 7.1
The theorem follows from the additive decomposition
$$\begin{aligned} E[ X_{t} | \mathcal{G}_{t}]-E[ X_{0} | \mathcal{G}_{0}] &= \lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{{t}}}} (E[ X_{t_{k+1}} | \mathcal{G}_{t_{k+1}}] - E[ X_{t_{k}} | \mathcal{G}_{t_{k}}]) \\ & = \lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{ {t}}}} E[ X_{t_{k+1}}-X_{t_{k}} | \mathcal{G}_{t_{k+1}}] \\ & \phantom{=:} + \lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{{t}}}} (E[ X_{t_{k}} | \mathcal{G}_{t_{k+1}}] - E[ X_{t_{k}} | \mathcal{G}_{t_{k}}]) \end{aligned}$$
and from applying Theorem 6.1 to each summand \(E[ X_{t_{k}} | \mathcal{G}_{t_{k+1}}] - E[ X_{t_{k}} | \mathcal{G}_{t_{k}}]\). The sum \(\sum _{\mathcal{T}_{n}^{{{t}}}} (E[ X_{t_{k}} | \mathcal{G}_{t_{k+1}}] - E[ X_{t_{k}} | \mathcal{G}_{t_{k}}])\) then has a representation of the form (1.2) with integrands \(G_{I}(s, u,e)\) as defined in (7.3) for \(t_{k} < s \leq t_{k+1}\). Because of the càdlàg property of \(X\), by applying the dominated convergence theorem pathwise for almost each \(\omega \in \Omega \), we end up with (7.2) and (7.3). The alternative decomposition
$$\begin{aligned} E[ X_{t} | \mathcal{G}_{t}]-E[ X_{0} | \mathcal{G}_{0}] & = \lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{{t}}}} E[ X_{t_{k+1}}-X_{t_{k}} | \mathcal{G}_{t_{k}}] \\ & \phantom{=:} + \lim _{n \rightarrow \infty } \sum _{\mathcal{T}_{n}^{{{t}}}} (E[ X_{t_{k+1}} | \mathcal{G}_{t_{k+1}}] - E[ X_{t_{k+1}} | \mathcal{G}_{t_{k}}]) \end{aligned}$$
leads to the second variant with \(X^{IB}\) replaced by \(X^{IF}\) and \(X_{u-}\) by \(X_{u}\) in (7.3). □
Remark 7.3
Without loss of generality, suppose here that \(0 \not \in E\). Motivated by Remark 3.2, for any \(t > 0\) and any integrable random variable \(\xi \), define for \(e \in E_{I}\),
$$\begin{aligned} E[ \xi |\mathcal{G}_{t}, R_{I}=(t,e) ]&:= E[ \xi |(\Gamma _{t}, R_{I})= \cdot \,]\big(\Gamma _{t},(t,e)\big), \\ E[ \xi |\mathcal{G}_{t-}, R_{I}=(t,e) ]&:= E[ \xi |(\Gamma _{t-}, R_{I})= \cdot \,]\big(\Gamma _{t-},(t,e)\big) , \\ E[ \xi |\mathcal{G}_{t}, J_{t}=0 ]&:= E[ \xi |(\Gamma _{t}, J_{t})= \cdot \,](\Gamma _{t},0) , \\ E[ \xi |\mathcal{G}_{t-}, J_{t}=0 ]&:= E[ \xi |(\Gamma _{t-}, J_{t})= \cdot \,](\Gamma _{t-},0) , \end{aligned}$$
where \(J_{t}:= \sum _{I \in \mathcal{N}} \mu _{I}(\{t\}\times E_{I})\) indicates whether there is a stopping event at time \(t\). One can then show that the integrands in (7.2) are almost surely equal to
$$\begin{aligned} G_{I}(t-, t,e) &= E[ X_{t-} |\mathcal{G}_{t-}, R_{I}=(t,e) ] - E[ X_{t-} |\mathcal{G}_{t-}, J_{t}=0 ], \\ G_{I}(t, t,e) &= E[ X_{t-} |\mathcal{G}_{t}, R_{I}=(t,e) ] - E[ X_{t-} |\mathcal{G}_{t}, J_{t}=0 ] \end{aligned}$$
for each \(t>0\), \(I \in \mathcal{N}\) and \(e \in E_{I}\). The differences on the right-hand side have intuitive interpretations. The first line describes the difference in expectation between a change scenario and a remain scenario if we are currently at time \(t-\) and are looking forward in time. Similarly, the second line describes the difference in expectation between a change scenario and a remain scenario if we are currently at time \(t\) and are looking backward in time. In (7.2), these differences in expectation are integrated with respect to the compensated forward and backward scenario dynamics.
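For instance, in the simplest special case of a single random time \(\tau \) with hazard rate \(\lambda \), no marks and no information deletions (so that \(\mathbb{G}=\mathbb{F}\)), the choice \(X_{t} \equiv \mathbf{1}_{\{\tau > T\}}\) gives, on the event \(\{\tau \geq t\}\) and for \(t \leq T\),
$$\begin{aligned} G(t-,t) = E[ \mathbf{1}_{\{\tau > T\}} \,|\, \tau = t ] - E[ \mathbf{1}_{\{\tau > T\}} \,|\, \tau > t ] = - e^{-\int _{t}^{T} \lambda (s) \,\mathrm{d}s} : \end{aligned}$$
the change scenario (a jump at \(t\)) rules out survival beyond \(T\), whereas the remain scenario retains the conditional survival probability.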

8 Examples

Here we come back to Examples 1.1 and 1.2 and show how our infinitesimal martingale representations can be applied in life insurance and credit risk modelling.
Example 8.1
Consider a life insurance contract where the insurer collects health-related information about the insured with the aim to improve forecasts of the individual future insurance liabilities. For example, this can involve data from activity trackers or social media. Here, the marked point process includes the time of death \(\tau _{1}\), which is recorded as \(\zeta _{1}:=\tau _{1}\), and further health-related information \((\tau _{i}, \zeta _{i})_{i \geq 2}\). Upon request of the policyholder with reference to the ‘right to erasure’ according to the General Data Protection Regulation of the European Union, or as a self-imposed data privacy effort of the data provider, the insurer deletes parts of the health-related data at certain time points, i.e., we expand \((\tau _{i}, \zeta _{i})_{i \geq 2}\) by deletion times \((\sigma _{i})_{i \geq 2}\). For completeness, we define \(\sigma _{1}:= \infty \).
In the classical insurance modelling without data deletion, the time dynamics of the expected future insurance payments is commonly described by Thiele’s equation; see e.g. Møller [19] and Djehiche and Löfdahl [12]. Suppose that \(A_{t}\) gives the aggregated benefit cash flow of the life insurance contract on \([0,t]\), including survival benefits with rate \(a(t)\) and a death benefit of \(\alpha (t)\) upon death at time \(t\), i.e.,
$$\begin{aligned} A_{t} = \int _{0}^{t} \mathbf{1}_{\{\tau _{1} > s\}} a(s) \mathrm{d}s + \int _{[0,t]\times E} \alpha (s) \, \mu _{\{1\}}\big(\mathrm{d}(s,e) \big), \qquad t \geq 0. \end{aligned}$$
We assume here that \(a:[0,\infty ) \rightarrow \mathbb{R}\) and \(\alpha :[0,\infty ) \rightarrow \mathbb{R}\) are bounded. For a given interest intensity \(\phi :[0,\infty ) \rightarrow [0,\infty )\) and a finite contract horizon \(T\), the process
$$\begin{aligned} X_{t} :=\int _{(t, T]} e^{-\int _{t}^{s} \phi (u) \mathrm{d}u} \mathrm{d}A_{s} \end{aligned}$$
describes the discounted future liabilities of the insurer seen from time \(t\). As the càdlàg process \(X=(X_{t})_{t \geq 0}\) is neither adapted to \(\mathbb{F}\) nor to \(\mathbb{G}\), an insurer has to work with the optional projection instead (the so-called prospective reserve), i.e., the insurer aims to calculate
$$\begin{aligned} X^{\mathbb{F}}_{t} = E[ X_{t} | \mathcal{F}_{t} ], \qquad t \geq 0, \end{aligned}$$
in case that there is no data deletion and
$$\begin{aligned} X^{\mathbb{G}}_{t} = E[ X_{t} | \mathcal{G}_{t} ], \qquad t \geq 0, \end{aligned}$$
in case that information deletions may occur. The process \(X^{\mathbb{G}}\) is a well-defined càdlàg process according to Theorem 4.1.
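To make this concrete, here is a minimal Monte Carlo sketch that evaluates \(X_{t}\) pathwise and estimates the prospective reserve in the classical setting without data deletion; constant rates \(a\), \(\alpha \), \(\phi \) and a constant mortality intensity \(\mu \) are toy assumptions, not part of the model above.

```python
import numpy as np

# Monte Carlo sketch of the prospective reserve E[X_t | tau_1 > t] in the
# classical setting without data deletion. Toy assumptions: constant survival
# benefit rate a, death benefit alpha, interest phi, mortality intensity mu.

rng = np.random.default_rng(1)
phi, mu, a, alpha, T, t = 0.03, 0.02, 1.0, 100.0, 10.0, 2.0

def discounted_liability(tau):
    """Pathwise X_t = int_(t,T] exp(-phi (s-t)) dA_s for one death time tau > t."""
    s_end = min(tau, T)
    annuity = (a / phi) * (1.0 - np.exp(-phi * (s_end - t)))       # survival benefits
    death = alpha * np.exp(-phi * (tau - t)) if tau <= T else 0.0  # death benefit
    return annuity + death

taus = t + rng.exponential(1.0 / mu, size=100_000)  # death times given tau_1 > t
reserve_mc = np.mean([discounted_liability(tau) for tau in taus])

# closed form in this toy model: int_t^T exp(-(phi+mu)(s-t)) (a + mu*alpha) ds
reserve_cf = (a + mu * alpha) / (phi + mu) * (1.0 - np.exp(-(phi + mu) * (T - t)))
print(reserve_mc, reserve_cf)   # the two values should nearly agree
```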
By applying (7.1) and Itô’s lemma, we can derive the so-called stochastic Thiele equation
$$\begin{aligned} \mathrm{d}X^{\mathbb{F}}_{t} & = - \mathrm{d}A_{t} + \phi (t)\,X^{ \mathbb{F}}_{t-} \mathrm{d}t + \sum _{I} \int _{E_{I} } F_{I}(t,e) \, (\mu _{I} -\lambda _{I} )( \mathrm{d}t \times \mathrm{d}e ) \end{aligned}$$
(8.1)
with terminal condition \(X^{\mathbb{F}}_{T}=0\); cf. Møller [19, Eq. (2.17)]. The integrand \(F_{I}(t,e)\), which almost surely equals
$$ F_{I}(t,e) =E[ X_{t-} |\mathcal{F}_{t-}, R_{I}=(t,e) ] - E[ X_{t-} | \mathcal{F}_{t-}, J_{t}=0 ] $$
according to Remark 7.3, is a key quantity in life insurance risk management and is known as the sum at risk. Equation (8.1) can be interpreted as a backward stochastic differential equation (BSDE) with solution \((X^{\mathbb{F}},(F_{I})_{I})\); see Djehiche and Löfdahl [12] for Markovian and Christiansen and Djehiche [4] for non-Markovian models. The BSDE (8.1) is in particular relevant if the life insurance payments \(a\) and \(\alpha \) depend on the current policy value so that the insurance cash flow \(A\) is only implicitly defined.
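In the simplest Markovian special case (a single death time with deterministic mortality intensity \(\mu (t)\) and no data deletion), taking expectations in (8.1) reduces the reserve to the solution of the classical deterministic Thiele equation \(\frac{\mathrm{d}}{\mathrm{d}t}V(t)=\phi (t)V(t)-a(t)-\mu (t)(\alpha (t)-V(t))\) with \(V(T)=0\), where \(\alpha (t)-V(t)\) is the sum at risk. A minimal backward Euler sketch, under the same toy constant-rate assumptions as in the Monte Carlo sketch above:

```python
# Backward Euler solve of the classical Thiele ODE
#   V'(t) = phi V(t) - a - mu * (alpha - V(t)),  V(T) = 0,
# with alpha - V(t) playing the role of the sum at risk.

phi, mu, a, alpha, T = 0.03, 0.02, 1.0, 100.0, 10.0   # toy constants as above
n = 100_000
dt = T / n

V = 0.0                 # terminal condition V(T) = 0
for _ in range(n):      # step backward from T to 0
    V -= dt * (phi * V - a - mu * (alpha - V))
print(V)                # reserve at time 0; matches the closed form above at t = 0
```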
By applying Theorem 7.1 and Itô’s lemma and using the fact that the process \(A\) equals its own IB-compensator (since \(\sigma _{1} = \infty \)), we are able to derive an analogous equation for \(X^{\mathbb{G}}\), namely
$$\begin{aligned} \mathrm{d}X^{\mathbb{G}}_{t} &= - \mathrm{d}A_{t} + \phi (t)\,X^{ \mathbb{G}}_{t-} \mathrm{d}t + \sum _{I}\int _{E_{I} } G_{I}(t-,t,e) \, (\mu _{I} -\nu _{I} )( \mathrm{d}t \times \mathrm{d}e ) \\ &\phantom{=:}+ \sum _{I} \int _{E_{I} } G_{I}(t,t,e) \, (\rho _{I} -\mu _{I} )( \mathrm{d}t \times \mathrm{d}e ) \end{aligned}$$
(8.2)
with terminal condition \(X^{\mathbb{G}}_{T}=0\). This equation can be interpreted as a new type of BSDE with solution \((X^{\mathbb{G}},(G_{I})_{I})\), featuring an IF-martingale and an IB-martingale instead of a classical martingale. The IF-martingale in the first line describes the impact of new information on the optional projection \(X^{\mathbb{G}}\). The IB-martingale in the second line quantifies the effect on \(X^{\mathbb{G}}\) of information deletions. The integrands \(G_{I}(t-,t,e)\) and \(G_{I}(t,t,e)\), which are almost surely equal to
$$\begin{aligned} G_{I}(t-, t,e) &= E[ X_{t-} |\mathcal{G}_{t-}, R_{I}=(t,e) ] - E[ X_{t-} |\mathcal{G}_{t-}, J_{t}=0 ], \\ G_{I}(t, t,e) &= E[ X_{t-} |\mathcal{G}_{t}, R_{I}=(t,e) ] - E[ X_{t-} |\mathcal{G}_{t}, J_{t}=0 ] \end{aligned}$$
according to Remark 7.3, generalise the classical definition of the sum at risk. They are needed in life insurance risk management for sensitivity analyses, safe-side calculations, contract modifications and surplus decompositions.
If the policyholder may decide about data deletions at discretion, then the resulting value changes of the insurance contract can be systematically exploited by the policyholder, leading to a kind of data privacy arbitrage. Since it is the IB-martingale in (8.2) that measures the value changes due to data deletions at times \((\sigma _{i})_{i \geq 2}\), it represents the potential data privacy arbitrage. A simple solution for avoiding data privacy arbitrage could be to charge the IB-martingale as a fee upon a data deletion request. The fee can also be negative, in which case it represents a bonus payment. However, insurance practice will require more complex risk-sharing schemes that moreover distinguish between different causes of data deletion. Following Schilling et al. [26], who interpret martingale representations as risk factor decompositions, we may interpret the infinitesimal martingale parts in (8.2) as an additive surplus decomposition that can distinguish between numerous kinds of jump events \(\mu _{I}\), \(I\subseteq \mathbb{N}\), \(\vert I \vert <\infty \). Such an additive decomposition of the insurer’s surplus is an important step for aligning insurance risk management to the digital age.
Example 8.2
A popular approximation concept in credit rating modelling is to pretend that the credit rating process is Markovian even if the empirical data does not fully support this assumption. Suppose that credit ratings are updated at integer times only. By setting \(\tau _{i}:=i-1\) and \(\sigma _{i}:=i\) for \(i \in \mathbb{N}\) and defining \(\zeta _{i}\) as the credit rating at time \(\tau _{i}\), the rating process \(R=(R_{t})_{t \geq 0}\) has the representation
$$\begin{aligned} R_{t}= \sum _{i=1}^{\infty } \zeta _{i} \mathbf{1}_{\{\tau _{i} \leq t < \sigma _{i} \}}, \qquad t \geq 0, \end{aligned}$$
and satisfies
$$ \mathcal{G}_{t}=\sigma (R_{t}) \vee \mathcal{Z}, \qquad t \geq 0. $$
The jumps of the process \(R\) correspond to the random counting measures \(\mu _{I}\). In the Jarrow–Lando–Turnbull model, the rating space \(E\) is finite, \((R_{i})_{i \in \mathbb{N}_{0}}\) is assumed to be a Markov chain, and
$$\begin{aligned} Q[R_{i+1}= r_{i+1}| R_{i}=r_{i} ] = \pi (i,r_{i})\, P[R_{i+1}= r_{i+1}| R_{i}=r_{i} ] \end{aligned}$$
for \(r_{i},r_{i+1} \in E\) and \(i \in \mathbb{N}_{0}\), where \(Q\) is the risk-neutral measure and \(\pi \) is a deterministic function on \(\mathbb{N}_{0} \times E\). The latter formula allows us to estimate \(Q\) from market data by a two-step method. First, the transition probabilities \(P[R_{i+1}= r_{i+1}| R_{i}=r_{i} ]\) are estimated from observed credit rating time series. Then the function \(\pi \) is calibrated such that the risk-neutral values of credit rating derivatives conform with observed market prices. Once we have \(Q\), we can use the (classical) martingale representation (6.1) in order to explicitly construct hedges for financial claims \(\xi \); see e.g. Last and Penrose [18]. For example, by arguing analogously to (8.1), the claim \(\xi =h(R_{T})\) has the martingale representation
$$\begin{aligned} \begin{aligned} h(R_{T}) - B(0) E_{Q}\bigg[ \frac{h(R_{T})}{B(T)} \bigg| \mathcal{F}_{0} \bigg] &= \int _{(0,T]} B(t-) E_{Q}\bigg[ \frac{h(R_{T})}{B(T)} \bigg| \mathcal{F}_{t-} \bigg] \, \frac{B(\mathrm{d}t)}{B(t-)} \\ &\phantom{=:} + \sum _{I \in \mathcal{N}} \int _{(0,T]\times E_{I}} F_{I}(t,e) ( \mu _{I}- \lambda _{I} )(\mathrm{d}t \times \mathrm{d}e). \end{aligned} \end{aligned}$$
(8.3)
The integral in the first line describes the investments in the risk-free asset \(B\). The second line corresponds to risky investments. It can be rewritten in terms of the tradable assets in a complete financial market; cf. Last and Penrose [18, Sect. 5]. This yields a trading strategy that can be used to replicate the claim \(\xi \).
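The two-step procedure is easy to prototype. The following minimal sketch uses toy rating histories and hypothetical risk premia \(\pi \); the convention that \(\pi \) scales the off-diagonal one-period transition probabilities, with the diagonal refitted so that rows sum to one, is in the spirit of the Jarrow–Lando–Turnbull specification [15].

```python
import numpy as np

# Two-step sketch for the Jarrow-Lando-Turnbull setup. Toy rating histories and
# hypothetical risk premia pi; in practice pi is calibrated so that risk-neutral
# values of credit rating derivatives match observed market prices.

ratings = ["A", "B", "D"]
idx = {r: i for i, r in enumerate(ratings)}

paths = [["A", "A", "B", "B", "D"],       # hypothetical observed annual histories
         ["A", "B", "A", "A", "A"],
         ["B", "B", "D", "D", "D"]]

# Step 1: empirical real-world one-period transition matrix P
counts = np.zeros((3, 3))
for path in paths:
    for r_from, r_to in zip(path[:-1], path[1:]):
        counts[idx[r_from], idx[r_to]] += 1
P = counts / counts.sum(axis=1, keepdims=True)

# Step 2: risk-neutral matrix Q via state-wise risk premia
pi = np.array([1.2, 1.1, 1.0])
Q = P * pi[:, None]                        # scale transitions by the row's premium
np.fill_diagonal(Q, 0.0)                   # drop the scaled diagonal ...
np.fill_diagonal(Q, 1.0 - Q.sum(axis=1))   # ... and refit it so rows sum to one
print(Q)
```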
A standard estimator for the state occupation probabilities of the Markov chain \(R\) with respect to \(P\) is the Aalen–Johansen estimator, which directly corresponds to the Nelson–Aalen estimator for the compensators \(\lambda _{I}\) of the random counting measures \(\mu _{I}\). Under the assumption that \(R\) is Markovian, the Nelson–Aalen estimator consistently estimates \(\lambda _{I}=\nu _{I}\). If \(R\) is not Markovian, then the Nelson–Aalen estimator still consistently estimates \(\nu _{I}\), see Datta and Satten [9], but now \(\nu _{I} \neq \lambda _{I}\). In other words, if we ignore the information beyond \(\mathbb{G}\) in the estimation of \(\lambda _{I}\) due to an incorrect Markov assumption, then we actually estimate the infinitesimal forward compensator \(\nu _{I}\) instead of the classical compensator \(\lambda _{I}\). Similarly, ignoring the information beyond \(\mathbb{G}\) when estimating \(F_{I}\) and (1.1) from market data means that we unintentionally end up with the integrands
$$\begin{aligned} G_{I}(t-,t,e) &= \frac{B(t)}{B(T)} \big(E_{Q}[ h(R_{T}) |\mathcal{G}_{t-}, R_{I}=(t,e) ] -E_{Q}[ h(R_{T}) |\mathcal{G}_{t-}, J_{t}=0 ]\big) \end{aligned}$$
and \(B(t-) E_{Q}[ h(R_{T})/B(T) |\mathcal{G}_{t}] \) rather than the integrands
$$\begin{aligned} F_{I}(t,e) &=\frac{B(t)}{B(T)} \big(E_{Q}[ h(R_{T}) |\mathcal{F}_{t-}, R_{I}=(t,e) ] -E_{Q}[ h(R_{T}) |\mathcal{F}_{t-}, J_{t}=0 ]\big) \end{aligned}$$
and \(B(t-) E_{Q}[ h(R_{T})/B(T) |\mathcal{F}_{t}] \). For the correct interpretation of the latter conditional expectations with mixed conditions, see Remark 7.3. All in all, by ignoring the information beyond \(\mathbb{G}\) in the estimation and calculation of (8.3) due to an incorrect Markov assumption, we unintentionally end up with
$$\begin{aligned} &\int _{(0,T]} B(t-)E_{Q}\bigg[ \frac{h(R_{T})}{B(T)} \bigg| \mathcal{G}_{t-} \bigg] \, \frac{B(\mathrm{d}t)}{B(t-)} \\ &+ \sum _{I \in \mathcal{N}} \int _{(0,T]\times E_{I}} G_{I}(t-,t,e) ( \mu _{I}- \nu _{I} )(\mathrm{d}t \times \mathrm{d}e) \end{aligned}$$
(8.4)
instead of the right-hand side of (8.3). This unintentional modification distorts the replicating trading strategy for the claim \(h(R_{T})\) which was included in (8.3). Do we still correctly replicate \(h(R_{T})\)? By applying Theorem 6.1 instead of (6.1) and using that \(\mathcal{F}_{0}=\mathcal{G}_{0}\) and \(\mathcal{G}_{T}= \sigma (R_{T}) \vee \mathcal{Z}\), we get analogously to (8.3) that
$$\begin{aligned} \begin{aligned} h(R_{T}) - B(0) E_{Q}\bigg[ \frac{h(R_{T})}{B(T)} \bigg| \mathcal{F}_{0} \bigg] &= \int _{(0,T]}B(t-)E_{Q}\bigg[ \frac{h(R_{T})}{B(T)} \bigg| \mathcal{G}_{t-} \bigg] \, \frac{B(\mathrm{d}t)}{B(t-)} \\ &\phantom{=:} + \sum _{I \in \mathcal{N}} \int _{(0,T]\times E_{I}} \! \!G_{I}(t-,t,e) (\mu _{I}- \nu _{I} )(\mathrm{d}t \times \mathrm{d}e) \\ &\phantom{=:} + \sum _{I \in \mathcal{N}} \int _{(0,T]\times E_{I}} \!\! G_{I}(t,t,e) (\rho _{I}- \mu _{I} )(\mathrm{d}t \times \mathrm{d}e). \end{aligned} \end{aligned}$$
(8.5)
Equation (8.5) implies that the distorted trading strategy we might derive from (8.4) is actually not a hedge for \(\xi =h(R_{T})\). The hedging error is given by the third line in (8.5). To sum up, by estimating and calculating (8.3) under an incorrect Markov assumption for \(R\), we unintentionally replace the (classical) \(\mathbb{F}\)-martingale in (8.3) by the \(\mathbb{G}\)-IF-martingale in (8.4) (the risk-free investment is also affected), and the corresponding \(\mathbb{G}\)-IB-martingale is just the hedging error.
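The Nelson–Aalen estimator discussed above is itself elementary to compute. A minimal sketch for one transition type, with purely hypothetical event data: at each observed transition time, the cumulative hazard estimate increases by the number of observed transitions divided by the number of paths at risk in the source state just before that time.

```python
import numpy as np

# Nelson-Aalen sketch for one transition type (toy data). At each event time s,
# the cumulative hazard estimate increases by d_s / Y_s, where d_s counts the
# observed transitions at s and Y_s counts the paths at risk just before s.

event_times = np.array([0.7, 1.3, 2.1, 3.4])   # observed transition times
d = np.array([1, 2, 1, 1])                     # transitions at each event time
Y = np.array([10, 9, 7, 5])                    # paths at risk just before each time

cum_hazard = np.cumsum(d / Y)                  # Nelson-Aalen estimate at event times
for s, L in zip(event_times, cum_hazard):
    print(f"Lambda_hat({s:.1f}) = {L:.4f}")
```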
Schilling et al. [26] interpret martingale representations as additive risk factor decompositions. Likewise we can read the (infinitesimal) martingale parts in (8.3) and (8.5) as linear risk factor decompositions. The relevance of such decompositions in credit risk modelling is explained in Rosen and Saunders [25].
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

References
1. Bandini, E.: Existence and uniqueness for backward stochastic differential equations driven by a random measure, possibly non quasi-left continuous. Electron. Commun. Probab. 20, 1–13 (2015)
2. Boel, R., Varaiya, P., Wong, E.: Martingales on jump processes I: representation results. SIAM J. Control 13, 999–1021 (1975)
3. Chou, C.-S., Meyer, P.A.: Sur la répresentation des martingales comme intégrales stochastiques dans les processus ponctuels. In: Meyer, P.A. (ed.) Séminaire de Probabilités IX. Lecture Notes in Mathematics, vol. 465, pp. 226–236. Springer, Berlin (1975)
4. Christiansen, M.C., Djehiche, B.: Nonlinear reserving and multiple contract modifications in life insurance. Insur. Math. Econ. 93, 187–195 (2020)
6. Cohen, S.N., Elliott, R.J.: Solutions of backward stochastic differential equations on Markov chains. Commun. Stoch. Anal. 2, 251–262 (2008)
7. Cohen, S.N., Elliott, R.J.: Comparisons for backward stochastic differential equations on Markov chains and related no-arbitrage conditions. Ann. Appl. Probab. 20, 267–311 (2010)
8. Confortola, F.: \(L^{p}\) solution of backward stochastic differential equations driven by a marked point process. Math. Control Signals Syst. 31, 1–32 (2019)
9. Datta, S., Satten, G.A.: Validity of the Aalen–Johansen estimators of stage occupation probabilities and Nelson–Aalen estimators of integrated transition hazards for non-Markov models. Stat. Probab. Lett. 55, 403–411 (2001)
10. Davis, M.H.A.: The representation of martingales of jump processes. SIAM J. Control Optim. 14, 623–638 (1976)
11. Delong, Ł.: Backward Stochastic Differential Equations with Jumps and Their Actuarial and Financial Applications. Springer, London (2013)
12. Djehiche, B., Löfdahl, B.: Nonlinear reserving in life insurance: aggregation and mean-field approximation. Insur. Math. Econ. 69, 1–13 (2016)
13. Elliott, R.J.: Stochastic integrals for martingales of a jump process with partially accessible jump times. Z. Wahrscheinlichkeitstheor. Verw. Geb. 36, 213–266 (1976)
14. Jacod, J.: Multivariate point processes: predictable projections, Radon–Nikodým derivatives, representation of martingales. Z. Wahrscheinlichkeitstheor. Verw. Geb. 31, 235–253 (1975)
15. Jarrow, R.A., Lando, D., Turnbull, S.M.: A Markov model for the term structure of credit risk spreads. Rev. Financ. Stud. 10, 481–523 (1997)
16. Karr, A.: Point Processes and Their Statistical Inference. Dekker, New York (1986)
17. Lando, D., Skodeberg, T.: Analyzing rating transitions and rating drift with continuous observations. J. Bank. Finance 26, 423–444 (2002)
18. Last, G., Penrose, M.D.: Martingale representation for Poisson processes with applications to minimal variance hedging. Stoch. Process. Appl. 121, 1588–1606 (2011)
19.
20. Møller, T., Steffensen, M.: Market-Valuation Methods in Life and Pension Insurance. Cambridge University Press, Cambridge (2007)
21. Norberg, R.: Hattendorff’s theorem and Thiele’s differential equation generalized. Scand. Actuar. J. 1992, 2–14 (1992)
24. Pardoux, É., Peng, S.: Backward doubly stochastic differential equations and systems of quasilinear SPDEs. Probab. Theory Relat. Fields 98, 209–227 (1994)
25. Rosen, D., Saunders, D.: Risk factor contributions in portfolio credit risk models. J. Bank. Finance 34, 336–349 (2010)
26. Schilling, K., Bauer, D., Christiansen, M.C., Kling, A.: Decomposing dynamic risks into risk components. Manag. Sci. 66, 5485–6064 (2020)
27. Tang, H., Wu, Z.: Backward stochastic differential equations with Markov chains and related asymptotic properties. Adv. Differ. Equ. 285, 1–17 (2013)