1 Introduction
The development of the new supervisory regime for insurance companies—Solvency II—took almost a decade. The further development of the International Insurance Capital Standards is currently under way, see, e.g., [14]. Moreover, EIOPA launched a review of the standard formula (SF), to be completed by 2020, see [6]. The practical as well as regulatory-theoretical significance of the SF, which implements the Pillar I requirements of Solvency II, can hardly be overestimated, as the amount of solvency capital required (SCR: solvency capital requirement) restricts the business volume of insurance companies and thus reduces the most significant production factor: own funds. On the other hand, the SCR serves supervisory authorities as a means of achieving their supervisory objectives (e.g. reduction of systemic risks, consumer protection; see, e.g., BaFin [6]). The majority of insurance companies in Germany (about 90%) use the SF to determine their SCR. Only a minority of about 30 insurance companies make use of their legal right to develop an internal model as an alternative. However, the market share of insurance companies with internal models in Germany is approximately 50%.
A well-written and brief summary of the technique of the standard procedures (more precisely, concerning the basic SCR in QIS 5) is provided by Chech [13]. Sandström's compendium, see [42], offers encyclopaedic completeness. Interested readers can find synoptic comparisons of the regulatory components of Solvency II with those of the banking supervisory regulations (Basel III) in Gatzert and Wesker [24] and Laas and Siegel [30]. Liebwein gives an overview of the application context of internal models under Solvency II in [31]. Important comments from a practitioner's perspective are given by Dacorogna et al. [15].
The SF has many structural parallels to internal models (IM). In particular, it defines a forecast model whose assumptions, characteristics, and properties can be investigated from both an absolute and a relative perspective—in the light of economic, risk management, supervisory, and stochastic criteria. An absolute perspective is a purely SF-immanent evaluation, i.e. one that uses no additional (model) references beyond the SF's own context. A relative perspective compares properties of the SF with those of alternative models. The following investigations focus on the analysis of the formal, i.e. mathematical, properties of the standard formula from an absolute perspective.
The basic insights on quantitative risk management, starting with Morgan's publication of RiskMetrics [36]—the inauguration of value-at-risk (VaR) as the most important risk measure in practice—and the seminal work of Artzner et al. [1] on coherent risk measures, Föllmer and Schied [20] on convex risk measures, and the work of Heyde et al. [27] on statistical and robust risk statistics, have dominated theoretical and practical discussions on risk management issues, see, e.g., Embrechts et al. [19]. Heyde et al. [27] in particular deal with both formal and epistemic aspects of risk measures.
RiskMetrics, see [36], is based—in the spirit of the Markowitz approach—on an objective model framework, in which the risk of a portfolio \(X\), quantified as \(\delta (X)\), is measured by a statistical functional
$$\begin{aligned} \delta (X)=T(F_X), \end{aligned}$$
(1)
where the portfolio \(X\) is modelled by a random vector with distribution function \(F_X\), i.e. \(X\sim F_X\). This is explained in more detail in Sect. 3.2. Such an objective model approach implies far-reaching consequences for regulatory acceptance, validation, and above all for the application in a company.
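To make the statistical-functional view (1) concrete, the following sketch estimates \(\delta (X)\) as an empirical \(\alpha \)-quantile (value-at-risk). It is a minimal illustration under assumed inputs: the simulated standard normal losses and the plain order-statistic estimator are hypothetical choices for demonstration, not part of the SF.

```python
import random

random.seed(0)

# Hypothetical portfolio losses: 10,000 draws from a standard normal
# model (illustrative only, not calibrated to any real portfolio).
losses = [random.gauss(0.0, 1.0) for _ in range(10_000)]

def var(sample, alpha):
    """Empirical value-at-risk: the alpha-quantile of the loss sample,
    i.e. the statistical functional T applied to the empirical F_X."""
    ordered = sorted(sample)
    k = min(int(alpha * len(ordered)), len(ordered) - 1)
    return ordered[k]

delta = var(losses, 0.995)
print(round(delta, 2))  # close to the exact N(0,1) quantile 2.576
```

Since the estimator uses only the empirical distribution function, the risk figure is reproducible from data alone—the sense in which such a model is called objective.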
Insurance markets constitute an important example of incomplete markets. Most insurance products cannot be fully replicated through trading strategies, which suggests using utility theory, as it allows decision-specific aspects to be incorporated into decisions under uncertainty. Denuit et al. [16, p. 88] and Wang et al. [47] represent a decision problem on the model side by a rank-dependent expected utility of the uncertain cash flow \(Y\):
$$\begin{aligned} {\mathbb {H}}_g^u (Y) = - \int _{- \infty }^\infty u(x) \ \mathrm {d} g\big ({\overline{F}}_Y(x)\big ), \end{aligned}$$
(2)
where \(g\big ({\overline{F}}_Y(\circ )\big )\) denotes the distorted survival function of \(Y\) and \(u(\circ )\) denotes the utility function of the insurance company. Both \(g(\circ )\) and \(u(\circ )\) express decision-specific aspects, such as risk appetite or the perception of probabilities. These subjective components go beyond the inevitable subjective elements of an objective model (1).
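For a discrete cash flow, (2) reduces to a finite sum over the jumps of the distorted survival function. The sketch below is a minimal numerical illustration; the payout values, the identity utility, and the convex distortion \(g(s)=s^2\) are hypothetical choices for demonstration only.

```python
def rdeu(outcomes, probs, u, g):
    """Discrete version of (2):
    H = -sum_i u(x_i) * [g(Fbar(x_i)) - g(Fbar(x_i^-))],
    where Fbar is the survival function of Y."""
    tail = 1.0
    h = 0.0
    for x, p in sorted(zip(outcomes, probs)):
        before = tail      # P(Y >= x)
        tail -= p          # P(Y > x)
        h -= u(x) * (g(tail) - g(before))
    return h

# Hypothetical cash flow: lose 100 w.p. 1%, gain 10 w.p. 99%.
xs, ps = [-100.0, 10.0], [0.01, 0.99]

# With identity u and g the functional reduces to the expectation E[Y].
ev = rdeu(xs, ps, u=lambda x: x, g=lambda s: s)
print(round(ev, 6))  # 8.9 = 0.99*10 - 0.01*100

# A convex distortion underweights the favourable tail: pessimism in
# the sense of Yaari's dual theory lowers the evaluation.
h = rdeu(xs, ps, u=lambda x: x, g=lambda s: s * s)
print(h < ev)  # True
```

The distortion \(g\) thus acts purely on probabilities, while \(u\) acts on outcomes—the two separate channels for decision-specific subjectivity mentioned above.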
A utility-theoretical superstructure seems to be the most flexible way to provide interfaces with the literature on the theory of science (e.g. Mainzer [33]), economics (e.g. Straub and Welpe [44]), and regulatory standard setting (e.g. Barnett and O'Hagan [7]), as well as with the basic literature on risk management (e.g. Jaeger et al. [28], Aven [3]).
The use of the standard methods implicitly requires a logical understanding of them; moreover, supervision requires this understanding from the Board of Management by law, see the contribution of Stahl, Fahr et al. [43] in the commentary on the VAG (the German Insurance Supervision Act). The company's own risk and solvency assessment (ORSA, see, e.g., the book by Gorge [25]), which is required by the supervisory authorities, forms the linchpin of proof of this understanding. In this context, EIOPA published a key document, see [18], summarizing the most important assumptions underlying the SF. The aim of that document is to enable insurance companies using the SF to provide evidence of the adequacy of the SF within the framework of ORSA.
In addition to the work on formal aspects of risk models already cited, more fundamental work on the theory of the concept of risk is also tacitly included. We have already mentioned the works of Aven [4], Jaeger et al. [28], and Douady et al. [45]. In particular, we follow these authors in their differentiation of epistemic and aleatoric components in the modelling of uncertainty.
Following this introduction, Sect. 2 presents the progress made in determining the SCR with the introduction of Solvency II in comparison with Solvency I, as well as key structural elements of the SF. The latter show in particular that the SF, given the information in \(t=0\), is deterministic. Based on the decomposition of uncertainty into an epistemic and an aleatoric component, the formal properties of the SF are analysed in relation to given axioms of risk and capital functions. Section 3.2 analyses the stochastic model underneath the SF. The main finding is that the SF defines incoherent, subjective probabilities. Furthermore, the SF is analysed with regard to its (epistemically interpreted) axioms. It is observed that the SF is neither a coherent nor a translation-invariant capital functional. Section 4 summarizes the main results.
Performing a risk analysis requires at least an aggregated assessment of the risk on the level of the consequences. This is provided by the fundamental concept of risk measures. In the following definition, \({\mathcal {B}}\) denotes the vector space of all bounded functions defined on a fixed domain. The elements of \({\mathcal {B}}\) can be interpreted as losses or pay-off functions (with appropriate modifications) of financial instruments.

This definition also encompasses deterministic approaches and does not necessarily assume an underlying stochastic structure. The latter would summarize the aleatoric prior knowledge.
However, thanks to Knight's seminal work, see [29], a stochastic framework \((\Omega , {\mathcal {A}}, {\mathbb {P}}) \) for describing risk and uncertainty is an established standard. So it comes as no surprise that Knight's fundamental approach is also reflected in the literature on risk management applications. Nevertheless, uncertainty can also be represented without the use of a stochastic model. To this end, Augustin et al. [2] define the set of acceptable/desirable gambles as follows:
3.1 SF and uncertainty—aleatoric aspects
For the upcoming discussion of the stochastic model related to the SF, the following passage from the EIOPA document [18, p. 42] provides important insights, since it explicitly states that the SF's stochastic model is based only on vague prior knowledge:
“Originally in the design of the SCR for non-life insurance underwriting risk, the lognormal distribution acted prominently as a vehicle to model a skew bell-shaped probability distribution. This implied a function of \(\sigma \) that should amount more or less to the value \(3 \sigma \). Later it was decided just to focus on this simple factor and downsizing the explicit assumption of an exact lognormal probability distribution”.
Thus the explanations in CEIOPS technical documents, e.g. [11, 12], serve primarily to justify a calibration proposal rather than to specify a stochastic model. A number of parameter adjustments were also made as part of the QIS studies. Furthermore, vague stochastic modelling has the disadvantage (or, depending on the purpose, the advantage) that such models can hardly be falsified.
The following investigations show to what extent the model approaches expressed by (1) and (2) help to understand the SF. In particular, it will be shown that the stochastic component of the SF should be interpreted as subjective probabilities.
In the spirit of Sandström's book [42], the SF may be regarded as an objective, stochastic model (see Sect. 3.2) based on a probability space
$$\begin{aligned} (\Omega , \mathcal {A}, \mathbb {F}_\theta ), \quad \theta \in \Theta \subseteq \mathbb {R}^p, \end{aligned}$$
(7)
which differs from internal models mainly in the (presumably conservative, i.e. prudent) calibration of the parameter \(\theta \).
In the following, we only consider the first layer in the hierarchy tree (visualized in Fig. 1) of the basic SCR, so as not to strain the notation. As already mentioned, at this level the SF aggregates the SCRs of the components of the vector \(\mathbf{L }\) using a correlation matrix \({\mathcal {C}}\) by the so-called square-root formula:
$$\begin{aligned} \delta ({\overline{L}}) := \sqrt{ b \ {\mathcal {C}} \ b^t },\quad \ b := \big ( \delta (L_1), \ldots , \delta (L_5)\big ), \end{aligned}$$
which is a first indication of a stochastic model in the sense of (7) (\(b^t\) denotes the transpose of the vector \(b\)). This motivates a closer investigation of the probabilistic model underneath the SF. For the distribution of the aggregated losses, \({\overline{L}}=L_1+\cdots +L_5\), as for that of each category \(L_i\), the SF (in \(t_0\)) determines only the \(\alpha \)-quantile at the level \(\alpha =99.5\%\) by \( \delta _{t_0}(L_i)\). If \({\mathbb {F}}\) denotes the set of all distribution functions on \({\mathbb {R}}\), the set
$$\begin{aligned} {\mathbb {F}}_\alpha = \big \{ (F_1, \ldots , F_5) \, |\, F_i \in {\mathbb {F}} \ \text {and} \ F^{-1}_i(\alpha ) = \delta (L_i) \big \} \end{aligned}$$
(8)
denotes the prior knowledge of the regulator about the marginal distributions of \({\mathbf {L}}\). In addition, the SF specifies the \(\alpha \)-quantile of \({\overline{L}}= \sum _{i=1}^5 L_i\):
$$\begin{aligned} {\mathbb {S}}_\alpha = \big \{ F \in {\mathbb {F}} \ | \ F^{-1}(\alpha ) =\delta ({\overline{L}}) \big \}. \end{aligned}$$
(9)
By the pair
$$\begin{aligned} {\mathbb {V}}_\alpha = ({\mathbb {S}}_\alpha , {\mathbb {F}}_\alpha ), \end{aligned}$$
(10)
we denote the entire aleatoric prior knowledge of the regulator. Obviously, \({\mathbb {V}}_\alpha \) specifies a large non-parametric model. In other words, the prior knowledge is rather vague and insofar minimal, as it just suffices to define or interpret the SCR (or \(\delta _{t_0}\)) as a value-at-risk as in (3). Furthermore, (10) allows a smooth aggregation of at least partial internal models into the SF.
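The square-root aggregation can be sketched numerically. The stand-alone SCRs and the correlation matrix below are hypothetical illustrative values, not the calibration of the Delegated Regulation:

```python
import math

# Hypothetical stand-alone SCRs delta(L_1), ..., delta(L_5) and a
# hypothetical correlation matrix C (illustrative values only).
b = [100.0, 80.0, 60.0, 40.0, 20.0]
C = [
    [1.00, 0.25, 0.25, 0.25, 0.25],
    [0.25, 1.00, 0.25, 0.25, 0.25],
    [0.25, 0.25, 1.00, 0.50, 0.25],
    [0.25, 0.25, 0.50, 1.00, 0.25],
    [0.25, 0.25, 0.25, 0.25, 1.00],
]

def sqrt_formula(b, C):
    """Square-root formula: sqrt(b C b^t)."""
    n = len(b)
    quad = sum(b[i] * C[i][j] * b[j] for i in range(n) for j in range(n))
    return math.sqrt(quad)

scr = sqrt_formula(b, C)

# Diversification benefit: the aggregate SCR lies strictly between the
# largest single component and the undiversified sum.
print(max(b) < scr < sum(b))  # True
```

With all correlations set to one, the formula collapses to the plain sum of the components—the undiversified benchmark.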
The interpretation (12) corresponds to the prescriptive-normative character of the SF. The prior knowledge \({\mathbb {V}}_\alpha \) is vague both from the perspective of the regulator and from that of an insurance company. This provides the supervisory standard \( \delta (X)\) with an additional quality that explicates the concept of uncertainty. Through the stochastic model construction, the regulatory standard \( \delta (X)\), which can initially be implemented, turns into an ideal standard, i.e. its compliance can no longer be checked with certainty, see Barnett and O'Hagan [7, p. 21ff]. When ideal standards are used, their validation plays a key role; Barnett and O'Hagan [7, p. 28] even require that ideal standards should only be used together with a given framework of validation. In any case, the model reference \({\mathbb {V}}_\alpha \) implies additional legitimation compared to \( \delta (X)\) alone.
The vague aleatoric knowledge of the SF makes backtesting with statistical methods almost impossible. Indeed, with the SF the insurance company generates, at quarterly intervals, a time series of forecast intervals and associated BoF realizations, which yields a sequence of realizations of a Bernoulli-distributed random variable from which it can be checked whether the 99.5% quantile is violated or not. However, there is not enough data to draw reliable statistical conclusions about such an extreme probability in a Bernoulli experiment. As far as the use of statistical tests is concerned, the SF is at the limit of non-falsifiability.
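A short back-of-the-envelope calculation illustrates the point, assuming quarterly observations over a hypothetical ten-year horizon (40 Bernoulli trials):

```python
# With alpha = 99.5%, an exceedance has probability 0.005 per quarter.
alpha = 0.995
n = 40  # quarters in ten years

p_no_exceedance = alpha ** n          # chance of never observing a violation
expected_exceedances = n * (1 - alpha)

print(round(p_no_exceedance, 3))      # 0.818
print(round(expected_exceedances, 3)) # 0.2
```

Even a decade of clean data is thus fully consistent with a correct model, while a single violation is nearly as consistent with a wrong one; no standard binomial test has meaningful power at this level.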
In [22], Frezal investigated the assumptions of the five QIS studies and their impact on SCR requirements for non-life risk categories. In our terminology, this corresponds to an analysis of the available information \({\mathcal {K}}_t\) over time and of the validity of
$$\begin{aligned} \text {Pr}\big ( BoF_1 \in ( - \infty , BoF_0 - SCR ] \ | \ {\mathcal {K}}_t\big ) = 1 - \alpha , \end{aligned}$$
for \(t=t_1,\ldots ,t_5\). He critically evaluates the high influence of \({\mathcal {K}}_t\), since he implicitly assumes an objective model. He therefore concludes that the SF is not risk-based, since risk orders are not time-invariant, i.e. \(\delta _{t_j}(X) < \delta _{t_j}(Y)\) does not follow from \(\delta _{t_i}(X) < \delta _{t_i}(Y)\) for \( i \ne j \). Thus, the concept of subjective probabilities is necessary in order to justify the regulatory choices. Frezal's work questions whether the SF is unbiased in the sense of Tversky and Kahneman [46], i.e. meets the criteria of:
3.2 Axiomatics of the SF
The following (axiomatic) criteria formulate (mostly desirable) properties of risk measures, see Denuit et al. [16], Föllmer and Weber [21], and Rüschendorf [41]. In the following, they are used to assess the axiomatic properties of the SF. Beyond their formal, mathematical relevance, these axioms have substantial significance: when risk measures are applied in risk management, they make the existing epistemic knowledge of the respective purposes or causes operational. Thus, the importance of fulfilling these axioms goes beyond the purely mathematical purpose and reflects the level of understanding of the necessities of risk management implemented in the SF.
We consider the axioms with regard to their fulfillment by the probability functional and the capital functional.
3.2.1 Analysis of the probability functional
1.
Law invariance: If the random variables \(L\) and \(M\) are equal in distribution (i.e. \(L{\mathop {=}\limits ^{d}} M\)), then \(\delta (L) = \delta (M)\) must hold for a law-invariant risk measure. This criterion states that the risk measure is a function of the distribution function \(F_L\) and is linked to the representation of a risk measure as a statistical functional (1). As Denuit et al. [16, p. 64] explain, this approach supports an interpretation of objectivity when
$$\begin{aligned} \delta (L)=T(F_L) \end{aligned}$$
(13)
can be estimated using the empirical distribution function \({\widehat{F}}_n(l)\):
$$\begin{aligned} {\widehat{\delta }}(L)=T({\widehat{F}}_L)=T\big ({\widehat{F}}_n(l)\big ). \end{aligned}$$
Since the latter can be determined from data, the model approach associated with (13) can be interpreted as objective. More generally, this interpretation assumes an objective, stochastic model \((\Omega , {\mathcal {A}}, {\mathbb {F}}), \) which models the aleatoric prior knowledge about \(L\). As already shown, the aleatoric model of the SF with \({\mathbb {V}}_\alpha \) uses only vague or subjective probabilities, see Augustin et al. [2], Baudrit and Dubois [8], and Aven et al. [5]. In addition, \(L{\mathop {=}\limits ^{d}} M\) defines an equivalence relation on the set of random variables. The same applies to \({\mathbb {V}}_\alpha \):
$$\begin{aligned} L \simeq _\alpha M \iff F^{-1}_L (\alpha ) = F^{-1}_M (\alpha ). \end{aligned}$$
Obviously, the equivalence classes belonging to the two relations (equal in distribution versus sharing only one \(\alpha \)-quantile) are of very different sizes. This not only illustrates the degree of aleatoric prior knowledge, but also shows the granularity that is a basic prerequisite for stochastic modelling.
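The difference between the two equivalence relations can be made tangible: the two laws below are clearly different, yet indistinguishable under \(\simeq _\alpha \). The choice of a standard normal and a matched uniform distribution is purely illustrative:

```python
from statistics import NormalDist

alpha = 0.995

# 99.5% quantile of the standard normal law.
q_normal = NormalDist(mu=0.0, sigma=1.0).inv_cdf(alpha)   # ~ 2.576

# Uniform law on [0, b] with the same alpha-quantile:
# F^{-1}(alpha) = alpha * b, so choose b = q_normal / alpha.
b = q_normal / alpha
q_uniform = alpha * b

# L ~=_alpha M although the laws differ (e.g. mean 0 vs. mean b/2 > 1).
print(abs(q_normal - q_uniform) < 1e-12)  # True
print(round(b / 2, 2))                    # 1.29
```

Knowing one quantile therefore pins down almost nothing about the law; law invariance in the usual sense cannot be checked from \({\mathbb {V}}_\alpha \) alone.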
2.
No inappropriately low assessment of risk: This is formulated in Denuit et al. [16] by the requirement
$$\begin{aligned} \delta (L) \geqslant {\mathbb {E}} [L]. \end{aligned}$$
(14)
As is well known, the value-at-risk as a risk measure does not necessarily meet this requirement, see, e.g., Denuit et al. [16]. Hence, this criterion cannot formally be fulfilled by the SF, although the high level \(\alpha \) in Solvency II is indicative of its validity in practice. For many cases of practical relevance, moreover, (14) can be proven within the framework of ORSA.
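A minimal counterexample (with purely illustrative numbers) shows how the value-at-risk can violate (14) when the extreme loss sits entirely beyond the \(\alpha \)-quantile:

```python
alpha = 0.995

# Hypothetical loss: 0 with probability 99.8%, one million w.p. 0.2%.
# The catastrophe is rarer than 1 - alpha = 0.5%, so the VaR misses it.
dist = [(0.0, 0.998), (1_000_000.0, 0.002)]

def var_discrete(dist, alpha):
    """Smallest x with P(L <= x) >= alpha."""
    acc = 0.0
    for x, p in sorted(dist):
        acc += p
        if acc >= alpha:
            return x
    return max(x for x, _ in dist)

q = var_discrete(dist, alpha)
mean = sum(x * p for x, p in dist)

print(q)            # 0.0 -> the quantile sees no risk at all
print(round(mean))  # 2000 > q, so (14) fails for the value-at-risk
```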
3.
Monotonicity:
$$\begin{aligned} \text {Pr}(L \leqslant {\tilde{L}}) =1 \implies \delta (L) \leqslant \delta ({\tilde{L}}). \end{aligned}$$
Pfeifer showed that neither the standard deviation nor the standard deviation principle meets the criterion of monotonicity, see [38]. Since the premium and reserve risk of the non-life category uses the standard deviation as its risk measure, \(\delta (X)\) within the context of the SF cannot meet the monotonicity condition. Further details, especially regarding assumptions on the log-normal distribution, can be found in Hamel and Pfeifer [26].
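This kind of failure can be reproduced with a two-point example. The loading factor \(k\) and the outcome values below are hypothetical; the point is only that \(\text {Pr}(L \leqslant {\tilde{L}}) = 1\) while the standard deviation principle charges \(L\) more:

```python
# Standard deviation principle delta(X) = E[X] + k * sd(X) on a
# discrete distribution; k is a hypothetical loading factor.
k = 3.0

def sd_principle(dist, k):
    mean = sum(x * p for x, p in dist)
    variance = sum(p * (x - mean) ** 2 for x, p in dist)
    return mean + k * variance ** 0.5

# L is 0 or 9 with probability 1/2 each; M is the constant 10,
# so L <= M holds with certainty.
delta_L = sd_principle([(0.0, 0.5), (9.0, 0.5)], k)  # 4.5 + 3 * 4.5 = 18.0
delta_M = sd_principle([(10.0, 1.0)], k)             # 10.0 + 3 * 0 = 10.0

print(delta_L > delta_M)  # True: monotonicity is violated
```

The certain loss \(M\) dominates \(L\) in every scenario, yet its zero volatility makes it look cheaper under a volatility-based principle.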
3.2.2 Analysis of the capital functional
1.
No inappropriately high assessment of risk: For \(L\sim F_L\), Denuit et al. [16] use the criterion
$$\begin{aligned} \delta (L) \leqslant F_L^{-1}(1). \end{aligned}$$
(15)
For all \(F_L \in {\mathbb {V}}_\alpha \), (15) is formally true at each level where the SCR is calculated, see Fig. 1. However, the validity of this relation cannot be taken for granted, because it can hardly be backtested against reality. This argument also applies to the way in which the SCRs of different risk categories or levels are aggregated by means of the correlation matrix.
2.
Translativity:
$$\begin{aligned} \delta (\psi _B-c) = \delta (\psi _B) - c ;\quad c \in {\mathbb {R}}. \end{aligned}$$
(16)
Epistemically, the role of capital, \(c\) in (16), in risk management and the regulation of financial markets can hardly be overestimated. In the representation of these systems by cybernetic feedback loops, capital takes over the function of a regulator, i.e. it allows the system to be kept in balance (homeostasis). It is therefore not surprising that requirement (16) is a central one. Pflug and Römisch [40, p. 39] make the above explanations concrete with regard to coherent capital functions: these determine the capital amount \(c\) necessary for the acceptability of the position \(\psi _B\), i.e. the amount \(c\) such that \( \psi _B+c \in {\mathcal {D}} \).
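For a pure quantile functional, translativity (16) is easy to verify numerically; a minimal sketch with simulated losses (illustrative inputs only):

```python
import random

random.seed(2)

def var(sample, alpha=0.995):
    """Empirical alpha-quantile."""
    s = sorted(sample)
    return s[min(int(alpha * len(s)), len(s) - 1)]

# Injecting capital c shifts every loss scenario, and hence the
# quantile, by exactly c -- requirement (16).
losses = [random.gauss(0.0, 1.0) for _ in range(10_000)]
c = 5.0

lhs = var([x - c for x in losses])
rhs = var(losses) - c

print(abs(lhs - rhs) < 1e-9)  # True
```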
3.
Subadditivity: \(\delta (\psi _{B_1} + \psi _{B_2}) \leqslant \delta (\psi _{B_1}) + \delta (\psi _{B_2})\).
4.
Positive homogeneity: \(\delta (c X) = c\, \delta (X);\, c>0\).
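The classic counterexample showing that the value-at-risk is not subadditive can also be stated in a few lines; the loss sizes, probabilities, and the level \(\alpha = 98.5\%\) are purely illustrative:

```python
alpha = 0.985

def var_discrete(dist, alpha):
    """Smallest x with F(x) >= alpha; dist is a list of (outcome, prob)."""
    acc = 0.0
    for x, p in sorted(dist):
        acc += p
        if acc >= alpha:
            return x
    return max(x for x, _ in dist)

# Two independent losses, each 0 w.p. 0.99 and 100 w.p. 0.01.
single = [(0.0, 0.99), (100.0, 0.01)]

# Law of the independent sum L1 + L2.
summed = [(0.0, 0.99 * 0.99), (100.0, 2 * 0.99 * 0.01), (200.0, 0.01 * 0.01)]

v1 = var_discrete(single, alpha)   # 0.0: each loss alone looks riskless
v2 = var_discrete(summed, alpha)   # 100.0

print(v2 > v1 + v1)  # True: delta(L1 + L2) > delta(L1) + delta(L2)
```

Pooling the two positions pushes the combined exceedance probability above \(1-\alpha \), so the quantile jumps although each stand-alone quantile is zero.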