Open Access 2022 | Original Paper | Book Chapter

2. Financial and Actuarial Background

Author: Moshe Arye Milevsky

Published in: How to Build a Modern Tontine

Publisher: Springer International Publishing


Abstract

This chapter summarizes the main technical background on financial and actuarial modelling required, with particular emphasis on present values, stochastic investment returns, the Gompertz law of mortality and the valuation of temporary life annuities, which are very close cousins of modern tontines. To be clear, this chapter is a review of background and notation. It’s definitely not the place to visit to learn anything new.

2.1 Setting the Stage

Figure 2.1 is a summary or schematic which lays out the plan for the next few chapters and the algorithmic objective of this book. The modern tontine simulation I am building combines two different types of stochastic models, one for investment returns and the evolution of a portfolio (in the top left corner) and the other for projecting the number of survivors and future mortality rates more generally (in the top right corner). Those two projections or random generators (i.e. the top corners of the triangle) are then combined deterministically with a very specific sharing rule that feeds into the bottom corner, the periodic dividends or payout to investors. The point is to get to the bottom of the triangle and give investors a sense or range of what they can expect if they elect to participate in a modern tontine.
Philosophically, and for most of this book, I will assume that investment returns are Lognormally distributed and that mortality obeys the Gompertz law. Both will be carefully explained if you have never heard of them before, and carefully defended if what you have heard hasn’t been kind. More importantly, the sharing rule is designed to amortize and distribute the market value of the underlying fund over a fixed and shrinking time horizon, for example, 30 years. Now, in Chap. 5 I will augment the algorithm and allow for death benefits and voluntary surrenders as yet another way for people to leave the fund with (some of) their money. That will require some tinkering and adjustments to the sharing rule, since it can’t be as generous. Then, Chap. 6 will focus on the upper left-hand corner and go beyond Lognormal returns, hoping to assuage the investment experts. Chapter 7 (focusing on the upper right-hand corner) will go beyond Gompertz, hoping to assuage the actuaries and demographers. But the over-arching objective of the R-script is to move from the dual base modelling assumptions to the bottom corner, projected dividends and payouts.
In other words, the algorithm or R-script is designed in a modular manner so that if you (e.g. the CFA) do not approve of my upper left-hand corner, then you can use your own economic forecasts and financial scenarios in that black box. Likewise, if you (e.g. the FSA) do not like my models for life and death, you can include your favourite actuarial basis or N-parameter mortality models. Just as importantly, both groups (CFAs and FSAs) can independently tinker with the tops and then merge their favourite models into the bottom corner of the triangle. I should also note that I will be assuming independence between markets (left) and mortality (right), despite the growing evidence (and a pandemic) that those two might have complex, non-linear and lagged dependencies. The advanced reader (e.g. the PhD) might want to introduce some (lagged) dependencies between the two main sources of uncertainty or might dispense with my sharing rule altogether, and devise their own. The only thing that shouldn’t change is the upside-down triangle, or the flowchart in Fig. 2.1.
What remains to be done here before the coding begins is to introduce some basic notation, terminology and stochastic ideas at the core of the top two corners of this triangle. The objective in the next few pages of this chapter then is to get all the mathematics out of the way so that the next few chapters can progress with as little formality, symbols or math as possible. Needless to say, please treat the next few pages as a review or a crash-course, as opposed to a proper stand-alone introduction to stochastic modelling for investments and mortality.

2.2 Investment Returns: Notation and Moments

Here I will focus on investment returns (only) and use the symbol \(\tilde {R}[i,j]\) to denote the random effective periodic investment return during the j’th period, for the i’th simulation run. So, if we are generating 10,000 annual investment returns over a period of 30 years, the counter (row) value of i ranges from a minimal value i = 1 to maximal value of i = 10, 000 and the counter (column) value of j ranges from j = 1 to j = 30. In this case it would create 300,000 random numbers in total.
For greater clarity, what I mean by effective periodic investment return is that if you invest $100 at the beginning of the period, then it will be worth \(\$100 \times (1+\tilde {R})\) at the end of the period, where \(\tilde {R}\) is shorthand notation for the full \(\tilde {R}[i,j]\). In the tontine simulation, I will for the most part assume that \((1+\tilde {R}[i,j])\) is lognormally distributed, which is synonymous with: \(\ln [1+\tilde {R}[i,j]]\) being normally distributed. Indeed, another name for the quantity: \(\ln [1+\tilde {R[i,j]}]\) is the continuously compounded (CC) investment return (IR), denoted by its own: \(\tilde {r}[i,j]\), where \(\tilde {R}[i,j]=e^{\tilde {r}[i,j]}-1\).
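Before getting to the moments, here is a minimal sketch (my own illustration, not the book’s R-script) of how such a matrix of returns might be generated under the i.i.d. Lognormal assumption; the parameter values ν = 5% and σ = 16% are arbitrary choices for illustration only.

```r
# Sketch: simulate the N-by-T matrix of effective periodic returns R[i,j],
# assuming i.i.d. normal CC-IRs with illustrative (not calibrated) values
# nu = 5% and sigma = 16%.
set.seed(2022)
N <- 10000; T <- 30                 # simulation runs (rows), periods (columns)
nu <- 0.05; sigma <- 0.16           # mean and sd of r = ln(1+R)
r.mat <- matrix(rnorm(N*T, mean = nu, sd = sigma), nrow = N, ncol = T)
R.mat <- exp(r.mat) - 1             # effective returns, so (1+R) is lognormal
```

A $100 investment at the start of the first period, in the first simulation run, would then be worth 100*(1+R.mat[1,1]) at the end of that period.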
The mean or expected value of the (theoretical) CC-IR is equal to and denoted by ν, and the standard deviation of the (theoretical) CC-IR is σ. Remember that the underlying normality assumption imposes very tight restrictions on the 3rd and 4th moment, namely that the skewness is zero and the kurtosis is 3, both of which I will now describe and derive via their moments.
Computationally, the first central moment of the CC-IR in the j’th period is denoted by and computed as follows:
$$\displaystyle \begin{aligned} M\!1[j] := \frac{1}{N} \sum_{i=1}^{N} \tilde{r}[i,j] \; \sim \nu \end{aligned} $$
(2.1)
This is (a fancy name for) the average or mean, which when computed over a large sample size N should be very close to the (theoretical) parameter ν. To further clarify notation, when j = 1, which is the first period (month, quarter, year), the first central moment of the CC-IR is: \(M\!1[1] = \frac {1}{N} \sum _{i=1}^{N} \tilde {r}[i,1]\), and in the second period it’s: \(M\!1[2] = \frac {1}{N} \sum _{i=1}^{N} \tilde {r}[i,2]\), etc. Therefore, if the simulation model operates over T distinct periods, then a simulation run should generate T values of \(M\!1\), all of which should be (very) close to ν and converge to ν as \(N \to \infty \).
Now, in theory we could simulate a very complicated process for \(\tilde {R}[i,j]\) over the time period j (e.g. GARCH, Jump Diffusion, Stochastic Volatility), under which the \(\tilde {r}[i,j]\) values are not independent and identically distributed (i.i.d.) normal variables. Even then, we would use the same notation and would compute logarithms of the one-plus investment return: \(\ln [1+\tilde {R}]\), before any sample statistics were computed. The convention is to work with the CC-IR, even if it isn’t normal.
Moving on, the second central moment of the CC-IR during the j’th period is defined as follows:
$$\displaystyle \begin{aligned} M\!2[j] := \frac{1}{N} \sum_{i=1}^{N} \left( \tilde{r}[i,j] - M\!1[j] \right)^2 \end{aligned} $$
(2.2)
This is the variance, which implies that the standard deviation (a.k.a. volatility) would be: \(\sqrt {M\!2[j]} \sim \sigma \). Note that in this formulation we divide by N, not (N − 1), which is yet another convention. I’ll emphasize (again) that in the core simulation model the return-generating process is: (i) stationary, and (ii) normal, so the volatility \(\sqrt {M\!2[j]}\) is unchanged over time or periods.
Moving on to the higher moments for the purpose of defining skewness and kurtosis, the third central moment is computed as follows:
$$\displaystyle \begin{aligned} M\!3[j] := \frac{1}{N} \sum_{i=1}^{N} \left( \tilde{r}[i,j] - M\!1[j] \right)^3 \end{aligned} $$
(2.3)
The rhythm or pattern should by now be familiar. We subtract off the first central moment and then raise to the power of whatever central moment we are interested in computing. This leads to the fourth (and final, for our purposes) central moment of the CC-IR, which is defined and computed as:
$$\displaystyle \begin{aligned} M\!4[j] := \frac{1}{N} \sum_{i=1}^{N} \left( \tilde{r}[i,j] - M\!1[j] \right)^4 \end{aligned} $$
(2.4)
Every period (month, quarter, year) in the simulation will be associated with these four quantities, \(M\!1\), \(M\!2\), \(M\!3\) and \(M\!4\), which are then combined and scaled for the skewness and kurtosis. Investment strategies that have a large option component, that is puts and calls, will be associated with very different higher moments and will obviously impact tontine dividends. Formally, the Fisher coefficient of skewness is:
$$\displaystyle \begin{aligned} \mathtt{skewness}[j] \; = \; \frac{ M\!3[j] }{ M\!2[j]^{3/2}} \end{aligned} $$
(2.5)
It’s the third central moment scaled by the standard deviation to the power of three. Finally, the coefficient of kurtosis is computed as follows:
$$\displaystyle \begin{aligned} \mathtt{kurtosis}[j] \; = \; \frac{ M\!4[j] }{ M\!2[j]^2} \end{aligned} $$
(2.6)
I sum up with the following three reminders. First, the (theoretical) skewness of \(\tilde {r}\) is zero, and the (theoretical) kurtosis is 3, when the CC-IR \(\tilde {r}\) is normally distributed. But in a finite sample (especially with small values of N) the estimates might deviate from 0 and 3. Second, and just as importantly, the effective periodic investment return which I first denoted by \(\tilde {R}\) will not generate a skewness value of zero, or a kurtosis value of 3. In all likelihood the skewness of \(\tilde {R}\) itself will be positive, due to the exponentiation of \(\tilde {r}\). Third and in conclusion, skewness and kurtosis are defined across the columns and not the rows of the matrix \(\tilde {R}\), but after the logs have been taken.
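Those three reminders can be checked with a few lines of R. The following is my own illustrative sketch (with arbitrary ν = 5% and σ = 16%), computing the four sample moments and the two Fisher coefficients for one simulated period:

```r
# Sketch: sample moments M1..M4, skewness and kurtosis for one period j,
# using N i.i.d. normal CC-IRs (nu = 5%, sigma = 16% are illustrative).
set.seed(2022)
N <- 100000
nu <- 0.05; sigma <- 0.16
r <- rnorm(N, mean = nu, sd = sigma)   # CC-IR across the N runs, one column
M1 <- mean(r)                          # Eq. (2.1): close to nu
M2 <- mean((r - M1)^2)                 # Eq. (2.2): divide by N, not N-1
M3 <- mean((r - M1)^3)                 # Eq. (2.3)
M4 <- mean((r - M1)^4)                 # Eq. (2.4)
skew <- M3 / M2^(3/2)                  # Eq. (2.5): near 0 under normality
kurt <- M4 / M2^2                      # Eq. (2.6): near 3 under normality
R <- exp(r) - 1                        # effective returns: skewness positive
skew.R <- mean((R - mean(R))^3) / mean((R - mean(R))^2)^(3/2)
```

In a run of this size, skew and kurt land near 0 and 3, while skew.R comes out positive, exactly as the second reminder anticipates.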

2.3 But Why Logarithms?

There are a number of reasons why all investment variables are computed—and investment statistics are compiled—based on the logarithm of one-plus return, and why I prefer to use \(r:=\ln [1+R]\), instead of R. Some of those reasons are based on historical conventions and others are driven by convenience.
Historically, the most famous formula in finance, namely the Black-Scholes-Merton (1973) equation for the value of an option, is expressed in terms of the annual risk-free interest rate ν and the volatility σ, both of which are continuously compounded rates. In other words, they assume that the logarithm of the underlying security price obeys a Brownian motion with standard deviation σ. Hence, to use the BSM formula you have to estimate and then use the standard deviation of the logarithm of the (stock, commodity, currency) price. Sure, F. Black, M. Scholes and R. Merton could have re-written their formula in terms of the standard deviation of the actual security price—not its logarithm—but it would have been messier and unnecessary.
Ergo, if the second moment is computed after logarithms, then for consistency it makes sense to do the same for skewness and kurtosis. Of course, when your (volatility) numbers and (investment) returns are small, these two methods (with and without logs) will lead to statistical estimates that are quite close and perhaps indistinguishable from each other, but when rates get higher it matters.
Beyond historic tradition and convention, there are other reasons to (only) work with the natural (not base ten!) logarithm of one-plus effective investment returns. That has to do with lower and upper bounds. The effective investment return can (at worst) be minus 100%, or R = −1; the entire capital is lost. But in theory its upper bound could be infinity. Mathematically, the random variable R ∈ [−1, +∞). This awkward truncation at minus one makes it difficult to fit a continuous univariate distribution to the underlying investment return process. Taking logarithms solves this modelling problem, since \(\ln [1+R] \to -\infty \) as R →−1, and \(\ln [1+R] \to +\infty \) as R →∞. The logarithm of one-plus effective investment return lives on \(r:=\ln [1+R] \in (-\infty , +\infty )\), which enables a wider class of probability distributions, with the most common being normal.
Finally, when working with log returns, the growth is additive. This means that if the continuously compounded investment return (CC-IR) during one period is \(r_1\) and the CC-IR during a subsequent period is \(r_2\), we can add them up without having to worry about compounding periods. It’s just easier to work with. The same thing goes for the variance \(\sigma ^2\). The annual variance is the sum of the monthly variances. The monthly volatility is \(\sigma / \sqrt {12}\), etc. That only works with logarithms. In fact, get used to working with logarithms because in the next section, when I discuss models of mortality, they will also be expressed in terms of logs.
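The additivity claims above are easy to verify numerically; the snippet below is my own illustration, with arbitrary return and volatility values.

```r
# Sketch: CC returns add across periods while effective returns compound,
# and under i.i.d. log returns the annual variance is the sum of the
# monthly variances.
r1 <- 0.04; r2 <- -0.01                    # CC-IRs in two consecutive periods
R1 <- exp(r1) - 1; R2 <- exp(r2) - 1       # corresponding effective returns
growth.add  <- exp(r1 + r2)                # add the logs ...
growth.comp <- (1 + R1) * (1 + R2)         # ... or compound: same number
sigma.annual  <- 0.16                      # illustrative annual volatility
sigma.monthly <- sigma.annual / sqrt(12)   # time-scaling of volatility
```

Both growth factors agree to machine precision, and twelve monthly variances sum back to the annual variance, which is precisely why the scaling rule only works in log space.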

2.4 Gompertz Survival Probabilities

A key ingredient of the modern tontine simulation presented and developed in this book is the survival probability curve, that is the probability an individual investor who is alive at age x will survive for the next t years and be entitled to a dividend payout at age (x + t). This is also denoted by \(\Pr [T_x \geq t]\), where \(T_x\) represents the remaining lifetime random variable. In most of what follows in the next few chapters that critical probability will be constructed based on the so-called Gompertz law of mortality, which I will briefly explain.
The main insight of this law of mortality—cited as Gompertz [4]—and for which he is (still) famous is that the natural logarithm of adult mortality hazard rates is linear in age. If one denotes the instantaneous hazard rate—that is the rate at which people die—by the symbol \(\lambda _x\), where x is age, then \(\lambda _x = h_0 e^{gx}\). Using this formulation \(h_0\) and g are constants, which equivalently means that \(\ln [\lambda _x]=\ln [h_0]+gx\), where \(h_0\) is an (initial) time-zero mortality hazard rate, and g is the mortality growth rate.
Using the above parameterization, which is quite common in the demographic literature, it’s easy to see that mortality rates grow exponentially with (adult) age. For the purpose of what follows (and most of this book) I’ll re-parametrize using a slightly different formulation. In particular, I’ll assume the following functional form for the mortality hazard rate function, a.k.a. the mortality rate:
$$\displaystyle \begin{aligned} \begin{array}{rcl} \lambda_{x} \; & = &\displaystyle \; \frac{1}{b} \mathrm{e}^{(x-m)/b} \\ \ln [\lambda_{x}] \; & = &\displaystyle \; \overbrace{-\ln[b]-m/b}^{\ln[h_0]} + \overbrace{(1/b)}^{g} x {} \end{array} \end{aligned} $$
(2.7)
At first this might seem overly complicated, relative to the simpler “hazard rates are exponential” specification via \(\lambda _x = h_0 e^{gx}\). But there are good reasons for writing the mortality hazard rates in this manner. The parameters (m, b) have a real, tangible interpretation associated with one’s expectation of life. Nevertheless, and regardless of the exact parameter specification, the Gompertz model survival probability can be expressed as follows:
$$\displaystyle \begin{aligned} \Pr[T_x \geq t] = \exp \{ \mathrm{e}^{(x-m)/b} (1-\mathrm{e}^{t/b}) \} {} \end{aligned} $$
(2.8)
Using this formulation, the parameter m represents the modal value (in years) of the distribution and the b parameter represents the dispersion (in years) coefficient. For example, I might say that your random lifetime has a modal value of m = 90 years and a dispersion coefficient of b = 10 years. Note that the modal value isn’t the mean (average) value, and that the dispersion coefficient isn’t quite the standard deviation either, although both are close. This particular specification of the mortality rate is known as the pure Gompertz assumption, which will play a very central role in the modern tontine simulations. I will now create our first function in R-script, which will be used over (and over) again in what follows. It is called the TPXG function, based on Eq. (2.8).

                  
                    TPXG<-function(x,t,m,b){exp(exp((x-m)/b)*(1-exp(t/b)))}
                  
                
Notice that the function has four arguments (x, t, m, b). The first argument is the current (conditioning) age x, the second argument is time t, and the third and fourth arguments are the Gompertz (m, b) parameters. Here are some numerical examples.

                  
                    round(TPXG(65,35,90,10),digits=3)
                  
                  
                    [1] 0.072
                  
                  
                    round(TPXG(85,15,90,10),digits=3)
                  
                  
                    [1] 0.121
                  
                
The interpretation is as follows. If you are (a.k.a. conditional on) age x = 65, the probability of surviving to age y = 100, which is t = 35 years, is 7.2%, under a Gompertz model with parameters m = 90 and b = 10. But, if you are already (conditional on) age x = 85, the probability of surviving to age y = 100, which is t = 15 years, is 12.1%. The older you are, the greater the probability of reaching any given age.
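The two hazard parameterizations from Eq. (2.7) can also be reconciled numerically. The following check is mine, using m = 90, b = 10 and age 65 as in the examples above:

```r
# Sketch: the (m,b) hazard in Eq. (2.7) equals the classical h0*exp(g*x)
# form, with ln(h0) = -ln(b) - m/b and g = 1/b.
m <- 90; b <- 10; x <- 65
h0 <- exp(-log(b) - m/b)             # initial (age-zero) hazard rate
g  <- 1/b                            # mortality growth rate
lambda.mb <- (1/b) * exp((x - m)/b)  # modal/dispersion parameterization
lambda.hg <- h0 * exp(g * x)         # Gompertz's original parameterization
```

Both expressions return the identical hazard rate at age 65, confirming that (m, b) is just a relabelling of (h0, g).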
Now, for the sake of completeness (and an independent verification) I will compute the 15-year survival probability by asking R to integrate the mortality hazard rate curve numerically. To do this properly (and without error messages) I must set all the Gompertz parameter values (first) and then define the mortality rate function in terms of time (only). Then, I can perform the numerical integration, between t = 0 to t = 15. The result (to 3 digits) is as follows:

                  
                    m<-90; b<-10; x<-85
                  
                  
                    lambda<-function(t){-(1/b)*exp((x+t-m)/b)} # minus the hazard rate, so exp() of the integral yields the survival probability
                  
                  
                    round(exp(integrate(lambda,0,15)$value), digits=3)
                  
                  
                    [1] 0.121
                  
                
The analytic result from Eq. (2.8) is consistent with the brute-force numerical integration displayed above. Note that the lower the Gompertz modal value m, the earlier the person is expected to die and the survival probability to any future age should be lower. To be clear, the modal value of life m is the age at which this person is most likely to die, which is different (and actually higher) than the mean lifetime, denoted by E[T x], in a Gompertz model. I can use numerical routines in R to obtain these values. In particular, I can leverage the extremely useful and fundamental construction of the expectation as the integral of the survival probability function.
$$\displaystyle \begin{aligned} E[T_x] \; = \; \int_0^{\infty} \, \Pr[T_x \geq t] dt. \end{aligned} $$
(2.9)
Here is a short script in R that computes E[T x], using the numerical integration routine I noted earlier, in conjunction with the TPXG(.) function I just defined and created.

                  
                    m<-90; b<-10; x<-65
                  
                  
                    theta<-function(t){TPXG(x,t,m,b)}
                  
                  
                    round((integrate(theta,0,45)$value), digits=2)
                  
                  
                    [1] 21.75
                  
                
The lower bound of integration is zero (years) and the upper bound is 45 (years), after which the survival probability is assumed to be negligible. So, at x = 65, your expected remaining lifetime is 21.75 years, and your expected age at death is 86.75, whereas your modal age at death is m = 90. Note the difference between the mean and mode, as well as the median age at death, which is the age (x + t) at which TPXG(x,t,m,b)=0.5.
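That median is easily obtained with R’s built-in root finder; the short sketch below is my own, solving TPXG(x,t,m,b) = 0.5 for t.

```r
# Sketch: median remaining lifetime at x = 65, under Gompertz (m=90, b=10),
# found by solving TPXG(x,t,m,b) = 0.5 for t with uniroot.
TPXG <- function(x,t,m,b){exp(exp((x-m)/b)*(1-exp(t/b)))}
m <- 90; b <- 10; x <- 65
t.med <- uniroot(function(t) TPXG(x,t,m,b) - 0.5, c(0, 60))$root
round(x + t.med, digits = 2)   # median age at death
# [1] 87.45
```

As expected, the median age at death (87.45) sits between the mean (86.75) and the mode (90).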
Finally, the expectation of remaining life at birth can be expressed in terms of Euler’s constant \(\gamma _e \approx 0.5772\), and the standard deviation can be approximated in terms of \(\pi \approx 3.1416\), both of which are given here without any proof or justification.
$$\displaystyle \begin{aligned} \begin{array}{rcl} E[T_0] \; & = &\displaystyle \; m-b \gamma_{e} \approx m-(0.5772)b, \\ SD[T_0] \; & \approx &\displaystyle \; b \frac{\pi}{\sqrt{6}} \approx (1.28255)b, {} \end{array} \end{aligned} $$
(2.10)
which implies that (at age x = 0) the standard deviation of life is greater than the dispersion coefficient b. Finally, for those readers interested in a deeper understanding of mortality models in general and the Gompertz model in particular, I would suggest you consult the companion (2020) book called: Retirement Income Recipes in R, from which most of this section has been sourced. I will return to mortality models in Chap. 7, where I will compare Gompertz to some discrete mortality tables that are more common and familiar to practicing actuaries.
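These approximations are easy to verify against the TPXG(.) function by numerical integration; the script below is my own check, applying Eq. (2.9) at x = 0, not something taken from the book.

```r
# Sketch: check the Eq. (2.10) approximations numerically, via Eq. (2.9)
# evaluated at birth (x = 0); the upper bound of 120 years is an assumption
# beyond which survival is negligible.
TPXG <- function(x,t,m,b){exp(exp((x-m)/b)*(1-exp(t/b)))}
m <- 90; b <- 10
e0.approx  <- m - 0.5772*b                                  # about 84.23
e0.numeric <- integrate(function(t) TPXG(0,t,m,b), 0, 120)$value
# second moment of T0 gives the standard deviation, close to 1.28255*b:
m2 <- 2*integrate(function(t) t*TPXG(0,t,m,b), 0, 120)$value
sd.numeric <- sqrt(m2 - e0.numeric^2)
```

Both numerical values land within a fraction of a year of the closed-form approximations, confirming that (at age x = 0) the standard deviation of life exceeds the dispersion coefficient b.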

2.5 Need-to-Know: Annuity Values

2.5.1 Life Only Immediate Annuity

Following standard methodology, which one can find in the referenced books by Promislow [8], Kaas et al. [5], Dickson et al. [3], Bowers et al. [2], or Booth et al. [1], I will use the following to denote the discounted actuarial present value of a life annuity.
$$\displaystyle \begin{aligned} a(x,r) \; = \; \int_{0}^{\infty} e^{-rt} \Pr[T_x \geq t] dt \; = \; \int_{0}^{\infty} e^{-rt} \, ({}_tp_x) dt {} \end{aligned} $$
(2.11)
Under Gompertz mortality, this can be solved analytically, see, for example, Milevsky [6], and leads to the expression:
$$\displaystyle \begin{aligned} a(x,r) \; = \; \frac{ b \varGamma(-rb,e^{(x-m)/b}) }{ \exp\{(m-x)r-e^{(x-m)/b}\} }, {} \end{aligned} $$
(2.12)
where Γ(A, B) is the incomplete Gamma function. For calibration and comparison purposes I now display values for life annuity prices assuming two different interest (valuation) rates: r = 2% and r = 4%. Table 2.1 provides a range of numerical values of actuarial present values: a = a(x, r), for x = 55, x = 65, x = 75 under valuation rates r = 2% and r = 4%.
Table 2.1
Pure life annuity factor values. Gompertz (m = 90, b = 10)
Here is how actuarial present values are converted into actual market or insurance company prices. For example, under an r = 4% interest rate and an insurance loading of 20%, a P = $100, 000 premium at the age of x = 65, would result in a lifetime cash-flow of \(C=\frac {100000}{1.2(13.73359)} \approx \$6068\) per year, or (closer to continuous time) $116.69 per week, which ceases at death. This is the last I’ll say of insurance loadings, other than to remind readers that most of the algorithms and scripts in this book are presented within a fee-free zone. In practice, loadings, commissions and transaction costs must be incorporated and at the very least deducted from anticipated payouts. Likewise, the transition from actuarial factor a(x, r) to payout rate κ(x, r) := 1∕a(x, r) might involve yet another layer of fees and charges that are for the most part ignored.
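As an independent check of Eqs. (2.11) and (2.12), one can bypass the incomplete Gamma function entirely and integrate Eq. (2.11) numerically; the helper function AXR below is my own naming, not the book’s.

```r
# Sketch: numerical Gompertz annuity factor, integrating Eq. (2.11) directly.
TPXG <- function(x,t,m,b){exp(exp((x-m)/b)*(1-exp(t/b)))}
AXR <- function(x,r,m,b){
  integrate(function(t) exp(-r*t)*TPXG(x,t,m,b), 0, Inf)$value
}
round(AXR(65, 0.04, 90, 10), digits = 5)   # matches the 13.73359 used above
```

Raising the valuation rate lowers the factor, so the r = 2% entries in Table 2.1 should exceed the r = 4% entries at every age.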

2.5.2 Refund at Death IA

Most consumers and annuity buyers are unwilling to “invest” in a life annuity in which all-is-lost at death, and many ask for some sort of death benefit or at least a refund of the un-earned premium. In the more advanced version of the modern tontine I will allow for and include such a feature, so in this section I will examine the mathematical properties of the corresponding annuity, the cash-refund immediate (or life) annuity. In contrast to the basic annuity factor defined in Eq. (2.11), the first thing to note is that the price of the cash-refund annuity must be defined recursively. Why? Because the periodic cash-flow itself depends on the original price. In the absence of any insurance loading, this can be expressed mathematically as follows:
$$\displaystyle \begin{aligned} {a^{\star}}(x,r) = \int_{0}^{\infty} e^{-rs} \, ({}_sp_x) ds + \int_{0}^{{a^{\star}}(x,r)} \left({a^{\star}}(x,r) \, - \, s \right) e^{-rs} \, ({}_sp_x) \, \lambda_{(x+s)} \, ds, \end{aligned}$$
where the right-hand side represents the actuarial present value of payments. The annuitant receives $1 for life, but if they die early, that is before the entire \({a^{\star }}(x,r)\) has been returned, the beneficiary receives the difference between the price paid \({a^{\star }}(x,r)\) and payments received prior to death. In other words, death prior to age \(y = x + {a^{\star }}(x,r)\) triggers a (declining) death benefit.
Notice how the first integral above is the basic income annuity factor a(x, r), and the second integral represents the non-negative life insurance component. The cash-refund value minus the life only value can also be expressed as:
$$\displaystyle \begin{aligned} {a^{\star}}(x,r) - a(x,r) \; = \; \int_{0}^{{a^{\star}}(x,r)} \left({a^{\star}}(x,r) \, - \, s \right) e^{-rs} \, ({}_sp_x) \, \lambda_{(x+s)} \, ds \; \; \geq 0. {} \end{aligned} $$
(2.13)
Solving for the (implicit) variable that determines the value of the cash-refund annuity is best done via a bisection algorithm, or using the built-in solver in R. For more on this, please read the paper referenced as Milevsky and Salisbury [7]. For example, under a Gompertz law of mortality with m = 90, b = 10 and r = 3%, the standard annuity factor at age x = 65 is a(65, 0.03) = 15.25 while the factor with a death benefit feature is \({a^{\star }}(65, 0.03) = 16.91\), both per dollar of lifetime income. See Table 2.2 for more.
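Here is a sketch (mine) that finds the cash-refund factor with R’s built-in uniroot solver rather than explicit bisection; the function names AXR and ASTAR are my own, and the search bracket [a, 2a] is an assumption that the root lies between the life-only factor and twice its value.

```r
# Sketch: solve the implicit cash-refund equation a + G(A) = A, where G(A)
# is the refund integral from Eq. (2.13), using uniroot.
TPXG <- function(x,t,m,b){exp(exp((x-m)/b)*(1-exp(t/b)))}
AXR  <- function(x,r,m,b){
  integrate(function(t) exp(-r*t)*TPXG(x,t,m,b), 0, Inf)$value
}
ASTAR <- function(x,r,m,b){
  a   <- AXR(x,r,m,b)                          # life-only factor, Eq. (2.11)
  lam <- function(s) (1/b)*exp((x+s-m)/b)      # Gompertz hazard at age x+s
  G   <- function(A) integrate(function(s)     # refund term from Eq. (2.13)
           (A - s)*exp(-r*s)*TPXG(x,s,m,b)*lam(s), 0, A)$value
  uniroot(function(A) a + G(A) - A, c(a, 2*a))$root
}
ASTAR(65, 0.03, 90, 10)   # close to the 16.91 quoted in the text
```

Since G(A) is non-negative, the solver always returns a value above the life-only factor, consistent with the inequality in Eq. (2.13).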
Table 2.2
A life annuity with a cash refund at death. Gompertz (m = 90, b = 10)
Note, once again, how in contrast to Eq. (2.12), the \(a^{\star }\) appears on both sides of Eq. (2.13), which leads to an implicit recursion. The paper cited as Milevsky and Salisbury [7] offers a proof of existence and uniqueness and shows that \({a^{\star }}(x,r)\) actually declines in both x and r. In sum, this chapter contains a crash-course assortment of formulas, results and algorithms that will be used in later chapters to simulate scenarios and payouts for the modern tontine. For anything more I refer the reader to the references.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://​creativecommons.​org/​licenses/​by/​4.​0/​), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
References
1.
Booth, P., Chadburn, R., Haberman, S., James, D., Khorasanee, Z., Plumb, R. H., & Rickayzen, B. (2020). Modern actuarial theory and practice. Chapman & Hall, CRC Press.
2.
Bowers, N. L., Gerber, H. U., Hickman, J. C., Jones, D. A., & Nesbitt, C. J. (1997). Actuarial mathematics. Schaumburg, IL: Society of Actuaries.
3.
Dickson, D. C. M., Hardy, M. R., & Waters, H. R. (2009). Actuarial mathematics for life contingent risks. Cambridge: Cambridge University Press.
4.
Gompertz, B. (1825). On the nature of the function expressive of the law of human mortality and on a new mode of determining the value of life contingencies. Philosophical Transactions of the Royal Society of London, 115, 513–583.
5.
Kaas, R., Goovaerts, M., Dhaene, J., & Denuit, M. (2008). Modern actuarial risk theory: Using R (Vol. 128). Springer Science & Business Media.
6.
Milevsky, M. A. (2020). Retirement income recipes in R: From ruin probabilities to intelligent drawdown. New York: Springer-Nature.
8.
Promislow, S. D. (2006). Fundamentals of actuarial mathematics. Toronto: John Wiley & Sons.
Metadata
Title: Financial and Actuarial Background
Author: Moshe Arye Milevsky
Copyright year: 2022
DOI: https://doi.org/10.1007/978-3-031-00928-0_2