We now pass from an ordered pair of contingent consumption plans to two contingent consumption plans that are jointly considered independently of the notion of ordered pair. We therefore consider a double risky asset denoted by
$$\begin{aligned} X_{12} = \{_{1}X, \, _{2}X\} \end{aligned}$$
(48)
whose possible monetary values coincide with the contravariant components of an antisymmetric tensor of order 2. We say that
\(X_{12}\) is a portfolio of two risky assets jointly considered, which is independent of the notion of ordered pair. Hence, after choosing the
\(m^2\) joint probabilities connected with
\(_{1}X \, _{2}X\), we observe that it is necessary to consider four joint distributions characterizing
\(_{1}X \, _{1}X\),
\(_{1}X \, _{2}X\),
\(_{2}X \, _{1}X\) and
\(_{2}X \, _{2}X\), with
$$\begin{aligned}&_{1}X \, _{1}X :I(_{1}X) \times I(_{1}X) \rightarrow {\mathbb {R}}, \end{aligned}$$
(49)
$$\begin{aligned}&_{2}X \, _{2}X :I(_{2}X) \times I(_{2}X) \rightarrow {\mathbb {R}}\end{aligned}$$
(50)
and
$$\begin{aligned} _{2}X \, _{1}X :I(_{2}X) \times I(_{1}X) \rightarrow {\mathbb {R}}, \end{aligned}$$
(51)
in order to free
\(X_{12}\) from the notion of ordered pair. We note that the marginal components of
\(X_{12}\) denoted by
\(_{1}X\) and
\(_{2}X\) are not juxtaposed, unlike what happens when we jointly consider
\(_{i}X\) and
\(_{j}X\), where we have
\(i, j = 1, 2\). Each probability distribution of a marginal risky asset is viewed as a particular joint distribution. This implies that all off-diagonal joint probabilities of the corresponding square two-way table, whose number of rows equals its number of columns, are equal to 0.
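The diagonal structure of such a two-way table can be sketched numerically. A minimal illustration in Python, with made-up possible values and probabilities for the marginal risky asset \(_{1}X\):

```python
# Sketch (illustrative values): the marginal distribution of a single risky
# asset 1X, with possible values I(1X) = {1, 2, 3} and probabilities
# {0.2, 0.5, 0.3}, viewed as a particular joint distribution of 1X 1X.
values = [1, 2, 3]
probs = [0.2, 0.5, 0.3]
m = len(values)

# Square two-way table of 1X 1X: every off-diagonal joint probability is 0,
# so the m marginal probabilities sit on the main diagonal.
table = [[probs[i] if i == j else 0.0 for j in range(m)] for i in range(m)]

assert all(table[i][j] == 0.0 for i in range(m) for j in range(m) if i != j)
assert abs(sum(table[i][i] for i in range(m)) - 1.0) < 1e-12
```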
An affine tensor of order 2 representing the possible monetary values of
\(_{1}X \, _{2}X\), where
\(_{1}X \, _{2}X\) corresponds to
\((_{1}X, \, _{2}X)\), is written in the form
$$\begin{aligned} T = {}^{}_{(1)}{\mathbf {x}}^{}_{} \otimes {}^{}_{(2)}{\mathbf {x}}^{}_{} = {}^{}_{(1)}x^{i}{} {}^{}_{(2)}x^{j} {\mathbf {e}}_i \otimes {\mathbf {e}}_j . \end{aligned}$$
(52)
An affine tensor of order 2 representing the possible monetary values of
\(_{2}X \, _{1}X\), where
\(_{2}X \, _{1}X\) corresponds to
\((_{2}X, \, _{1}X)\), is conversely written in the form
$$\begin{aligned} T = {}^{}_{(2)}{\mathbf {x}}^{}_{} \otimes {}^{}_{(1)}{\mathbf {x}}^{}_{} = {}^{}_{(2)}x^{j}{} {}^{}_{(1)}x^{i} {\mathbf {e}}_j \otimes {\mathbf {e}}_i . \end{aligned}$$
(53)
We have written the same affine tensor of order 2, denoted by
T, whose
\(m^2\) contravariant components are not the same. If we pass from (52) to (53), then the contravariant components whose upper indices are equal do not change, whereas the contravariant components whose upper indices are not equal do change. It follows that we write an antisymmetric tensor of order 2 in the form
$$\begin{aligned} T = \sum _{i < j} \left( {}^{}_{(1)}x^{i} {}^{}_{(2)}x^{j} - {}^{}_{(1)}x^{j} {}^{}_{(2)}x^{i} \right) {\mathbf {e}}_i \otimes {\mathbf {e}}_j \end{aligned}$$
(54)
because we have to consider (52) and (53) together. We have written
\(i < j\) under the summation symbol since, whenever
\(i = j\), every contravariant component inside the parentheses is equal to 0. Hence, we denote by
\({}^{}_{12}x^{}_{}\) an antisymmetric tensor of order 2 logically identifying
\(X_{12}\). We write
$$\begin{aligned} {}^{}_{12}x^{(ij)}_{} = \begin{vmatrix} {}^{}_{(1)}x^{i} \quad&\quad {}^{}_{(1)}x^{j}\\ {}^{}_{(2)}x^{i} \quad&\quad {}^{}_{(2)}x^{j}\\ \end{vmatrix} = {}^{}_{(1)}x^{i} {}^{}_{(2)}x^{j} - {}^{}_{(1)}x^{j} {}^{}_{(2)}x^{i} \end{aligned}$$
(55)
in order to identify its strict contravariant components, where
\(i < j\). The overall number of such components is equal to
$$\begin{aligned} {m \atopwithdelims ()2} . \end{aligned}$$
(56)
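The strict contravariant components (55) and their count (56) can be checked with a short numerical sketch in Python, using made-up contravariant components for \(_{(1)}{\mathbf {x}}\) and \(_{(2)}{\mathbf {x}}\):

```python
from itertools import combinations
from math import comb

m = 3
x1 = [2.0, 1.0, 4.0]   # contravariant components of (1)x (illustrative)
x2 = [3.0, 5.0, 1.0]   # contravariant components of (2)x (illustrative)

# Strict contravariant components 12x^{(ij)}, i < j, as the 2x2 determinants
# of (55): x1^i x2^j - x1^j x2^i.
strict = {(i, j): x1[i] * x2[j] - x1[j] * x2[i]
          for i, j in combinations(range(m), 2)}

# Their number is m choose 2, as stated in (56).
assert len(strict) == comb(m, 2)

# For i = j every such determinant vanishes, which is why the sum in (54)
# runs over i < j only.
assert all(x1[i] * x2[i] - x1[i] * x2[i] == 0.0 for i in range(m))
```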
The corresponding strict covariant components of
\({}^{}_{12}x^{}_{}\) are given by
$$\begin{aligned} {}^{}_{12}x^{}_{(ij)} = \begin{vmatrix} {}^{}_{(1)}x_{i} \quad&\quad {}^{}_{(1)}x_{j}\\ {}^{}_{(2)}x_{i} \quad&\quad {}^{}_{(2)}x_{j}\\ \end{vmatrix} = \begin{vmatrix} {}^{}_{(1)}x^{j}_{} p_{ji} \quad&\quad {}^{}_{(1)}x^{i}_{} p_{ij}\\ {}^{}_{(2)}x^{j}_{} p_{ji} \quad&\quad {}^{}_{(2)}x^{i}_{} p_{ij}\\ \end{vmatrix} , \end{aligned}$$
(57)
where we have
\(i < j\). By taking into account all joint probabilities of the joint distribution of
\(_{1}X\) and
\(_{2}X\), such covariant components are obtained by considering some of those probabilities each time. This means that we could be faced with several vector homographies. Given a two-way table containing all joint probabilities, we always consider the probabilities belonging to one of its rows or columns in order to obtain a covariant component of an
m-dimensional vector. We do not compute the scalar value of (57). We note that the number of the strict contravariant and covariant components of
\({}^{}_{12}x^{}_{}\) is irrelevant: we always obtain the same outcome independently of such a number. We put together (55) and (57), which contain all strict contravariant and covariant components of
\({}^{}_{12}x^{}_{}\) at the same time. We always put together (55) and (57) in the same way. We always associate
\({}^{}_{(1)}x^{i}\) with
\({}^{}_{(1)}x_{i}\),
\({}^{}_{(1)}x^{j}\) with
\({}^{}_{(2)}x_{j}\),
\({}^{}_{(2)}x^{i}\) with
\({}^{}_{(1)}x_{i}\) and
\({}^{}_{(2)}x^{j}\) with
\({}^{}_{(2)}x_{j}\). After putting together (55) and (57), whose structure is evidently that of two determinants because we are dealing with multilinear matters, we obtain different single terms (monomials). A variable index appearing twice in a monomial implies summation over all values of that index (hence, a polynomial can be obtained every time by using the Einstein notation). On the other hand, all strict contravariant and covariant components of
\({}^{}_{12}x^{}_{}\) are simultaneously identified with two determinants because, in general, the determinant of a square matrix is the most exemplary multilinear relationship, just as a linear combination of basis vectors is the most exemplary linear relationship. We obtain the mathematical expectation of
\(X_{12}\) given by
$$\begin{aligned} \begin{aligned} \Vert {}^{}_{12}x^{}_{}\Vert ^2_\alpha = \begin{vmatrix} \Vert {}^{}_{(1)}{\mathbf {x}}^{}_{}\Vert ^2_\alpha \quad&\quad \langle {}^{}_{(1)}{\mathbf {x}}^{}_{}, \, {}^{}_{(2)}{\mathbf {x}}^{}_{} \rangle _\alpha \\ \langle {}^{}_{(2)}{\mathbf {x}}^{}_{}, \, {}^{}_{(1)}{\mathbf {x}}^{}_{} \rangle _\alpha \quad&\quad \Vert {}^{}_{(2)}{\mathbf {x}}^{}_{}\Vert ^2_\alpha \\ \end{vmatrix}&= \Vert {}^{}_{(1)}{\mathbf {x}}^{}_{}\Vert ^2_\alpha \Vert {}^{}_{(2)}{\mathbf {x}}^{}_{}\Vert ^2_\alpha - \left( \langle {}^{}_{(1)}{\mathbf {x}}^{}_{}, \, {}^{}_{(2)}{\mathbf {x}}^{}_{} \rangle _\alpha \right) ^2 \end{aligned} , \end{aligned}$$
(58)
where we evidently observe
$$\begin{aligned} \langle {}^{}_{(1)}{\mathbf {x}}^{}_{}, \, {}^{}_{(2)}{\mathbf {x}}^{}_{} \rangle _\alpha = \langle {}^{}_{(2)}{\mathbf {x}}^{}_{}, \, {}^{}_{(1)}{\mathbf {x}}^{}_{} \rangle _\alpha . \end{aligned}$$
(59)
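The decomposition (58) and the symmetry (59) can be checked numerically. The following is a minimal sketch, under the assumption that the α-product is the double contraction \(\langle {\mathbf {u}}, {\mathbf {v}} \rangle _\alpha = u^i v^j p_{ij}\) with a symmetric two-way table of made-up joint probabilities:

```python
# Illustrative check of (58) and (59).  Assumption: the alpha-product is the
# double contraction <u, v>_alpha = sum_{i,j} u^i v^j p_{ij}, with a symmetric
# table p of made-up joint probabilities.
m = 2
x1 = [1.0, 3.0]        # contravariant components of (1)x (illustrative)
x2 = [2.0, 1.0]        # contravariant components of (2)x (illustrative)
p = [[0.3, 0.2],
     [0.2, 0.3]]

def alpha(u, v):
    return sum(u[i] * v[j] * p[i][j] for i in range(m) for j in range(m))

# (59): symmetry of the alpha-product (here guaranteed by the symmetry of p).
assert abs(alpha(x1, x2) - alpha(x2, x1)) < 1e-12

# (58): the squared alpha-norm of 12x as a Gram determinant.
norm_sq = alpha(x1, x1) * alpha(x2, x2) - alpha(x1, x2) ** 2
```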
By putting together (
55) and (
57) we are always faced with four joint distributions characterizing
\(_{1}X \, _{1}X\),
\(_{1}X \, _{2}X\),
\(_{2}X \, _{1}X\) and
\(_{2}X \, _{2}X\), all of which are summarized. We write
$$\begin{aligned} \Vert {}^{}_{12}x^{}_{}\Vert ^2_\alpha = {\mathbf {P}}(X_{12}) , \end{aligned}$$
(60)
where it turns out to be
$$\begin{aligned} \begin{aligned} {\mathbf {P}}(X_{12})&= \begin{vmatrix} \Vert {}^{}_{(1)}{\mathbf {x}}^{}_{}\Vert ^2_\alpha&\langle {}^{}_{(1)}{\mathbf {x}}^{}_{}, \, {}^{}_{(2)}{\mathbf {x}}^{}_{} \rangle _\alpha \\ \langle {}^{}_{(2)}{\mathbf {x}}^{}_{}, \, {}^{}_{(1)}{\mathbf {x}}^{}_{} \rangle _\alpha&\Vert {}^{}_{(2)}{\mathbf {x}}^{}_{}\Vert ^2_\alpha \\ \end{vmatrix} \\ {}&= \begin{vmatrix} {}^{}_{(1)}x^{i} \, {}^{}_{(1)}x^{i} p_{ii}^{(11)} = {}^{}_{(1)}x^{i} \, {}^{}_{(1)}x_{i} \quad&\quad {}^{}_{(1)}x^{j} \, {}^{}_{(2)}x^{i} p_{ij}^{(12)} = {}^{}_{(1)}x^{j} \, {}^{}_{(2)}x_{j}\\ {}^{}_{(2)}x^{i} \, {}^{}_{(1)}x^{j} p_{ji}^{(21)} = {}^{}_{(2)}x^{i} \, {}^{}_{(1)}x_{i} \quad&\quad {}^{}_{(2)}x^{j} \, {}^{}_{(2)}x^{j} p_{jj}^{(22)} = {}^{}_{(2)}x^{j} \, {}^{}_{(2)}x_{j}\\ \end{vmatrix} . \end{aligned} \end{aligned}$$
(61)
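The contractions displayed in (61) pair a contravariant component with a covariant one obtained by lowering an index through a joint probability tensor. A minimal sketch of this mechanism, assuming (consistently with (61)) the lowering rule \(x_j = x^i p_{ij}\), with made-up values:

```python
# A minimal sketch, assuming (consistently with the contractions displayed
# in (61)) that a covariant component is obtained by contracting the
# contravariant components with a row or a column of a joint probability
# tensor: x_j = sum_i x^i p_{ij}.  All numerical values are illustrative.
m = 2
x1 = [1.0, 3.0]        # contravariant components of (1)x
x2 = [2.0, 1.0]        # contravariant components of (2)x
p12 = [[0.3, 0.2],     # joint probability tensor p^{(12)} (made up)
       [0.2, 0.3]]

# Lower the index of (1)x by means of p^{(12)}.
x1_cov = [sum(x1[i] * p12[i][j] for i in range(m)) for j in range(m)]

# Contracting (2)x with the covariant components of (1)x reproduces the
# double contraction sum_{i,j} x1^i x2^j p_{ij}.
lhs = sum(x2[j] * x1_cov[j] for j in range(m))
rhs = sum(x1[i] * x2[j] * p12[i][j] for i in range(m) for j in range(m))
assert abs(lhs - rhs) < 1e-12
```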
We note that
\(p^{(11)}\) is the tensor of all joint probabilities associated with
\(({}^{}_{1}{\mathbf {x}}^{}_{}, \, {}^{}_{1}{\mathbf {x}}^{}_{})\). The same is true for the other probability tensors contained in (61). In general, it is possible to observe that
$$\begin{aligned} {\mathbf {P}}(_{1}X \, _{2}X) \ne {\mathbf {P}}(X_{12}) . \end{aligned}$$
(62)
We finally write
$$\begin{aligned} \begin{aligned} {\mathbf {P}}(X_{12}) = \begin{vmatrix} {\mathbf {P}}(_{1}X \, _{1}X) \quad&\quad {\mathbf {P}}(_{1}X \, _{2}X) \\ {\mathbf {P}}(_{2}X \, _{1}X) \quad&\quad {\mathbf {P}}(_{2}X \, _{2}X) \\ \end{vmatrix} , \end{aligned} \end{aligned}$$
(63)
where the determinant of the square matrix of order 2 under consideration is a bilinear function of its columns.
\(\square\)
All deviations from
\({\mathbf {P}}(_{1}X)\) and
\({\mathbf {P}}(_{2}X)\) of the possible monetary values of
\(_{1}X\) and
\(_{2}X\) are translations. It is then possible to write
$$\begin{aligned} \begin{aligned} \Vert {}^{}_{12}d^{}_{}\Vert ^2_\alpha = \begin{vmatrix} \Vert {}^{}_{(1)}{\mathbf {d}}^{}_{}\Vert ^2_\alpha \quad&\quad \langle {}^{}_{(1)}{\mathbf {d}}^{}_{}, \, {}^{}_{(2)}{\mathbf {d}}^{}_{} \rangle _\alpha \\ \langle {}^{}_{(2)}{\mathbf {d}}^{}_{}, \, {}^{}_{(1)}{\mathbf {d}}^{}_{} \rangle _\alpha \quad&\quad \Vert {}^{}_{(2)}{\mathbf {d}}^{}_{}\Vert ^2_\alpha \\ \end{vmatrix}&= \Vert {}^{}_{(1)}{\mathbf {d}}^{}_{}\Vert ^2_\alpha \Vert {}^{}_{(2)}{\mathbf {d}}^{}_{}\Vert ^2_\alpha - \left( \langle {}^{}_{(1)}{\mathbf {d}}^{}_{}, \, {}^{}_{(2)}{\mathbf {d}}^{}_{} \rangle _\alpha \right) ^2 , \end{aligned} \end{aligned}$$
(64)
where
\({}^{}_{12}d^{}_{}\) is an antisymmetric tensor of order 2 logically representing
\(X_{12}\). We are faced with changes of origin of the possible monetary values of
\(_{1}X\) and
\(_{2}X\). We write
$$\begin{aligned} \Vert {}^{}_{12}d^{}_{}\Vert ^2_\alpha = \mathrm {Var}(X_{12}) = \sigma ^2_{X_{12}} . \end{aligned}$$
(65)
We note that it turns out to be
$$\begin{aligned} \langle {}^{}_{(1)}{\mathbf {d}}^{}_{}, \, {}^{}_{(2)}{\mathbf {d}}^{}_{} \rangle _\alpha = \langle {}^{}_{(2)}{\mathbf {d}}^{}_{}, \, {}^{}_{(1)}{\mathbf {d}}^{}_{} \rangle _\alpha = \mathrm {Cov}(_{1}X, \, _{2}X) = \mathrm {Cov}(_{2}X, \, _{1}X) , \end{aligned}$$
(66)
so it is possible to write
$$\begin{aligned} \begin{aligned} \mathrm {Var}(X_{12}) = \begin{vmatrix} \mathrm {Var}(_{1}X) \quad&\quad \mathrm {Cov}(_{1}X, \, _{2}X) \\ \mathrm {Cov}(_{2}X, \, _{1}X) \quad&\quad \mathrm {Var}(_{2}X) \\ \end{vmatrix} . \end{aligned} \end{aligned}$$
(67)
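The variance determinant (67) can be verified with a short numerical sketch, computing the variances and the covariance from a made-up symmetric table of joint probabilities:

```python
# Illustrative check of (67): with deviations d = x - E[x] computed from a
# (symmetric, made-up) table p of joint probabilities, the variance of X_12
# is the determinant Var(1X) Var(2X) - Cov(1X, 2X)^2.
m = 2
x1 = [1.0, 3.0]        # possible monetary values of 1X (illustrative)
x2 = [2.0, 1.0]        # possible monetary values of 2X (illustrative)
p = [[0.3, 0.2],
     [0.2, 0.3]]

p1 = [sum(p[i][j] for j in range(m)) for i in range(m)]  # marginal of 1X
p2 = [sum(p[i][j] for i in range(m)) for j in range(m)]  # marginal of 2X

e1 = sum(x1[i] * p1[i] for i in range(m))                # E[1X]
e2 = sum(x2[j] * p2[j] for j in range(m))                # E[2X]

var1 = sum((x1[i] - e1) ** 2 * p1[i] for i in range(m))
var2 = sum((x2[j] - e2) ** 2 * p2[j] for j in range(m))
cov = sum((x1[i] - e1) * (x2[j] - e2) * p[i][j]
          for i in range(m) for j in range(m))

var_X12 = var1 * var2 - cov ** 2   # the determinant in (67)
```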
If we are faced with the variance of
\(X_{12}\) then
\(_{1}X\) and
\(_{2}X\) are fused together. In general, if we compute only the covariance of
\(_{1}X\) and
\(_{2}X\) (in addition to the variance of each of them), then they are merely juxtaposed.
\(\square\)