2.1 The general case of structural decomposition
For the general case of SDA, we will consider a variable
\({\mathbf {Y}}\) that is a product of
\(N\) variables:
$$\begin{aligned} {\mathbf {Y}} = {\mathbf {Z}}_{1} {\mathbf {Z}}_{2} \dots {\mathbf {Z}}_{N} = \prod _{n=1}^{N} {\mathbf {Z}}_{n}. \end{aligned}$$
(1)
Each variable
\({\mathbf {Z}}_{n}\) is called
a factor, where the subscript
\(n\) identifies the
nth factor in the decomposition. In the general case,
\({\mathbf {Y}}\) and
\({\mathbf {Z}}_{n}\) are matrices, and in special cases,
\({\mathbf {Z}}_{1}\) and/or
\({\mathbf {Z}}_{N}\) may be vectors and
\({\mathbf {Y}}\) may be a vector or a scalar.
Using superscripts (0) and (1) for the initial period 0 and the terminal period 1, define the change in
\({\mathbf {Y}}\). This can be done in two ways. First, define the absolute change in
\({\mathbf {Y}}\):
$$\begin{aligned} {\mathbf {Y}}^{(1)} - {\mathbf {Y}}^{(0)} = {\mathbf {Z}}_{1}^{(1)} {\mathbf {Z}}_{2}^{(1)} \dots {\mathbf {Z}}_{N}^{(1)} - {\mathbf {Z}}_{1}^{(0)} {\mathbf {Z}}_{2}^{(0)} \dots {\mathbf {Z}}_{N}^{(0)} = \prod _{n=1}^{N} {\mathbf {Z}}_{n}^{(1)} - \prod _{n=1}^{N} {\mathbf {Z}}_{n}^{(0)} = \Delta {\mathbf {Y}}, \end{aligned}$$
(2)
where
\(\Delta \) is the difference operator.
\(\Delta {\mathbf {Y}}\) may be understood as an increment of
\({\mathbf {Y}}\) and has the same units of measurement as
\({\mathbf {Y}}\) (usually, monetary units).
Second, define the relative change in
\({\mathbf {Y}}\):
$$\begin{aligned} {\mathbf {Y}}^{(1)} \oslash {\mathbf {Y}}^{(0)} = \left( {\mathbf {Z}}_{1}^{(1)} {\mathbf {Z}}_{2}^{(1)} \dots {\mathbf {Z}}_{N}^{(1)} \right) \oslash \left( {\mathbf {Z}}_{1}^{(0)} {\mathbf {Z}}_{2}^{(0)} \dots {\mathbf {Z}}_{N}^{(0)} \right) = \left( \prod _{n=1}^{N} {\mathbf {Z}}_{n}^{(1)} \right) \oslash \left( \prod _{n=1}^{N} {\mathbf {Z}}_{n}^{(0)} \right) = \text {P} {\mathbf {Y}}, \end{aligned}$$
(3)
where P is the ratio operator and
\(\oslash \) signifies element-by-element division. P
\({\mathbf {Y}}\) measures the growth of each element in
\({\mathbf {Y}}\), and the units of measurement are dimensionless.
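As a quick numerical illustration of Eqs. (2) and (3), the following Python sketch uses made-up scalar factors, so that the product is an ordinary product and \(\oslash \) reduces to ordinary division:

```python
from math import prod

# Made-up scalar factors Z_1, Z_2, Z_3 at the initial and terminal periods.
z0 = [2.0, 0.5, 4.0]   # period 0
z1 = [3.0, 0.25, 8.0]  # period 1

delta_Y = prod(z1) - prod(z0)  # absolute change, Eq. (2): 6.0 - 4.0 = 2.0
P_Y = prod(z1) / prod(z0)      # relative change, Eq. (3): 1.5
```

The absolute change inherits the units of \({\mathbf {Y}}\), while the ratio is dimensionless, as noted above.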
The problem of structural decomposition is to decompose
\(\Delta {\mathbf {Y}}\) and P
\({\mathbf {Y}}\) into
\(N\) terms that would attribute the change in
\({\mathbf {Y}}\) to the changes in each
nth factor. For example, the absolute change in
\({\mathbf {Y}}\) that is induced by a change in
\({\mathbf {Z}}_{1}\) may be written as:
$$\begin{aligned} \Delta {\mathbf {Y}} ({\mathbf {Z}}_{1}, t_{2}, \dots , t_{N}) = {\mathbf {Z}}_{1}^{(1)} {\mathbf {Z}}_{2}^{(t_{2})} \dots {\mathbf {Z}}_{N}^{(t_{N})} - {\mathbf {Z}}_{1}^{(0)} {\mathbf {Z}}_{2}^{(t_{2})} \dots {\mathbf {Z}}_{N}^{(t_{N})}, \end{aligned}$$
(4)
where the superscripts
\((t_{2}) \dots (t_{N})\) can take values 1 or 0, and factors other than
\({\mathbf {Z}}_{1}\) can therefore be defined at either initial or terminal periods. For brevity, we will denote
\( \Delta {\mathbf {Y}} ({\mathbf {Z}}_{1}, t_{2}, \dots , t_{N}) \) by
\( \Delta {\mathbf {Y}} ({\mathbf {Z}}_{1}) \). One combination of
\( \{t_{2}, \dots , t_{N}\} \) corresponds to one
decomposition form of
\( \Delta {\mathbf {Y}} ({\mathbf {Z}}_{1}) \). From the literature on index number theory (Siegel
1945) and structural decomposition analysis (Seibel
2003), it is known that the number of possible unique decomposition forms of
\( \Delta {\mathbf {Y}} ({\mathbf {Z}}_{1}) \) is
\(2^{N-1}\).
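The \(2^{N-1}\) unique forms are straightforward to enumerate: each form corresponds to one assignment of period 0 or 1 to the remaining \(N-1\) factors. A minimal Python sketch (variable names are illustrative only):

```python
from itertools import product

N = 4  # number of factors, for illustration

# Each decomposition form of ΔY(Z_1) is identified by the periods
# (t_2, ..., t_N) at which the remaining factors are evaluated, Eq. (4).
forms = list(product((0, 1), repeat=N - 1))

# len(forms) == 2**(N-1); the first and last entries are the two polar forms.
```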
It is customary to refer to the two forms of
\( \Delta {\mathbf {Y}} ({\mathbf {Z}}_{n}) \) where all factors other than
\({\mathbf {Z}}_{n}\) are defined at period 1 or period 0 as
polar decomposition forms (Dietzenbacher and Los
1998; de Haan
2001). For example, the polar forms of
\( \Delta {\mathbf {Y}} ({\mathbf {Z}}_{1}) \) are:
$$\begin{aligned} \Delta {\mathbf {Y}} ({\mathbf {Z}}_{1} | t_{2}=1, \dots , t_{N}=1)&= {\mathbf {Z}}_{1}^{(1)} {\mathbf {Z}}_{2}^{(1)} \dots {\mathbf {Z}}_{N}^{(1)} - {\mathbf {Z}}_{1}^{(0)} {\mathbf {Z}}_{2}^{(1)} \dots {\mathbf {Z}}_{N}^{(1)}, \end{aligned}$$
(5a)
$$\begin{aligned} \Delta {\mathbf {Y}} ({\mathbf {Z}}_{1} | t_{2}=0, \dots , t_{N}=0)&= {\mathbf {Z}}_{1}^{(1)} {\mathbf {Z}}_{2}^{(0)} \dots {\mathbf {Z}}_{N}^{(0)} - {\mathbf {Z}}_{1}^{(0)} {\mathbf {Z}}_{2}^{(0)} \dots {\mathbf {Z}}_{N}^{(0)}. \end{aligned}$$
(5b)
All
\(2^{N-1}\) forms of
\( \Delta {\mathbf {Y}} ({\mathbf {Z}}_{n}) \) can be classified according to the
distance from the polar form, denoted by
\(k \in \{0, \dots , N-1\}\). One of the polar forms needs to be chosen as the starting point in the decomposition: for convenience, let it be (
5a), and the corresponding value of
\(k\) is 0. This may be understood as follows: none of the remaining factors is defined at period 0; all of them are at period 1. At
\(k = 1\), one of the remaining factors is now defined at period 0, while the rest are still at period 1. Obviously, there are
\(N - 1\) such forms of
\( \Delta {\mathbf {Y}} ({\mathbf {Z}}_{n}) \). At
\(k = 2\), two of the remaining factors are defined at period 0, while the rest are at period 1, which continues until the other polar form is reached at
\(k = N-1\). The number of unique forms of
\( \Delta {\mathbf {Y}} ({\mathbf {Z}}_{n}) \) at each
\(k\) is equal to the number of
\(k\) combinations of
\(N-1\):
\( \frac{(N-1)!}{(N-1-k)! k!} = \left( {\begin{array}{c}N-1\\ k\end{array}}\right) \). We will denote each
mth unique form of
\( \Delta {\mathbf {Y}} ({\mathbf {Z}}_{n}) \) at
\(k\) steps from the polar form by
\( \Delta {\mathbf {Y}} ({\mathbf {Z}}_{n})_{k,m} \) with the subscript
\(m\) running from 1 to
\( \left( {\begin{array}{c}N-1\\ k\end{array}}\right) \). Calculating the weighted average of all unique forms at each
\(k\) with the respective weights
\(c_{k}\) and across all
\(k\) yields the aggregate form of
\( \Delta {\mathbf {Y}} ({\mathbf {Z}}_{n}) \):
$$\begin{aligned} \overline{ \Delta {\mathbf {Y}} ({\mathbf {Z}}_{n}) } = \sum _{k=0}^{N-1} c_{k} \sum _{m=1}^{ \left( {\begin{array}{c}N-1\\ k\end{array}}\right) } \Delta {\mathbf {Y}} ({\mathbf {Z}}_{n})_{k,m} \quad \text {where} \quad c_{k} = \frac{(N-1-k)! k!}{N!}. \end{aligned}$$
(6)
Expression (
6) may be recognised as the Bennet indicator for the
nth factor (de Boer and Rodrigues
2020), which is the additive counterpart to the well-known Fisher index. Finally, collecting the changes induced by all
\(N\) factors produces a full additive decomposition of
\(\Delta {\mathbf {Y}}\):
$$\begin{aligned} \Delta {\mathbf {Y}} = \sum _{n=1}^{N} \overline{ \Delta {\mathbf {Y}} ({\mathbf {Z}}_{n}) } = \sum _{n=1}^{N} \sum _{k=0}^{N-1} c_{k} \sum _{m=1}^{ \left( {\begin{array}{c}N-1\\ k\end{array}}\right) } \Delta {\mathbf {Y}} ({\mathbf {Z}}_{n})_{k,m}. \end{aligned}$$
(7)
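For scalar factors, where the ordering of the product is immaterial, Eqs. (6) and (7) can be implemented and checked directly. The sketch below, with made-up numbers, verifies that the Bennet terms sum to \(\Delta {\mathbf {Y}}\) without residual:

```python
from itertools import product
from math import factorial, prod, isclose

def bennet(z0, z1):
    """Weighted-average (Bennet) terms of Eq. (6) for scalar factors; a sketch.
    Scalars are used so that the ordering of the product plays no role."""
    N = len(z0)
    terms = []
    for n in range(N):
        others = [m for m in range(N) if m != n]
        total = 0.0
        for t in product((0, 1), repeat=N - 1):   # all 2**(N-1) unique forms
            k = t.count(0)                        # distance from polar form (5a)
            c_k = factorial(N - 1 - k) * factorial(k) / factorial(N)
            rest = prod(z1[m] if tm else z0[m] for m, tm in zip(others, t))
            total += c_k * (z1[n] - z0[n]) * rest
        terms.append(total)
    return terms

z0 = [2.0, 0.5, 4.0, 1.5]   # made-up factor values, period 0
z1 = [3.0, 0.4, 5.0, 1.2]   # made-up factor values, period 1

terms = bennet(z0, z1)
dY = prod(z1) - prod(z0)
assert isclose(sum(terms), dY)   # the decomposition (7) is exact
```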
In the case of multiplicative decomposition, the relative change in
\({\mathbf {Y}}\) that is induced by a change in
\({\mathbf {Z}}_{1}\) should be written as:
$$\begin{aligned} \text {P} {\mathbf {Y}} ({\mathbf {Z}}_{1}, t_{2}, \dots , t_{N}) = \left( {\mathbf {Z}}_{1}^{(1)} {\mathbf {Z}}_{2}^{(t_{2})} \dots {\mathbf {Z}}_{N}^{(t_{N})} \right) \oslash \left( {\mathbf {Z}}_{1}^{(0)} {\mathbf {Z}}_{2}^{(t_{2})} \dots {\mathbf {Z}}_{N}^{(t_{N})} \right) . \end{aligned}$$
(8)
The relative change P
\( {\mathbf {Y}} ({\mathbf {Z}}_{n}) \) aggregated within and between all
\(k\) is as follows:
$$\begin{aligned} \overline{ \text {P} {\mathbf {Y}} ({\mathbf {Z}}_{n}) } = \bigodot _{k=0}^{N-1} \left( \bigodot _{m=1}^{ \left( {\begin{array}{c}N-1\\ k\end{array}}\right) } \text {P} {\mathbf {Y}} ({\mathbf {Z}}_{n})_{k,m} \right) ^{\circ c_{k}}, \end{aligned}$$
(9)
where
\( \bigodot \) denotes the Hadamard product of a sequence, the power
\(\circ c_{k}\) applies to the elements of the respective vectors (Hadamard power), and the weights
\(c_{k}\) are defined as in Eq. (
6). Expression (
9) is nothing but the Fisher index for the
nth factor. And the full multiplicative decomposition of P
\({\mathbf {Y}}\) is given by:
$$\begin{aligned} \text {P} {\mathbf {Y}} = \bigodot _{n=1}^{N} \overline{ \text {P} {\mathbf {Y}} ({\mathbf {Z}}_{n}) } = \bigodot _{n=1}^{N} \bigodot _{k=0}^{N-1} \left( \bigodot _{m=1}^{ \left( {\begin{array}{c}N-1\\ k\end{array}}\right) } \text {P} {\mathbf {Y}} ({\mathbf {Z}}_{n})_{k,m} \right) ^{\circ c_{k}}. \end{aligned}$$
(10)
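The multiplicative decomposition can likewise be checked numerically. The sketch below uses three made-up matrix factors (two \(2 \times 2\) matrices and a \(2 \times 1\) vector, so that matrix ordering genuinely matters) and verifies that the Hadamard product of the three Fisher indices of Eq. (9) reproduces P\({\mathbf {Y}}\) element by element, as in Eq. (10):

```python
from itertools import product
from math import factorial, isclose

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def chain(mats):
    out = mats[0]
    for M in mats[1:]:
        out = matmul(out, M)
    return out

# Three made-up factors: two 2x2 matrices and a 2x1 vector, at periods 0 and 1.
Z0 = [[[1.0, 0.2], [0.3, 1.5]], [[0.8, 0.1], [0.2, 1.1]], [[2.0], [3.0]]]
Z1 = [[[1.1, 0.3], [0.2, 1.6]], [[0.9, 0.2], [0.1, 1.0]], [[2.5], [2.8]]]
N = len(Z0)

fisher = []
for n in range(N):
    others = [m for m in range(N) if m != n]
    index = [1.0, 1.0]   # element-wise Fisher index for the nth factor, Eq. (9)
    for t in product((0, 1), repeat=N - 1):
        k = t.count(0)
        c_k = factorial(N - 1 - k) * factorial(k) / factorial(N)
        pick = {m: (Z1[m] if tm else Z0[m]) for m, tm in zip(others, t)}
        num = chain([Z1[m] if m == n else pick[m] for m in range(N)])
        den = chain([Z0[m] if m == n else pick[m] for m in range(N)])
        for i in range(2):
            index[i] *= (num[i][0] / den[i][0]) ** c_k
    fisher.append(index)

# The Hadamard product of the N Fisher indices recovers P Y exactly, Eq. (10).
y1, y0 = chain(Z1), chain(Z0)
for i in range(2):
    assert isclose(fisher[0][i] * fisher[1][i] * fisher[2][i], y1[i][0] / y0[i][0])
```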
Decompositions (
7) and (
10) are
exact in the sense that they do not have any residual terms. These two formulae may be derived by computing the simple average (respectively, arithmetic or geometric) of all
elementary decompositions of the change in
\({\mathbf {Y}}\). An elementary decomposition is made of a unique sequence of
\(N\) decomposition forms denoting the change in
\({\mathbf {Y}}\) because of the changes in each
nth factor, that is
\( \Delta {\mathbf {Y}} ({\mathbf {Z}}_{n}) \) or P
\( {\mathbf {Y}} ({\mathbf {Z}}_{n}) \), where one
\(n\) is consecutively chosen at each
\(k\). The total number of sequences is equal to the number of permutations of
\(N\) factors, or
\(N!\) (Dietzenbacher and Los
1998).
Computing the simple average from the elementary decompositions requires
\(N!\) forms of
\( \Delta {\mathbf {Y}} ({\mathbf {Z}}_{n}) \) or P
\( {\mathbf {Y}} ({\mathbf {Z}}_{n}) \), some of which are duplicates, while computing the weighted average only involves
\(2^{N-1}\) unique forms thereof. The sum of all weights, with due account of the number of times they apply to all
\(m\)th forms at step
\(k\), is unity:
$$\begin{aligned} \sum _{k=0}^{N-1} c_{k} \frac{(N-1)!}{(N-1-k)! k!} = \sum _{k=0}^{N-1} \frac{(N-1-k)! k!}{N!} \frac{(N-1)!}{(N-1-k)! k!} = N \frac{(N-1)!}{N!} = 1. \end{aligned}$$
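This identity is easily confirmed with exact rational arithmetic; the short Python check below runs over \(N = 2, \dots , 10\):

```python
from fractions import Fraction
from math import comb, factorial

# The weights c_k, counted with the number of forms C(N-1, k) at each k,
# sum to one for any number of factors N.
totals = {}
for N in range(2, 11):
    totals[N] = sum(
        Fraction(factorial(N - 1 - k) * factorial(k), factorial(N)) * comb(N - 1, k)
        for k in range(N)
    )

assert all(t == 1 for t in totals.values())
```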
Tables
1 and
2 provide an exemplary calculation of the coefficients (weights) and the number of unique decomposition forms for each factor required to implement decompositions (
7) and (
10) where the number of factors is up to 10. Thanks to the underlying formula, Table
2 contains Pascal’s triangle less the first row. The number of unique decomposition forms for each factor (
\(2^{N-1}\)) and their sum for all factors (
\(N 2^{N-1}\)) are a good indication of the complexity of the decomposition exercise: in the case of 5 factors, 16 decomposition forms need to be computed for each factor and 80 for all factors, and in the case of 10 factors, the respective numbers are 512 and 5120.
Table 1
Denominator of the coefficients (weights) in the decomposition
N | k = 0 | k = 1 | k = 2 | k = 3 | k = 4 | k = 5 | k = 6 | k = 7 | k = 8 | k = 9 |
2 | 2 | 2 | | | | | | | | |
3 | 3 | 6 | 3 | | | | | | | |
4 | 4 | 12 | 12 | 4 | | | | | | |
5 | 5 | 20 | 30 | 20 | 5 | | | | | |
6 | 6 | 30 | 60 | 60 | 30 | 6 | | | | |
7 | 7 | 42 | 105 | 140 | 105 | 42 | 7 | | | |
8 | 8 | 56 | 168 | 280 | 280 | 168 | 56 | 8 | | |
9 | 9 | 72 | 252 | 504 | 630 | 504 | 252 | 72 | 9 | |
10 | 10 | 90 | 360 | 840 | 1260 | 1260 | 840 | 360 | 90 | 10 |
Table 2
Number of unique decomposition forms that correspond to one factor at each step from the polar form
N | k = 0 | k = 1 | k = 2 | k = 3 | k = 4 | k = 5 | k = 6 | k = 7 | k = 8 | k = 9 | Total (\(2^{N-1}\)) |
2 | 1 | 1 | | | | | | | | | 2 |
3 | 1 | 2 | 1 | | | | | | | | 4 |
4 | 1 | 3 | 3 | 1 | | | | | | | 8 |
5 | 1 | 4 | 6 | 4 | 1 | | | | | | 16 |
6 | 1 | 5 | 10 | 10 | 5 | 1 | | | | | 32 |
7 | 1 | 6 | 15 | 20 | 15 | 6 | 1 | | | | 64 |
8 | 1 | 7 | 21 | 35 | 35 | 21 | 7 | 1 | | | 128 |
9 | 1 | 8 | 28 | 56 | 70 | 56 | 28 | 8 | 1 | | 256 |
10 | 1 | 9 | 36 | 84 | 126 | 126 | 84 | 36 | 9 | 1 | 512 |
Hence, the well-known problem of SDA is an exponential growth of the array of terms to be computed for an exact and full decomposition of \({\mathbf {Y}}\) as the number of factors increases. Several approaches have been proposed in the literature on SDA to address this problem.
It may be reasonable to handle factors in groups. For example, the factors that affect final demand may be delimited from those that affect intermediate demand. This enables
hierarchical or
nested SDA (see e.g., Koller and Stehrer
2010). In the most typical case of two groups, the array of
N factors is divided into two subsets with
R and
S factors, where
\(R + S = N\). The decomposition will involve two aggregate factors at the first tier and
\(R + S\) factors at the second tier. The total number of decomposition forms for all
N factors will now be
\(R 2^{R} + S 2^{S}\), which is necessarily less than
\(N 2^{N-1}\) if at least one of
\(R\) and
\(S\) is greater than 1.
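The savings can be gauged directly from the two counting formulas above; the Python fragment below merely evaluates them as stated in the text:

```python
def forms_flat(N):
    """All-factor count of unique decomposition forms without nesting."""
    return N * 2 ** (N - 1)

def forms_nested(R, S):
    """Count under a two-group (nested) split with R + S = N factors,
    as stated in the text."""
    return R * 2 ** R + S * 2 ** S

flat = forms_flat(10)        # 5120 forms without nesting
nested = forms_nested(5, 5)  # 320 forms with an even two-group split
assert nested < flat
```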
Dietzenbacher and Los (
1998) and Dietzenbacher et al (
2000) test the average of the two
polar decompositions against the average of all elementary decompositions. One polar decomposition is the elementary decomposition where at each
\(k\) starting from 0,
\(n = k+1\), and another polar decomposition is its ‘mirror image’ with
\(k\) running in reverse order, starting from
\(N-1\). There are two polar decompositions that contain the polar decomposition forms for the first and the last factors. This only requires computing two decomposition forms for each factor and
\(2N\) forms for all factors. They show that the result is a good approximation of the full decomposition. However, de Haan (
2001) stresses that the selection of the two polar decompositions is arbitrary. There exist
\(N!/2\) such decomposition pairs among the elementary decompositions, and any factor can be first or last.
Dietzenbacher and Los (
1998) also discuss two ad hoc solutions to simplify the SDA calculations that, however, do not provide exact decompositions. One solution is to take the averages of pairs of the polar decomposition forms for each factor.
These averages may be treated as special cases of Eqs. (
6) and (
9) where the coefficients (weights)
\(c_{k}\) are set to 1/2 and
\(k\) takes the values 0 and
\(N-1\) (
\(m\) is irrelevant because there is only one polar decomposition form at
\(k = 0\) and
\(k = N-1\)). Then the full decompositions may be written as:
$$\begin{aligned} \Delta {\mathbf {Y}}= & {} \sum _{n=1}^{N} \overline{ \Delta {\mathbf {Y}} ({\mathbf {Z}}_{n})_{pol} } = \sum _{n=1}^{N} \sum _{k=0,N-1} \frac{1}{2} \Delta {\mathbf {Y}} ({\mathbf {Z}}_{n})_{k} + {\mathbf {e}}_{\Delta }, \end{aligned}$$
(11)
$$\begin{aligned} \text {P} {\mathbf {Y}}= & {} \bigodot _{n=1}^{N} \overline{ \text {P} {\mathbf {Y}} ({\mathbf {Z}}_{n})_{pol} } = \left[ \bigodot _{n=1}^{N} \bigodot _{k=0,N-1} \big ( \text {P} {\mathbf {Y}} ({\mathbf {Z}}_{n})_{k} \big )^{\circ \frac{1}{2}} \right] \circ {\mathbf {e}}_{\text {p}}, \end{aligned}$$
(12)
where
\( {\mathbf {e}}_{\Delta } \) and
\( {\mathbf {e}}_{\text {p}} \) are residual terms, respectively, in the additive and multiplicative cases, the subscript
pol denotes a polar decomposition form and
\(\circ \) denotes the Hadamard product or Hadamard power in the superscript. This ad hoc solution requires computing
\(2N\) decomposition forms for all factors.
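The residual of this ad hoc solution is easy to exhibit numerically. The scalar sketch below (made-up numbers, three factors) implements the average of the two polar forms as in Eq. (11) and computes \( {\mathbf {e}}_{\Delta } \):

```python
from math import prod, isclose

z0 = [2.0, 0.5, 4.0]   # made-up factor values, period 0
z1 = [3.0, 0.4, 5.0]   # made-up factor values, period 1
N = len(z0)

def polar_average(n):
    """Average of the two polar forms of ΔY(Z_n), cf. Eq. (11), for scalars."""
    rest1 = prod(z1[m] for m in range(N) if m != n)  # other factors at period 1
    rest0 = prod(z0[m] for m in range(N) if m != n)  # other factors at period 0
    return 0.5 * (z1[n] - z0[n]) * (rest1 + rest0)

approx = sum(polar_average(n) for n in range(N))
residual = (prod(z1) - prod(z0)) - approx   # e_Δ in Eq. (11); nonzero here
```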
Another option is to apply the so-called mid-point weights to each factor \({\mathbf {Z}}_{n}\). In the additive case, this signifies that, for example, \( \Delta {\mathbf {Y}} ({\mathbf {Z}}_{1}) \) is defined as \( {\mathbf {Z}}_{1}^{(1)} {\mathbf {Z}}_{2}^{(0/1)} \dots {\mathbf {Z}}_{N}^{(0/1)} - {\mathbf {Z}}_{1}^{(0)} {\mathbf {Z}}_{2}^{(0/1)} \dots {\mathbf {Z}}_{N}^{(0/1)} \) plus an error term, where for \( n \ne 1\), \( {\mathbf {Z}}_{n}^{(0/1)} = \frac{1}{2} {\mathbf {Z}}_{n}^{(0)} + \frac{1}{2} {\mathbf {Z}}_{n}^{(1)} \). This solution, also known as the Marshall–Edgeworth decomposition, therefore involves only one decomposition form for each factor and \(N\) forms for all factors, but also has a residual term. It is less clear how the mid-point weights apply in the multiplicative case.
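The mid-point (Marshall–Edgeworth) solution can be sketched on the same made-up scalar data; its residual is likewise nonzero in general:

```python
from math import prod, isclose

z0 = [2.0, 0.5, 4.0]   # made-up factor values, period 0
z1 = [3.0, 0.4, 5.0]   # made-up factor values, period 1
N = len(z0)

def marshall_edgeworth(n):
    """Mid-point-weight term for factor n: all other factors are set to the
    arithmetic mean of their period-0 and period-1 values; a scalar sketch."""
    mid = prod(0.5 * (z0[m] + z1[m]) for m in range(N) if m != n)
    return (z1[n] - z0[n]) * mid

approx = sum(marshall_edgeworth(n) for n in range(N))
residual = (prod(z1) - prod(z0)) - approx   # nonzero: the decomposition is not exact
```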
Dietzenbacher and Los (
1998) report that both solutions perform rather well, and their results are very close to those from the full and exact additive decomposition of
\({\mathbf {Y}}\).
In addition to Bennet and Fisher decompositions that are combinatorial in nature and build on, respectively, the arithmetic and geometric averages, another family of decompositions employs the logarithmic mean. These include Montgomery (additive) decomposition, Montgomery–Vartia (multiplicative) decomposition and Sato–Vartia (additive and multiplicative) decomposition that are exhaustively reviewed by de Boer (
2008,
2009); de Boer and Rodrigues (
2020). These decompositions, particularly favoured for SDA of energy and emissions, are also known as ‘Divisia-based’, ‘Divisia-linked’ decomposition approaches or ‘Logarithmic Mean Divisia Index’ (LMDI) methods (reviewed by Su and Ang (
2012), Wang et al (
2017)).
Montgomery, Montgomery–Vartia and Sato–Vartia decompositions are exact, and the related indicators and indices for each factor \({\mathbf {Z}}_{n}\) are shown to be ideal—as are the Bennet indicator and the Fisher index—because they satisfy a number of tests from the index number theory (e.g., time reversal test, product test, etc.). Only one decomposition needs to be computed to obtain indicators or indices for all factors, so the methods based on the logarithmic mean are believed to be easier to implement than the combinatorial ones. However, these methods are not reviewed here because they do not apply to the special case of structural decomposition that motivated this paper.
2.2 A special case of structural decomposition: factors nested within an inverse matrix
Further complication arises if one of the factors
\({\mathbf {Z}}_{n}\) is an inverse of a sum of other factors. This is a typical case in input–output SDA. To elaborate, we now turn to a simple Leontief model
where industry output
\({\mathbf {x}}\) is a product of the Leontief inverse
\({\mathbf {L}}\) and final demand
\({\mathbf {f}}\):
$$\begin{aligned} {\mathbf {x}} = \mathbf {Lf} = \left( \mathbf {I - A} \right) ^{-1} {\mathbf {f}}. \end{aligned}$$
\({\mathbf {A}}\) is a matrix of technical coefficients with the
\(j\)th column describing the expenditures on intermediate inputs per unit of output of industry
\(j \in \{1, \dots , J\}\). For our illustrative example, we will treat columns of
\({\mathbf {A}}\) as factors. Each
\(j\)th factor is then matrix
\({\mathbf {A}}_{j}\) where all columns other than the
\(j\)th column from
\({\mathbf {A}}\) are set to zero. Under constant prices, the changes in the
\(j\)th factor may be understood as the changes in the ‘production recipe’ or technology. The matrix of technical coefficients is now the sum of all
J factors,
\( {\mathbf {A}} = {\mathbf {A}}_{1} + {\mathbf {A}}_{2} + \dots + {\mathbf {A}}_{J}\), and the change in output in the additive case is:
$$\begin{aligned} \Delta {\mathbf {x}}&= {\mathbf {L}}^{(1)} {\mathbf {f}}^{(1)} - {\mathbf {L}}^{(0)} {\mathbf {f}}^{(0)} = \nonumber \\&= \left( {\mathbf {I}} - {\mathbf {A}}_{1}^{(1)} - {\mathbf {A}}_{2}^{(1)} - \dots - {\mathbf {A}}_{J}^{(1)} \right) ^{-1} {\mathbf {f}}^{(1)} - \left( {\mathbf {I}} - {\mathbf {A}}_{1}^{(0)} - {\mathbf {A}}_{2}^{(0)} - \dots - {\mathbf {A}}_{J}^{(0)} \right) ^{-1} {\mathbf {f}}^{(0)}, \end{aligned}$$
(13)
and in the multiplicative case:
$$\begin{aligned} \text {P} {\mathbf {x}}&= \left( {\mathbf {L}}^{(1)} {\mathbf {f}}^{(1)} \right) \oslash \left( {\mathbf {L}}^{(0)} {\mathbf {f}}^{(0)} \right) = \nonumber \\&= \left( \left( {\mathbf {I}} - {\mathbf {A}}_{1}^{(1)} - {\mathbf {A}}_{2}^{(1)} - \dots - {\mathbf {A}}_{J}^{(1)} \right) ^{-1} {\mathbf {f}}^{(1)} \right) \oslash \left( \left( {\mathbf {I}} - {\mathbf {A}}_{1}^{(0)} - {\mathbf {A}}_{2}^{(0)} - \dots - {\mathbf {A}}_{J}^{(0)} \right) ^{-1} {\mathbf {f}}^{(0)} \right) . \end{aligned}$$
(14)
Invoking the hierarchical approach, we first decompose the change in output into the changes induced by the change in the Leontief inverse and the change in final demand:
$$\begin{aligned} \Delta {\mathbf {x}}= & {} \frac{1}{2} \left( {\mathbf {L}}^{(1)} - {\mathbf {L}}^{(0)} \right) \left( {\mathbf {f}}^{(1)} + {\mathbf {f}}^{(0)} \right) + \frac{1}{2} \left( {\mathbf {L}}^{(1)} + {\mathbf {L}}^{(0)} \right) \left( {\mathbf {f}}^{(1)} - {\mathbf {f}}^{(0)} \right) , \end{aligned}$$
(15)
$$\begin{aligned} \text {P} {\mathbf {x}}= & {} \left( \frac{ {\mathbf {L}}^{(1)} {\mathbf {f}}^{(1)} }{ {\mathbf {L}}^{(0)} {\mathbf {f}}^{(1)} } \circ \frac{ {\mathbf {L}}^{(1)} {\mathbf {f}}^{(0)} }{ {\mathbf {L}}^{(0)} {\mathbf {f}}^{(0)} } \right) ^{\circ \frac{1}{2}} \circ \left( \frac{ {\mathbf {L}}^{(1)} {\mathbf {f}}^{(1)} }{ {\mathbf {L}}^{(1)} {\mathbf {f}}^{(0)} } \circ \frac{ {\mathbf {L}}^{(0)} {\mathbf {f}}^{(1)} }{ {\mathbf {L}}^{(0)} {\mathbf {f}}^{(0)} } \right) ^{\circ \frac{1}{2}}. \end{aligned}$$
(16)
For easier exposition, in the multiplicative case (
16) the element-by-element division symbol
\(\oslash \) is replaced by the fraction sign, and the power applies to each element of the vector in brackets. The first term on the right-hand side of Eqs. (
15) and (
16) describes the changes in output related to the changes in the Leontief inverse. Note that this term is the average of the two underlying decomposition forms of
\( \Delta {\mathbf {x}} ({\mathbf {L}}) \) or P
\({\mathbf {x}} ({\mathbf {L}})\).
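Decomposition (15) can be verified numerically. The sketch below builds a made-up 2-industry example in pure Python (a closed-form \(2 \times 2\) inverse stands in for a linear-algebra library) and checks that the two terms add up to \(\Delta {\mathbf {x}}\) exactly:

```python
from math import isclose

def inv2(M):
    """Closed-form inverse of a 2x2 matrix given as nested lists."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mv(M, v):
    """2x2 matrix times 2-vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

# Made-up 2-industry technical coefficients and final demand at periods 0 and 1.
A0 = [[0.20, 0.30], [0.10, 0.25]]
A1 = [[0.25, 0.28], [0.12, 0.20]]
f0 = [100.0, 80.0]
f1 = [110.0, 90.0]

L0 = inv2([[1 - A0[0][0], -A0[0][1]], [-A0[1][0], 1 - A0[1][1]]])
L1 = inv2([[1 - A1[0][0], -A1[0][1]], [-A1[1][0], 1 - A1[1][1]]])
x0, x1 = mv(L0, f0), mv(L1, f1)

dL = [[L1[i][j] - L0[i][j] for j in range(2)] for i in range(2)]
sL = [[L1[i][j] + L0[i][j] for j in range(2)] for i in range(2)]
term_L = [0.5 * v for v in mv(dL, [f1[0] + f0[0], f1[1] + f0[1]])]  # Leontief-inverse term
term_f = [0.5 * v for v in mv(sL, [f1[0] - f0[0], f1[1] - f0[1]])]  # final-demand term

for i in range(2):
    assert isclose(term_L[i] + term_f[i], x1[i] - x0[i])  # Eq. (15) is exact
```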
There are two basic options (Rose and Casler
1996) to further decompose the changes related to the Leontief inverse. First, similar to Eq. (
4), the difference of
\({\mathbf {x}}\) that is attributed, for example, to the changes in the outlays of the first industry (
\(j = 1\)) may be written as follows:
$$\begin{aligned} \Delta {\mathbf {x}} ({\mathbf {A}}_{1}, t_{2}, \dots , t_{J})&= \frac{1}{2} \left( {\mathbf {I}} - {\mathbf {A}}_{1}^{(1)} - {\mathbf {A}}_{2}^{(t_{2})} - \dots - {\mathbf {A}}_{J}^{(t_{J})} \right) ^{-1} \left( {\mathbf {f}}^{(1)} + {\mathbf {f}}^{(0)} \right) -\nonumber \\&\quad - \frac{1}{2} \left( {\mathbf {I}} - {\mathbf {A}}_{1}^{(0)} - {\mathbf {A}}_{2}^{(t_{2})} - \dots - {\mathbf {A}}_{J}^{(t_{J})} \right) ^{-1} \left( {\mathbf {f}}^{(1)} + {\mathbf {f}}^{(0)} \right) . \end{aligned}$$
(17)
Then the complete decomposition of
\(\Delta {\mathbf {x}} ({\mathbf {A}})\) will be identical to that of
\(\Delta {\mathbf {Y}}\) in Sect.
2.1 involving a total of
\(2^{J}\) decomposition forms
of
\( \Delta {\mathbf {x}} ({\mathbf {A}}_{j}) \) for all
J factors and their weighted averages, leading to
J Bennet indicators. Similarly, the decomposition of the ratio P
\({\mathbf {x}} ({\mathbf {A}})\) involves
J Fisher indices.
The second option utilises the known property of the Leontief inverse:
\( \Delta {\mathbf {L}} = {\mathbf {L}}^{(1)} (\Delta {\mathbf {A}}) {\mathbf {L}}^{(0)} = {\mathbf {L}}^{(0)} (\Delta {\mathbf {A}}) {\mathbf {L}}^{(1)} \). Replacing
\(\Delta {\mathbf {A}}\) with the sum of the changes in
\(J\) factors yields:
$$\begin{aligned} \Delta {\mathbf {L}}&= {\mathbf {L}}^{(1)} (\Delta {\mathbf {A}}) {\mathbf {L}}^{(0)} = {\mathbf {L}}^{(1)} (\Delta {\mathbf {A}}_{1}) {\mathbf {L}}^{(0)} + {\mathbf {L}}^{(1)} (\Delta {\mathbf {A}}_{2}) {\mathbf {L}}^{(0)} + \dots + {\mathbf {L}}^{(1)} (\Delta {\mathbf {A}}_{J}) {\mathbf {L}}^{(0)}, \end{aligned}$$
(18a)
$$\begin{aligned} \Delta {\mathbf {L}}&= {\mathbf {L}}^{(0)} (\Delta {\mathbf {A}}) {\mathbf {L}}^{(1)} = {\mathbf {L}}^{(0)} (\Delta {\mathbf {A}}_{1}) {\mathbf {L}}^{(1)} + {\mathbf {L}}^{(0)} (\Delta {\mathbf {A}}_{2}) {\mathbf {L}}^{(1)} + \dots + {\mathbf {L}}^{(0)} (\Delta {\mathbf {A}}_{J}) {\mathbf {L}}^{(1)}. \end{aligned}$$
(18b)
For any
j,
\( {\mathbf {L}}^{(1)} (\Delta {\mathbf {A}}_{j}) {\mathbf {L}}^{(0)} \) is no longer equal to
\( {\mathbf {L}}^{(0)} (\Delta {\mathbf {A}}_{j}) {\mathbf {L}}^{(1)} \), and an average of Eqs. (
18a) and (
18b) can be taken to avoid an arbitrary choice. Then there will be 2 decomposition forms for each
\(j\)th factor and
\(2J\) forms for all
\(J\) factors.
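Both factorisations, and the failure of the single-column terms to coincide across orderings, can be checked numerically on a made-up 2-industry example:

```python
from math import isclose

def inv2(M):
    """Closed-form inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

# Made-up technical coefficients at periods 0 and 1.
A0 = [[0.20, 0.30], [0.10, 0.25]]
A1 = [[0.25, 0.28], [0.12, 0.20]]
L0 = inv2([[1 - A0[0][0], -A0[0][1]], [-A0[1][0], 1 - A0[1][1]]])
L1 = inv2([[1 - A1[0][0], -A1[0][1]], [-A1[1][0], 1 - A1[1][1]]])

dA = [[A1[i][j] - A0[i][j] for j in range(2)] for i in range(2)]
dL = [[L1[i][j] - L0[i][j] for j in range(2)] for i in range(2)]

# The property behind Eqs. (18a)-(18b): both factorisations recover ΔL exactly.
for F in (mm(mm(L1, dA), L0), mm(mm(L0, dA), L1)):
    for i in range(2):
        for j in range(2):
            assert isclose(F[i][j], dL[i][j])

# For a single-column factor (here, column 1 of A only), the two orderings differ.
dA1 = [[dA[i][j] if j == 0 else 0.0 for j in range(2)] for i in range(2)]
t_a = mm(mm(L1, dA1), L0)
t_b = mm(mm(L0, dA1), L1)
assert any(abs(t_a[i][j] - t_b[i][j]) > 1e-6 for i in range(2) for j in range(2))
```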
There are two issues with this second option. Although mathematically correct, the terms related to each
jth factor in Eqs. (
18a) and (
18b) or their average do not exactly capture changes attributable to that factor:
\( {\mathbf {L}}^{(1)} (\Delta {\mathbf {A}}_{j}) {\mathbf {L}}^{(0)} \ne {\mathbf {L}}^{(1)} - (\mathbf {I - A}^{(1)} + \Delta {\mathbf {A}}_{j})^{-1} \) and
\( {\mathbf {L}}^{(0)} (\Delta {\mathbf {A}}_{j}) {\mathbf {L}}^{(1)} \ne {\mathbf {L}}^{(0)} (\Delta {\mathbf {A}}_{j}) (\mathbf {I - A}^{(0)} - \Delta {\mathbf {A}}_{j})^{-1} \). The correct term for each
jth factor would be
\( {\mathbf {L}}^{(1)} (\Delta {\mathbf {A}}_{j}) (\mathbf {I - A}^{(1)} + \Delta {\mathbf {A}}_{j})^{-1} \) and
\( {\mathbf {L}}^{(0)} (\Delta {\mathbf {A}}_{j}) (\mathbf {I - A}^{(0)} - \Delta {\mathbf {A}}_{j})^{-1} \), but these will not add up to
\(\Delta {\mathbf {L}}\). Furthermore, this simplifying option is not available in the case of multiplicative decomposition.
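The identity behind the 'correct' term can also be confirmed numerically: \( {\mathbf {L}}^{(1)} (\Delta {\mathbf {A}}_{1}) (\mathbf {I - A}^{(1)} + \Delta {\mathbf {A}}_{1})^{-1} \) equals the polar-form change \( {\mathbf {L}}^{(1)} - (\mathbf {I - A}^{(1)} + \Delta {\mathbf {A}}_{1})^{-1} \). A sketch with made-up 2-industry data:

```python
from math import isclose

def inv2(M):
    """Closed-form inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

A0 = [[0.20, 0.30], [0.10, 0.25]]
A1 = [[0.25, 0.28], [0.12, 0.20]]
L1 = inv2([[1 - A1[0][0], -A1[0][1]], [-A1[1][0], 1 - A1[1][1]]])

# ΔA_1: only the first column of A changes; the other column is set to zero.
dA1 = [[A1[i][j] - A0[i][j] if j == 0 else 0.0 for j in range(2)] for i in range(2)]

# M = (I - A^(1) + ΔA_1)^{-1}: factor 1 moved back to period 0, others at period 1.
M = inv2([[1 - A1[0][0] + dA1[0][0], -A1[0][1] + dA1[0][1]],
          [-A1[1][0] + dA1[1][0], 1 - A1[1][1] + dA1[1][1]]])

lhs = mm(mm(L1, dA1), M)                                          # 'correct' term
rhs = [[L1[i][j] - M[i][j] for j in range(2)] for i in range(2)]  # polar-form change
for i in range(2):
    for j in range(2):
        assert isclose(lhs[i][j], rhs[i][j])
```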
Note that the decompositions based on the logarithmic mean require that the dependent variable be expressed as a product of factors as in Eq. (
1). As an inverse of a sum of factors cannot be expressed as a product of those factors, the Montgomery, Montgomery–Vartia and Sato–Vartia decompositions are irrelevant in this special case.
The problem is now to find a reasonable ‘shortcut’ to the decomposition of \(\Delta {\mathbf {x}} ({\mathbf {A}})\) and P\({\mathbf {x}} ({\mathbf {A}})\) and to avoid computing all \(2^{J}\) decomposition forms where each form includes an inverse of a unique combination of factors therein.
2.3 ‘Shortcuts’ to the complete decomposition with factors nested in the Leontief inverse
There is no specific order of factors within the Leontief inverse: any factor can appear first or last in the sum without affecting the result. It is even more obvious in this case that the choice of two polar decompositions is arbitrary.
Therefore, we will not consider the average of two polar decompositions as a viable shortcut.
The shortcuts to be considered are as follows:
Shortcut 1 The averages of pairs of the polar decomposition forms for each factor
This shortcut applies Eqs. (
11) and (
12) to the factors within the Leontief inverse. For each
\(j\), compute two polar decomposition forms of
\( \Delta {\mathbf {x}} ({\mathbf {A}}_{j}) \) according to Eq. (
17) where all time periods
\(t\) other than
\(t_{j}\) are set to 1 and 0 (in other words, two decomposition forms at distance
\(k \in \{0, J-1\}\) from the polar form), then take the average of the two forms. The change of
\({\mathbf {x}}\) because of change in the
\(j\)th factor then is:
$$\begin{aligned}&\Delta {\mathbf {x}} ({\mathbf {A}}_{j})_{S1} = \overline{ \Delta {\mathbf {x}} ({\mathbf {A}}_{j})_{pol} } = \sum _{k=0,J-1} \frac{1}{2} \Delta {\mathbf {x}} ({\mathbf {A}}_{j})_{k}, \end{aligned}$$
(19)
$$\begin{aligned}&\text {P} {\mathbf {x}} ({\mathbf {A}}_{j})_{S1} = \overline{ \text {P} {\mathbf {x}} ({\mathbf {A}}_{j})_{pol} } = \bigodot _{k=0,J-1} \big ( \text {P} {\mathbf {x}} ({\mathbf {A}}_{j})_{k} \big )^{\circ \frac{1}{2}}. \end{aligned}$$
(20)
The above requires computing only 2 forms for each factor and
\(2J\) forms for all factors, but it does not provide an exact decomposition.
Shortcut 2 The normalised averages of pairs of the polar decomposition forms for each factor
The information from Eqs. (
19) and (
20) can be utilised to modify the averaged polar forms
\( \overline{ \Delta {\mathbf {x}} ({\mathbf {A}}_{j})_{pol} } \) and
\( \overline{ \text {P} {\mathbf {x}} ({\mathbf {A}}_{j})_{pol} } \), so that the decompositions of
\( \Delta {\mathbf {x}} ({\mathbf {A}}) \) and P
\( {\mathbf {x}} ({\mathbf {A}}) \) are exact. In effect, the residual term is distributed across the indicators or indices for all
\(J\) factors. In the additive case, this can be achieved by multiplying each element in the averaged
\(j\)th polar form by a respective coefficient:
$$\begin{aligned} \Delta {\mathbf {x}} ({\mathbf {A}}_{j})_{S2} = {\mathbf {c}}_{\Delta } \circ \overline{ \Delta {\mathbf {x}} ({\mathbf {A}}_{j})_{pol} } \quad \text {where} \quad {\mathbf {c}}_{\Delta } = \big ( \Delta {\mathbf {x}} ({\mathbf {A}}) \big ) \oslash \left( \sum _{j=1}^{J} \overline{ \Delta {\mathbf {x}} ({\mathbf {A}}_{j})_{pol} } \right) . \end{aligned}$$
(21)
In the multiplicative case, the coefficients need to be defined as powers that apply to the base on an element-by-element basis:
$$\begin{aligned} & {\text{P}} {\mathbf{x}} ({\mathbf{A}}_{j})_{S2} = \left( \overline{ {\text{P}} {\mathbf{x}} ({\mathbf{A}}_{j})_{pol} } \right) ^{ \circ {\mathbf{c}}_{\text {p}} }, \nonumber \\ & {\text{where}} \quad ({\mathbf{c}}_{\text{p}})_{i} = \log _{ ({\mathbf{b}})_{i} } \left( {\text{P}} {\mathbf{x}} ({\mathbf{A}}) \right) _{i}, \quad ({\mathbf{b}})_{i} = \prod _{j=1}^{J} \left( \overline{{\text{P}} {\mathbf{x}} ({\mathbf{A}}_{j})_{pol} } \right) _{i}, \end{aligned}$$
(22)
and
\(i\) denotes the
\(i\)th element in the respective
\(J \times 1\) vector.
The above modification of shortcut 1 requires the computation of 2 polar decomposition forms for each factor (\(2J\) forms for all factors) and provides an exact decomposition.
Shortcut 3 Decomposition with mid-point weights (Marshall–Edgeworth decomposition)—only for additive decomposition
For the calculation of the change attributed to the
\(j\)th factor, other factors are defined as the arithmetic mean of their values at period 0 and period 1. We modify Eq. (
17) to formally describe shortcut 3:
$$\begin{aligned} \Delta {\mathbf {x}} ({\mathbf {A}}_{j})_{S3}&= \frac{1}{2} \left( {\mathbf {I}} - {\mathbf {A}}_{j}^{(1)} - \sum _{h \ne j}^{J} \frac{1}{2} \left( {\mathbf {A}}_{h}^{(1)} + {\mathbf {A}}_{h}^{(0)} \right) \right) ^{-1} \left( {\mathbf {f}}^{(1)} + {\mathbf {f}}^{(0)} \right) - \nonumber \\&\quad - \frac{1}{2} \left( {\mathbf {I}} - {\mathbf {A}}_{j}^{(0)} - \sum _{h \ne j}^{J} \frac{1}{2} \left( {\mathbf {A}}_{h}^{(1)} + {\mathbf {A}}_{h}^{(0)} \right) \right) ^{-1} \left( {\mathbf {f}}^{(1)} + {\mathbf {f}}^{(0)} \right) . \end{aligned}$$
(23)
There are two decomposition forms for each factor and
\(2J\) forms for all factors. An aggregation across all
\(J\) factors does not provide an exact decomposition.
Shortcut 4 Factorisation of the change in the Leontief inverse—only for additive decomposition
We will finally compute the difference of
\({\mathbf {x}}\) attributed to the change in each factor that builds on the factorisation of
\(\Delta {\mathbf {L}}\) as it appears in Eqs. (
18a)–(
18b):
$$\begin{aligned} \Delta {\mathbf {x}} ({\mathbf {A}}_{j})_{S4} = \frac{1}{2} \left( {\mathbf {L}}^{(1)} (\Delta {\mathbf {A}}_{j}) {\mathbf {L}}^{(0)} + {\mathbf {L}}^{(0)} (\Delta {\mathbf {A}}_{j}) {\mathbf {L}}^{(1)} \right) \frac{1}{2} \left( {\mathbf {f}}^{(1)} + {\mathbf {f}}^{(0)} \right) . \end{aligned}$$
(24)
The above computes two decomposition forms for each factor, merged into one expression, and
\(2J\) forms for all factors. The decomposition is exact. Although shortcut 4 cannot be interpreted as a correct measure of the effect of the change in
\({\mathbf {A}}_{j}\), it can still provide an approximation of the true result, and we will see whether that approximation is good.