Let us start by considering an approximate solution
\(\tilde{{\varvec{{\textsf {a}}}}}_o\) to (
1) and let
\(\overline{{\varvec{{\textsf {K}}}}}_L\) be the stiffness matrix from a past design iteration. In the CA approach, we utilize the Cholesky factorization of
\(\overline{{\varvec{{\textsf {K}}}}}_L\) to compute
\({\varvec{{\textsf {a}}}}_o\), i.e.
$$\begin{aligned} \left( \overline{{\varvec{{\textsf {K}}}}}_L + \Delta {\varvec{{\textsf {K}}}}_L \right) {\varvec{{\textsf {a}}}}_o = {\varvec{{\textsf {F}}}}_o, \end{aligned}$$
(5)
where
\(\Delta {\varvec{{\textsf {K}}}}_L \equiv {\varvec{{\textsf {K}}}}_L - \overline{{\varvec{{\textsf {K}}}}}_L\) is the stiffness change due to a design update. Applying
\(\overline{{\varvec{{\textsf {K}}}}}_L^{-1}\) to both sides of (
5) yields
$$\begin{aligned} \left( {\varvec{{\textsf {I}}}} + \overline{{\varvec{{\textsf {K}}}}}_L^{-1}\Delta {\varvec{{\textsf {K}}}}_L \right) {\varvec{{\textsf {a}}}}_o = \overline{{\varvec{{\textsf {K}}}}}_L^{-1}{\varvec{{\textsf {F}}}}_o. \end{aligned}$$
(6)
Now, we expand the inverse of
\(({\varvec{{\textsf {I}}}} + \overline{{\varvec{{\textsf {K}}}}}_L^{-1}\Delta {\varvec{{\textsf {K}}}}_L )\) in a power (Neumann) series, which converges provided the spectral radius of \(\overline{{\varvec{{\textsf {K}}}}}_L^{-1}\Delta {\varvec{{\textsf {K}}}}_L\) is less than one,
$$\begin{aligned} ({\varvec{{\textsf {I}}}} + {\varvec{{\textsf {B}}}} )^{-1} = \sum _{k=0}^\infty (-{\varvec{{\textsf {B}}}})^k, \quad \text {where} \quad {\varvec{{\textsf {B}}}}=\overline{{\varvec{{\textsf {K}}}}}_L^{-1}\Delta {\varvec{{\textsf {K}}}}_L. \end{aligned}$$
(7)
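A small numerical sketch can illustrate the expansion in (7); here \({\varvec{{\textsf {B}}}}\) is a random matrix with small norm standing in for \(\overline{{\varvec{{\textsf {K}}}}}_L^{-1}\Delta {\varvec{{\textsf {K}}}}_L\) (an assumption for illustration, not the paper's finite element data):

```python
import numpy as np

# Check the series expansion (7): for a perturbation B with spectral
# radius below one, the partial sums of (-B)^k reproduce (I + B)^{-1}.
# B is a small random stand-in, not an actual K_bar^{-1} * dK.
rng = np.random.default_rng(0)
n = 5
B = 0.1 * rng.standard_normal((n, n))   # small, so the series converges

exact = np.linalg.inv(np.eye(n) + B)

approx = np.zeros((n, n))
term = np.eye(n)                         # (-B)^0
for k in range(30):
    approx += term
    term = term @ (-B)                   # next power, (-B)^(k+1)
```

The truncation level in the CA method corresponds to stopping this loop after only a handful of terms, which is accurate when the design change, and hence \({\varvec{{\textsf {B}}}}\), is small.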
Hence, the solution to (
6) is
$$\begin{aligned} {\varvec{{\textsf {a}}}}_o = \sum _{k=0}^\infty (-{\varvec{{\textsf {B}}}})^k\overline{{\varvec{{\textsf {K}}}}}_L^{-1}{\varvec{{\textsf {F}}}}_o, \end{aligned}$$
(8)
which we truncate at
\(k=s\ll n\), such that
\(\tilde{{\varvec{{\textsf {a}}}}}_o = {\varvec{{\textsf {R}}}}{\varvec{{\textsf {y}}}}\), cf. (
4). The basis vectors in
\({\varvec{{\textsf {R}}}}\) are obtained recursively from (
8) as
$$\begin{aligned} \begin{array}{ll} \displaystyle {\varvec{{\textsf {u}}}}_1 = \overline{{\varvec{{\textsf {K}}}}}_L^{-1}{\varvec{{\textsf {F}}}}_o, &{} \\ \displaystyle {\varvec{{\textsf {u}}}}_i = -{\varvec{{\textsf {B}}}}{\varvec{{\textsf {u}}}}_{i-1}, &{} \quad i=2,...,s. \\ \end{array} \end{aligned}$$
(9)
After the basis vectors are generated, we solve the reduced
\(s\times s\) problem
$$\begin{aligned} {\varvec{{\textsf {R}}}}^T{\varvec{{\textsf {K}}}}_L{\varvec{{\textsf {R}}}}{\varvec{{\textsf {y}}}} = {\varvec{{\textsf {R}}}}^T{\varvec{{\textsf {F}}}}_o, \end{aligned}$$
(10)
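The recursion (9) and the reduced solve (10) can be sketched as follows; the SPD matrices `K_bar`, `dK` and the load `F` are small random stand-ins (assumptions for illustration), and no normalization or orthogonalization is applied yet:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Sketch of the basis recursion (9) and reduced problem (10).
# K_bar, dK and F are small random stand-ins, not finite element data.
rng = np.random.default_rng(1)
n, s = 20, 4
A = rng.standard_normal((n, n))
K_bar = A @ A.T + n * np.eye(n)          # old SPD stiffness matrix
dK = 0.05 * (A + A.T)                    # small symmetric design change
K = K_bar + dK                           # current stiffness K_L
F = rng.standard_normal(n)               # load vector F_o

c = cho_factor(K_bar)                    # reuse the old factorization
R = np.zeros((n, s))                     # basis vectors u_1..u_s
R[:, 0] = cho_solve(c, F)                # u_1 = K_bar^{-1} F_o
for i in range(1, s):
    R[:, i] = -cho_solve(c, dK @ R[:, i - 1])   # u_i = -B u_{i-1}

# Reduced s-by-s problem (10), then recover the approximate solution
y = np.linalg.solve(R.T @ K @ R, R.T @ F)
a_tilde = R @ y
```

Note that each new basis vector costs only one forward/back substitution with the already-computed factorization, never a new factorization of \({\varvec{{\textsf {K}}}}_L\).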
for
\({\varvec{{\textsf {y}}}}\), and insert the result into (
4). In practice, the magnitudes of
\({\varvec{{\textsf {u}}}}_i\) rapidly decrease due to repeated application of
\(\overline{{\varvec{{\textsf {K}}}}}_L^{-1}\Delta {\varvec{{\textsf {K}}}}_L\), resulting in a poorly conditioned system (
To mitigate this issue, we introduce normalization and
\({\varvec{{\textsf {K}}}}_L\)-orthogonalization of the basis vectors, whereby the final basis vectors
\({\varvec{{\textsf {R}}}} = \left[ {\varvec{{\textsf {v}}}}_1, {\varvec{{\textsf {v}}}}_2, \ldots , {\varvec{{\textsf {v}}}}_s\right]\) are obtained recursively as
$$\begin{aligned} \begin{array}{ll} \displaystyle {\varvec{{\textsf {u}}}}_1 = \overline{{\varvec{{\textsf {K}}}}}_L^{-1}{\varvec{{\textsf {F}}}}_o, &{} \\ \displaystyle {\varvec{{\textsf {u}}}}_i = -{\varvec{{\textsf {B}}}}{\varvec{{\textsf {t}}}}_{i-1}, &{} \quad i=2,...,s, \\ \displaystyle {\varvec{{\textsf {t}}}}_i = {\varvec{{\textsf {u}}}}_{i}({\varvec{{\textsf {u}}}}_i^T{\varvec{{\textsf {K}}}}_L{\varvec{{\textsf {u}}}}_i)^{-1/2}, &{} \quad i=1,...,s, \\ \displaystyle {\varvec{{\textsf {r}}}}_i = {\varvec{{\textsf {t}}}}_i - \sum _{j=1}^{i-1} \left( {\varvec{{\textsf {t}}}}_i^T{\varvec{{\textsf {K}}}}_L{\varvec{{\textsf {v}}}}_j\right) {\varvec{{\textsf {v}}}}_j, &{} \quad i=1,...,s, \\ \displaystyle {\varvec{{\textsf {v}}}}_i = {\varvec{{\textsf {r}}}}_i\left( {\varvec{{\textsf {r}}}}_i^T{\varvec{{\textsf {K}}}}_L{\varvec{{\textsf {r}}}}_i\right) ^{-1/2}, &{} \quad i=1,...,s. \end{array} \end{aligned}$$
(11)
Due to the orthogonalization,
\({\varvec{{\textsf {R}}}}^T{\varvec{{\textsf {K}}}}_L{\varvec{{\textsf {R}}}} = {\varvec{{\textsf {I}}}}\), the reduced system in (
10) boils down to
\({\varvec{{\textsf {I}}}}{\varvec{{\textsf {y}}}} = {\varvec{{\textsf {y}}}} = {\varvec{{\textsf {R}}}}^T{\varvec{{\textsf {F}}}}_o\), such that the approximate solution becomes
$$\begin{aligned} \tilde{{\varvec{{\textsf {a}}}}}_o = {\varvec{{\textsf {R}}}}{\varvec{{\textsf {y}}}} = {\varvec{{\textsf {R}}}}{\varvec{{\textsf {R}}}}^T{\varvec{{\textsf {F}}}}_o. \end{aligned}$$
(12)
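A sketch of the stabilized recursion (11) and the collapsed solve (12), again with small random SPD stand-ins for the stiffness matrices and load (assumptions for illustration):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Sketch of the K_L-orthogonalized recursion (11) and the collapsed
# solve (12). K_bar, dK and F are random stand-ins, not FE data.
rng = np.random.default_rng(2)
n, s = 20, 4
A = rng.standard_normal((n, n))
K_bar = A @ A.T + n * np.eye(n)              # old SPD stiffness
dK = 0.05 * (A + A.T)                        # symmetric design change
K = K_bar + dK                               # current stiffness K_L
F = rng.standard_normal(n)                   # load vector F_o

c = cho_factor(K_bar)                        # reused factorization
V = np.zeros((n, s))                         # columns v_1..v_s of R
u = cho_solve(c, F)                          # u_1 = K_bar^{-1} F_o
for i in range(s):
    t = u / np.sqrt(u @ K @ u)               # normalize in the K_L-norm
    r = t - V[:, :i] @ (V[:, :i].T @ (K @ t))  # K_L-orthogonalize
    V[:, i] = r / np.sqrt(r @ K @ r)         # v_i
    u = -cho_solve(c, dK @ t)                # u_{i+1} = -B t_i

# Since R^T K_L R = I, (10) collapses to (12): a ~= R R^T F_o
a_tilde = V @ (V.T @ F)
```

The spanned subspace is the same as in (9), so the accuracy is unchanged, but the reduced system is now the identity and no small \(s\times s\) solve is needed.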
An alternative, numerically equivalent, approach to (
12) is to use the factorization of
\(\overline{{\varvec{{\textsf {K}}}}}_L\) as a preconditioner in a Preconditioned Conjugate Gradients (PCG) procedure, cf. e.g. Kirsch et al. (
2002) and Amir et al. (
2012). However, in this work the eigenproblem (
3) is solved with a ROM, so for consistency we use the ROM approach for the static problem (
1) as well.