To construct a POD basis, let us again consider a fine training sample set
\({\mathcal {P}}^s = \{{\varvec{\mu }}_1,\dots ,{\varvec{\mu }}_{N_s}\} \subset {\mathcal {P}}\) with dimension
\(N_s=\text {dim}({\mathcal {P}}^s)\) introduced in Sect.
5.2. Then we form the snapshots matrix
\({\textbf{S}}_u \in {\mathbb {R}}^{{\mathcal {N}}_{h,0} \times N_s}\) as
$$\begin{aligned} {\textbf{S}}_u= [\widehat{\textbf{u}}_1,\dots ,\widehat{\textbf{u}}_{N_s}], \end{aligned}$$
(41)
where the vectors
\(\widehat{\textbf{u}}_j \in {\mathbb {R}}^{{\mathcal {N}}_{h,0}}\) denote the extended solutions
\(\widehat{\textbf{u}}{({\varvec{\mu }}_j)}\) for
\(j=1,\dots ,N_s\). These snapshots are also partitioned into
\(N_c\) submatrices as
\(\lbrace {\textbf{S}}_u^1,\dots ,{\textbf{S}}_u^{N_c}\rbrace\), according to
\({\mathcal {P}}^s = \cup _{k=1}^{N_c}{\mathcal {P}}^s_{k}\) in (
37). The local reduced basis
\({\textbf{V}}_k\) is then extracted from each cluster
\({\textbf{S}}_u^k\) by applying the POD separately, as discussed in Sect.
5.3. Thus, the basis reads:
$$\begin{aligned} {\textbf{V}}_k = [{\varvec{\zeta }}_1,\dots ,{\varvec{\zeta }}_{N_k}] \in {\mathbb {R}}^{{\mathcal {N}}_{h,0} \times N_k}. \end{aligned}$$
(42)
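For illustration, the clustered POD construction above can be sketched in Python (NumPy); the energy-based truncation tolerance and the random snapshot clusters below are synthetic placeholders, not the actual training data:

```python
import numpy as np

def pod_basis(S_k, tol=1e-6):
    """Compute a local POD basis V_k for one snapshot cluster S_u^k
    (size N_h x n_k) by truncating the SVD so that the retained modes
    capture a fraction 1 - tol of the snapshot energy."""
    U, sigma, _ = np.linalg.svd(S_k, full_matrices=False)
    energy = np.cumsum(sigma**2) / np.sum(sigma**2)
    N_k = int(np.searchsorted(energy, 1.0 - tol)) + 1
    return U[:, :N_k]  # columns are the modes zeta_1, ..., zeta_{N_k}

# Placeholder clustered snapshots {S_u^1, ..., S_u^{N_c}} (synthetic data).
rng = np.random.default_rng(0)
clusters = [rng.standard_normal((200, 30)) for _ in range(3)]
bases = [pod_basis(S_k) for S_k in clusters]  # local bases V_1, ..., V_{N_c}
```

Each returned basis has orthonormal columns, since the left singular vectors of the snapshot cluster are orthonormal by construction.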
Let us now derive the local reduced basis problem. For any
\({{\varvec{\mu }}} \in {\mathcal {P}}_k\), the solution
\(\widehat{{\textbf{u}}}({\varvec{\mu }})\) can be approximated using the local reduced basis
\({\textbf{V}}_k\), as
$$\begin{aligned} \widehat{{\textbf{u}}}({\varvec{\mu }}) \approx {\textbf{V}}_k {\textbf{u}}_N({\varvec{\mu }}), \end{aligned}$$
(43)
where
\({\textbf{u}}_N({\varvec{\mu }})\in {\mathbb {R}}^{N_k}\) is the solution vector of the reduced problem. A projection-based ROM can be obtained from (
33) by enforcing the residual to be orthogonal to the subspace spanned by
\({\textbf{V}}_k\), such that
$$\begin{aligned} {\textbf{V}}_k^{\top } (\widehat{{\textbf{K}}}({\varvec{\mu }}){\textbf{V}}_k {\textbf{u}}_N({\varvec{\mu }})-{\widehat{{\textbf{f}}}({\varvec{\mu }})}) = {\textbf{0}}. \end{aligned}$$
(44)
Thus, the local reduced basis problem reads:
$$\begin{aligned} {\textbf{K}}_N({\varvec{\mu }}) {\textbf{u}}_N({\varvec{\mu }}) = {\textbf{f}}_N({\varvec{\mu }}), \end{aligned}$$
(45)
where the reduced matrix and vector are defined as
$$\begin{aligned} {\textbf{K}}_N = {\textbf{V}}_k^{\top }\widehat{{\textbf{K}}}({\varvec{\mu }}){\textbf{V}}_k, \qquad {\textbf{f}}_N = {\textbf{V}}_k^{\top }\widehat{{\textbf{f}}}({\varvec{\mu }}). \end{aligned}$$
(46)
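A minimal sketch of the Galerkin projection in (44)–(46), assuming the assembled full-order operators and a local basis \({\textbf{V}}_k\) with orthonormal columns are already available:

```python
import numpy as np

def solve_reduced(K_hat, f_hat, V_k):
    """Project the full-order system onto span(V_k), solve the small
    reduced system (45), and lift the solution back via (43)."""
    K_N = V_k.T @ K_hat @ V_k   # reduced matrix, cf. (46)
    f_N = V_k.T @ f_hat         # reduced right-hand side, cf. (46)
    u_N = np.linalg.solve(K_N, f_N)
    return V_k @ u_N            # approximation of u_hat(mu), cf. (43)
```

As a sanity check, if \({\textbf{V}}_k\) spans the full space the lifted reduced solution coincides with the full-order solution.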
The size of the reduced problem (
45) is
\(N_k \ll {\mathcal {N}}_{h,0}\), which makes it suitable for fast online computation given many different parameters
\({\varvec{\mu }} \in {\mathcal {P}}_k\). However, the solution of the reduced problem requires the assembly of the parameter-dependent operators
\(\widehat{{\textbf{K}}}({\varvec{\mu }})\) and
\(\widehat{{\textbf{f}}}({\varvec{\mu }})\). Therefore, an important assumption for the efficiency of the reduced basis method in general is that the operators depend affinely on the parameters
\({{\varvec{\mu }}}\). This assumption is not always fulfilled in the presence of geometrical parameters, in which case we build affine approximations to recover the affine dependence. Since we aim to approximate a manifold of extended operators that is nonlinear with respect to
\({\varvec{\mu }}\), the dimension of the approximation space may be high. Therefore, the clustering strategy of Sect.
5.2 will also be used to construct local affine approximations. Note that, for ease of exposition, we assume from now on that the clustering is performed only once to construct both the reduced bases and the affine approximations, although in principle these could be chosen differently [
32]. We now introduce the following local affine approximation for any
\({{\varvec{\mu }}} \in {\mathcal {P}}_k\):
$$\begin{aligned} \widehat{{\textbf{K}}}({\varvec{\mu }}) \approx \sum _{q=1}^{Q_{a}^k} \theta _{a,q}^{k} ({\varvec{\mu }})\widehat{{\textbf{K}}}_q^k, \qquad \widehat{{\textbf{f}}}({\varvec{\mu }})\approx \sum _{q=1}^{Q_f^k} \theta _{f,q}^k({\varvec{\mu }})\widehat{{\textbf{f}}}_q^k, \end{aligned}$$
(47)
where
\(\theta _{a,q}^{k}: {\mathcal {P}}_k \rightarrow {\mathbb {R}}\), for
\(q=1,\dots ,Q_{a}^k\), and
\(\theta _{f,q}^k : {\mathcal {P}}_k \rightarrow {\mathbb {R}}\), for
\(q=1,\dots ,Q_f^k\), are
\({\varvec{\mu }}\)-dependent functions, whereas
\(\widehat{{\textbf{K}}}_q^k \in {\mathbb {R}}^{{\mathcal {N}}_{h,0}\times {\mathcal {N}}_{h,0}}\) and
\(\widehat{{\textbf{f}}}_q^k \in {\mathbb {R}}^{{\mathcal {N}}_{h,0}}\) are
\({{\varvec{\mu }}}\)-independent forms.
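For illustration, the offline/online decomposition enabled by such \({\varvec{\mu }}\)-independent forms can be sketched in Python (NumPy); the affine terms, coefficient functions, and basis below are synthetic placeholders standing in for \(\widehat{{\textbf{K}}}_q^k\), \(\widehat{{\textbf{f}}}_q^k\), \(\theta _{a,q}^{k}\), and \(\theta _{f,q}^k\):

```python
import numpy as np

def project_affine_terms(K_terms, f_terms, V_k):
    """Offline: project each mu-independent affine term once and store
    the small reduced terms V_k^T K_q V_k and V_k^T f_q."""
    return ([V_k.T @ Kq @ V_k for Kq in K_terms],
            [V_k.T @ fq for fq in f_terms])

def assemble_online(mu, theta_a, theta_f, K_N_terms, f_N_terms):
    """Online: evaluate the cheap scalar coefficients and combine the
    precomputed reduced terms; the cost is independent of N_h."""
    K_N = sum(t(mu) * Kq for t, Kq in zip(theta_a, K_N_terms))
    f_N = sum(t(mu) * fq for t, fq in zip(theta_f, f_N_terms))
    return K_N, f_N
```

By construction, the online assembly reproduces the projection of the affinely combined full-order operators for any parameter value.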
Since the latter forms do not depend on the parameters
\({{\varvec{\mu }}}\), they can be pre-computed and stored in the offline phase. Then, the online assembly requires only the evaluation of
\(\theta _{a,q}^{k}, \theta _{f,q}^k\), which is inexpensive assuming that
\(Q_{a}^k, Q_f^k \ll {\mathcal {N}}_{h,0}\). To obtain the affine approximation in the form of (
47), we will employ the Discrete Empirical Interpolation Method (DEIM) in combination with radial basis function (RBF) interpolation. This hyper-reduction strategy will be further discussed in Sect.
5.5. Once the affine approximation is recovered, inserting (
47) into (
46) yields, for any given parameter
\({{\varvec{\mu }}} \in {\mathcal {P}}_k\):
$$\begin{aligned} {\textbf{K}}_N({\varvec{\mu }})&\approx \sum _{q=1}^{Q_{a}^k} \theta _{a,q}^{k} ({\varvec{\mu }}){\textbf{V}}_k^{\top }\widehat{{\textbf{K}}}_q^k{\textbf{V}}_k, \nonumber \\ {\textbf{f}}_N({\varvec{\mu }})&\approx \sum _{q=1}^{Q_f^k} \theta _{f,q}^k({\varvec{\mu }}){\textbf{V}}_k^{\top }\widehat{{\textbf{f}}}^k_q, \end{aligned}$$
(48)
where
\({\textbf{K}}_N({\varvec{\mu }}) \in {\mathbb {R}}^{N_k \times N_k}\) and
\({{\textbf{f}}}_N({\varvec{\mu }}) \in {\mathbb {R}}^{N_k}\) are the reduced matrix and right-hand side vector, respectively. We remark that in (
48) only the coefficients
\(\theta _{a,q}^{k}, \theta _{f,q}^k\) depend on the parameters
\({\varvec{\mu }}\) and are evaluated online, while all other quantities are assembled and stored in the offline phase. Finally, during the online phase, for any given parameter
\({\varvec{\mu }}\) the respective cluster is selected as in (
36) and the local reduced problem in (
45) is solved using the approximate assembly in (
48). Then, the high-fidelity approximation of the solution can be recovered through (
43). We remark that the efficiency of the overall method depends on the size of the local reduced problem
\(N_k\), on the number of local affine terms
\(Q_{a}^k, Q_f^k\), as well as on the efficient online evaluation of the coefficients
\(\theta _{a,q}^{k}, \theta _{f,q}^k\). The latter aspects will be further elaborated in Sect.
5.5.
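Putting the pieces together, the complete online phase can be sketched in Python (NumPy) as follows; the nearest-centroid cluster selection is only an assumption standing in for the actual rule in (36), and all inputs are placeholders for quantities produced offline:

```python
import numpy as np

def online_solve(mu, centroids, bases, K_N_terms, f_N_terms,
                 theta_a, theta_f):
    """Online phase for one parameter mu: select the local cluster
    (nearest centroid, an assumed stand-in for (36)), assemble the
    reduced system as in (48), solve (45), and lift back via (43)."""
    k = int(np.argmin([np.linalg.norm(np.asarray(mu) - c)
                       for c in centroids]))
    K_N = sum(t(mu) * Kq for t, Kq in zip(theta_a[k], K_N_terms[k]))
    f_N = sum(t(mu) * fq for t, fq in zip(theta_f[k], f_N_terms[k]))
    u_N = np.linalg.solve(K_N, f_N)
    return bases[k] @ u_N  # high-fidelity reconstruction
```

Only the scalar coefficients and the small linear solve are performed online; all projected terms, bases, and centroids are assumed to be stored from the offline phase.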