Published in: Journal of Scientific Computing 3/2020

Open Access 01.03.2020

Multigrid Schemes for High Order Discretizations of Hyperbolic Problems

Authors: Andrea A. Ruggiu, Jan Nordström



Abstract

Total variation diminishing multigrid methods have been developed for first order accurate discretizations of hyperbolic conservation laws. This technique is based on a so-called upwind biased residual interpolation and allows for algorithms devoid of spurious numerical oscillations in the transient phase. In this paper, we justify the introduction of such prolongation and restriction operators by rewriting the algorithm in a matrix-vector notation. This perspective sheds new light on multigrid procedures for hyperbolic problems and provides a direct extension for high order accurate difference approximations. The new multigrid procedure is presented, advantages and disadvantages are discussed and numerical experiments are performed.
Notes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1 Introduction

Multigrid is a convergence acceleration technique originally designed to solve elliptic partial differential equations [1]. For these problems, it was observed that iterative methods damp only highly oscillatory error modes quickly. The low frequency errors, on the other hand, can be efficiently reduced by grid coarsening, since they appear as high frequency modes on coarser grids. The main idea behind multigrid techniques is to combine the two effects in order to rapidly obtain the solution on the finest grid [2, 3].
More recently, multigrid methods were successfully modified to find first order accurate steady state solutions of hyperbolic conservation laws [4, 5]. For these problems, the low frequency errors can be rapidly expelled from the domain by grid coarsening, since coarser grids have less severe stability restrictions on the time step. The resulting L-level multigrid procedure allows for \(2^{L}\) times faster wave propagation while ensuring that the Total Variation Diminishing (TVD) property is preserved. In the Total Variation Diminishing Multi-Grid scheme (TVD-MG), the residual restriction is performed using an upwind biased interpolation rather than the classical linear one, which also guarantees convergence of the iterative procedure. However, a significant drawback of the TVD-MG is its intrinsic restriction to first order accurate space discretizations.
In this article, we will replicate and extend the previously mentioned low order TVD-MG method to upwind Summation-By-Parts (SBP) based high-order finite difference methods [6, 7] for linear hyperbolic problems. The crucial role played by the upwind biased interpolation in the TVD-MG will be investigated from a matrix-vector perspective, leading to a direct extension of the residual restriction to higher order approximations. In our approach, the wave propagation for high-order discretizations on a fine grid is complemented with the TVD-MG for first order schemes on coarse grids. New restriction operators that are needed to couple the high order discretization on the fine grid with the first order scheme on the second finest grid will be presented. We will also show how the new residual restriction operators developed for scalar advection problems can be used to accelerate the wave propagation for linear hyperbolic systems and nonlinear problems.
The article is structured as follows: in Sect. 2 we introduce a new first order upwind SBP discretization for one-dimensional scalar advection problems. A multigrid technique for such approximations, inspired by the TVD-MG, is presented in Sect. 3. In this section we also revisit the upwind-biased residual restriction needed for convergence acceleration from a matrix-vector point of view. The extension to higher order discretizations is given in Sect. 4. Several modifications needed to implement the multigrid algorithm for hyperbolic systems such as the linear compressible Euler equations are discussed in Sect. 5. In Sect. 6, we present an extension of the multigrid procedure to nonlinear problems. Numerical results for two-dimensional problems are shown in Sect. 7. Finally, conclusions are drawn in Sect. 8.

2 Upwind Discretization of the Advection Equation

Consider the linear wave propagation problem
$$\begin{aligned} \begin{array}{rclll} u_{t} + u_{x} &= &f(x), &\quad 0< x< 1, &\quad t> 0, \\ u(0,t) &= &g, & &t > 0, \\ u(x,0) &= &u^{0}(x), &\quad 0< x < 1, \end{array} \end{aligned}$$
(1)
where f and g are known data independent of time. A semi-discretization of (1) can be written by using SBP operators on upwind form [7].
Definition 1
The difference operators \(D_{+} = P^{-1} \left( Q_{+} + \frac{B}{2}\right) \) and \(D_{-} = P^{-1} \left( Q_{-} + \frac{B}{2}\right) \), with \(B = \text {diag}([-1,0,\ldots ,0,1])\), are said to be pth order diagonal-norm upwind SBP operators for the first derivative if
(i)
\(D_{+}\) is pth and \(\left\lfloor {p/2}\right\rfloor \)th order accurate in the interior and at the left boundary, respectively. Likewise, \(D_{-}\) has order p in the interior and \(\left\lfloor {p/2}\right\rfloor \) at the right boundary;
 
(ii)
the diagonal matrix P defines a discrete norm;
 
(iii)
\(Q_{+} + Q_{-}^{T} = 0\);
 
(iv)
\(Q_{+} + Q_{+}^{T}\) and \(Q_{-} + Q_{-}^{T}\) are positive and negative semi-definite, respectively. \(\square \)
 
Consider an equidistant grid \(\varOmega _{1} = \left\{ x_{j} = j \varDelta x, \ j = 0, 1, 2, \ldots , N\right\} \) on \(\left[ 0,1\right] \), with \(N \varDelta x = 1\). The space discretization of (1) with upwind SBP operators reads
$$\begin{aligned} \begin{array}{rcll} \varvec{U}_{t} + D_{+} \varvec{U} &= &\varvec{f} - P^{-1} (U_{0} - g) \varvec{e}_{0}, &\quad t > 0, \\ \varvec{U}(0) &= &\varvec{U}^{0}, \end{array} \end{aligned}$$
(2)
where the boundary condition is weakly imposed with a Simultaneous Approximation Term (SAT) [6]. In (2), the vector \(\varvec{U} = \left( U_{0}, U_{1}, \ldots , U_{N}\right) ^{T}\) indicates the approximate solution with \(U_{j}(t) \approx u\left( x_{j},t\right) \) and \(\varvec{f} = \left( f\left( x_{0}\right) , f\left( x_{1}\right) , \ldots , f\left( x_{N}\right) \right) ^{T}\). Moreover, we have used \(\varvec{e}_{0} = \left( 1,0,\ldots ,0\right) ^{T}\). Note that the positive variant of the SBP upwind operator has been used, since the solution of (1) is a wave propagating towards the right boundary.
As for the usual centered SBP operators [6, 8], it is straightforward to show that (2) leads to stability [7]. Indeed, by letting \(\mathbf {f} = \mathbf {0}\) and applying the energy method, i.e. multiplying (2) from the left by \(\varvec{U}^{T} P\) and adding the transpose of the resulting expression, we get
$$\begin{aligned} \frac{\mathrm {d}}{\mathrm {d}t} \left\| \mathbf {U}\right\| _{P}^{2} = g^{2} - U_{N}^{2} - \left( U_{0} - g\right) ^{2} - \mathbf {U}^{T} \left( Q_{+} + Q_{+}^{T}\right) \mathbf {U}. \end{aligned}$$
Since \(Q_{+} + Q_{+}^{T}\) is positive semi-definite, \(\left\| \mathbf {U}\right\| _{P}^{2} = \mathbf {U}^{T} P \mathbf {U}\) will be bounded by data.

2.1 New First Order Upwind SBP Operators

The upwind SBP operators in Definition 1 were derived from second up to ninth order [7]. However, in the present study we will also need first order upwind discretizations satisfying the SBP property in order to replicate and extend the TVD-MG technique [4, 5]. Such operators (for the positive variant) are given by
$$\begin{aligned} D_{+} = \frac{1}{\varDelta x} \begin{bmatrix} 0 \\ -1 &1 \\ &\ddots &\ddots \\ & &-1 &1 \end{bmatrix}, \quad P = \varDelta x \cdot \text {diag}\left( 1,\ldots ,1\right) . \end{aligned}$$
(3)
The operator \(D_{+}\) is first order accurate in all the nodes with the exception of the left boundary closure, where the order of accuracy drops to zero. The particular structure of \(D_{+}\) makes it easy to compute the steady-state solution of (2) as
$$\begin{aligned} U_{i} = g_{0} + \varDelta x \sum _{k = 0}^{i} f_{k} \end{aligned}$$
(4)
which is the discrete analogue of the steady-state solution of (1)
$$\begin{aligned} u\left( x\right) = g_{0} + \int _{0}^{x} f\left( \xi \right) \ \mathrm {d} \xi . \end{aligned}$$
(5)
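The closed form (4) can be checked directly against a linear solve with \(L_{1}\) from (7); a minimal NumPy sketch, assuming the operator (3) with its SAT penalty (the forcing f below is a hypothetical choice, any f works):

```python
import numpy as np

# First order upwind SBP operator (3) with SAT: L1 = D+ + P^{-1} e0 e0^T.
N = 64
dx = 1.0 / N
g = 2.0
f = lambda s: np.exp(-s)            # hypothetical forcing
x = np.linspace(0.0, 1.0, N + 1)

Dp = (np.eye(N + 1) - np.eye(N + 1, k=-1)) / dx
Dp[0, 0] = 0.0                      # first row of D+ in (3) is zero
L1 = Dp.copy()
L1[0, 0] += 1.0 / dx                # SAT penalty P^{-1} e0 e0^T

F = f(x)
F[0] += g / dx
U = np.linalg.solve(L1, F)

# Closed form (4): U_i = g + dx * sum_{k=0}^{i} f_k
U_exact = g + dx * np.cumsum(f(x))
print(np.max(np.abs(U - U_exact)))  # machine precision
```

The lower bidiagonal structure of \(L_{1}\) makes the solve a forward substitution, which is exactly the cumulative sum in (4).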
Remark 1
The matrix P in (3) does not represent a consistent quadrature formula, namely it does not correctly integrate constant grid functions. Consistency can be restored by relaxing the constraint that the matrix P defines a discrete norm. In particular, the modified matrix \(P_{+} = \varDelta x \cdot \text {diag}\left( 0,1,\ldots ,1\right) \) exactly integrates constants, leads to the same \(D_{+}\) as in (3), and defines a semi-norm. However, this matrix can only be used for the positive variant \(D_{+}\), since the corresponding negative variant \(D_{-}\) requires \(P_{-} = \varDelta x \cdot \text {diag}\left( 1,\ldots ,1,0\right) \). Hence, to make the matrix P both strictly positive definite and equal for the two variants, the consistency requirement is relaxed. This lack of consistency is in line with the previously derived upwind SBP operators: indeed, the norm P of a pth order SBP upwind first derivative operator exactly integrates only \((p-2)\)th degree polynomials, if p is odd. \(\square \)

2.2 Convergence to Steady-State for Single Grid Methods

The steady state solution (4) can also be found by discretizing (2) in time and computing an approximation of \(\varvec{U}(t)\) for large t. By marching in time with Euler Forward (EF) we get
$$\begin{aligned} \varvec{U}^{n+1} = \varvec{U}^{n} - \varDelta t \left( D_{+} + P^{-1} \varvec{e}_{0} \varvec{e}_{0}^{T}\right) \varvec{U}^{n} + \varDelta t \varvec{F}, \end{aligned}$$
(6)
where \(\varvec{F} = \varvec{f} + g (\varDelta x)^{-1} \varvec{e}_{0}\) and the superscript n denotes the approximation at time \(t^{n} = n \varDelta t\). The discretization (6) converges to steady-state without spurious oscillations if \(\varDelta t \le \varDelta x\). Moreover, it can be recast in compact form by introducing
$$\begin{aligned} L_{1} = D_{+} + P^{-1} \varvec{e}_{0} \varvec{e}_{0}^{T} = \frac{1}{\varDelta x} \begin{bmatrix} 1 \\ -1 &1 \\ &\ddots &\ddots \\ & &-1 &1 \end{bmatrix}, \end{aligned}$$
(7)
and
$$\begin{aligned} S_{1} = I_{1} - \varDelta t L_{1} = \begin{bmatrix} 1 - \lambda \\ \lambda &1 - \lambda \\ &\ddots &\ddots \\ & &\lambda &1-\lambda \end{bmatrix}. \end{aligned}$$
(8)
In (7)–(8), \(I_{1}\) is the identity matrix, \(\lambda = \varDelta t / \varDelta x\) denotes the CFL number and the subscript 1 is used to denote quantities on the grid \(\varOmega _{1}\). With this notation, the method (6) can be rewritten as
$$\begin{aligned} \varvec{U}^{n+1} = S_{1} \varvec{U}^{n} + \left( I_{1} - S_{1}\right) L_{1}^{-1} \varvec{F}. \end{aligned}$$
(9)
If the eigenvalues of \(L_{1}\) have strictly positive real parts, then the convergence of (9) is guaranteed [9, 10].
Remark 2
The invertibility of \(L_{1}\) can be shown for pseudo-spectral approximations [9], but not in general for discretizations based on finite-difference methods [11]. The 1st and 2nd order upwind SBP operators lead to matrices \(L_{1}\) with triangular and block-triangular structure, respectively, for which invertibility follows in a straightforward way. However, proving this result for a higher order approximation requires a considerable effort. \(\square \)
For a first order upwind operator \(D_{+}\) and \(\lambda = 1\), the convergence to steady-state is achieved in \(N+1\) iterations, since
$$\begin{aligned} U_{0}^{n+1} = g_{0} + \varDelta x f_{0}, \qquad \qquad U_{j}^{n+1} = U_{j-1}^{n} + \varDelta x f_{j}, \quad j = 1, \ldots , N \end{aligned}$$
yield
$$\begin{aligned} U_{N}^{N+1} = U_{N-1}^{N} + \varDelta x f_{N} = U_{N-2}^{N-1} + \varDelta x \left( f_{N-1} + f_{N}\right) = \cdots = g_{0} + \varDelta x \sum _{k=0}^{N} f_{k}. \end{aligned}$$
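The convergence in exactly \(N+1\) iterations follows from the nilpotency of \(S_{1}\) for \(\lambda = 1\), and can be exercised numerically; a sketch assuming the first order operator (7), with hypothetical data f and g:

```python
import numpy as np

# Euler Forward iteration (9) with lambda = 1: purely convective phase,
# steady state (4) reached in exactly N+1 iterations from any start.
N = 32
dx = 1.0 / N
lam = 1.0                              # CFL number, dt = lam * dx
x = np.linspace(0.0, 1.0, N + 1)
g, f = 2.0, np.cos(2 * np.pi * x)      # hypothetical data

L1 = (np.eye(N + 1) - np.eye(N + 1, k=-1)) / dx   # L1 in (7)
F = f.copy()
F[0] += g / dx
U_ss = np.linalg.solve(L1, F)          # steady state L1^{-1} F

U = np.random.rand(N + 1)              # arbitrary initial guess
for n in range(N + 1):                 # N+1 EF steps
    U = U - lam * dx * (L1 @ U - F)    # dt * (L1 U - F)
print(np.max(np.abs(U - U_ss)))        # ~ machine precision
```

With \(\lambda = 1\) the smoother \(S_{1}\) is strictly lower triangular, so \(S_{1}^{N+1} = 0\) and the error is annihilated exactly.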
Other explicit schemes can of course also be used for the time advancement of (2). For example, the fourth order Runge-Kutta (RK4) scheme leads to the iterative procedure (9) with
$$\begin{aligned} S_{1} = I_{1} - \varDelta t L_{1} + \frac{1}{2} \left( \varDelta t L_{1}\right) ^{2} - \frac{1}{6} \left( \varDelta t L_{1}\right) ^{3} + \frac{1}{24} \left( \varDelta t L_{1}\right) ^{4}. \end{aligned}$$
(10)
Although this iterative method also converges to the steady-state solution, the EF time marching for first order discretizations leads to faster convergence. In Fig. 1 the approximate solution of (2) with \(\mathbf {U}^{0} = \left( 1,\ldots ,1\right) ^{T}\), \(f = 0\) and \(g = 0\) is displayed at \(t = 0.5\) for both procedures with \(\lambda = 1\). The RK4 method damps the initial discontinuity and thereby slows down the convergence to the steady-state solution \(\mathbf {U} = \left( 0,\ldots ,0\right) ^{T}\).
Using (10) leads to a transient phase consisting of both a convective part and a damping part. Purely convective convergence is a distinctive feature of first order upwind space discretizations using EF with \(\lambda = 1\), see Fig. 2. For \(\lambda < 1\), dissipation effects arise also for EF, but the convergence to steady-state is still faster when compared to RK4. For these reasons, in the following we use EF as time-marching for first order discretizations. For higher order schemes, using EF is inappropriate due to its poor stability properties and RK4 is preferred.
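The comparison can be quantified through the smoother spectra: both smoothers are lower triangular here, so their eigenvalues sit on the diagonal and measure the asymptotic damping per iteration. A sketch, assuming the first order operator (7) and \(\lambda = 0.5\):

```python
import numpy as np

# EF smoother (8) vs. the RK4 smoother (10) for the first order scheme.
N = 32
dx = 1.0 / N
lam = 0.5                                     # CFL number
L1 = (np.eye(N + 1) - np.eye(N + 1, k=-1)) / dx
Z = lam * dx * L1                             # dt * L1
S_ef = np.eye(N + 1) - Z
S_rk4 = np.eye(N + 1) - Z + Z @ Z / 2 \
        - np.linalg.matrix_power(Z, 3) / 6 \
        + np.linalg.matrix_power(Z, 4) / 24   # stability polynomial (10)
rho = lambda S: np.max(np.abs(np.diag(S)))    # spectral radius (triangular)
print(rho(S_ef), rho(S_rk4))                  # 0.5 vs ~0.607
```

The smaller spectral radius of the EF smoother (0.5 against roughly 0.607) is consistent with the faster convergence to steady-state observed for EF when \(\lambda < 1\).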

3 Convergence Acceleration for First Order Upwind Schemes

Consider the discrete problem (6), which converges to the steady-state solution \(\mathbf {U} = L_{1}^{-1} \mathbf {F}_{1}\) in \(N+1\) iterations. The wave propagation speed \(\lambda = 1\) can be increased by using the following two-level algorithm, which is inspired by the TVD-MG [4, 5]:
Quantities on the fine and coarse grids are indicated with superscripts (1) and (2), respectively.
In (11) we have introduced:
  • A coarse grid \(\varOmega _{2} = \left\{ x_{j}^{(2)} = j \varDelta x^{(1)}, \ j = 0, 2, \ldots , N\right\} \subset \left[ 0,1\right] \) with spacing \(\varDelta x^{(2)} = 2 \varDelta x^{(1)}\);
  • A restriction operator of injection type \(\mathcal {R}_{u}\) such that \(\left( \mathcal {R}_{u} \varvec{v}^{(1)}\right) _{j} = v^{(2)}_{j}\) for \(j = 0, 2, \ldots , N\);
  • A residual restriction operator \(I_{r}\) which conveys the information from \(\varOmega _{1}\) to \(\varOmega _{2}\),
    $$\begin{aligned} \left( I_{r} \varvec{v}^{(1)} \right) _{j} = {\left\{ \begin{array}{ll} \frac{1}{2} v^{(1)}_{0}, &\quad j = 0, \\ \frac{1}{2} \left( v_{j}^{(1)} + v_{j-1}^{(1)}\right) , &\quad j = 2, 4, \ldots , N, \end{array}\right. } \end{aligned}$$
    (12)
    which is upwind-biased [4, 5] at interior nodes and inconsistent at the left boundary node. This somewhat odd choice will be explained later. See Fig. 3 for details;
  • A space discretization \(L_{2} = D_{+,2} + P_{2}^{-1} \mathbf {e}_{0,2} \mathbf {e}_{0,2}^{T}\) and a smoother \(S_{2} = I_{2} - \varDelta t_{2} L_{2}\) on the coarse grid \(\varOmega _{2}\) which are counterparts to the operators \(L_{1}\) and \(S_{1}\) in (7)–(8) on \(\varOmega _{1}\);
  • The prolongation operator \(I_{p} = I_{p}^{I} + I_{p}^{E}\) in the fine grid update step [12] has been split into two contributions: \(I_{p}^{I}\) for the nodes included on the coarse grid and \(I_{p}^{E}\) for the other nodes, see Fig. 3. In particular,
    $$\begin{aligned} \begin{array}{l} \left( I_{p}^{I} \varvec{v}^{(2)} \right) _{j} = {\left\{ \begin{array}{ll} v_{j}^{(2)}, &\quad j = 0, 2, 4, \ldots , N, \\ 0, &\quad j = 1, 3, 5, \ldots , N-1, \end{array}\right. } \\ \left( I_{p}^{E} \varvec{v}^{(2)} \right) _{j} = {\left\{ \begin{array}{ll} 0, &\quad j = 0, 2, 4, \ldots , N, \\ v_{j-1}^{(2)}, &\quad j = 1, 3, 5, \ldots , N-1, \end{array}\right. } \end{array} \end{aligned}$$
    (13)
    are introduced to avoid overshoots in the transient phase, and preserve the TVD property [4].
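The transfer operators above can be assembled explicitly as matrices for a small grid; a sketch of \(\mathcal {R}_{u}\), (12) and (13), where coarse-grid vectors are stored compactly with indices \(0, \ldots , N/2\) (an implementation convention, not the paper's fine-index labeling):

```python
import numpy as np

# Grid-transfer operators of the two-level scheme, N even.
N = 8
Nc = N // 2                          # coarse grid: fine nodes 0,2,...,N

Ru = np.zeros((Nc + 1, N + 1))       # injection: (Ru v)_j = v_{2j}
for j in range(Nc + 1):
    Ru[j, 2 * j] = 1.0

Ir = np.zeros((Nc + 1, N + 1))       # upwind-biased restriction (12)
Ir[0, 0] = 0.5                       # inconsistent left boundary row
for j in range(1, Nc + 1):
    Ir[j, 2 * j] = Ir[j, 2 * j - 1] = 0.5

Ip = np.zeros((N + 1, Nc + 1))       # I_p = I_p^I + I_p^E from (13)
for j in range(N + 1):
    Ip[j, j // 2] = 1.0              # even j: v_j ; odd j: v_{j-1}

# Interior rows of Ir are consistent (sum to 1), the first row is not;
# the upwind prolongation reproduces constants at every fine node.
print(Ir.sum(axis=1))                # [0.5, 1, 1, ...]
print(Ip @ np.ones(Nc + 1))          # all ones
```

Note that \(I_{p}\) is a piecewise constant, upwind-biased prolongation, in contrast with the centered linear interpolation of the classical two-grid scheme.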
In (11), the coarse grid evolution step converges to the steady-state solution \(L_{2}^{-1} \mathbf {F}_{2}\). This vector can be recast as
$$\begin{aligned} L_{2}^{-1} \mathbf {F}_{2}&= L_{2}^{-1} \left( L_{2} \varvec{U}^{(2)} - I_{r} \overline{\varvec{r}}^{(1)}\right) = L_{2}^{-1} \left[ L_{2} \mathcal {R}_{u} \overline{\varvec{U}}^{(1)} - I_{r} \left( L_{1} \overline{\varvec{U}}^{(1)} - \mathbf {F}_{1}\right) \right] \\&= L_{2}^{-1} \left( L_{2} \mathcal {R}_{u} - I_{r} L_{1} \right) \overline{\varvec{U}}^{(1)} + L_{2}^{-1} I_{r} \mathbf {F}_{1}. \end{aligned}$$
By assuming that
$$\begin{aligned} I_{r} L_{1} \approx L_{2} \mathcal {R}_{u}, \end{aligned}$$
(14)
the steady-state solution for the coarse grid evolution step becomes
$$\begin{aligned} L_{2}^{-1} \mathbf {F}_{2} \approx L_{2}^{-1} I_{r} \mathbf {F}_{1} \approx \mathcal {R}_{u} \left( L_{1}^{-1} \mathbf {F}_{1} \right) = \mathcal {R}_{u} \mathbf {U}. \end{aligned}$$
In other words, if (14) holds, the coarse grid problem converges towards the steady-state solution \(\mathcal {R}_{u} \mathbf {U}\), i.e. the injection of the steady-state solution \(\mathbf {U}\) onto the coarse grid nodes. For this reason, (14) plays a crucial role in our multigrid method. Henceforth, we will refer to this condition as the approximation assumption.
The convergence of the algorithm in (11) can be studied by rewriting it in compact form as
$$\begin{aligned} \varvec{U}^{n+1} = M \varvec{U}^{n} + \left( I_{1} - M\right) L_{1}^{-1} \varvec{F}_{1} \end{aligned}$$
(15)
where
$$\begin{aligned} M = \left( I_{1} - I_{p} \left( I_{2} - S_{2}\right) L_{2}^{-1} I_{r} L_{1}\right) S_{1} - I_{p}^{E} \mathcal {R}_{u} \left( I_{1} - S_{1}\right) \end{aligned}$$
(16)
is the multigrid iteration matrix.
Remark 3
The only formal difference between the algorithm in (11) and the conventional two-grid procedure [12] is in the fine-grid update step. Indeed, by changing that to the usual
$$\begin{aligned} \varvec{U}^{n+1} = \overline{\varvec{U}}^{(1)} + I_{p} \left( \overline{\varvec{U}}^{(2)} - \mathcal {R}_{u} \overline{\varvec{U}}^{(1)} \right) , \end{aligned}$$
the resulting multigrid iteration matrix for the two approaches has the same structure. However, in contrast with (13), a centered prolongation operator \(I_{p}\) based on linear interpolation was used in [12]:
$$\begin{aligned} \left( I_{p} \varvec{v}^{(2)} \right) _{j} = {\left\{ \begin{array}{ll} v_{j}^{(2)}, &\quad j = 0, 2, 4, \ldots , N, \\ \frac{1}{2}\left( v_{j}^{(2)} + v_{j+1}^{(2)}\right) , &\quad j = 1, 3, 5, \ldots , N-1. \end{array}\right. } \end{aligned}$$
Using the first order upwind scheme discretized in time with EF (6) for both the fine- and coarse-grid discretizations, M becomes
$$\begin{aligned} M = \begin{bmatrix} (1 - \lambda )(1 - 2\lambda ) \\ -2\lambda (1-\lambda ) &1 - \lambda \\ \lambda (1-\lambda ) &\lambda (1- \lambda ) &(1 - \lambda )^2 \\ \lambda (1-\lambda ) &\lambda (1- \lambda ) &- \lambda (1-\lambda ) &1 - \lambda \\ &\lambda ^{2} &\lambda (1 - \lambda ) &\lambda (1 - \lambda ) &(1 - \lambda )^{2} \\ &\lambda ^{2} &\lambda (1 - \lambda ) &\lambda (1 - \lambda ) &- \lambda (1 - \lambda ) &1 - \lambda \\ & &\ddots &\ddots &\ddots &\ddots &\ddots \end{bmatrix}. \end{aligned}$$
It is easy to verify that the multigrid iteration scheme (15) converges for \(\lambda \le 1\), since the eigenvalues of M are given by its diagonal entries. Moreover, for the specific choice \(\lambda = 1\), the method has a purely convective transient phase and convergence is achieved in exactly \(N/4 + 1\) iterations. It is also possible to recursively apply grid coarsening to speed up the procedure: for L grids, the multigrid algorithm can be written as
In (17), the superscript \((m \rightarrow n)\) indicates that the interpolation operator transfers information from grid m to grid n. For notational simplicity, this superscript is omitted when considering the finest and the second finest grids.
The multigrid algorithm (17) converges in \(N/2^{L} + 1\) iterations (see Fig. 4). These results are in line with the ones obtained with the TVD-MG [4] where the boundary conditions were strongly imposed.
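The two-grid claims above, namely that the eigenvalues of M are its diagonal entries and that \(\lambda = 1\) yields convergence in \(N/4 + 1\) iterations, can be verified by assembling (16) directly. A sketch assuming first order operators and EF smoothers on both grids with unit CFL (so \(\varDelta t_{2} = 2 \varDelta t_{1}\)), coarse vectors stored with compact indices:

```python
import numpy as np

# Two-grid iteration matrix M in (16) for the first order scheme, lambda = 1.
N = 16
dx1, dx2 = 1.0 / N, 2.0 / N
Nc = N // 2

bidiag = lambda n, h: (np.eye(n) - np.eye(n, k=-1)) / h
L1, L2 = bidiag(N + 1, dx1), bidiag(Nc + 1, dx2)    # L in (7), both grids
S1 = np.eye(N + 1) - dx1 * L1                       # EF smoothers, CFL = 1
S2 = np.eye(Nc + 1) - dx2 * L2

Ru = np.zeros((Nc + 1, N + 1))                      # injection
Ru[np.arange(Nc + 1), 2 * np.arange(Nc + 1)] = 1.0
Ir = np.zeros((Nc + 1, N + 1))                      # restriction (12)/(20)
Ir[0, 0] = 0.5
j = np.arange(1, Nc + 1)
Ir[j, 2 * j] = Ir[j, 2 * j - 1] = 0.5
Ip = np.zeros((N + 1, Nc + 1))                      # prolongation (13)
Ip[np.arange(N + 1), np.arange(N + 1) // 2] = 1.0
IpE = Ip.copy()
IpE[::2, :] = 0.0                                   # odd fine nodes only

I1, I2 = np.eye(N + 1), np.eye(Nc + 1)
M = (I1 - Ip @ (I2 - S2) @ np.linalg.solve(L2, Ir @ L1)) @ S1 \
    - IpE @ Ru @ (I1 - S1)
print(np.linalg.norm(np.linalg.matrix_power(M, N // 4 + 1)))  # ~ 0
```

For \(\lambda = 1\) the matrix M is strictly lower triangular, and its powers vanish after \(N/4 + 1\) applications, reproducing the purely convective transient phase.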
Remark 4
For all the numerical experiments in Sects. 3 and 4, unless otherwise specified, we have used the manufactured solution \(u_s(x) = e^{-x}\left( \cos (10\pi x) + \cos (2\pi x)\right) \) and random initial data compatible with the boundary condition \(u\left( 0,t\right) = 2\). \(\square \)
Remark 5
The multigrid algorithm (17) is referred to as a multiplicative scheme [4, 5]. A so-called additive scheme can be obtained by replacing \(\varvec{U}^{(k+1)} = \mathcal {R}_{u}^{(k \rightarrow k+1)} \overline{\varvec{U}}^{(k)}\) with \(\varvec{U}^{(k+1)} = \mathcal {R}_{u}^{(k \rightarrow k+1)} \varvec{U}^{(k)}\). However, the resulting algorithm turns out to be slower than the multiplicative version and hence only (17) will be considered in the following. \(\square \)

3.1 Initial Modifications for Higher Order Discretizations

The matrix-vector notation introduced in (11)–(17) can be used to generalize and adapt the multigrid algorithm for higher order discretizations. As a first attempt, we substitute \(D_{+}\) in (2) with the SBP upwind operators of 3rd, 4th, 5th and 6th order for all the grids involved. For all the levels, the RK4 time integrator with \(\lambda = 0.5\) is used. The convergence results in Fig. 5 show that the multigrid procedure either becomes ineffective or ceases to converge for more than 2 grid levels.
A second attempt to extend the multigrid procedure to higher order discretizations was made by recalling that the accuracy of the steady state solution does not depend on the accuracy on the coarse grids. Thus, we kept a first order upwind discretization for \(L_{k}\), \(k = 2, 3, \ldots \). The convergence plots for the multigrid procedure with high order discretization on the fine grid and first order discretization on coarser grids are shown in Fig. 6. Once again, the multigrid algorithm does not produce satisfactory results, at least not for orders of accuracy higher than 3.
The outcome of these numerical experiments shows that an extension to higher order discretizations requires a deeper understanding of the interpolation operators and further analysis.

3.2 Revisiting the Upwind-Biased Residual Restriction

Due to the pointwise notation and strong imposition of the boundary conditions used for the TVD-MG, the upwind biased operator \(I_{r}\) (12) was previously presented only for the interior nodes of the discretization. In that case, it can be shown that this residual restriction, for \(\mathbf {F}_{1} = \mathbf {0}\), led to [4]
$$\begin{aligned} \left( \mathbf {F}_{2}\right) _{j}&= \frac{1}{\varDelta x^{(2)}} \left( U^{(2)}_{j} - U^{(2)}_{j-2}\right) - \frac{1}{2} \left( \overline{r}_{j}^{(1)} + \overline{r}_{j-1}^{(1)}\right) \nonumber \\&= \frac{1}{\varDelta x^{(2)}} \left( \overline{U}^{(1)}_{j} - \overline{U}^{(1)}_{j-2}\right) - \frac{1}{2\varDelta x^{(1)}} \left[ \left( \overline{U}^{(1)}_{j} - \overline{U}^{(1)}_{j-1} \right) + \left( \overline{U}^{(1)}_{j-1} - \overline{U}^{(1)}_{j-2} \right) \right] \nonumber \\&= 0, \end{aligned}$$
(18)
since \(\varDelta x^{(2)} = 2 \varDelta x^{(1)}\). On the other hand, the coarse grid residual step of the algorithm (11) can be recast as
$$\begin{aligned} \varvec{F}_{2} = L_{2} \varvec{U}^{(2)} - I_{r} \overline{\mathbf {r}}^{(1)} = \left( L_{2} \mathcal {R}_{u} - I_{r} L_{1}\right) \overline{\varvec{U}}^{(1)} + I_{r} \varvec{F}_{1}. \end{aligned}$$
(19)
For first order upwind discretizations, in the interior nodes we can write
$$\begin{aligned} \left[ \left( L_{2} \mathcal {R}_{u} - I_{r} L_{1}\right) \overline{\varvec{U}}^{(1)}\right] _{j} = \frac{1}{\varDelta x^{(2)}} \left( \overline{U}^{(1)}_{j} - \overline{U}^{(1)}_{j-2}\right) - \frac{1}{2\varDelta x^{(1)}} \left[ \left( \overline{U}^{(1)}_{j} - \overline{U}^{(1)}_{j-1} \right) + \left( \overline{U}^{(1)}_{j-1} - \overline{U}^{(1)}_{j-2} \right) \right] = 0. \end{aligned}$$
Hence, each step of (18) is mimicked by (19) if \(\mathbf {F}_{1} = \mathbf {0}\). By comparing these two expressions, it is clear that the upwind biased restriction operator [4, 5] satisfies \(L_{2} \mathcal {R}_{u} = I_{r} L_{1}\), verifying the approximation assumption (14) exactly on the interior nodes of \(\varOmega _{2}\). To extend this property also to the boundary nodes we compute
$$\begin{aligned} I_{r}^{+} = L_{2} \mathcal {R}_{u} L_{1}^{-1} = \begin{bmatrix} \frac{1}{2} \\ &\frac{1}{2} &\frac{1}{2} \\ & & &\frac{1}{2} &\frac{1}{2} \\ & & & & &\ddots &\ddots \\ & & & & & & &\frac{1}{2} &\frac{1}{2} \end{bmatrix}, \end{aligned}$$
(20)
which explains the choice (12), which is identical to (20). The operator \(I_{r}^{+}\) in (12) and (20) is inconsistent at the left boundary node and matches the previously developed upwind biased restriction at the other nodes.
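The closed form (20) can be recovered numerically from its definition; a sketch for the first order operators, with coarse vectors stored compactly:

```python
import numpy as np

# Compute I_r^+ = L2 Ru L1^{-1} for the first order operators and confirm
# it reproduces the upwind-biased restriction (12)/(20), including the
# inconsistent 1/2 entry at the left boundary.
N = 8
dx1, dx2 = 1.0 / N, 2.0 / N
Nc = N // 2
L1 = (np.eye(N + 1) - np.eye(N + 1, k=-1)) / dx1
L2 = (np.eye(Nc + 1) - np.eye(Nc + 1, k=-1)) / dx2
Ru = np.zeros((Nc + 1, N + 1))                  # injection
Ru[np.arange(Nc + 1), 2 * np.arange(Nc + 1)] = 1.0

Ir = L2 @ Ru @ np.linalg.inv(L1)
print(np.round(Ir, 10))    # 1/2 at (0,0); rows j: 1/2 at columns 2j-1, 2j
```

Since \(L_{1}^{-1}\) is the lower triangular cumulative-sum matrix, the product collapses to the two-point averages of (12).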
The derivation of \(I_{r}^{+}\) can of course be repeated for propagation problems with left-traveling waves discretized in space with the negative variant of the first order SBP upwind operators. This leads to a residual restriction operator which is rotated with respect to its positive counterpart
$$\begin{aligned} I_{r}^{-} = \begin{bmatrix} \frac{1}{2} &\frac{1}{2} \\ & &\frac{1}{2} &\frac{1}{2} \\ & & & &\ddots &\ddots \\ & & & & & &\frac{1}{2} &\frac{1}{2} \\ & & & & & & & &\frac{1}{2} \end{bmatrix}. \end{aligned}$$
In the following sections we will use \(I_{r}^{+}\) and \(I_{r}^{-}\) for the right- and left-traveling waves, respectively. Similarly, the prolongation operators will be denoted with the superscript \(+\) or − depending on the direction of propagation.

4 Extension to Higher Orders of Accuracy

To generalize the multigrid procedure for (1) to higher order accuracy, we recall that a unique restriction operator \(I_{r}^{+}\) such that \(L_{2} \mathcal {R}_{u} = I_{r}^{+} L_{1}\) exists if \(L_{1} = D_{+} + P^{-1} \mathbf {e}_{0} \mathbf {e}_{0}^{T}\) is invertible. This relation should hold also for the residual restriction between coarser grids. However, since the order of convergence does not depend on the accuracy of the operators on the coarse grids, we choose, as we did in Sect. 3, to use a first order upwind scheme for the coarse grid operators \(L_{i}\), \(i = 2, 3, \ldots \). The obvious advantage is that the residual restriction in (12) and (20) automatically satisfies the constraint \(L_{k+1} \mathcal {R}_{u}^{(k \rightarrow k+1)} = I_{r}^{+,(k \rightarrow k+1)} L_{k}\) for \(k = 2, 3, \ldots \). In other words, to accelerate the convergence for higher order discretizations we use the already existing TVD-MG for first order schemes [4, 5] on coarse grids. The crucial modification to the former algorithm is the introduction of new residual interpolation operators between the fine and the second finest grids, which depend on the order of accuracy of \(L_{1}\).
Note that by demanding \(I_{r}^{+} = L_{2} \mathcal {R}_{u} L_{1}^{-1}\) we can simplify the two-grid matrix M in (16) to
$$\begin{aligned} M = \left( I_{1} - I_{p}^{+} \left( I_{2} - S_{2}\right) \mathcal {R}_{u}\right) S_{1} - I_{p}^{E,+} \mathcal {R}_{u} \left( I_{1} - S_{1}\right) , \end{aligned}$$
(21)
where \(I_{p}^{+} = I_{p}^{I} + I_{p}^{E,+}\). This matrix and, in general, the convergence properties of the multigrid scheme are now formally independent of \(I_{r}^{+}\). However, to use the algorithm in practice we need to compute \(\varvec{F}_{2} = I_{r}^{+} \varvec{F}_{1}\) and hence we still need the restriction operators \(I_{r}^{+}\) in closed form.

4.1 Prolongation Operators

For the interpolation from coarse to fine grid, we will consider two classes of prolongation operators:
1.
The linear prolongation operator \(I_{p}^{+}\) in (13).
 
2.
The prolongation operator \(I_{p}^{+} = I_{p}^{I} + I_{p}^{E,+}\) such that \(I_{p}^{I} = \mathcal {R}_{u}^{T}\) and \(I_{p}^{E,+}\) is obtained from the SBP-preserving relation [12], limited to the nodes not included on the coarse grid \(\varOmega _{2}\), i.e.
$$\begin{aligned} I_{p}^{E,+} = {\left\{ \begin{array}{ll} P_{1}^{-1} \left( I_{r}^{-}\right) ^{T} P_{2}, &\quad \text {nodes on }\varOmega _{1} \setminus \varOmega _{2}, \\ 0, &\quad \text {nodes on }\varOmega _{2}. \end{array}\right. } \end{aligned}$$
(22)
For simplicity, these prolongation operators will be referred to as SBP-preserving.
 
The SBP-preserving choice (22) also leads to (13) for first order discretizations. In Sect. 4.4 we will perform a few numerical experiments involving the SBP-preserving prolongation. However, unless otherwise stated, we will use the linear prolongation operator in the following.
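That the SBP-preserving choice (22) collapses to (13) for first order operators can be checked directly; a sketch with \(P_{1} = \varDelta x^{(1)} I\), \(P_{2} = \varDelta x^{(2)} I\) and coarse vectors stored compactly:

```python
import numpy as np

# For first order operators, (22) reduces to the upwind I_p^E of (13):
# P1, P2 are the diagonal norms, I_r^- the rotated restriction of Sect. 3.2.
N = 8
Nc = N // 2
dx1, dx2 = 1.0 / N, 2.0 / N
P1, P2 = dx1 * np.eye(N + 1), dx2 * np.eye(Nc + 1)

Irm = np.zeros((Nc + 1, N + 1))       # I_r^-: 1/2 at columns 2j, 2j+1
for j in range(Nc):
    Irm[j, 2 * j] = Irm[j, 2 * j + 1] = 0.5
Irm[Nc, N] = 0.5                      # inconsistent right boundary row

IpE = np.linalg.inv(P1) @ Irm.T @ P2  # SBP-preserving relation (22)
IpE[::2, :] = 0.0                     # keep only nodes on Omega1 \ Omega2

IpE_13 = np.zeros((N + 1, Nc + 1))    # I_p^E of (13): odd j takes v_{j-1}
for j in range(1, N + 1, 2):
    IpE_13[j, (j - 1) // 2] = 1.0
print(np.allclose(IpE, IpE_13))       # True
```

The factor \(\varDelta x^{(2)} / \varDelta x^{(1)} = 2\) turns the 1/2 entries of \(\left( I_{r}^{-}\right) ^{T}\) into the unit entries of (13).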

4.2 Second Order Discretizations

The second order space discretization of (1) is (2) with
$$\begin{aligned} D_{+} = \frac{1}{\varDelta x} \begin{bmatrix} -1 &1 \\ -1 &1 \\ \frac{1}{2} &-2 &\frac{3}{2} \\ &\ddots &\ddots &\ddots \\ & &\frac{1}{2} &-2 &\frac{3}{2} \\ & & &\frac{2}{5} &-\frac{8}{5} &1 &\frac{1}{5} \\ & & & &2 &-5 &3 \end{bmatrix}, \quad P = \varDelta x \cdot \text {diag}\left( \frac{1}{4}, \frac{5}{4}, 1,\ldots ,1, \frac{5}{4}, \frac{1}{4}\right) .\quad \end{aligned}$$
(23)
Discretizing (2) in time with RK4 we get, as for the first order case, an iterative solver that converges to an approximation of the steady-state solution (5). To accelerate the convergence, we apply (11) with a first order SBP upwind coarse-grid discretization. The residual restriction operator \(I_{r}^{+} = L_{2} \mathcal {R}_{u} L_{1}^{-1}\) in this case can be written in closed form as
$$\begin{aligned} I_{r}^{+} = \left[ \begin{array}{cccccccccccc} \frac{1}{8} &-\frac{1}{8} & & & & & & & & & &\\ &\frac{2}{3} &\frac{1}{3} & & & & & & & & &\\ &\frac{2}{3^{3}} &\frac{4}{3^{3}} &\frac{4}{3^{2}} &\frac{1}{3} & & & & & & & \\ &\frac{2}{3^{5}} &\frac{4}{3^{5}} &\frac{4}{3^{4}} &\frac{4}{3^{3}} &\frac{4}{3^{2}} &\frac{1}{3} & & & & &\\ &\vdots &\vdots &\cdots &\cdots &\cdots &\ddots &\ddots & & & &\\ &\frac{2}{3^{N-3}} &\frac{4}{3^{N-3}} &\frac{4}{3^{N-4}} &\cdots &\cdots &\cdots &\frac{4}{3^{2}} &\frac{1}{3} & & &\\ &\frac{1}{4\cdot 3^{N-3}} &\frac{1}{6\cdot 3^{N-4}} &\frac{1}{6 \cdot 3^{N-5}} &\cdots &\cdots &\cdots &\frac{1}{6 \cdot 3} &\frac{1}{6} &\frac{5}{8} &\frac{1}{8}\\ \end{array}\right] . \end{aligned}$$
(24)
Note that the resulting interpolation is consistent for all nodes except the first one, as for (12).
Unlike the first order case, the residual restriction consists of nonlocal contributions which increase the computational effort. A possible remedy is to cancel all contributions smaller than a given threshold \(\varepsilon \) in absolute value and to round the remaining entries to the nearest multiple of \(\varepsilon \). We have chosen \(\varepsilon = 10^{-6}\). Figure 7 shows the sparsity patterns of the residual restriction in (24) and of its approximated counterpart.
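The construction and thresholding can be sketched as follows, assuming the operator (23) with its SAT penalty \(P^{-1} \varvec{e}_{0} \varvec{e}_{0}^{T}\) (note \(P_{00} = \varDelta x / 4\)); the row sums illustrate the consistency of all rows except the first:

```python
import numpy as np

# Second order upwind operator (23) with SAT, the exact restriction
# I_r^+ = L2 Ru L1^{-1} of (24), and its thresholded approximation.
N = 64
dx1, dx2 = 1.0 / N, 2.0 / N
Nc = N // 2

Dp = np.zeros((N + 1, N + 1))
Dp[0, :2] = Dp[1, :2] = [-1.0, 1.0]
for j in range(2, N - 1):
    Dp[j, j - 2: j + 1] = [0.5, -2.0, 1.5]       # interior stencil
Dp[N - 1, N - 3: N + 1] = [0.4, -1.6, 1.0, 0.2]  # boundary closures
Dp[N, N - 2: N + 1] = [2.0, -5.0, 3.0]
Dp /= dx1
L1 = Dp.copy()
L1[0, 0] += 4.0 / dx1                 # P^{-1} e0 e0^T with P_00 = dx/4

L2 = (np.eye(Nc + 1) - np.eye(Nc + 1, k=-1)) / dx2   # first order, coarse
Ru = np.zeros((Nc + 1, N + 1))
Ru[np.arange(Nc + 1), 2 * np.arange(Nc + 1)] = 1.0

Ir = L2 @ Ru @ np.linalg.inv(L1)      # exact restriction (24)
eps = 1e-6
Ir_approx = np.where(np.abs(Ir) < eps, 0.0, np.round(Ir / eps) * eps)
print(Ir.sum(axis=1))                 # [0, 1, ..., 1]: inconsistent row 0
print(np.count_nonzero(Ir_approx), Ir.size)
```

Since the entries decay geometrically away from the diagonal, the thresholded operator is banded and its application costs O(N) instead of O(N²).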
Remark 6
The storage of \(I_{r}\) is not required; it suffices to compute its action on vectors.
In Fig. 8 (left) we compare the convergence of the multigrid algorithm with the exact (24) and the approximate residual restriction operator. The approximate version does not significantly change the convergence results compared to the use of the exact version in (24), and hence it is preferred.
Remark 7
As for the single-grid case mentioned in Sect. 2.2, the convergence to steady-state consists of a convective and a damping phase. The residual restriction \(I_{r}^{+} = L_{2} \mathcal {R}_{u} L_{1}^{-1}\) makes the convective phase faster and keeps the damping phase almost unaltered. \(\square \)
Remark 8
The fine grid wave propagation is not TVD for discretizations with order higher than one. As a consequence, our multigrid algorithm is not designed to be TVD preserving.

4.3 Extension to Higher Orders

To extend the multigrid algorithm to higher order discretizations, we need to find \(I_{r}^{+}\) such that \(I_{r}^{+} L_{1} = L_{2} \mathcal {R}_{u}\). As in the second order case, these restriction operators can be computed by explicitly inverting the matrix \(L_{1}\). In general, however, the entries of \(I_{r}^{+}\) depend on the number of nodes, which makes it infeasible to represent exactly the restriction operators needed for high order discretizations. On the other hand, two consecutive interior rows of \(I_{r}^{+}\) differ only slightly from one another: the farther two rows are from the boundaries, the smaller the difference in absolute value between corresponding entries. Hence, we identify a repeating stencil by canceling all contributions smaller than a given threshold \(\varepsilon \) in absolute value and by rounding the remaining entries to the nearest multiple of \(\varepsilon \).
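The search for a repeating interior stencil can be sketched as follows (assuming the usual grid ratio of two, so that consecutive coarse rows are shifted by two fine-grid columns):

```python
import numpy as np

def repeating_stencil(Ir, eps=1e-6):
    """Search for an interior stencil that repeats (shifted by two
    fine-grid columns) from one coarse row to the next, up to eps.
    Returns the first repeating row index and its thresholded,
    rounded stencil, or None if the operator is too small (N < N*)."""
    m, n = Ir.shape
    for j in range(1, m - 2):
        shifted = np.zeros(n)
        shifted[2:] = Ir[j, :-2]
        if np.max(np.abs(Ir[j + 1] - shifted)) < eps:
            row = np.where(np.abs(Ir[j]) < eps, 0.0, Ir[j])
            return j, np.round(row / eps) * eps
    return None
```

A trial-and-error search for \(N^{*}\) then amounts to increasing \(N\) until this function succeeds.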
Remark 9
A repeating stencil structure for the residual restriction operator \(I_{r}^{+}\) can be identified only for N greater than a minimum dimension \(N^{*}\), which depends on the threshold chosen. For each order, we found \(N^{*}\) through a straightforward trial and error procedure.
Considering a tolerance \(\varepsilon = 10^{-6}\), we have therefore computed the approximate residual restriction operators satisfying \(I_{r}^{+} L_{1} = L_{2} \mathcal {R}_{u}\) for the 3rd, 4th, 5th and 6th order fine-grid discretizations. Since these discretizations are upwind biased, the resulting residual restrictions also depend on downwind components. In Fig. 9 the sparsity patterns of these operators are shown for \(N = 160\). Once again, the restriction operators are consistent at all nodes except the first one, due to the particular structure of the first order upwind operator \(D_{+}\) in (3).
In Fig. 10 we show the convergence plots of the multigrid procedure using the new residual restriction operators \(I_{r}^{+}\). Similarly to the first (Fig. 4) and second (Fig. 8) order discretizations, the new residual restrictions lead to L-grid algorithms with approximately \(2^{L}\) times faster wave propagation, for all orders of accuracy.
For completeness, Table 1 shows the computed and expected orders of convergence for different SBP-SAT upwind discretizations of (1). For a pth-order SBP upwind discretization fulfilling Definition 1, the order of convergence is expected to be at least \(\left\lfloor {p/2}\right\rfloor + 1\) [7, 8]. Our results show superconvergent behavior for some discretizations and do not contradict the theory. Note that in practice it is not meaningful to aim for machine precision with multigrid algorithms; it suffices to reduce the error below the truncation error of the scheme.
Table 1
Truncation errors and orders of convergence for the upwind SBP operators applied to (2)

| \(N{\backslash }\)Discretization | 1st | 2nd | 3rd | 4th | 5th | 6th |
|---|---|---|---|---|---|---|
| 100 | 7.520e−2 | 2.130e−2 | 2.474e−3 | 3.972e−4 | 2.652e−4 | 2.664e−4 |
| 200 | 3.785e−2 | 5.563e−3 | 4.134e−4 | 2.303e−5 | 1.412e−5 | 1.430e−5 |
| 300 | 2.531e−2 | 2.500e−3 | 1.469e−4 | 4.449e−6 | 2.683e−6 | 2.395e−6 |
| 400 | 1.901e−2 | 1.414e−3 | 7.081e−5 | 1.404e−6 | 8.526e−7 | 6.654e−7 |
| 500 | 1.522e−2 | 9.072e−4 | 4.027e−5 | 5.789e−7 | 3.569e−7 | 2.455e−7 |
| Order | 0.9965 | 1.9889 | 2.5293 | 3.9703 | 3.9026 | 4.4684 |
| Expected | 1 | 2 | 2 | 3 | 3 | 4 |

The orders of convergence are computed with the truncation errors of the two finest grids
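The "Order" row can be reproduced from the errors on the two finest grids (a sketch; the error is assumed to behave like \(N^{-p}\)):

```python
import math

def observed_order(e_coarse, e_fine, N_coarse=400, N_fine=500):
    """Observed convergence order p from errors e ~ N^{-p} on the
    two finest grids, as in the last rows of Tables 1 and 2."""
    return math.log(e_coarse / e_fine) / math.log(N_fine / N_coarse)
```

For example, `observed_order(1.901e-2, 1.522e-2)` recovers the first-order column's 0.9965.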
Remark 10
The multigrid algorithm (17) with the new interpolators can also be applied to non-constant coefficient problems, for details see “Appendix A”.

4.4 The SBP-Preserving Prolongation

The use of the SBP-preserving prolongation instead of (13) leads to the convergence results shown in Fig. 11 for the 3rd, 4th, 5th and 6th order discretizations. In terms of iterations to reach convergence, (22) does not yield any substantial improvement (cf. Fig. 10). However, the SBP-preserving prolongation is relatively expensive in terms of arithmetic operations compared to the linear prolongation, due to its sparsity pattern being comparable to that of \(I_{r}^{+}\).
Nonetheless, the prolongation in (22) has the advantage of having better stability properties compared to the linear prolongation in (13), hence allowing for slightly increased CFL numbers. In Fig. 12, the convergence results for the 2nd order discretization with RK4 time-marching and \(\lambda = 0.6\) using both classes of prolongation operators are shown. In this case the SBP-preserving prolongation prevents overshoots in the transient phase of the two-grid procedure. Despite this benefit, the SBP-preserving prolongation remains costly compared to the linear prolongation. Hence, in the following (13) will be used in the fine grid update step of (11).

5 Extension to Hyperbolic Systems of Equations

Consider the linear hyperbolic system of equations
$$\begin{aligned} \begin{array}{rclllll} \mathbf {u}_{t} + A \mathbf {u}_{x} &{}= &{}\mathbf {f}(x), &{}\quad &{}0< x< 1, &{}\quad &{}t > 0, \\ \mathbf {u}(x,0) &{}= &{}\mathbf {u}^{0}(x), &{} &{}0< x < 1, \end{array} \end{aligned}$$
(25)
where \(A \in \mathbb {R}^{s \times s}\) is a symmetric matrix with constant coefficients and all the vectors involved have s components. To start with, we associate to (25) the set of characteristic boundary conditions
$$\begin{aligned} \begin{array}{rclll} X_{+}^{T} \mathbf {u} &{}= &{}\mathbf {g}_{0}, &{}\quad &{}x = 0, \\ X_{-}^{T} \mathbf {u} &{}= &{}\mathbf {g}_{1}, &{} &{}x = 1, \end{array} \end{aligned}$$
(26)
where \(X_{+}\) and \(X_{-}\) indicate the eigenvectors related to the positive and negative eigenvalues of A, respectively.
Let \(A = X \Lambda X^{T}\) be the eigendecomposition of A: we define \(A^{\pm } = X \Lambda ^{\pm } X^{T} = X_{\pm } \Lambda ^{\pm } X_{\pm }^{T}\) such that \(A = A^{+} + A^{-}\), where \(\Lambda ^{+}\) and \(\Lambda ^{-}\) are matrices consisting of the positive and negative eigenvalues of A. Furthermore, it is convenient to introduce the projection matrices \(I_{s}^{\pm } = X_{\pm } X_{\pm }^{T}\) in order to split left- and right-traveling waves. The following relations hold
$$\begin{aligned} X = \begin{bmatrix} X_{+}&X_{-} \end{bmatrix}, \ \ \Lambda = \begin{bmatrix} \Lambda ^{+} &{}0 \\ 0 &{}\Lambda ^{-} \end{bmatrix}, \ \ I_{s}^{+} + I_{s}^{-} = I_{s}, \ \ I_{s}^{\pm } A^{\pm } = A^{\pm }, \ \ I_{s}^{\pm } A^{\mp } = 0, \end{aligned}$$
where \(I_{s} \in \mathbb {R}^{s \times s}\) is the identity matrix. The boundary conditions (26) lead to well-posedness of (25), since for \(\mathbf {f} = 0\) the energy-method gives
$$\begin{aligned} \frac{\mathrm {d}}{\mathrm {d}t} \left\| \mathbf {u}\right\| _{2}^{2}&= \mathbf {u}^{T} A \mathbf {u} \mid _{x = 0} - \mathbf {u}^{T} A \mathbf {u} \mid _{x = 1} \\&= \mathbf {g}_{0}^{T} \Lambda ^{+} \mathbf {g}_{0} - \mathbf {g}_{1}^{T} \Lambda ^{-} \mathbf {g}_{1} + \mathbf {u}^{T} A^{-} \mathbf {u} \mid _{x = 0} - \mathbf {u}^{T} A^{+} \mathbf {u} \mid _{x = 1}. \end{aligned}$$
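The splitting and projection matrices defined above can be formed directly from the eigendecomposition (a minimal sketch for a generic symmetric \(A\) with nonzero eigenvalues):

```python
import numpy as np

def characteristic_split(A):
    """Return A^+, A^- and the projections I_s^± = X_± X_±^T for a
    symmetric matrix A with nonzero eigenvalues."""
    lam, X = np.linalg.eigh(A)
    Ap = X @ np.diag(np.maximum(lam, 0.0)) @ X.T   # A^+
    Am = X @ np.diag(np.minimum(lam, 0.0)) @ X.T   # A^-
    Xp, Xm = X[:, lam > 0], X[:, lam < 0]
    return Ap, Am, Xp @ Xp.T, Xm @ Xm.T
```

The identities \(I_{s}^{+} + I_{s}^{-} = I_{s}\), \(I_{s}^{\pm } A^{\pm } = A^{\pm }\) and \(I_{s}^{\pm } A^{\mp } = 0\) can then be checked numerically.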
A semidiscrete SBP-SAT approximation of (25)–(26) can be written as [7, 13]
$$\begin{aligned} \varvec{U}_{t} + \left( A^{+} \otimes D_{+}\right) \varvec{U} + \left( A^{-} \otimes D_{-}\right) \varvec{U} = \varvec{f}&- \left( A^{+} \otimes P^{-1} \mathbf {e}_{0} \mathbf {e}_{0}^{T} \right) (\varvec{U} - \widetilde{\varvec{g}}_{0}) \nonumber \\&+\left( A^{-} \otimes P^{-1} \mathbf {e}_{N} \mathbf {e}_{N}^{T} \right) (\varvec{U} - \widetilde{\varvec{g}}_{1}), \end{aligned}$$
(27)
where we have introduced the vector \(\mathbf {e}_{N} = \left( 0,\ldots ,0,1\right) ^{T}\). Moreover, in (27) the operator \(\otimes \) denotes the Kronecker product defined by
$$\begin{aligned} X \otimes Y = \begin{bmatrix} x_{11} Y &{}\cdots &{}x_{1n} Y \\ \vdots &{}\ddots &{}\vdots \\ x_{m1} Y &{}\cdots &{}x_{mn} Y \end{bmatrix} \in \mathbb {R}^{mr \times nk}, \quad \ \ X \in \mathbb {R}^{m \times n}, \ \ Y \in \mathbb {R}^{r \times k}. \end{aligned}$$
The discrete operator \(\left( A^{+} \otimes D_{+}\right) + \left( A^{-} \otimes D_{-}\right) \) in (27) can be recast to highlight the relation between the SBP upwind operators (\(D_{+}\), \(D_{-}\)) and the SBP operators based on centered difference methods (\(D_{c}\)) [8] as
$$\begin{aligned} \left( A^{+} \otimes D_{+}\right) + \left( A^{-} \otimes D_{-}\right)&= \left( A \otimes \frac{D_{+} + D_{-}}{2}\right) + \left( \left( A^{+} - A^{-} \right) \otimes \frac{D_{+} - D_{-}}{2}\right) \nonumber \\&= \left( A \otimes D_{c}\right) + \left( \left| A\right| \otimes P^{-1} S\right) . \end{aligned}$$
(28)
In (28), \(\left| A\right| = A^{+} - A^{-}\) has non-negative eigenvalues and \(S = \frac{1}{2} \left( Q_{+} - Q_{-}\right) = \frac{1}{2} \left( Q_{+} + Q_{+}^{T}\right) \) is a positive semi-definite matrix which can be seen as an artificial dissipation term [7, 14]. As a consequence, the stability of (27) can be proved [6] in the same way as for the usual SBP operators.
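The identity (28) is purely algebraic and can be verified with arbitrary stand-in operators (a sketch; \(D_{\pm }\) below are random matrices, not actual SBP operators):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in upwind operators (any pair works: (28) is pure algebra).
Dp, Dm = rng.standard_normal((2, 4, 4))
A = np.array([[1., 2.], [2., 1.]])            # symmetric, eigenvalues 3, -1
lam, X = np.linalg.eigh(A)
Ap = X @ np.diag(np.maximum(lam, 0)) @ X.T    # A^+
Am = X @ np.diag(np.minimum(lam, 0)) @ X.T    # A^-
Dc, PinvS = (Dp + Dm) / 2, (Dp - Dm) / 2      # central part, dissipation
lhs = np.kron(Ap, Dp) + np.kron(Am, Dm)
rhs = np.kron(A, Dc) + np.kron(Ap - Am, PinvS)
assert np.allclose(lhs, rhs)
```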
Remark 11
The centered operator \(D_{c} = \frac{1}{2} \left( D_{+} + D_{-}\right) \) satisfies the usual SBP property. Indeed, by denoting \(D_{c}\) with \(P^{-1} Q\) we can write
$$\begin{aligned} P^{-1} Q = D_{c} = \frac{D_{+} + D_{-}}{2} = P^{-1} \left( \frac{Q_{+} + Q_{-}}{2} + \frac{B}{2} \right) \end{aligned}$$
and \(Q = \frac{1}{2} \left( Q_{+} + Q_{-} \right) + \frac{B}{2}\). By adding the transpose of Q to itself, we get
$$\begin{aligned} Q + Q^{T} = \frac{1}{2} \left( Q_{+} + Q_{-}^{T} + Q_{-} + Q_{+}^{T} \right) + B = B, \end{aligned}$$
which implies that \(D_{c}\) is a discrete operator fulfilling the usual SBP property [6].
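Remark 11 can also be checked numerically under the upwind SBP convention \(D_{\pm } = P^{-1}\left( Q_{\pm } + B/2\right) \) with \(Q_{-} = -Q_{+}^{T}\) (an assumption consistent with the computation above; \(Q_{+}\) and \(P\) below are random stand-ins, so only the algebraic identities are tested):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
Qp = rng.standard_normal((n, n))              # stand-in for Q_+
Qm = -Qp.T                                    # convention Q_+ + Q_-^T = 0
B = np.diag([-1.0] + [0.0] * (n - 2) + [1.0])
P = np.diag(rng.uniform(0.5, 1.5, n))         # diagonal SPD norm matrix
Dp = np.linalg.solve(P, Qp + B / 2)           # D_+ = P^{-1}(Q_+ + B/2)
Dm = np.linalg.solve(P, Qm + B / 2)
Dc = (Dp + Dm) / 2
Q = P @ Dc                                    # Q = (Q_+ + Q_-)/2 + B/2
assert np.allclose(Q + Q.T, B)                # the usual SBP property
S = (Qp - Qm) / 2                             # symmetric; PSD only for
assert np.allclose(S, S.T)                    # actual upwind operators
```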

5.1 Modifications of the Multigrid Procedure for Systems of Equations

To apply the multigrid algorithm to hyperbolic systems, we need to recast the semi-discrete formulation (27) in compact form, as we did for the scalar problem (2). By introducing \(L_{1}^{+} = D_{+} + P^{-1} \mathbf {e}_{0} \mathbf {e}_{0}^{T}\) and \(L_{1}^{-} = D_{-} - P^{-1} \mathbf {e}_{N} \mathbf {e}_{N}^{T}\), we can write \(L_{1} = \left( A^{+} \otimes L_{1}^{+}\right) + \left( A^{-} \otimes L_{1}^{-}\right) \) and (27) becomes
$$\begin{aligned} \varvec{U}_{t} + L_{1} \varvec{U} = \varvec{f} + \left( A^{+} \otimes P^{-1} \mathbf {e}_{0} \mathbf {e}_{0}^{T}\right) \widetilde{\varvec{g}}_{0} - \left( A^{-} \otimes P^{-1} \mathbf {e}_{N} \mathbf {e}_{N}^{T}\right) \widetilde{\varvec{g}}_{1} =: \mathbf {F}_{1}. \end{aligned}$$
(29)
In a similar manner as for the scalar case, convergence to steady-state can be accelerated by using the algorithm (11). For the system case, \(L_{2} = \left( A^{+} \otimes L_{2}^{+}\right) + \left( A^{-} \otimes L_{2}^{-}\right) \) indicates a coarse-grid counterpart of \(L_{1}\) and \(S_{i}\) is a matrix representing a time-marching procedure on \(\varOmega _{i}\), \(i = 1,2\). The fine grid update step for the system case can be built from the characteristic modes. Directly applying the same idea as for the scalar problem to the characteristic components, we obtain
$$\begin{aligned} \left( X^{T} \otimes I_{N+1}\right) \varvec{U}^{n+1}&= \left( X^{T} \otimes I_{N+1}\right) \overline{\varvec{U}}^{(1)} + \left( X^{T} \otimes I_{p}^{I,e}\right) \left( \overline{\varvec{U}}^{(2)} - \mathcal {R}_{u} \overline{\varvec{U}}^{(1)} \right) \nonumber \\&\quad + \left( \left[ X_{+}, 0\right] ^{T} \otimes I_{p}^{E,+} \right) \left( \overline{\varvec{U}}^{(2)} - \mathcal {R}_{u} \varvec{U}^{n} \right) \\&\quad + \left( \left[ 0, X_{-}\right] ^{T} \otimes I_{p}^{E,-} \right) \left( \overline{\varvec{U}}^{(2)} - \mathcal {R}_{u} \varvec{U}^{n} \right) . \end{aligned}$$
Here, we used the identity matrix \(I_{N+1} \in \mathbb {R}^{\left( N+1\right) \times \left( N + 1\right) }\) and \(I_{p}^{I,e}\) indicates the prolongation operator for the included nodes in the scalar case. Multiplying from the left by \(\left( X \otimes I_{N+1}\right) \) yields
$$\begin{aligned} \varvec{U}^{n+1}&= \overline{\varvec{U}}^{(1)} + \left( I_{s} \otimes I_{p}^{I,e}\right) \left( \overline{\varvec{U}}^{(2)} - \mathcal {R}_{u} \overline{\varvec{U}}^{(1)} \right) \nonumber \\&\quad + \left[ \left( I_{s}^{+} \otimes I_{p}^{E,+}\right) + \left( I_{s}^{-} \otimes I_{p}^{E,-}\right) \right] \left( \overline{\varvec{U}}^{(2)} - \mathcal {R}_{u} \varvec{U}^{n} \right) . \end{aligned}$$
This formula can be recast as the fine grid update step in (11) by using the following prolongation operators
$$\begin{aligned} I_{p}^{I} = I_{s} \otimes I_{p}^{I,e}, \quad \quad \quad I_{p}^{E} = \left( I_{s}^{+} \otimes I_{p}^{E,+}\right) + \left( I_{s}^{-} \otimes I_{p}^{E,-}\right) . \end{aligned}$$
Likewise, we must also consider the restriction operators
$$\begin{aligned} \mathcal {R}_{u} = I_{s} \otimes \mathcal {R}_{u}^{e}, \quad \quad \quad I_{r} = \left( I_{s}^{+} \otimes I_{r}^{+}\right) + \left( I_{s}^{-} \otimes I_{r}^{-}\right) , \end{aligned}$$
(30)
where \(\mathcal {R}_{u}^{e}\) indicates the injection operator for the scalar case. These restriction operators lead to the following
Lemma 1
The operators in (30) yield \(I_{r} L_{1} = L_{2} \mathcal {R}_{u}\) for the characteristic boundary conditions (26).
Proof
The result can be proved by computing \(I_{r} L_{1}\) as
$$\begin{aligned} I_{r} L_{1}&= \left[ \left( I_{s}^{+} \otimes I_{r}^{+}\right) + \left( I_{s}^{-} \otimes I_{r}^{-}\right) \right] \left[ \left( A^{+} \otimes L_{1}^{+}\right) + \left( A^{-} \otimes L_{1}^{-}\right) \right] \\&= \left( \underbrace{I_{s}^{+} A^{+}}_{= A^{+}} \otimes I_{r}^{+} L_{1}^{+}\right) + \left( \underbrace{I_{s}^{+} A^{-}}_{= 0} \otimes I_{r}^{+} L_{1}^{-}\right) \\&\quad + \left( \underbrace{I_{s}^{-} A^{+}}_{= 0} \otimes I_{r}^{-} L_{1}^{+}\right) + \left( \underbrace{I_{s}^{-} A^{-}}_{= A^{-}} \otimes I_{r}^{-} L_{1}^{-}\right) \\&= \left( A^{+} \otimes I_{r}^{+} L_{1}^{+}\right) + \left( A^{-} \otimes I_{r}^{-} L_{1}^{-}\right) . \end{aligned}$$
Since \(I_{r}^{+}\) and \(L_{1}^{+}\) are the same matrices as in the scalar problem, their product fulfills \(I_{r}^{+} L_{1}^{+} = L_{2}^{+} \mathcal {R}_{u}^{e}\) with \(L_{2}^{+} = D_{+,2} + P_{2}^{-1} \mathbf {e}_{0,2} \mathbf {e}_{0,2}^{T}\). Similarly, \(I_{r}^{-} L_{1}^{-} = L_{2}^{-} \mathcal {R}_{u}^{e}\), where \(L_{2}^{-} = D_{-,2} - P_{2}^{-1} \mathbf {e}_{N,2} \mathbf {e}_{N,2}^{T}\), and hence
$$\begin{aligned} I_{r} L_{1}&= \left( A^{+} \otimes L_{2}^{+} \mathcal {R}_{u}^{e}\right) + \left( A^{-} \otimes L_{2}^{-} \mathcal {R}_{u}^{e}\right) \\&= \left[ \left( A^{+} \otimes L_{2}^{+}\right) + \left( A^{-} \otimes L_{2}^{-}\right) \right] \left( I_{s} \otimes \mathcal {R}_{u}^{e}\right) = L_{2} \mathcal {R}_{u}. \end{aligned}$$
\(\square \)
Remark 12
The residual restriction \(I_{r}\) in (30), which consists of the projection matrices \(I_{s}^{\pm }\), has the advantage of making the mixed contributions in \(I_{r}L_{1}\) disappear. \(\square \)
Lemma 1 implies that the operators \(I_{r}^{+}\) and \(I_{r}^{-}\), previously computed for the scalar case, can be used to build a residual restriction \(I_{r}\) which verifies the approximation assumption (14). This result suggests that the algorithm for the system case behaves similarly to the scalar constant coefficient case for problems with characteristic boundary conditions such as (25)–(26).
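The statement of Lemma 1 can be verified numerically with stand-in operators, building \(I_{r}^{\pm } = L_{2}^{\pm } \mathcal {R}_{u}^{e} \left( L_{1}^{\pm }\right) ^{-1}\) directly from the definition (a sketch; the fine- and coarse-grid matrices below are random placeholders of compatible size):

```python
import numpy as np

rng = np.random.default_rng(2)
nf, nc = 5, 3                                  # fine / coarse sizes
Ru_e = np.zeros((nc, nf))
Ru_e[[0, 1, 2], [0, 2, 4]] = 1.0               # injection (even nodes)
L1p, L1m = rng.standard_normal((2, nf, nf))    # fine-grid stand-ins
L2p, L2m = rng.standard_normal((2, nc, nc))    # coarse-grid stand-ins
Irp = L2p @ Ru_e @ np.linalg.inv(L1p)          # I_r^+ = L_2^+ Ru (L_1^+)^{-1}
Irm = L2m @ Ru_e @ np.linalg.inv(L1m)
A = np.array([[1., 2.], [2., 1.]])             # symmetric, eigenvalues 3, -1
lam, X = np.linalg.eigh(A)
Ap = X @ np.diag(np.maximum(lam, 0)) @ X.T
Am = X @ np.diag(np.minimum(lam, 0)) @ X.T
Isp = X[:, lam > 0] @ X[:, lam > 0].T          # projections I_s^+
Ism = X[:, lam < 0] @ X[:, lam < 0].T          # and I_s^-
L1 = np.kron(Ap, L1p) + np.kron(Am, L1m)
L2 = np.kron(Ap, L2p) + np.kron(Am, L2m)
Ir = np.kron(Isp, Irp) + np.kron(Ism, Irm)     # (30)
Ru = np.kron(np.eye(2), Ru_e)
assert np.allclose(Ir @ L1, L2 @ Ru)           # Lemma 1
```

The mixed Kronecker terms vanish exactly as in the proof, since \(I_{s}^{\pm } A^{\mp } = 0\).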
Remark 13
The general L-grid algorithm (17) can be similarly used for the system case.

5.2 Numerical Results for Characteristic Boundary Conditions

To test the algorithm (17) for the system case, we study the linearized one dimensional symmetrized form of the compressible Euler equations [15]
$$\begin{aligned} \begin{array}{rclllll} \mathbf {U}_{t} + A\mathbf {U}_{x} &{}= &{}\mathbf {F}, &{}\quad &{}0< x< 1, &{}\quad &{}t > 0, \\ \mathbf {U}(x,0) &{}= &{}\mathbf {f}, &{} &{}0< x < 1, \end{array} \end{aligned}$$
(31)
In (31) we have
$$\begin{aligned} \mathbf {U} = \begin{bmatrix} \frac{\overline{c}}{\sqrt{\gamma } \ \overline{\rho }} \rho , u, \frac{1}{\overline{c} \sqrt{\gamma (\gamma - 1)}} \theta \end{bmatrix}^{T}, \qquad \qquad A = \begin{bmatrix} \overline{u} &{}\frac{\overline{c}}{\sqrt{\gamma }} &{}0 \\ \frac{\overline{c}}{\sqrt{\gamma }} &{}\overline{u} &{}\sqrt{\frac{\gamma - 1}{\gamma }} \overline{c} \\ 0 &{}\sqrt{\frac{\gamma - 1}{\gamma }} \overline{c} &{}\overline{u} \end{bmatrix} \end{aligned}$$
with \(\overline{u} = 1\), \(\overline{c} = 2\), \(\overline{\rho } = 1\) and \(\gamma = 1.4\). The variables \(\rho \), u and \(\theta \) are the density, velocity and temperature perturbations of the fluid, respectively. The characteristic boundary conditions (26) become
$$\begin{aligned} \begin{array}{rclll} \frac{\overline{c}}{\overline{\rho }} \rho - \frac{\theta }{\overline{c} \left( \gamma - 1\right) } &{}= &{}g_{01}, &{}\quad &{}x = 0, \\ \frac{\overline{c}}{\overline{\rho }} \rho + \gamma u + \frac{\theta }{\overline{c}} &{}= &{}g_{02}, &{} &{}x = 0, \\ \frac{\overline{c}}{\overline{\rho }} \rho - \gamma u + \frac{\theta }{\overline{c}} &{}= &{}g_{1}, &{} &{}x = 1. \end{array} \end{aligned}$$
(32)
Consider the manufactured solution
$$\begin{aligned} \rho = e^{-\nu \cos ^{2} \left( \xi \pi x\right) }, \quad \quad u = \cos \left( \xi \pi x\right) , \quad \quad \theta = e^{-\nu \sin ^{2}\left( \xi \pi x\right) } \end{aligned}$$
with \(\nu = 0.1\), \(\xi = 2\) and random initial data compatible with the boundary conditions. The multigrid convergence results for the 1st, 2nd, 3rd, 4th, 5th and 6th order upwind discretizations of (31), (32) are shown in Fig. 13. As expected, applying the algorithm in (17) makes wave propagation faster by a factor of \(2^{L}\). Note that overshoots occur when the two-grid algorithm is applied to the 2nd order discretization. As mentioned in Sect. 4.4, this side effect can be eliminated by using the SBP-preserving prolongation (22) in the fine-grid update step of (11).

5.3 Numerical Results for Other Boundary Conditions

Other sets of boundary conditions can also lead to a well-posed problem. Consider a rotation of the matrix \(A = Y \varOmega Y^{T}\) using
$$\begin{aligned} Y = \begin{bmatrix} 1 &{}0 &{}0 \\ \frac{\overline{c}}{\overline{u} \sqrt{\gamma }} &{}1 &{}0 \\ 0 &{}\sqrt{\frac{\gamma - 1}{\gamma }} \frac{\overline{u} \gamma \overline{c}}{\overline{u}^{2} \gamma - \overline{c}^{2}} &{}1 \end{bmatrix}, \quad \quad \varOmega = \text {diag}\left( \overline{u}, \frac{\overline{u}^{2} \gamma - \overline{c}^{2}}{\overline{u} \gamma }, \frac{\overline{u} \gamma \left( \overline{u}^{2} - \overline{c}^{2}\right) }{\overline{u}^{2} \gamma - \overline{c}^{2}}\right) . \end{aligned}$$
(33)
Since we consider a subsonic flow (\(\overline{u} < \overline{c}\)), two boundary conditions must be imposed at \(x = 0\) while the remaining boundary condition is set at \(x = 1\). In particular,
$$\begin{aligned} \begin{array}{rclll} Y_{+}^{T} \mathbf {u} &{}= &{}\mathbf {g}_{0}, &{}\quad &{}x = 0, \\ Y_{-}^{T} \mathbf {u} &{}= &{}\mathbf {g}_{1}, &{} &{}x = 1, \end{array} \end{aligned}$$
lead to the boundary conditions
$$\begin{aligned} \begin{array}{rclll} \frac{\rho }{\overline{\rho }} + \frac{u}{\overline{u}} &{}= &{}g_{0,1}, &{}\quad &{}x = 0, \\ \theta &{}= &{}g_{0,2}, &{} &{}x = 0, \\ u + \frac{\overline{u}}{\overline{u}^{2} \gamma - \overline{c}^{2}} \theta &{}= &{}g_{1}, &{} &{}x = 1, \end{array} \end{aligned}$$
(34)
and a well-posed problem. The semi-discrete formulation
$$\begin{aligned} \varvec{U}_{t} + \left( A^{+} \otimes D_{+}\right) \varvec{U} + \left( A^{-} \otimes D_{-}\right) \varvec{U} = \varvec{f}&- \left( Y_{+} \varOmega Y_{+}^{T} \otimes P^{-1}\mathbf {e}_{0}\mathbf {e}_{0}^{T}\right) (\varvec{U} - \widetilde{\varvec{g}}_{0}) \nonumber \\&+\left( Y_{-} \varOmega Y_{-}^{T} \otimes P^{-1} \mathbf {e}_{N}\mathbf {e}_{N}^{T}\right) (\varvec{U} - \widetilde{\varvec{g}}_{1}), \end{aligned}$$
(35)
can be shown to be stable [13] (it can be rewritten in terms of central difference operators \(D_{c}\) due to (28)).
Applying the L-grid algorithm (17) to (35) leads to the convergence results in Fig. 14. Since \(L_{1} = \left( A^{+} \otimes D_{+}\right) + \left( A^{-} \otimes D_{-}\right) + \left( Y_{+} \varOmega Y_{+}^{T} \otimes P^{-1} \mathbf {e}_{0} \mathbf {e}_{0}^{T}\right) - \left( Y_{-} \varOmega Y_{-}^{T} \otimes P^{-1} \mathbf {e}_{N} \mathbf {e}_{N}^{T}\right) \), Lemma 1 does not hold in this case, and \(I_{r}\) in (30) fulfills \(I_{r} L_{1} = L_{2} \mathcal {R}_{u}\) only at the interior nodes. Despite this fact, the multigrid procedure leads to faster convergence for all orders of accuracy. The convergence to steady-state is slower than with the characteristic boundary conditions in (32) (cf. Fig. 13), since the non-characteristic boundary conditions in (34) are reflecting [16]. In particular, the reflective effects seem to dominate over the wave propagation for both the single- and multi-grid iterative methods. Since the proposed multigrid method is designed to accelerate wave propagation, slower convergence is somewhat expected.
More generally, the following set of boundary conditions
$$\begin{aligned} \begin{array}{rclll} Y_{+}^{T} \mathbf {u} - R_{0} Y_{-}^{T} \mathbf {u} &{}= &{}\mathbf {g}_{0}, &{}\quad &{}x = 0, \\ Y_{-}^{T} \mathbf {u} - R_{1} Y_{+}^{T} \mathbf {u} &{}= &{}\mathbf {g}_{1}, &{} &{}x = 1, \end{array} \end{aligned}$$
(36)
lead to the well-posedness of (31) if
$$\begin{aligned} \varOmega ^{-} + R_{0}^{T} \varOmega ^{+} R_{0} < 0, \qquad \qquad \varOmega ^{+} + R_{1}^{T} \varOmega ^{-} R_{1} > 0. \end{aligned}$$
(37)
If (37) holds, the semi-discrete SBP-SAT approximation of (31), (36) is stable [13]. As an example, the matrices
$$\begin{aligned} R_{0} = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \qquad \qquad R_{1} = \begin{bmatrix} 1/3&\quad 1/2 \end{bmatrix} \end{aligned}$$
(38)
verify (37) for the rotation (33) and hence can be used in (36) to produce stable boundary conditions for (31). The convergence results for the multigrid algorithm in this case are shown in Fig. 15. Once again, the convergence to steady-state is accelerated proportionally to the number of grids used, but it is slower than with \(R_{0} = R_{1} = 0\) as in (34) (cf. Fig. 14).
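The well-posedness condition (37) for the matrices in (38) can be checked directly, using \(\varOmega \) from (33) with \(\overline{u} = 1\), \(\overline{c} = 2\) and \(\gamma = 1.4\):

```python
import numpy as np

ubar, cbar, gam = 1.0, 2.0, 1.4
Om = np.diag([ubar,
              (ubar**2 * gam - cbar**2) / (ubar * gam),
              ubar * gam * (ubar**2 - cbar**2) / (ubar**2 * gam - cbar**2)])
pos = np.diag(Om) > 0
Om_p = np.diag(np.diag(Om)[pos])              # Omega^+ (two positive eigs)
Om_m = np.diag(np.diag(Om)[~pos])             # Omega^- (one negative eig)
R0 = np.array([[1.0], [1.0]]) / np.sqrt(2)    # (38)
R1 = np.array([[1/3, 1/2]])                   # (38)
M0 = Om_m + R0.T @ Om_p @ R0                  # must be negative definite
M1 = Om_p + R1.T @ Om_m @ R1                  # must be positive definite
assert np.max(np.linalg.eigvalsh(M0)) < 0
assert np.min(np.linalg.eigvalsh(M1)) > 0
```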

6 Extension to Nonlinear Problems

Consider the nonlinear conservation law
$$\begin{aligned} \begin{array}{rclllll} u_{t} + f\left( u\right) _{x} &{}= &{}0, &{}\quad &{}0< x< 1, &{}\quad &{}t > 0, \\ u(x,0) &{}= &{}u^{0}(x), &{} &{}0< x < 1, \end{array} \end{aligned}$$
(39)
with appropriate Dirichlet boundary conditions. In order to extend the multigrid algorithm (17) to problems such as (39), we must modify both the residual restriction \(I_{r}\) and the prolongation for the excluded nodes \(I_{p}^{E}\). Since the characteristic directions change from grid point to grid point, these interpolation operators must be constructed by solving local initial-boundary-value problems
$$\begin{aligned} \begin{array}{rcllllll} u_{t} + f\left( u\right) _{x} &{}= &{}0, &{} &{}x_{L}< x< x_{R}, &{} &{}\overline{t}< t< t^{\text {new}}, \\ u(x_{L},t) &{}= &{}u_{L}, &{} &{}\overline{t}< t< t^{\text {new}}, \\ u(x_{R},t) &{}= &{}u_{R}, &{} &{}\overline{t}< t< t^{\text {new}}, \\ u(x,\overline{t}) &{}= &{}\overline{u}(x), &{} &{}x_{L}< x< x_{R}, \end{array} \ \overline{u}(x) = {\left\{ \begin{array}{ll} \begin{array}{rcll} u_{L}, &{} &{} &{}x_{L}< x< \overline{x}, \\ u_{R}, &{} &{} &{}\overline{x}< x < x_{R}. \end{array} \end{array}\right. } \end{aligned}$$
The extension of \(I_{r}\) and \(I_{p}^{E}\) to nonlinear problems was first presented in [4] for first-order schemes. Here, we generalize this technique to higher orders.

6.1 Interpolation Operators for the Nonlinear Case

We start by considering the residual restriction operator \(I_{r}\) in the two-grid algorithm (11). Since the nonlinear problem (39) can be rewritten as \(u_{t} + f'\left( u\right) u_{x} = 0\), the sign of \(f'\left( u\right) \) determines the direction of the wave propagation. As a consequence, \(I_{r}\left( \overline{\varvec{r}}^{(1)}\right) \) depends on the sign of \(f'\) evaluated at \(\overline{\mathbf {U}}^{(1)}\).
Consider the 2jth node, which is included in the coarse grid. If the sign of \(f'\) does not change in a neighborhood of this node, it is easy to identify the direction of the upwind-biased restriction. In particular, if both \(f'\left( \overline{U}_{2j-1}^{(1)}\right) \) and \(f'\left( \overline{U}_{2j+1}^{(1)}\right) \) are non-negative, the problem gives rise to a right-traveling wave and the residual restriction at \(x_{2j}^{(2)}\) is given by \(I_{r}^{+}\). Vice versa, if \(f'\left( \overline{U}_{2j-1}^{(1)}\right) \le 0\) and \(f'\left( \overline{U}_{2j+1}^{(1)}\right) \le 0\), then the problem propagates from right to left and the restriction at \(x_{2j}^{(2)}\) is computed with \(I_{r}^{-}\). In Fig. 16, these two cases are illustrated for the first-order residual restriction operators \(I_{r}^{\pm }\).
Sign changes of \(f'\) yield either shocks or rarefaction fans. For example, if \(f'\left( \overline{U}_{2j-1}^{(1)}\right) \) is positive and \(f'\left( \overline{U}_{2j+1}^{(1)}\right) \) is negative, a discontinuity is expected to occur in a neighborhood of \(x_{2j}^{(2)}\). In this case, the sign of \(f'\left( \overline{U}_{2j}^{(1)}\right) \) determines the direction of the residual restriction \(I_{r}\), see Fig. 17. Vice versa, rarefaction fans are experienced when \(f'\left( \overline{U}_{2j-1}^{(1)}\right) \) is negative and \(f'\left( \overline{U}_{2j+1}^{(1)}\right) \) is positive. In that case, the node \(x_{2j}^{(2)}\) is not reached by any traveling wave and the residual restriction acts locally as the injection operator \(\mathcal {R}_{u}\), see Fig. 18.
The resulting residual restriction acting on the coarse grid node \(x_{2j}^{(2)}\) is summarized below:
$$\begin{aligned} \left[ I_{r}\left( \overline{\varvec{r}}^{(1)}\right) \right] _{2j} = {\left\{ \begin{array}{ll} \begin{array}{rcrl} \left[ I_{r}^{+}\left( \overline{\varvec{r}}^{(1)}\right) \right] _{2j}, &{} &{}\text {if} &{}f'\left( \overline{U}_{2j-1}^{(1)}\right) \ge 0, f'\left( \overline{U}_{2j+1}^{(1)}\right) \ge 0, \\ \left[ I_{r}^{-}\left( \overline{\varvec{r}}^{(1)}\right) \right] _{2j}, &{} &{}\text {if} &{}f'\left( \overline{U}_{2j-1}^{(1)}\right) \le 0, f'\left( \overline{U}_{2j+1}^{(1)}\right) \le 0, \\ \left[ I_{r}^{+}\left( \overline{\varvec{r}}^{(1)}\right) \right] _{2j}, &{} &{}\text {if} &{}f'\left( \overline{U}_{2j-1}^{(1)}\right) \ge 0 \ge f'\left( \overline{U}_{2j+1}^{(1)}\right) , \\ &{} &{}\text {and} &{}f'\left( \overline{U}_{2j}^{(1)}\right) \ge 0, \\ \left[ I_{r}^{-}\left( \overline{\varvec{r}}^{(1)}\right) \right] _{2j}, &{} &{}\text {if} &{}f'\left( \overline{U}_{2j-1}^{(1)}\right) \ge 0 \ge f'\left( \overline{U}_{2j+1}^{(1)}\right) , \\ &{} &{}\text {and} &{}f'\left( \overline{U}_{2j}^{(1)}\right) \le 0, \\ \left[ \mathcal {R}_{u}\left( \overline{\varvec{r}}^{(1)}\right) \right] _{2j}, &{} &{}\text {if} &{}f'\left( \overline{U}_{2j-1}^{(1)}\right) \le 0 \le f'\left( \overline{U}_{2j+1}^{(1)}\right) . \end{array} \end{array}\right. } \end{aligned}$$
(40)
The same arguments hold for the prolongation operator on the excluded nodes \(I_{p}^{E}\) in the two-grid algorithm (11), whose direction depends on the sign of \(f'\) evaluated at \(\overline{\mathbf {U}}^{(2)}\). In particular, by introducing \(\mathbf {\overline{b}}^{(2)} = \overline{\varvec{U}}^{(2)} - \mathcal {R}_{u} \overline{\varvec{U}}^{(1)}\) we can write \(I_{p}^{E}\left( \overline{\varvec{b}}^{(2)}\right) \) on the excluded fine grid node \(x_{2j+1}^{(1)}\) as
$$\begin{aligned} \left[ I_{p}^{E}\left( \overline{\varvec{b}}^{(2)}\right) \right] _{2j+1} = {\left\{ \begin{array}{ll} \begin{array}{rcrl} \left[ I_{p}^{E,+}\left( \overline{\varvec{b}}^{(2)}\right) \right] _{2j+1}, &{} &{}\text {if} &{}f'\left( \overline{U}_{2j}^{(2)}\right) \ge 0, f'\left( \overline{U}_{2j+2}^{(2)}\right) \ge 0, \\ \left[ I_{p}^{E,-}\left( \overline{\varvec{b}}^{(2)}\right) \right] _{2j+1}, &{} &{}\text {if} &{}f'\left( \overline{U}_{2j}^{(2)}\right) \le 0, f'\left( \overline{U}_{2j+2}^{(2)}\right) \le 0, \\ \left[ I_{p}^{E,+}\left( \overline{\varvec{b}}^{(2)}\right) \right] _{2j+1}, &{} &{}\text {if} &{}f'\left( \overline{U}_{2j}^{(2)}\right) \ge 0 \ge f'\left( \overline{U}_{2j+2}^{(2)}\right) , \\ &{} &{}\text {and} &{}f'\left( \overline{U}_{2j+1}^{(1)}\right) \ge 0, \\ \left[ I_{p}^{E,-}\left( \overline{\varvec{b}}^{(2)}\right) \right] _{2j+1}, &{} &{}\text {if} &{}f'\left( \overline{U}_{2j}^{(2)}\right) \ge 0 \ge f'\left( \overline{U}_{2j+2}^{(2)}\right) , \\ &{} &{}\text {and} &{}f'\left( \overline{U}_{2j+1}^{(1)}\right) \le 0, \\ 0, &{} &{}\text {if} &{}f'\left( \overline{U}_{2j}^{(2)}\right) \le 0 \le f'\left( \overline{U}_{2j+2}^{(2)}\right) . \end{array} \end{array}\right. } \end{aligned}$$
(41)
Remark 14
The prolongation operator \(I_{p}^{E}\) acts on the excluded nodes \(x_{2j+1}^{(1)}\), which do not have a corresponding node on the coarse grid. Hence, the only available information to determine the direction of a shock at \(x_{2j+1}^{(1)}\) is the sign of \(f'\left( \overline{U}_{2j+1}^{(1)}\right) \).
The L-grid algorithm for nonlinear problems is based on the interpolation operators given by (40) and (41).
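The case selection in (40) can be sketched as follows (the candidate values \(\left[ I_{r}^{\pm }\overline{\varvec{r}}^{(1)}\right] _{2j}\) and \(\left[ \mathcal {R}_{u}\overline{\varvec{r}}^{(1)}\right] _{2j}\) are assumed precomputed; the interface is hypothetical, and the mirror logic for (41) is analogous):

```python
def restrict_at_node(j, fp, U, r_plus, r_minus, r_inj):
    """Select the residual-restriction value at coarse node x_{2j}
    according to (40). fp is f', U the fine-grid solution, and
    r_plus/r_minus/r_inj the precomputed candidate values
    [I_r^+ r]_{2j}, [I_r^- r]_{2j} and [R_u r]_{2j}."""
    a, b, c = fp(U[2*j - 1]), fp(U[2*j]), fp(U[2*j + 1])
    if a >= 0 and c >= 0:
        return r_plus                     # right-traveling wave
    if a <= 0 and c <= 0:
        return r_minus                    # left-traveling wave
    if a >= 0 >= c:                       # shock: use the local sign
        return r_plus if b >= 0 else r_minus
    return r_inj                          # rarefaction fan: injection
```

For the Burgers' equation, `fp` is simply the identity, since \(f'\left( u\right) = u\).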
Table 2
Truncation errors and orders of convergence for the upwind SBP operators applied to (43)

| \(N{\backslash }\)Discretization | 1st | 2nd | 3rd | 4th | 5th | 6th |
|---|---|---|---|---|---|---|
| 100 | 2.394e−2 | 6.892e−5 | 3.992e−5 | 8.254e−7 | 7.982e−7 | 2.065e−8 |
| 200 | 1.165e−2 | 1.741e−5 | 9.920e−6 | 1.038e−7 | 1.004e−7 | 1.308e−9 |
| 300 | 7.702e−3 | 7.763e−6 | 4.400e−6 | 3.082e−8 | 2.982e−8 | 2.596e−10 |
| 400 | 5.752e−3 | 4.374e−6 | 2.473e−6 | 1.302e−8 | 1.259e−8 | 8.238e−11 |
| 500 | 4.590e−3 | 2.802e−6 | 1.581e−6 | 6.668e−9 | 6.452e−9 | 3.381e−11 |
| Order | 1.0113 | 1.9954 | 2.003 | 2.997 | 2.997 | 3.991 |
| Expected | 1 | 2 | 2 | 3 | 3 | 4 |

The orders of convergence are computed for the manufactured solution \(e^{-x}\) and with the truncation errors of the two finest grids

6.2 A Stable Upwind SBP-SAT Spatial Discretization of the Burgers’ Equation

As an example of a nonlinear conservation law, we consider the Burgers' equation
$$\begin{aligned} \begin{array}{rclllll} u_{t} + u u_{x} &{}= &{}0, &{}\quad &{}0< x< 1, &{}\quad &{}t > 0, \\ u(x,0) &{}= &{}u^{0}(x), &{} &{}0< x < 1, \end{array} \end{aligned}$$
(42)
which can be recast as (39) with \(f\left( u\right) = u^{2}/2\). A stable spatial discretization of (42) with SBP upwind operators can be obtained by means of centered finite differences \(D_{c} = \frac{1}{2} \left( D_{+} + D_{-}\right) \) (see Remark 11) in combination with the artificial dissipation \(P^{-1} S = \frac{1}{2} \left( D_{+} - D_{-}\right) \) such that \(D_{\pm } = D_{c} \pm P^{-1}S\). In particular, we write
$$\begin{aligned} \begin{aligned} \mathbf {u}_{t} + \frac{1}{3} \mathcal {U} D_{c} \mathbf {u} + \frac{1}{3} D_{c} \mathcal {U} \mathbf {u} + P^{-1} S \mathbf {u} =&\,- P^{-1} \mathbf {e}_{0} a_{0}\left( u_{0}\right) \left( u_{0} - g_{0}\right) \\&+ P^{-1} \mathbf {e}_{N} a_{1}\left( u_{N}\right) \left( u_{N} - g_{N}\right) , \end{aligned} \end{aligned}$$
(43)
with \(\mathcal {U} = \text {diag}\left( \left[ u\left( x_{0}\right) , u\left( x_{1}\right) , \ldots , u\left( x_{N}\right) \right] \right) \), \(g_{0}\), \(g_{N}\) given boundary data and
$$\begin{aligned} a_{0}\left( u_{0}\right) = \frac{1}{3} \left( u_{0} + \left| u_{0} \right| \right) , \qquad \qquad a_{1}\left( u_{N}\right) = \frac{1}{3} \left( u_{N} - \left| u_{N} \right| \right) . \end{aligned}$$
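The right-hand side of (43) can be sketched as follows (the SBP pieces \(D_{c}\), \(P^{-1}S\) and the boundary vectors \(P^{-1}\mathbf {e}_{0}\), \(P^{-1}\mathbf {e}_{N}\) are assumed given; the function and argument names are illustrative):

```python
import numpy as np

def burgers_rhs(u, Dc, PinvS, Pinv_e0, Pinv_eN, g0, gN):
    """Right-hand side of the split form (43): skew-symmetric split
    of the flux, artificial dissipation, and SAT boundary penalties
    with the switches a_0, a_1."""
    a0 = (u[0] + abs(u[0])) / 3.0         # active only if u_0 > 0
    aN = (u[-1] - abs(u[-1])) / 3.0       # active only if u_N < 0
    return (- (u * (Dc @ u) + Dc @ (u * u)) / 3.0 - PinvS @ u
            - Pinv_e0 * a0 * (u[0] - g0) + Pinv_eN * aN * (u[-1] - gN))
```

Advancing this right-hand side with a Runge-Kutta method then gives the single-grid smoother used inside the multigrid cycle.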
Applying the energy-method to (43) with zero boundary data yields
$$\begin{aligned} \frac{1}{2} \frac{\mathrm {d}}{\mathrm {d}t} \left\| \mathbf {u}\right\| _{P}^{2} + \frac{1}{3} \mathbf {u}^{T} \mathcal {U} Q \mathbf {u} + \frac{1}{3} \mathbf {u}^{T} Q \mathcal {U} \mathbf {u} + \mathbf {u}^{T} S \mathbf {u} = - a_{0}\left( u_{0}\right) u_{0}^{2} + a_{1}\left( u_{N}\right) u_{N}^{2}. \end{aligned}$$
(44)
Since \(\mathbf {u}^{T} \mathcal {U} Q \mathbf {u} + \mathbf {u}^{T} Q \mathcal {U} \mathbf {u} = \frac{1}{2} \mathbf {u}^{T} \left[ \mathcal {U} \left( Q + Q^{T}\right) + \left( Q + Q^{T}\right) \mathcal {U} \right] \mathbf {u} = u_{N}^{3} - u_{0}^{3}\), the energy-rate (44) can be rewritten as
$$\begin{aligned} \frac{\mathrm {d}}{\mathrm {d}t} \left\| \mathbf {u}\right\| _{P}^{2} = - 2 \mathbf {u}^{T} S \mathbf {u} - \frac{2}{3} u_{0}^{2} \left| u_{0}\right| - \frac{2}{3} u_{N}^{2} \left| u_{N}\right| \end{aligned}$$
which leads to stability of (43) since S is positive semi-definite.
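The algebraic step used above, namely that \(\mathbf {u}^{T} \mathcal {U} Q \mathbf {u} + \mathbf {u}^{T} Q \mathcal {U} \mathbf {u}\) collapses to pure boundary terms, can be checked numerically. The sketch below (illustrative only; the matrix size and random data are arbitrary) builds a matrix \(Q\) satisfying the SBP property \(Q + Q^{T} = B = \text {diag}\left( -1,0,\ldots ,0,1\right) \) and verifies that the quadratic form equals \(u_{N}^{3} - u_{0}^{3}\).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8  # arbitrary grid size (N + 1 nodes)

# B encodes the boundary terms: B = diag(-1, 0, ..., 0, 1).
B = np.zeros((N + 1, N + 1))
B[0, 0], B[N, N] = -1.0, 1.0

# Any Q with Q + Q^T = B: the skew-symmetric part is arbitrary.
A = rng.standard_normal((N + 1, N + 1))
Q = 0.5 * (A - A.T) + 0.5 * B

u = rng.standard_normal(N + 1)
U = np.diag(u)

lhs = u @ U @ Q @ u + u @ Q @ U @ u
rhs = u[N] ** 3 - u[0] ** 3
# lhs - rhs vanishes to rounding error: only boundary terms survive
```

The skew-symmetric part of \(Q\) drops out of the symmetrized quadratic form, which is exactly why only the boundary matrix \(B\) contributes.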
The orders of convergence of the semi-discrete problem (43) are shown in Table 2. The truncation errors were computed by using the smooth steady manufactured solution \(u\left( x\right) = e^{-x}\), which solves the inhomogeneous problem \(u_{t} + u u_{x} = - e^{-2x}\). The manufactured solution was also used to provide boundary and initial data. For each test, the steady-state solution was computed with 5000 iterations of the fourth order Runge-Kutta method with \(\lambda = 0.5\). The orders of convergence match the expected order \(\left\lfloor {p/2}\right\rfloor + 1\) for a discretization involving pth-order SBP upwind operators [7, 8].
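The manufactured-solution methodology above can be reproduced with a much simpler stand-in scheme. The sketch below replaces the SBP upwind operator and SAT by a plain first-order backward difference with boundary injection, and forward Euler instead of RK4, so it illustrates only how the observed order \(\log _{2}\left( e_{h}/e_{h/2}\right) \) is measured, not the paper's discretization; the grid sizes and iteration counts are arbitrary choices.

```python
import numpy as np

def steady_error(N):
    """March u_t + u u_x = -exp(-2x), u(0) = 1 to steady state and
    return the max-norm error against the manufactured solution
    exp(-x).  Backward difference is the upwind choice since u > 0;
    boundary injection replaces the SAT of the paper."""
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    u = np.ones(N + 1)
    dt = 0.4 * h
    for _ in range(8000):
        ux = np.zeros_like(u)
        ux[1:] = (u[1:] - u[:-1]) / h
        u = u + dt * (-u * ux - np.exp(-2.0 * x))
        u[0] = 1.0  # inject the boundary data g = u(0) = 1
    return np.max(np.abs(u - np.exp(-x)))

e1, e2 = steady_error(40), steady_error(80)
order = np.log2(e1 / e2)  # observed order, close to 1 for this scheme
```

Halving \(h\) should roughly halve the error for this first-order stand-in; the same two-grid error quotient yields the higher orders reported in Table 2 for the SBP upwind operators.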

6.3 Numerical Results for the Burgers’ Equation

The convergence to steady-state of the spatial discretization (43) can be accelerated by using the multigrid algorithm (17) modified with the interpolation operators given in (40) and (41). Here, we consider the two test cases used in [4].
As a first example, we consider a shock problem with initial data
$$\begin{aligned} u^{0}\left( x\right) = {\left\{ \begin{array}{rl} 1, &{}\quad 0 \le x \le \frac{1}{4}, \\ 0, &{}\quad \frac{1}{4}< x < \frac{3}{4}, \\ -1, &{}\quad \frac{3}{4} \le x \le 1. \end{array}\right. } \end{aligned}$$
(45)
Both \(u^0\left( x\right) \) and the analytical steady-state solution for the Burgers’ equation (42) are shown in Fig. 19. Due to the discontinuity at \(x = \frac{1}{2}\), the steady-state solution for the semi-discrete problem (43) differs slightly from the analytical one in a neighborhood of \(x = \frac{1}{2}\), see Fig. 20. Likewise, the multigrid solutions exhibit minor differences with respect to each other. Note that we have not used any specific shock treatment in these calculations. Since the steady-state solution changes slightly from case to case, the convergence plots below show the relative error \(\mathbf {U}^{n+1} - \mathbf {U}^{n}\) in the P-norm. The multigrid convergence for the Burgers’ equation with the discontinuous initial data \(u^{0}\left( x\right) \) in (45) is shown in Fig. 21. As for linear problems, the multigrid algorithm leads to approximately \(2^{L}\) times faster wave propagation, except for the two-grid procedure applied to the second order discretization. In that case, the convergence is only twice as fast as the single-grid method.
Next, we consider the initial data \(u^{0}\left( x\right) = \cos \left( 5\pi x\right) \) which develops both shocks and rarefactions, leading to the same analytical steady-state solution as before, see Fig. 22. The multigrid convergence for this problem is shown in Fig. 23. Furthermore, in Fig. 24 we show the multigrid solution with five grid levels at different iterations for a fifth order discretization. The algorithm (17) with the nonlinear modifications in (40) and (41) leads to approximately \(2^{L}\) times faster wave propagation only in the first order case. For higher orders, the speedup factor drops to approximately \(2^{L-1}\).
Remark 15
The approximation assumption (14) holds only approximately for the nonlinear residual restriction in (40) applied to the SBP upwind discretization in (43). Nonetheless, we could in some cases obtain the optimal speedup factor. This result suggests that fulfilling the approximation assumption (14) is not strictly necessary in order to obtain L-grid procedures with \(2^{L}\) times faster wave propagation.

7 Two-Dimensional Problems

Consider the linear advection problem in two space dimensions
$$\begin{aligned} u_{t} + a u_{x} + b u_{y}&= f(x,y), \qquad 0< x< 1, \quad 0< y< 1, \quad t> 0, \\ u(x,0,t)&= g_{S}\left( x\right) , \qquad 0< x< 1, \quad t> 0, \\ u(0,y,t)&= g_{W}\left( y\right) , \qquad 0< y< 1, \quad t> 0, \\ u(x,y,0)&= u^{0}(x,y), \qquad 0< x< 1, \quad 0< y< 1, \end{aligned}$$
(46)
with a and b positive constants.
Remark 16
Multigrid techniques for two-dimensional hyperbolic problems require a double full-coarsening [5]; for details, see “Appendix B”.
As a first attempt to accelerate the convergence to steady-state, the residual restriction is computed as a linear combination of the one-dimensional operators as in [5], i.e.
$$\begin{aligned} I_{r} = \frac{1}{2} \left( I_{r,x} \otimes \mathcal {R}_{u,y} + \mathcal {R}_{u,x} \otimes I_{r,y}\right) . \end{aligned}$$
(47)
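The Kronecker construction in (47) is straightforward to assemble in practice. The sketch below uses illustrative placeholder one-dimensional operators, injection for \(\mathcal {R}_{u}\) and an upwind-biased averaging for \(I_{r}\), not the exact operators of the paper, and checks that the combined two-dimensional restriction has the expected shape and preserves constants.

```python
import numpy as np

n_f, n_c = 5, 3  # fine and coarse 1D grid sizes (placeholder values)

# Injection: coarse node j takes the value at fine node 2j.
R_u = np.zeros((n_c, n_f))
for j in range(n_c):
    R_u[j, 2 * j] = 1.0

# Upwind-biased residual restriction (illustrative): average of the
# coarse node and its left fine-grid neighbour; left boundary injected.
I_r = np.zeros((n_c, n_f))
I_r[0, 0] = 1.0
for j in range(1, n_c):
    I_r[j, 2 * j - 1] = 0.5
    I_r[j, 2 * j] = 0.5

# Two-dimensional restriction as a combination of 1D operators, cf. (47).
I_r2d = 0.5 * (np.kron(I_r, R_u) + np.kron(R_u, I_r))

# I_r2d.shape == (9, 25); constants are restricted to constants,
# since every row of both 1D operators sums to one.
```

Because each one-dimensional operator has unit row sums, both Kronecker products do as well, so their average maps the constant fine-grid vector to the constant coarse-grid vector.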
The resulting multigrid convergence to the manufactured steady-state solution \(u\left( x,y\right) = \cos \left( x^{2} + y^{2}\right) \) is shown in Fig. 25 for \(a = b = 1\) in (46). Convergence is achieved for one, two and three grid levels. For the three-level procedure, even though the wave propagation is accelerated by a factor of \(2^{3} = 8\) as expected, the convergence to steady-state is essentially the same as that of the two-level algorithm. Furthermore, overshoots occur for more than three grid levels. These spurious oscillations both slow down the propagation and lead to an inaccurate steady-state solution.
In contrast, the multigrid algorithm with residual restriction operators satisfying the approximation assumption (14) prevents overshoots in all the L-level procedures. Moreover, it provides \(2^{L}\) times faster wave propagation, see Fig. 26. However, the convergence rate deteriorates significantly for higher order fine grid discretizations, which makes the algorithm less effective. We envision that other interpolation operators in (17) might both overcome this drawback and prevent overshoots in the transient phase.

8 Conclusions and Future Work

In this paper, we have replicated and extended the first order accurate TVD-MG scheme [4, 5] to upwind Summation-By-Parts (SBP) based high-order accurate finite difference methods [6, 7] for linear hyperbolic problems. We have reinterpreted the upwind biased interpolation from a matrix-vector perspective, leading to more general residual restriction operators. These new operators, which satisfy the approximation assumption, allowed us to complement the wave propagation for high-order discretizations with the TVD-MG scheme for first order approximations on coarse grids.
Furthermore, we have shown that the restriction operators needed to accelerate wave propagation for the linear advection equation can be used to improve the procedure for hyperbolic systems. For characteristic boundary conditions, the new multigrid scheme leads to wave propagation with increasing speed as the number of grids increases. For other stable boundary conditions, the effect of this algorithm is to increase the convergence rate.
We have also extended the nonlinear modification of the TVD-MG scheme to higher order SBP-SAT upwind discretizations. Convergence is achieved for all the orders of accuracy, even when dealing with shocks and rarefaction fans. For almost all the one-dimensional (linear and nonlinear) convection-dominated problems that we have investigated, the L-grid algorithm led to \(2^{L}\) times faster wave propagation. The speedup factor dropped to \(2^{L-1}\) only for the multigrid procedure applied to the Burgers’ equation with both shocks and rarefactions.
Finally, two-dimensional problems have been studied. We have shown that fulfilling the approximation assumption is a sufficient condition for obtaining a speedup factor of \(2^{L}\) for the wave propagation. However, the resulting procedure is costly. More research is needed in order to design two-dimensional multigrid algorithms which prevent overshoots and do not require the approximation assumption to hold.

Acknowledgements

Open Access funding provided by Linköping University. This work was supported by VINNOVA, the Swedish Governmental Agency for Innovation Systems, under Contract Number 2013-01209. We thank the reviewers for valuable suggestions that significantly improved the article.
Open AccessThis article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


Appendix

A Numerical Results for a Variable Coefficient Advection Problem

The multigrid algorithm (17) can also be used to accelerate variable coefficient advection problems such as
$$\begin{aligned} u_{t} + a\left( x\right) u_{x}&= f(x), \qquad 0< x< 1, \quad t> 0, \\ u(0,t)&= g, \qquad t> 0, \\ u(x,0)&= u^{0}(x), \qquad 0< x< 1. \end{aligned}$$
(48)
Here, we consider a strictly positive coefficient \(a\left( x\right) \) and the upwind SBP-SAT discretization
$$\begin{aligned} \varvec{U}_{t} + A D_{+} \varvec{U}&= \varvec{f} - A P^{-1} (U_{0} - g)\, \varvec{e}_{0}, \qquad t > 0, \\ \varvec{U}(0)&= \varvec{U}^{0}, \end{aligned}$$
(49)
where \(A = \text {diag}\left( \left[ a\left( x_{0}\right) , a\left( x_{1}\right) , \ldots , a\left( x_{N}\right) \right] \right) \). As for the spatial discretization (2), this problem can be shown to be stable in the \(A^{-1}P\)-norm.
Consider the advection coefficient
$$\begin{aligned} a\left( x\right) = {\left\{ \begin{array}{ll} \frac{1}{2}\left( 2 + \cos \left( 10\pi x\right) \right) , &{}\quad 0 \le x < \frac{1}{2}, \\ 2x - \frac{1}{2}, &{}\quad \frac{1}{2} \le x \le 1, \end{array}\right. } \end{aligned}$$
(50)
the forcing function \(f\left( x\right) = 2 + x e^{-x^{2}}\) and the boundary data \(g = 1\). The advection coefficient (50) and the corresponding steady-state solution to (48) are shown in Fig. 27.
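Since \(a\left( x\right) > 0\), the steady-state solution of (48) can be obtained directly by integrating \(u_{x} = f\left( x\right) /a\left( x\right) \) from the left boundary. The sketch below does this with the trapezoidal rule (the grid resolution is an arbitrary choice) and confirms the expected qualitative behaviour: the steady solution starts at \(g = 1\) and increases monotonically, because both \(f\) and \(a\) are positive.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 401)
a = np.where(x < 0.5,
             0.5 * (2.0 + np.cos(10.0 * np.pi * x)),  # eq. (50), x < 1/2
             2.0 * x - 0.5)                           # eq. (50), x >= 1/2
f = 2.0 + x * np.exp(-x ** 2)

# Steady state: a(x) u_x = f(x)  =>  u(x) = g + int_0^x f(s)/a(s) ds.
integrand = f / a
du = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x)  # trapezoidal rule
u = 1.0 + np.concatenate(([0.0], np.cumsum(du)))

# u[0] == 1.0 (the boundary data g); np.diff(u) > 0 everywhere,
# so the steady-state profile is strictly increasing.
```

Note that both branches of (50) take the value \(\frac{1}{2}\) at \(x = \frac{1}{2}\), so the advection coefficient is continuous and bounded away from zero.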
For this problem, the approximation assumption (14) is not strictly satisfied. Nonetheless, the convergence results for the single- and L-grid procedure (17) in Fig. 28 show that the wave propagation is accelerated approximately by a factor of \(2^{L}\), as for the constant coefficient problem (1).

B The Double Full-Coarsening for Two-Dimensional Problems

Consider the one-dimensional advection problem (1) in a two-dimensional domain \(\left( 0,1\right) ^{2}\). The standard full-coarsening (shown on the left of Fig. 29) reveals that the wave propagation, which here carries the signal towards the right boundary, cannot be accelerated by multigrid along the grid lines consisting of only excluded nodes. To address this issue, a second full-coarsening is needed (see the middle of Fig. 29). By combining these two coarse grids, one obtains a double full-coarsening (on the right of Fig. 29) which allows for wave propagation through all the lines of the fine grid. As a consequence, two-dimensional problems require \(2^{k-1}\) grids at the kth level and a total of \(2^{L}-1\) grids in order to accelerate wave propagation with an L-level algorithm. Analogously, a three-dimensional problem would need \(2^{2\left( k-1\right) }\) grids at the kth level and a total of \(\left( 4^{L}-1\right) /3\) grids [5].
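The grid counts above are geometric sums, which a quick check confirms (the bound on the number of levels is arbitrary):

```python
# Grids per level: 2^(k-1) in 2D and 4^(k-1) in 3D; summing over the
# levels k = 1, ..., L gives the totals quoted in the text.
for L in range(1, 7):
    grids_2d = sum(2 ** (k - 1) for k in range(1, L + 1))
    grids_3d = sum(4 ** (k - 1) for k in range(1, L + 1))
    assert grids_2d == 2 ** L - 1
    assert grids_3d == (4 ** L - 1) // 3
```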
The double full-coarsening also allows for an easier interpretation of the fine-grid update step in (17) for two-dimensional problems: the solution at the excluded nodes is reconstructed by interpolating the known data from the neighboring points. As in the one-dimensional problems, the direction of the interpolation is determined by the direction of the wave propagation.
Note that the second full-coarsening in Fig. 29 cannot include the boundary nodes. Indeed, a staggered grid retaining the boundary nodes would lead to the same CFL limitation as the fine grid. Due to this constraint, only a limited number of additional interpolation operators must be derived in order to accelerate the convergence of \(u_{t} + u_{x} = f\) to steady-state. For this problem, which propagates the modes along the x-lines, we consider the interpolation operators
$$\begin{aligned} I_{r} = I_{r,x} \otimes \mathcal {R}_{u,y}, \quad I_{p}^{E} = I_{p,x}^{E} \otimes I_{p,y}^{I}, \quad I_{p}^{I} = I_{p,x}^{I} \otimes I_{p,y}^{I}, \quad \mathcal {R}_{u} = \mathcal {R}_{u,x} \otimes \mathcal {R}_{u,y}, \end{aligned}$$
(51)
where \(I_{r,\alpha }\), \(\mathcal {R}_{u,\alpha }\), \(I_{p,\alpha }^{I}\), \(I_{p,\alpha }^{E}\) are the one-dimensional interpolation operators in the coordinate direction \(\alpha \in \left\{ x,y\right\} \). These operators depend on the kind of coarsening considered. The four possible configurations are shown in Fig. 30. In Sect. 4 we have derived the interpolation operators only for the configuration where the boundary nodes are included in both the fine and coarse grids. With a similar procedure it is possible to obtain the residual restrictions for the other three cases, as well.
Remark 17
Our multigrid method makes use of high order wave propagation in the fine grid, while the first order discretization is needed for all the coarse grids. Hence, the interpolation operators conveying the information between grids involving an even number of nodes are uniquely determined by the direction of the wave propagation.
The modified two-dimensional L-level algorithm for \(u_{t} + u_{x} = f\) was studied for both the standard and the double full-coarsening, see Fig. 31. The standard coarsening fails to accelerate wave propagation, as expected. The double coarsening leads to \(2^{L}\) times faster wave propagation, as also demonstrated in [5] for first order discretizations. However, the convergence to steady-state is rather slow.
References
1.
Fedorenko, R.P.: Iterative methods for elliptic difference equations. Rus. Math. Surv. 28, 129–195 (1973)
2.
Hackbusch, W.: Multi-Grid Methods and Applications. Springer, Berlin (1985)
3.
4.
Wan, J.W.L., Jameson, A.: Monotonicity preserving multigrid time stepping schemes for conservation laws. Comput. Vis. Sci. 11, 41–58 (2008)
5.
Amarala, S., Wan, J.W.L.: Multigrid methods for systems of hyperbolic conservation laws. Multiscale Model. Simul. 11, 586–614 (2013)
6.
Svärd, M., Nordström, J.: Review of summation-by-parts schemes for initial–boundary-value problems. J. Comput. Phys. 268, 17–38 (2014)
8.
Del Rey Fernández, D.C., Hicken, J.E., Zingg, D.W.: Review of summation-by-parts operators with simultaneous approximation terms for the numerical solution of partial differential equations. Comput. Fluids 95, 171–196 (2014)
9.
Ruggiu, A.A., Nordström, J.: On pseudo-spectral time discretizations in summation-by-parts form. J. Comput. Phys. 360, 192–201 (2018)
10.
11.
Ruggiu, A.A., Nordström, J.: Eigenvalue analysis for summation-by-parts finite difference time discretizations. Technical Report LiTH-MAT-R–2019/09–SE, Linköping University Press (2019)
12.
Ruggiu, A.A., Nordström, J.: A new multigrid formulation for high order finite difference methods on summation-by-parts form. J. Comput. Phys. 359, 216–238 (2018)
13.
Nordström, J.: A roadmap to well posed and stable problems in computational physics. J. Sci. Comput. 71, 365–385 (2017)
14.
Mattsson, K., Svärd, M., Nordström, J.: Stable and accurate artificial dissipation. J. Sci. Comput. 21, 57–79 (2004)
15.
Abarbanel, S., Gottlieb, D.: Optimal time splitting for two- and three-dimensional Navier–Stokes equations with mixed derivatives. J. Comput. Phys. 41, 1–43 (1981)
16.
Engquist, B., Majda, A.: Absorbing boundary conditions for the numerical simulation of waves. Math. Comput. 31, 629–651 (1977)
Metadata
Title
Multigrid Schemes for High Order Discretizations of Hyperbolic Problems
Authors
Andrea A. Ruggiu
Jan Nordström
Publication date
01.03.2020
Publisher
Springer US
Published in
Journal of Scientific Computing / Issue 3/2020
Print ISSN: 0885-7474
Electronic ISSN: 1573-7691
DOI
https://doi.org/10.1007/s10915-020-01166-4
