Published in: BIT Numerical Mathematics 4/2018

Open Access: 5 July 2018

An accelerated Poisson solver based on multidomain spectral discretization

Authors: Tracy Babb, Adrianna Gillman, Sijia Hao, Per-Gunnar Martinsson




Abstract

This paper presents a numerical method for variable coefficient elliptic PDEs with mostly smooth solutions on two dimensional domains. The method works best for domains that can readily be mapped onto a rectangle, or a collection of nonoverlapping rectangles. The PDE is discretized via a multi-domain spectral collocation method of high local order (order 30 and higher have been tested and work well). Local mesh refinement results in highly accurate solutions even in the presence of local irregular behavior due to corner singularities, localized loads, etc. The system of linear equations attained upon discretization is solved using a direct (as opposed to iterative) solver with \(O(N^{1.5})\) complexity for the factorization stage and \(O(N \log N)\) complexity for the solve. The scheme is ideally suited for executing the elliptic solve required when parabolic problems are discretized via time-implicit techniques. In situations where the geometry remains unchanged between time-steps, very fast execution speeds are obtained since the solution operator for each implicit solve can be pre-computed.
Notes
Communicated by Elisabeth Larsson.

1 Introduction

This manuscript describes a direct solver for elliptic PDEs such as, e.g.,
$$\begin{aligned} \left\{ \begin{aligned}{}[Au](\varvec{x})&= g(\varvec{x}),\quad&\varvec{x}\in \varOmega ,\\ u(\varvec{x})&= f(\varvec{x}),\quad&\varvec{x}\in \varGamma , \end{aligned}\right. \end{aligned}$$
(1)
where A is a variable coefficient elliptic differential operator
$$\begin{aligned}{}[Au](\varvec{x})= & {} -\,c_{11}(\varvec{x})[\partial _{1}^{2}u](\varvec{x}) -2c_{12}(\varvec{x})[\partial _{1}\partial _{2}u](\varvec{x}) -c_{22}(\varvec{x})[\partial _{2}^{2}u](\varvec{x})\nonumber \\&+\,c_{1}(\varvec{x})[\partial _{1}u](\varvec{x}) +c_{2}(\varvec{x})[\partial _{2}u](\varvec{x}) +c(\varvec{x})\,u(\varvec{x}), \end{aligned}$$
(2)
where \(\varOmega \) is a rectangular domain in \(\mathbb {R}^{2}\) with boundary \(\varGamma = \partial \varOmega \), where all coefficient functions (c, \(c_{i}\), \(c_{ij}\)) are smooth, and where f and g are given functions. The generalization to domains that are either unions of rectangles, or can via local parameterizations be mapped to a union of rectangles is relatively straight-forward [11, Sec. 6.4]. The technique is specifically developed to accelerate implicit time stepping techniques for parabolic PDEs such as, e.g., the heat equation
$$\begin{aligned} \left\{ \begin{aligned} \Delta u(\varvec{x},t)&= \frac{\partial u}{\partial t}(\varvec{x},t),\quad&\varvec{x}\in \varOmega ,\ t> 0,\\ u(\varvec{x},t)&= f(\varvec{x},t),\quad&\varvec{x}\in \varGamma ,\ t > 0,\\ u(\varvec{x},0)&= g(\varvec{x}),\quad&\varvec{x}\in \varOmega . \end{aligned}\right. \end{aligned}$$
(3)
When (3) is discretized using an implicit time-stepping scheme (e.g. backwards Euler, Crank–Nicolson or the Transpose Method of Lines [1]), one is required to solve for each time-step an equation of the form (1), see Sect. 7.6. With the ability to combine very high order discretizations with a highly efficient means of time-stepping parabolic equations, we believe that the proposed method will be particularly well suited for numerically solving the Navier–Stokes equation at low Reynolds numbers.
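To make the connection concrete (a standard observation, not specific to this paper), applying backward Euler with time-step k to (3) requires at each step the solution of an elliptic problem of exactly the form (1):
$$\begin{aligned} \frac{u^{n+1} - u^{n}}{k} = \Delta u^{n+1} \quad \Longrightarrow \quad \left( -\Delta + \frac{1}{k}\right) u^{n+1} = \frac{1}{k}\,u^{n}, \end{aligned}$$
which is (2) with \(c_{11} = c_{22} = 1\), \(c = 1/k\), all other coefficients zero, and body load \(g = u^{n}/k\). Since the elliptic operator is the same at every step, its solution operator can be built once and then reused.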
The proposed solver is direct and builds an approximation to the solution operator of (1) via a hierarchical divide-and-conquer approach. It is conceptually related to classical nested dissection and multifrontal methods [2, 4], but provides tight integration between the direct solver and the discretization procedure. Locally, the scheme relies on high order spectral discretizations, and collocation of the differential operator. We observe that while classical nested dissection and multifrontal solvers slow down dramatically as the discretization order is increased [6, Table 3], the proposed method retains high efficiency regardless of the discretization order. The method is an evolution of the scheme described in [10, 11], and later refined in [5–7]. One novelty of the present work is that it describes how problems with body loads can be handled efficiently (the previous papers [5–7, 11] consider the case where \(g=0\) in (1)). A second novelty is that local mesh refinement is introduced to enable the method to accurately solve problems involving concentrated loads, singularities at re-entrant corners, and other phenomena that lead to localized loss of regularity in the solution (in contrast, the previous papers [5–7, 11] restrict attention to uniform grids).
The principal advantage of the proposed solver, compared to commonly used solvers for (1), is that it is direct (as opposed to iterative), which makes it particularly well suited for problems for which efficient pre-conditioners are difficult to find, such as, e.g., problems with oscillatory solutions. The cost to build the solution operator is in the most basic version of the scheme \(O(N^{3/2})\), where N is the number of discretization points. However, the practical efficiency of the solver is very high and the superlinear scaling is hardly visible until \(N > 10^7\). When the number of discretization points is higher than \(10^7\), the scheme can be modified to attain linear complexity by implementing techniques analogous to those described in [5]. Once the solution operator has been built, the time required to apply it to execute a solve given a boundary condition and a body load is either \(O(N\log N)\) for the basic scheme, or O(N) for the accelerated scheme, with a small scaling constant in either case. In Sect. 7, we demonstrate that even when \(N=10^6\), the time for solving (1) with a precomputed solution operator is approximately 1 s on a standard office laptop.
The discretization scheme we use is related to earlier work on spectral collocation methods on composite (“multi-domain”) grids, such as, e.g., [9, 14], and in particular Pfeiffer et al. [12]. The differences and similarities between the various techniques are discussed in detail in [11]. Our procedure is also conceptually related to so-called “reduction to the interface” methods, see [8] and the references therein. Such “interface” methods also use local solution operators defined on boundaries but typically rely on variational formulations of the PDE, rather than the collocation techniques that we employ.
The manuscript is organized as follows: Sect. 2 provides a high level description of the proposed method. Sections 3 and 4 describe the local discretization scheme. Section 5 describes the nested dissection type solver used to solve the system of linear equations resulting from the discretization. Section 6 describes how local mesh refinement can be introduced to the scheme. Section 7 provides results from numerical experiments that establish the efficiency of the proposed method.

2 Overview of algorithm

The proposed method is based on a hierarchical subdivision of the computational domain, as illustrated in Fig. 1 for the case of \(\varOmega = [0,1]^2\). In the uniform mesh version of the solver, the tree of boxes is built by recursively splitting the original box in halves. The splitting continues until each box is small enough that the solution, and its first and second derivatives, can accurately be resolved on a local tensor product grid of \(p\times p\) Chebyshev nodes (where, say, \(p=10\) or \(p=20\)).
Once the tree of boxes has been constructed, the actual solver consists of two stages. The first, or “build”, stage consists of a single upwards pass through the tree of boxes, starting with the leaves and going up to larger boxes. On each leaf, we place a local \(p\times p\) tensor product grid of Chebyshev nodes, and then discretize the restriction of (1) via a basic collocation scheme, as in [13]. By performing dense linear algebraic operations on matrices of size at most \(p^{2} \times p^{2}\), we form for each leaf a local solution operator and an approximation to the local Dirichlet-to-Neumann (DtN) operator, as described in Sect. 3. The build stage then continues with an upwards pass through the tree (going from smaller boxes to larger) where for each parent box, we construct approximations to its local solution operator and its local DtN operator by “merging” the corresponding operators for its children, cf. Sect. 4. The end result of the “build stage” is a hierarchical representation of the overall solution operator for (1). Once this solution operator is available, the “solve stage” takes as input a given boundary data f and a body load g, and constructs an approximation to the solution u valid throughout the domain via two passes through the tree: first an upwards pass (going from smaller boxes to larger) where “particular solutions” that satisfy the inhomogeneous equation are built, and then a downwards pass where the boundary conditions are corrected.
The global grid of collocation points used in the upwards and downwards passes is obtained by placing on the edge of each leaf a set of q Gaussian interpolation nodes (a.k.a. Legendre nodes). Observe that this parameter q is in principle distinct from the local parameter p which specifies the order of the local Chebyshev grids used to construct the solution operators on the leaves. However, we typically choose \(p=q+1\) or \(p=q+2\).
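The two node families can be tabulated directly with numpy. The sketch below (illustrative helper names, reference box \([-1,1]^2\), Chebyshev–Lobatto points assumed) produces the \(p\times p\) Chebyshev grid for a leaf and the \(4q\) Gaussian nodes on its four edges:

```python
import numpy as np

def chebyshev_grid(p):
    """p x p tensor product grid of Chebyshev (Lobatto) nodes on [-1,1]^2."""
    x = np.cos(np.pi * np.arange(p) / (p - 1))       # includes the endpoints +-1
    X, Y = np.meshgrid(x, x)
    return np.column_stack([X.ravel(), Y.ravel()])   # shape (p**2, 2)

def gauss_edges(q):
    """4q Gauss-Legendre nodes, q per edge of [-1,1]^2 (no corner nodes)."""
    y, _ = np.polynomial.legendre.leggauss(q)        # Legendre nodes on (-1,1)
    ones = np.ones(q)
    south = np.column_stack([y, -ones])
    east  = np.column_stack([ones, y])
    north = np.column_stack([y, ones])
    west  = np.column_stack([-ones, y])
    return np.vstack([south, east, north, west])     # shape (4q, 2)
```

Note that the Gaussian nodes are strictly interior to each edge, so neighboring leaves share edge nodes but no corner ambiguities arise.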

3 Leaf computation

In this section, we describe how to numerically build the various linear operators (represented as dense matrices) needed for a given leaf \(\varOmega _{\tau }\) in the hierarchical tree. To be precise, let u be the solution to the local equation
$$\begin{aligned} \left\{ \begin{aligned}{}[Au](\varvec{x})&= g(\varvec{x}),\quad&\varvec{x}\in \varOmega _{\tau },\\ u(\varvec{x})&= d(\varvec{x}),\quad&\varvec{x}\in \varGamma _{\tau }, \end{aligned}\right. \end{aligned}$$
(4)
for some given (local) Dirichlet data d. We then build approximations to two linear operators that both take d and g as their inputs. The first operator outputs the local solution u on \(\varOmega _{\tau }\) and the second outputs the boundary fluxes of u on \(\varGamma _{\tau }\).

3.1 Notation

We use two sets of interpolation nodes on the domain \(\varOmega _\tau \). First, let \(\{\varvec{y}_{j}\}_{j=1}^{4q}\) denote the nodes obtained by placing q Gaussian nodes on each of the four sides of \(\varOmega _{\tau }\). Next, let \(\{\varvec{x}_{i}\}_{i=1}^{p^2}\) denote the nodes in a \(p\times p\) Chebyshev grid on \(\varOmega _{\tau }\). We partition the index vector for the nodes in the Chebyshev grid as
$$\begin{aligned} \{1,2,\dots ,p^{2}\} = I_\mathrm{ce} \cup I_\mathrm{ci} \end{aligned}$$
so that \(I_\mathrm{ce}\) holds the (Chebyshev) exterior nodes and \(I_\mathrm{ci}\) holds the (Chebyshev) interior nodes. Let \(\varvec{\mathsf {u}}_\mathrm{c}\), \(\varvec{\mathsf {u}}_\mathrm{ci}\), \(\varvec{\mathsf {u}}_\mathrm{ce}\), and \(\varvec{\mathsf {u}}_\mathrm{ge}\) denote vectors holding approximations to the values of the solution u at the interpolation nodes:
$$\begin{aligned} \varvec{\mathsf {u}}_\mathrm{c}\approx & {} \{u(\varvec{x}_{i})\}_{i=1}^{p^{2}},\quad \varvec{\mathsf {u}}_\mathrm{ci} \approx \{u(\varvec{x}_{i})\}_{i\in I_\mathrm{ci}},\\ \varvec{\mathsf {u}}_\mathrm{ce}\approx & {} \{u(\varvec{x}_{i})\}_{i\in I_\mathrm{ce}},\quad \varvec{\mathsf {u}}_\mathrm{ge} \approx \{u(\varvec{y}_{j})\}_{j=1}^{4q}. \end{aligned}$$
Let \(\varvec{\mathsf {v}}_\mathrm{ge} \in \mathbb {R}^{4q}\) denote a vector holding boundary fluxes of u on the Gaussian grid, so that
$$\begin{aligned} \varvec{\mathsf {v}}_\mathrm{ge}(j)&\approx [\partial _{1}u](\varvec{y}_{j})\quad \text{ when }\ \varvec{y}_{j}\ \text{ lies } \text{ on } \text{ a } \text{ vertical } \text{ boundary },\\ \varvec{\mathsf {v}}_\mathrm{ge}(j)&\approx [\partial _{2}u](\varvec{y}_{j})\quad \text{ when }\ \varvec{y}_{j}\ \text{ lies } \text{ on } \text{ a } \text{ horizontal } \text{ boundary }. \end{aligned}$$
The sign convention for boundary fluxes means that a positive flux sometimes represents flow into the box and sometimes out of the box. Finally, let \(\varvec{\mathsf {d}}_\mathrm{ge}\) and \(\varvec{\mathsf {g}}_\mathrm{ci}\) denote tabulations of the boundary data and the body load,
$$\begin{aligned} \varvec{\mathsf {d}}_\mathrm{ge} = \{d(\varvec{y}_{j})\}_{j=1}^{4q},\quad \varvec{\mathsf {g}}_\mathrm{ci} = \{g(\varvec{x}_{i})\}_{i\in I_\mathrm{ci}}. \end{aligned}$$
Our objective is now to build the matrices that map \(\{\varvec{\mathsf {d}}_\mathrm{ge},\varvec{\mathsf {g}}_\mathrm{ci}\} \) to \(\varvec{\mathsf {v}}_\mathrm{ge}\) and \(\varvec{\mathsf {u}}_\mathrm{c}\).

3.2 Discretization on the Chebyshev grid

In order to execute the local solve on \(\varOmega _{\tau }\) of (4), we use a classical spectral collocation technique, as described, e.g., in [13]. To this end, let \(\varvec{\mathsf {D}}^{(1)}\) and \(\varvec{\mathsf {D}}^{(2)}\) denote the \(p^2 \times p^2\) spectral differentiation matrices on the \(p\times p\) Chebyshev grid (in other words, for any function u that is a tensor product of polynomials of degree at most \(p-1\), the differentiation matrix exactly maps a vector of collocated function values to the vector of collocated values of its derivative). Further, let \(\varvec{\mathsf {A}}\) denote the matrix
$$\begin{aligned} \varvec{\mathsf {A}} = - \varvec{\mathsf {C}}_{11}\left( \varvec{\mathsf {D}}^{(1)}\right) ^2 - 2\varvec{\mathsf {C}}_{12}\varvec{\mathsf {D}}^{(1)}\varvec{\mathsf {D}}^{(2)} - \varvec{\mathsf {C}}_{22}\left( \varvec{\mathsf {D}}^{(2)}\right) ^2 + \varvec{\mathsf {C}}_{1}\varvec{\mathsf {D}}^{(1)} + \varvec{\mathsf {C}}_{2}\varvec{\mathsf {D}}^{(2)} + \varvec{\mathsf {C}}, \end{aligned}$$
where \(\varvec{\mathsf {C}}_{ij}\) are diagonal matrices with entries \(\{c_{ij}(\varvec{x}_k)\}_{k=1}^{p^{2}}\), and \(\varvec{\mathsf {C}}_{i}\) and \(\varvec{\mathsf {C}}\) are defined analogously. Next, partition the matrix \(\varvec{\mathsf {A}}\) to separate interior and exterior nodes via
$$\begin{aligned} \varvec{\mathsf {A}}_\mathrm{ci,ci} = \varvec{\mathsf {A}}(I_\mathrm{ci},I_\mathrm{ci}), \quad \text{ and }\quad \varvec{\mathsf {A}}_\mathrm{ci,ce} = \varvec{\mathsf {A}}(I_\mathrm{ci},I_\mathrm{ce}). \end{aligned}$$
Collocating (4) at the interior nodes then results in the discretized equation
$$\begin{aligned} \varvec{\mathsf {A}}_\mathrm{ci,ci}\,\varvec{\mathsf {u}}_\mathrm{ci} + \varvec{\mathsf {A}}_\mathrm{ci,ce}\,\varvec{\mathsf {d}}_\mathrm{ce} = \varvec{\mathsf {g}}_\mathrm{ci}, \end{aligned}$$
(5)
where \(\varvec{\mathsf {d}}_\mathrm{ce} = \{d(\varvec{x}_{i})\}_{i \in I_\mathrm{ce}}\) encodes the local Dirichlet data d.

3.3 Solving on the Chebyshev grid

While solving (5) gives the solution at the interior Chebyshev nodes, it does not give a map to the boundary fluxes \(\varvec{\mathsf {v}}_\mathrm{ge}\) that we seek. These are found by following the classic approach of writing the solution as the superposition of the homogeneous and particular solutions. Specifically, the solution to (4) is split as
$$\begin{aligned} u = w + \phi \end{aligned}$$
where w is a particular solution
$$\begin{aligned} \left\{ \begin{aligned} Aw(\varvec{x})&= g(\varvec{x}),\quad&\varvec{x}\in \varOmega _{\tau },\\ w(\varvec{x})&= 0,\quad&\varvec{x}\in \varGamma _{\tau }, \end{aligned}\right. \end{aligned}$$
(6)
and where \(\phi \) is a homogeneous solution
$$\begin{aligned} \left\{ \begin{aligned} A\phi (\varvec{x})&= 0,\quad&\varvec{x}\in \varOmega _{\tau },\\ \phi (\varvec{x})&= d(\varvec{x}),\quad&\varvec{x}\in \varGamma _{\tau }. \end{aligned}\right. \end{aligned}$$
(7)
Discretizing (6) on the Chebyshev grid, and collocating at the internal nodes, we get the equation
$$\begin{aligned} \varvec{\mathsf {A}}_\mathrm{ci,ce}\varvec{\mathsf {w}}_\mathrm{ce} + \varvec{\mathsf {A}}_\mathrm{ci,ci}\varvec{\mathsf {w}}_\mathrm{ci} = \varvec{\mathsf {g}}_\mathrm{ci}. \end{aligned}$$
Observing that \(\varvec{\mathsf {w}}_\mathrm{ce} = 0\), the particular solution is given by
$$\begin{aligned} \varvec{\mathsf {w}}_\mathrm{c} = \left[ \begin{array}{c} \varvec{\mathsf {w}}_\mathrm{ce} \\ \varvec{\mathsf {w}}_\mathrm{ci} \end{array}\right] = \varvec{\mathsf {F}}_\mathrm{c,ci}\varvec{\mathsf {g}}_\mathrm{ci}, \quad \text{ where }\quad \varvec{\mathsf {F}}_\mathrm{c,ci} = \left[ \begin{array}{c} \varvec{\mathsf {0}} \\ \varvec{\mathsf {A}}_\mathrm{ci,ci}^{-1} \end{array}\right] . \end{aligned}$$
(8)
Analogously, the discretization of (7) on the Chebyshev grid yields
$$\begin{aligned} \varvec{\mathsf {A}}_\mathrm{ci,ce}{\phi }_\mathrm{ce} + \varvec{\mathsf {A}}_\mathrm{ci,ci}{\phi }_\mathrm{ci} = \varvec{\mathsf {0}}. \end{aligned}$$
Since \({\phi }_\mathrm{ce} = \varvec{\mathsf {d}}_\mathrm{ce}\), the homogeneous solution is given by
$$\begin{aligned} {\phi }_\mathrm{c} = \left[ \begin{array}{c} {\phi }_\mathrm{ce} \\ {\phi }_\mathrm{ci} \end{array}\right] = \left[ \begin{array}{c} \varvec{\mathsf {I}} \\ -\varvec{\mathsf {A}}_\mathrm{ci,ci}^{-1}\varvec{\mathsf {A}}_\mathrm{ci,ce} \end{array}\right] \varvec{\mathsf {d}}_\mathrm{ce}. \end{aligned}$$
(9)
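Once the blocks of \(\varvec{\mathsf {A}}\) are available, the maps in (8) and (9) are formed with a single factorization of \(\varvec{\mathsf {A}}_\mathrm{ci,ci}\). A minimal sketch (hypothetical helper name; exterior-before-interior row ordering as in (8) and (9)):

```python
import numpy as np

def local_solution_maps(A_cici, A_cice):
    """Return F (body load -> particular solution, eq. (8)) and
    Phi (Dirichlet data -> homogeneous solution, eq. (9))."""
    n_i, n_e = A_cici.shape[0], A_cice.shape[1]
    Ainv = np.linalg.inv(A_cici)          # in practice: store an LU factorization
    F = np.vstack([np.zeros((n_e, n_i)),  # w_ce = 0 on the exterior nodes
                   Ainv])                 # w_ci = A_cici^{-1} g_ci
    Phi = np.vstack([np.eye(n_e),         # phi_ce = d_ce
                     -Ainv @ A_cice])     # phi_ci from collocating (7)
    return F, Phi
```

The factorization of \(\varvec{\mathsf {A}}_\mathrm{ci,ci}\) is shared by the two maps, which is what makes handling the body load essentially free at the leaf level.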

3.4 Interpolation and differentiation

Section 3.3 describes how to locally solve the BVP (4) on the Chebyshev grid via the superposition of the homogeneous and particular solutions. This computation assumes that the local Dirichlet data d is given on the Chebyshev exterior nodes. In reality, this data will be provided on the Gaussian nodes, and we therefore need to introduce an interpolation operator that moves data between the different grids. To be precise, let \(\varvec{\mathsf {L}}_\mathrm{ce,ge}\) denote a matrix of size \(4(p-1)\times 4q\) that maps a given data vector \(\varvec{\mathsf {d}}_\mathrm{ge}\) to a different vector
$$\begin{aligned} \begin{array}{cccccccccc} \varvec{\mathsf {d}}_\mathrm{ce} &{}=&{} \varvec{\mathsf {L}}_\mathrm{ce,ge} &{} \varvec{\mathsf {d}}_\mathrm{ge}\\ 4(p-1)\times 1 &{}&{} 4(p-1) \times 4q &{} 4q \times 1 \end{array} \end{aligned}$$
(10)
as follows: An entry of \(\varvec{\mathsf {d}}_\mathrm{ce}\) corresponding to a non-corner edge node is defined via a standard interpolation from the Gaussian to the Chebyshev nodes on the local edge. An entry of \(\varvec{\mathsf {d}}_\mathrm{ce}\) corresponding to a corner node is defined as the average of the two values extrapolated from the Gaussian nodes on the two edges meeting at that corner (observe that apart from the four rows corresponding to the corner nodes, \(\varvec{\mathsf {L}}_\mathrm{ce,ge}\) is block diagonal, with one block per edge).
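The one-dimensional interpolation used on each edge is ordinary polynomial interpolation. The sketch below (an assumed helper, not the paper's code) builds the dense matrix whose \((i,j)\) entry is the j-th Lagrange basis polynomial of the source nodes evaluated at the i-th target node:

```python
import numpy as np

def interp_matrix(src, dst):
    """L[i, j] = j-th Lagrange basis polynomial of `src` evaluated at dst[i],
    so that L @ f(src) interpolates f from src to dst (exactly, for
    polynomials of degree < len(src))."""
    q = len(src)
    L = np.ones((len(dst), q))
    for j in range(q):
        for k in range(q):
            if k != j:
                L[:, j] *= (dst - src[k]) / (src[j] - src[k])
    return L
```

The same matrix evaluated at target points \(\pm 1\) performs the extrapolation to the corner nodes, since Lagrange interpolation is exact for polynomials regardless of whether the target lies inside the source nodes.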
Combining (8), (9), and (10), the solution to (4) on the Chebyshev grid is
$$\begin{aligned} \varvec{\mathsf {u}}_\mathrm{c} = \varvec{\mathsf {w}}_\mathrm{c} + {\phi }_\mathrm{c} = \varvec{\mathsf {F}}_\mathrm{c,ci}\,\varvec{\mathsf {g}}_\mathrm{ci} + \varvec{\mathsf {S}}_\mathrm{c,ge}\,\varvec{\mathsf {d}}_\mathrm{ge}, \quad \text{ where }\quad \varvec{\mathsf {S}}_\mathrm{c,ge} := \left[ \begin{array}{c} \varvec{\mathsf {I}}_\mathrm{ce,ce} \\ -\varvec{\mathsf {A}}_\mathrm{ci,ci}^{-1}\varvec{\mathsf {A}}_\mathrm{ci,ce} \end{array}\right] \,\varvec{\mathsf {L}}_\mathrm{ce,ge}.\nonumber \\ \end{aligned}$$
(11)
All that remains is to determine the vector \(\varvec{\mathsf {v}}_\mathrm{ge}\) of boundary fluxes on the Gaussian nodes. To this end, let us define a combined interpolation and differentiation matrix \(\varvec{\mathsf {D}}_\mathrm{ge,c}\) of size \(4q \times p^{2}\) via
$$\begin{aligned} \varvec{\mathsf {D}}_\mathrm{ge,c} = \left[ \begin{array}{c} \varvec{\mathsf {L}}_\mathrm{loc}\,\varvec{\mathsf {D}}_{2}(I_\mathrm{s},:) \\ \varvec{\mathsf {L}}_\mathrm{loc}\,\varvec{\mathsf {D}}_{1}(I_\mathrm{e},:) \\ \varvec{\mathsf {L}}_\mathrm{loc}\,\varvec{\mathsf {D}}_{2}(I_\mathrm{n},:) \\ \varvec{\mathsf {L}}_\mathrm{loc}\,\varvec{\mathsf {D}}_{1}(I_\mathrm{w},:) \end{array}\right] , \end{aligned}$$
where \(\varvec{\mathsf {L}}_\mathrm{loc}\) is a \(q\times p\) interpolation matrix from a set of p Chebyshev nodes to a set of q Gaussian nodes, and where \(I_\mathrm{s},\,I_\mathrm{e},\,I_\mathrm{n},\,I_\mathrm{w}\) are vectors of length p with entries corresponding to the points on the south, east, north, and west sides of the exterior nodes in the Chebyshev grid. By differentiating the local solution on the Chebyshev grid defined by (11), the boundary fluxes \(\varvec{\mathsf {v}}_\mathrm{ge}\) are given by
$$\begin{aligned} \varvec{\mathsf {v}}_\mathrm{ge} = \varvec{\mathsf {H}}_\mathrm{ge,ci}\,\varvec{\mathsf {g}}_\mathrm{ci} + \varvec{\mathsf {T}}_\mathrm{ge,ge}\,\varvec{\mathsf {d}}_\mathrm{ge}, \end{aligned}$$
(12)
where
$$\begin{aligned} \varvec{\mathsf {H}}_\mathrm{ge,ci} = \varvec{\mathsf {D}}_\mathrm{ge,c}\,\varvec{\mathsf {F}}_\mathrm{c,ci} \quad \text{ and }\quad \varvec{\mathsf {T}}_\mathrm{ge,ge} = \varvec{\mathsf {D}}_\mathrm{ge,c}\,\varvec{\mathsf {S}}_\mathrm{c,ge}. \end{aligned}$$
(13)

4 Merging two leaves

Consider a rectangular box \(\tau \) consisting of two leaf boxes \(\alpha \) and \(\beta \), and suppose that all local operators for \(\alpha \) and \(\beta \) defined in Sect. 3 have been computed. Our objective is now to construct the Dirichlet-to-Neumann operator for \(\tau \) from the local operators for its children. In this operation, only sets of Gaussian nodes on the boundaries will take part, cf. Fig. 2. We group these nodes into three sets, indexed by vectors \(J_{1}\), \(J_{2}\), and \(J_{3}\), defined as follows:
\(J_{1}\)
Edge nodes of box \(\alpha \) that are not shared with box \(\beta \).
\(J_{2}\)
Edge nodes of box \(\beta \) that are not shared with box \(\alpha \).
\(J_{3}\)
Edge nodes that line the interior edge shared by \(\alpha \) and \(\beta \).
We also define
$$\begin{aligned} J_\mathrm{ge}^{\tau } = J_{1} \cup J_{2} \quad \text{ and }\quad J_\mathrm{gi}^{\tau } = J_{3} \end{aligned}$$
as the exterior and interior nodes for the parent box \(\tau \). Finally, we let \(\varvec{\mathsf {h}}^{\alpha }, \varvec{\mathsf {h}}^{\beta } \in \mathbb {R}^{4q}\) denote two vectors that hold the boundary fluxes for the two local particular solutions \(w^{\alpha }\) and \(w^{\beta }\), cf. (12),
$$\begin{aligned} \varvec{\mathsf {h}}^{\alpha }_\mathrm{ge} = \varvec{\mathsf {H}}_\mathrm{ge,ci}^{\alpha }\,\varvec{\mathsf {g}}_\mathrm{ci}^{\alpha }, \quad \text{ and }\quad \varvec{\mathsf {h}}^{\beta }_\mathrm{ge} = \varvec{\mathsf {H}}_\mathrm{ge,ci}^{\beta }\,\varvec{\mathsf {g}}_\mathrm{ci}^{\beta }. \end{aligned}$$
(14)
Then the equilibrium equations for each of the two leaves can be written
$$\begin{aligned} \varvec{\mathsf {v}}^{\alpha }_\mathrm{ge} = \varvec{\mathsf {T}}^{\alpha }_\mathrm{ge,ge}\,\varvec{\mathsf {u}}^{\alpha }_\mathrm{ge} + \varvec{\mathsf {h}}^{\alpha }_\mathrm{ge}, \quad \text{ and }\quad \varvec{\mathsf {v}}^{\beta }_\mathrm{ge} = \varvec{\mathsf {T}}^{\beta }_\mathrm{ge,ge}\,\varvec{\mathsf {u}}^{\beta }_\mathrm{ge} + \varvec{\mathsf {h}}^{\beta }_\mathrm{ge}. \end{aligned}$$
(15)
Now partition the two equations in (15) using the notation shown in Fig. 2:
$$\begin{aligned} \left[ \begin{array}{c} \varvec{\mathsf {v}}_{1}\\ \varvec{\mathsf {v}}_{3} \end{array}\right] =&\left[ \begin{array}{cc} \varvec{\mathsf {T}}_{1,1}^{\alpha } &{}\quad \varvec{\mathsf {T}}_{1,3}^{\alpha } \\ \varvec{\mathsf {T}}_{3,1}^{\alpha } &{}\quad \varvec{\mathsf {T}}_{3,3}^{\alpha } \end{array}\right] \, \left[ \begin{array}{c} \varvec{\mathsf {u}}_{1}\\ \varvec{\mathsf {u}}_{3} \end{array}\right] + \left[ \begin{array}{c} \varvec{\mathsf {h}}_{1}^{\alpha } \\ \varvec{\mathsf {h}}_{3}^{\alpha } \end{array}\right] , \end{aligned}$$
(16)
$$\begin{aligned} \left[ \begin{array}{c} \varvec{\mathsf {v}}_{2}\\ \varvec{\mathsf {v}}_{3} \end{array}\right] =&\left[ \begin{array}{cc} \varvec{\mathsf {T}}_{2,2}^{\beta } &{}\quad \varvec{\mathsf {T}}_{2,3}^{\beta } \\ \varvec{\mathsf {T}}_{3,2}^{\beta } &{}\quad \varvec{\mathsf {T}}_{3,3}^{\beta } \end{array}\right] \, \left[ \begin{array}{c} \varvec{\mathsf {u}}_{2}\\ \varvec{\mathsf {u}}_{3} \end{array}\right] + \left[ \begin{array}{c} \varvec{\mathsf {h}}_{2}^{\beta } \\ \varvec{\mathsf {h}}_{3}^{\beta } \end{array}\right] . \end{aligned}$$
(17)
[the subscript “ge” is suppressed in (16) and (17) since all nodes involved are Gaussian exterior nodes]. Combine the two equations for \(\varvec{\mathsf {v}}_{3}\) in (16) and (17) to obtain the equation
$$\begin{aligned} \varvec{\mathsf {T}}_{3,1}^{\alpha }\,\varvec{\mathsf {u}}_{1} + \varvec{\mathsf {T}}_{3,3}^{\alpha }\,\varvec{\mathsf {u}}_{3} + \varvec{\mathsf {h}}_{3}^{\alpha } = \varvec{\mathsf {T}}_{3,2}^{\beta }\,\varvec{\mathsf {u}}_{2} + \varvec{\mathsf {T}}_{3,3}^{\beta }\,\varvec{\mathsf {u}}_{3} + \varvec{\mathsf {h}}_{3}^{\beta }. \end{aligned}$$
This gives
$$\begin{aligned} \varvec{\mathsf {u}}_{3} = \left( \varvec{\mathsf {T}}^{\alpha }_{3,3} - \varvec{\mathsf {T}}^{\beta }_{3,3}\right) ^{-1} \left( \varvec{\mathsf {T}}^{\beta }_{3,2}\varvec{\mathsf {u}}_{2} -\varvec{\mathsf {T}}^{\alpha }_{3,1}\varvec{\mathsf {u}}_{1} +\varvec{\mathsf {h}}_{3}^{\beta } -\varvec{\mathsf {h}}_{3}^{\alpha } \right) \end{aligned}$$
(18)
Using the relation (18) in combination with (16), we find that
$$\begin{aligned} \left[ \begin{array}{c} \varvec{\mathsf {v}}_{1} \\ \varvec{\mathsf {v}}_{2} \end{array}\right] =&\ \left( \left[ \begin{array}{cc} \varvec{\mathsf {T}}_{1,1}^{\alpha } &{}\quad \varvec{\mathsf {0}} \\ \varvec{\mathsf {0}} &{}\quad \varvec{\mathsf {T}}_{2,2}^{\beta } \end{array}\right] + \left[ \begin{array}{c} \varvec{\mathsf {T}}_{1,3}^{\alpha } \\ \varvec{\mathsf {T}}_{2,3}^{\beta } \end{array}\right] \, \left( \varvec{\mathsf {T}}^{\alpha }_{3,3} - \varvec{\mathsf {T}}^{\beta }_{3,3}\right) ^{-1} \left[ -\varvec{\mathsf {T}}^{\alpha }_{3,1}\ \big |\ \varvec{\mathsf {T}}^{\beta }_{3,2}\right] \right) \left[ \begin{array}{c} \varvec{\mathsf {u}}_{1} \\ \varvec{\mathsf {u}}_{2} \end{array}\right] \\&+\left[ \begin{array}{c} \varvec{\mathsf {h}}^{\alpha }_{1} \\ \varvec{\mathsf {h}}^{\beta }_{2} \end{array}\right] + \left[ \begin{array}{c} \varvec{\mathsf {T}}_{1,3}^{\alpha } \\ \varvec{\mathsf {T}}_{2,3}^{\beta } \end{array}\right] \, \left( \varvec{\mathsf {T}}^{\alpha }_{3,3} - \varvec{\mathsf {T}}^{\beta }_{3,3}\right) ^{-1} \left( \varvec{\mathsf {h}}_{3}^{\beta }-\varvec{\mathsf {h}}_{3}^{\alpha }\right) . \end{aligned}$$
We now define the operators
$$\begin{aligned} \varvec{\mathsf {X}}^{\tau }_\mathrm{gi,gi} =&\ \left( \varvec{\mathsf {T}}^{\alpha }_{3,3} - \varvec{\mathsf {T}}^{\beta }_{3,3}\right) ^{-1},\\ \varvec{\mathsf {S}}^{\tau }_\mathrm{gi,ge} =&\ \left( \varvec{\mathsf {T}}^{\alpha }_{3,3} - \varvec{\mathsf {T}}^{\beta }_{3,3}\right) ^{-1}\left[ -\varvec{\mathsf {T}}^{\alpha }_{3,1}\ \big |\ \varvec{\mathsf {T}}^{\beta }_{3,2}\right] = \varvec{\mathsf {X}}^{\tau }_\mathrm{gi,gi}\left[ -\varvec{\mathsf {T}}^{\alpha }_{3,1}\ \big |\ \varvec{\mathsf {T}}^{\beta }_{3,2}\right] ,\\ \varvec{\mathsf {T}}^{\tau }_\mathrm{ge,ge} =&\left[ \begin{array}{cc} \varvec{\mathsf {T}}_{1,1}^{\alpha } &{}\quad \varvec{\mathsf {0}} \\ \varvec{\mathsf {0}} &{}\quad \varvec{\mathsf {T}}_{2,2}^{\beta } \end{array}\right] + \left[ \begin{array}{c} \varvec{\mathsf {T}}_{1,3}^{\alpha } \\ \varvec{\mathsf {T}}_{2,3}^{\beta } \end{array}\right] \,\left( \varvec{\mathsf {T}}^{\alpha }_{3,3} - \varvec{\mathsf {T}}^{\beta }_{3,3}\right) ^{-1}\left[ -\varvec{\mathsf {T}}^{\alpha }_{3,1}\ \big |\ \varvec{\mathsf {T}}^{\beta }_{3,2}\right] \\ =&\left[ \begin{array}{cc} \varvec{\mathsf {T}}_{1,1}^{\alpha } &{}\quad \varvec{\mathsf {0}}\\ \varvec{\mathsf {0}} &{}\quad \varvec{\mathsf {T}}_{2,2}^{\beta } \end{array}\right] + \left[ \begin{array}{c} \varvec{\mathsf {T}}_{1,3}^{\alpha } \\ \varvec{\mathsf {T}}_{2,3}^{\beta } \end{array}\right] \,\varvec{\mathsf {S}}_\mathrm{gi,ge}^{\tau }. \end{aligned}$$
The approximate solution \(\varvec{\mathsf {u}}_\mathrm{gi}^{\tau }\) on the shared edge is constructed in two stages. In the upward pass, the approximate boundary fluxes for the parent are computed by
$$\begin{aligned} \varvec{\mathsf {h}}_\mathrm{ge}^{\tau } =\ \left[ \begin{array}{c} \varvec{\mathsf {h}}^{\alpha }_{1} \\ \varvec{\mathsf {h}}^{\beta }_{2} \end{array}\right] +\left[ \begin{array}{c} \varvec{\mathsf {T}}_{1,3}^{\alpha } \\ \varvec{\mathsf {T}}_{2,3}^{\beta } \end{array}\right] \varvec{\mathsf {w}}_\mathrm{gi}^{\tau }, \end{aligned}$$
(19)
where \(\varvec{\mathsf {w}}_\mathrm{gi}^{\tau } =\ \varvec{\mathsf {X}}_\mathrm{gi,gi}^{\tau }\bigl (\varvec{\mathsf {h}}_{3}^{\beta }-\varvec{\mathsf {h}}_{3}^{\alpha }\bigr )\), followed by a downward pass
$$\begin{aligned} \varvec{\mathsf {u}}^{\tau }_\mathrm{gi} = \varvec{\mathsf {S}}^{\tau }_\mathrm{gi,ge}\varvec{\mathsf {u}}^{\tau }_\mathrm{ge} + \varvec{\mathsf {w}}_\mathrm{gi}^{\tau }. \end{aligned}$$
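In linear algebra terms, the merge is a handful of dense block operations. The sketch below (illustrative names; blocks partitioned by the index sets \(J_{1}\), \(J_{2}\), \(J_{3}\) of Fig. 2) forms \(\varvec{\mathsf {X}}^{\tau }\), \(\varvec{\mathsf {S}}^{\tau }\), \(\varvec{\mathsf {T}}^{\tau }\), and the interface vectors \(\varvec{\mathsf {w}}_\mathrm{gi}^{\tau }\) and \(\varvec{\mathsf {h}}_\mathrm{ge}^{\tau }\):

```python
import numpy as np

def merge(Ta11, Ta13, Ta31, Ta33, Tb22, Tb23, Tb32, Tb33, ha1, ha3, hb2, hb3):
    """Merge the DtN operators and particular fluxes of two children."""
    X = np.linalg.inv(Ta33 - Tb33)                   # X^tau_{gi,gi}
    S = X @ np.hstack([-Ta31, Tb32])                 # S^tau_{gi,ge}, cf. (18)
    stack = np.vstack([Ta13, Tb23])
    n1, n2 = Ta11.shape[0], Tb22.shape[0]
    T = np.block([[Ta11, np.zeros((n1, n2))],
                  [np.zeros((n2, n1)), Tb22]]) + stack @ S   # T^tau_{ge,ge}
    w_gi = X @ (hb3 - ha3)                           # particular solution on J3
    h_ge = np.concatenate([ha1, hb2]) + stack @ w_gi # parent fluxes, cf. (19)
    return X, S, T, w_gi, h_ge
```

Since every operator involved is a small dense matrix over the interface nodes, the cost of a merge is dominated by the inversion of the \(|J_{3}| \times |J_{3}|\) interface block.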
Remark 1
(Physical interpretation of merge) The quantities \(\varvec{\mathsf {w}}^{\tau }_\mathrm{gi}\) and \(\varvec{\mathsf {h}}_\mathrm{ge}^{\tau }\) have a simple physical meaning. The vector \(\varvec{\mathsf {w}}_\mathrm{gi}^{\tau }\) introduced above is simply a tabulation of the particular solution \(w^{\tau }\) associated with \(\tau \) on the interior boundary \(\varGamma _{3}\), and \(\varvec{\mathsf {h}}_\mathrm{ge}^{\tau }\) holds the boundary fluxes (normal derivatives) of \(w^{\tau }\) on the exterior Gaussian nodes. To be precise, \(w^{\tau }\) is the solution to the inhomogeneous problem, cf. (6):
$$\begin{aligned} \left\{ \begin{aligned} Aw^{\tau }(\varvec{x}) =&\ g(\varvec{x}),\quad&\varvec{x}\in \varOmega _{\tau },\\ w^{\tau }(\varvec{x}) =&\ 0,\qquad&\varvec{x}\in \varGamma _{\tau }. \end{aligned}\right. \end{aligned}$$
(20)
We can re-derive the formula for \(w|_{\varGamma _{3}}\) using the original mathematical operators as follows: First observe that for \(\varvec{x}\in \varOmega ^{\alpha }\), we have \(A(w^{\tau } - w^{\alpha }) = g - g = 0\), so the DtN operator \(T^{\alpha }\) applies to the function \(w^{\tau } - w^{\alpha }\):
$$\begin{aligned} T^{\alpha }_{31}\left( w_{1}^{\tau } - w_{1}^{\alpha }\right) + T^{\alpha }_{33}\left( w_{3}^{\tau } - w_{3}^{\alpha }\right) = \left( \partial _{n}w^{\tau }\right) |_{3} - \left( \partial _{n}w^{\alpha }\right) |_{3} \end{aligned}$$
Use that \(w_{1}^{\tau } = w_{1}^{\alpha } = w_{3}^{\alpha } = 0\), and that \((\partial _{n}w^{\alpha })|_{3} = h_{3}^{\alpha }\) to get
$$\begin{aligned} T^{\alpha }_{33}w_{3}^{\tau } = \left( \partial _{n}w^{\tau }\right) |_{3} - h_{3}^{\alpha }. \end{aligned}$$
(21)
Analogously, we get
$$\begin{aligned} T^{\beta }_{33}w_{3}^{\tau } = \left( \partial _{n}w^{\tau }\right) |_{3} - h_{3}^{\beta }. \end{aligned}$$
(22)
Combine (21) and (22) to eliminate \((\partial _{n}w^{\tau })|_{3}\) and obtain
$$\begin{aligned} \left( T^{\alpha }_{33} - T^{\beta }_{33}\right) w_{3}^{\tau } = - h_{3}^{\alpha } + h_{3}^{\beta }. \end{aligned}$$
Observe that in effect, we can write the particular solution \(w^{\tau }\) as
$$\begin{aligned} w^{\tau }(\varvec{x}) = \left\{ \begin{array}{ll} w^{\alpha }(\varvec{x}) + \hat{w}^{\tau }(\varvec{x})&{}\quad \varvec{x}\in \varOmega ^{\alpha },\\ w^{\beta }(\varvec{x}) + \hat{w}^{\tau }(\varvec{x})&{}\quad \varvec{x}\in \varOmega ^{\beta }, \end{array}\right. \end{aligned}$$
The function \(w^{\tau }\) must of course be smooth across \(\varGamma _{3}\), so the function \(\hat{w}^{\tau }\) must have a jump that exactly offsets the discrepancy in the derivatives of \(w^{\alpha }\) and \(w^{\beta }\). This jump is precisely of size \(h^{\alpha } - h^{\beta }\).

5 The full solver for a uniform grid

5.1 Notation

Suppose that we are given a rectangular domain \(\varOmega \), which has hierarchically been split into a binary tree of successively smaller patches, as described in Sect. 2. We then define two sets of interpolation nodes. First, \(\{\varvec{x}_{i}\}_{i=1}^{M}\) denotes the set of nodes obtained by placing a \(p\times p\) tensor product grid of Chebyshev nodes on each leaf in the tree. For a leaf \(\tau \), let \(I_\mathrm{c}^{\tau }\) denote an index vector pointing to the nodes in \(\{\varvec{x}_{i}\}_{i=1}^{M}\) that lie on leaf \(\tau \). Thus the index vector for the set of nodes in \(\tau \) can be partitioned into exterior and interior nodes as follows
$$\begin{aligned} I_\mathrm{c}^{\tau } = I_\mathrm{ce}^{\tau } \cup I_\mathrm{ci}^{\tau }. \end{aligned}$$
The second set of interpolation nodes \(\{\varvec{y}_{j}\}_{j=1}^{N}\) is obtained by placing a set of q Gaussian (“Legendre”) interpolation nodes on each edge of each leaf. For a node \(\tau \) in the tree (either a leaf or a parent), let \(I_\mathrm{ge}^{\tau }\) denote an index vector that marks all Gaussian nodes that lie on the boundary of \(\varOmega _{\tau }\). For a parent node \(\tau \), let \(I_\mathrm{gi}^{\tau }\) denote the Gaussian nodes that are interior to \(\tau \), but exterior to its two children (as in Sect. 4).
Once the collocation points have been set up, we introduce a vector \(\varvec{\mathsf {u}} \in {\mathbb {R}}^{N}\) holding approximations to the values of the potential u at the Gaussian collocation points,
$$\begin{aligned} \varvec{\mathsf {u}}(j) \approx u\left( \varvec{y}_{j}\right) ,\quad j = 1,2,3,\dots ,N. \end{aligned}$$
We refer to subsets of this vector using the short-hand
$$\begin{aligned} \varvec{\mathsf {u}}_\mathrm{ge}^{\tau } = \varvec{\mathsf {u}}\left( I_\mathrm{ge}^{\tau }\right) , \text{ and }\quad \varvec{\mathsf {u}}_\mathrm{gi}^{\tau } = \varvec{\mathsf {u}}\left( I_\mathrm{gi}^{\tau }\right) \end{aligned}$$
for the exterior and interior nodes respectively. At the very end of the algorithm, approximations to u on the local Chebyshev tensor product grids are constructed. For a leaf node \(\tau \), let the vectors \(\varvec{\mathsf {u}}_\mathrm{c}^{\tau }\), \(\varvec{\mathsf {u}}_\mathrm{ce}^{\tau }\), and \(\varvec{\mathsf {u}}_\mathrm{ci}^{\tau }\) denote the vectors holding approximations to the potential on sets of collocation points in the Chebyshev grid marked by \(I_\mathrm{c}^{\tau }\), \(I_\mathrm{ce}^{\tau }\), and \(I_\mathrm{ci}^{\tau }\), respectively. Observe that these vectors are not subvectors of \(\varvec{\mathsf {u}}\).
Before proceeding to the description of the algorithm, we introduce two sets of auxiliary vectors. First, for any parent node \(\tau \), let the vector \(\varvec{\mathsf {w}}_\mathrm{gi}^{\tau }\) denote the computed values of the local particular solution \(w^{\tau }\) that solves (6) on \(\varOmega _{\tau }\), as tabulated on the interior line marked by \(I_\mathrm{gi}^{\tau }\). Also, define \(\varvec{\mathsf {h}}^{\tau }\) as the approximate boundary fluxes of \(w^{\tau }\) as defined by (14) for a leaf and by (19) for a parent.
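To make the leaf-level bookkeeping concrete, the following short sketch (in Python rather than the Matlab of our implementation; the function names are ours) builds the \(p\times p\) Chebyshev grid for one leaf on the reference patch \([-1,1]^{2}\) and partitions its node indices into the exterior set \(I_\mathrm{ce}^{\tau }\) and the interior set \(I_\mathrm{ci}^{\tau }\):

```python
import numpy as np

def cheb_grid(p):
    """p x p tensor product grid of Chebyshev nodes on the patch [-1, 1]^2."""
    x = np.cos(np.pi * np.arange(p) / (p - 1))       # Chebyshev points on [-1, 1]
    X, Y = np.meshgrid(x, x, indexing="ij")
    return np.column_stack([X.ravel(), Y.ravel()])   # shape (p*p, 2)

def split_indices(pts):
    """Partition node indices into exterior (on the patch edge) and interior."""
    on_edge = (np.abs(np.abs(pts[:, 0]) - 1.0) < 1e-14) | \
              (np.abs(np.abs(pts[:, 1]) - 1.0) < 1e-14)
    return np.flatnonzero(on_edge), np.flatnonzero(~on_edge)   # I_ce, I_ci

p = 8
I_ce, I_ci = split_indices(cheb_grid(p))
```

A \(p\times p\) grid has \(4(p-1)\) boundary nodes and \((p-2)^{2}\) interior nodes, which is the partition used throughout this section.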

5.2 The build stage

Once the domain is partitioned into a hierarchical tree, we execute a “build stage” in which the following matrices are constructed for each box \(\tau \):
\(\varvec{\mathsf {S}}^{\tau }\)
For a box \(\tau \), the solution operator that maps Dirichlet data \(\psi \) on \(\partial \varOmega _{\tau }\) to values of u at the interior nodes. In other words, \(\varvec{\mathsf {u}}^{\tau }_\mathrm{c} = \varvec{\mathsf {S}}^{\tau }_\mathrm{c,ge} {\psi }^{\tau }_\mathrm{ge}\) on a leaf or \(\varvec{\mathsf {u}}^{\tau }_\mathrm{gi} = \varvec{\mathsf {S}}^{\tau }_\mathrm{gi,ge}{{\psi }}^{\tau }_\mathrm{ge}\) on a parent box.
\(\varvec{\mathsf {T}}^{\tau }\)
For a box \(\tau \), the matrix that maps Dirichlet data \(\psi \) on \(\partial \varOmega _{\tau }\) to the flux v on the boundary. In other words, \(\varvec{\mathsf {v}}^{\tau }_\mathrm{ge} = \varvec{\mathsf {T}}^{\tau }_\mathrm{ge,ge}{{\psi }}^{\tau }_\mathrm{ge}\).
\(\varvec{\mathsf {F}}^{\tau }\)
For a leaf box, the matrix that maps the body load to the particular solution on the interior of the leaf, assuming the Dirichlet data is zero on the boundary. In other words, \(\varvec{\mathsf {w}}^{\tau }_\mathrm{c} = \varvec{\mathsf {F}}^{\tau }_\mathrm{c,ci}\varvec{\mathsf {g}}^{\tau }_\mathrm{ci}\).
\(\varvec{\mathsf {H}}^{\tau }\)
For a leaf box, the matrix that maps the body load to the flux on the boundary of the leaf. In other words \(\varvec{\mathsf {h}}^{\tau }_\mathrm{ge} = \varvec{\mathsf {H}}^{\tau }_\mathrm{ge,ci}\varvec{\mathsf {g}}^{\tau }_\mathrm{ci}\).
\(\varvec{\mathsf {X}}^{\tau }\)
For a parent box \(\tau \) with children \(\alpha \) and \(\beta \), the matrix \(\varvec{\mathsf {X}}^{\tau }\) maps the mismatch in the boundary fluxes of the children's particular solutions on the interface interior to the parent to the values of the particular solution on the interior nodes. In other words, \(\varvec{\mathsf {w}}_\mathrm{gi}^{\tau } = \varvec{\mathsf {X}}^{\tau }_\mathrm{gi,gi}(\varvec{\mathsf {h}}_{3}^{\beta }- \varvec{\mathsf {h}}_{3}^{\alpha })\).
The build stage consists of a single sweep over all nodes in the tree. Any ordering of the boxes in which a parent box is processed after its children can be used. For each leaf box \(\tau \), approximations \(\varvec{\mathsf {S}}^{\tau }\) and \(\varvec{\mathsf {F}}^{\tau }\) to the solution operators for the homogeneous and particular solutions are constructed. Additionally, approximations \(\varvec{\mathsf {T}}^{\tau }\) and \(\varvec{\mathsf {H}}^{\tau }\) to the local continuum operators that map the Dirichlet boundary data and the body load, respectively, to the boundary fluxes of the corresponding solution are constructed using the procedure described in Sect. 3. For a parent box \(\tau \) with children \(\alpha \) and \(\beta \), we construct the solution operators \(\varvec{\mathsf {X}}_\mathrm{gi,gi}^{\tau }\) and \(\varvec{\mathsf {S}}_\mathrm{gi,ge}^{\tau }\), and the DtN operator \(\varvec{\mathsf {T}}_\mathrm{ge,ge}^{\tau }\) via the process described in Sect. 4. Algorithm 1 in Fig. 3 summarizes the build stage.
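To illustrate the merge step of the build stage, the sketch below (in Python; the function and variable names are ours) assembles the parent operators from the DtN operators of the two children, following the merge of Sect. 4 under the assumption that all fluxes are tabulated in a fixed reference direction. In particular, \(\varvec{\mathsf {X}} = (\varvec{\mathsf {T}}_{33}^{\alpha } - \varvec{\mathsf {T}}_{33}^{\beta })^{-1}\), as in (21)–(22). As a sanity check, the exact DtN operators of \(u''=0\) on [0, 1] and [1, 2] are merged, which must reproduce the DtN operator of [0, 2]:

```python
import numpy as np

def merge_dtn(Ta, Tb, ia, i3a, ib, i3b):
    """Merge the DtN operators of two children sharing an interface.

    Ta, Tb   : child DtN matrices (boundary values -> boundary fluxes, with
               fluxes tabulated in a fixed reference direction).
    ia, ib   : indices of the child nodes that remain on the parent boundary.
    i3a, i3b : indices of the child nodes on the shared interface.
    Returns (S, T): parent solution and DtN operators, as in Sect. 4.
    """
    T11, T13 = Ta[np.ix_(ia, ia)], Ta[np.ix_(ia, i3a)]
    T31, T33a = Ta[np.ix_(i3a, ia)], Ta[np.ix_(i3a, i3a)]
    T22, T23 = Tb[np.ix_(ib, ib)], Tb[np.ix_(ib, i3b)]
    T32, T33b = Tb[np.ix_(i3b, ib)], Tb[np.ix_(i3b, i3b)]
    # Equate the interface fluxes of the two children and solve for the
    # interface values:  (T33a - T33b) u3 = -T31 u1 + T32 u2.
    X = np.linalg.inv(T33a - T33b)
    S = X @ np.hstack([-T31, T32])          # boundary data -> interface values
    n1, n2 = len(ia), len(ib)
    T = np.zeros((n1 + n2, n1 + n2))        # parent DtN operator
    T[:n1, :n1], T[n1:, n1:] = T11, T22
    T += np.vstack([T13, T23]) @ S          # correction from interface values
    return S, T

# 1D sanity check: the exact DtN map of u'' = 0 on [a, b] sends
# (u(a), u(b)) to (u'(a), u'(b)).
def dtn_1d(a, b):
    return np.array([[-1.0, 1.0], [-1.0, 1.0]]) / (b - a)

S, T = merge_dtn(dtn_1d(0, 1), dtn_1d(1, 2),
                 ia=[0], i3a=[1], ib=[1], i3b=[0])
```

The recovered solution operator says that the interface value \(u(1)\) is the average of the two boundary values, and the merged DtN operator agrees with the exact operator for the union interval.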

5.3 The solve stage

After the “build stage” described in Algorithm 1 has been completed, an approximation to the global solution operator of (1) has been computed and represented through the various matrices (\(\varvec{\mathsf {H}}^{\tau }\), \(\varvec{\mathsf {F}}^{\tau }\), etc.) described in Sect. 5.2. Then, given specific boundary data f and a body load g, the corresponding solution u to (1) can be found through a “solve stage” that involves two passes through the tree: first an upwards pass (from smaller to larger boxes), and then a downwards pass. In the upwards pass, the particular solutions and the normal derivatives of the particular solutions are computed and stored in the vectors \(\varvec{\mathsf {w}}\) and \(\varvec{\mathsf {h}}\), respectively. Then, by sweeping down the tree, applying the solution operators \(\varvec{\mathsf {S}}\) to the Dirichlet boundary data for each box \(\tau \) and adding the particular solution, the approximate solution \(\varvec{\mathsf {u}}\) is computed. Algorithm 2 summarizes the solve stage (Fig. 4).
We observe that the vectors \(\varvec{\mathsf {w}}_\mathrm{gi}^{\tau }\) can all be stored in a single global vector \(\varvec{\mathsf {w}} \in \mathbb {R}^{N}\). Since each Gaussian collocation node \(\varvec{y}_{j}\) that lies in the interior of \(\varOmega \) belongs to precisely one index vector \(I_\mathrm{gi}^{\tau }\), we simply find that \(\varvec{\mathsf {w}}_\mathrm{gi}^{\tau } = \varvec{\mathsf {w}}(I_\mathrm{gi}^{\tau })\).
Remark 2
(Efficient storage of particular solutions) For notational simplicity, we describe Algorithm 2 (the “solve stage”) in a way that assumes that for each box \(\tau \), we explicitly store a corresponding vector \(\varvec{\mathsf {h}}_\mathrm{ge}^{\tau }\) that represents the boundary fluxes for the local particular solution. In practice, these vectors can all be stored on a global vector \(\varvec{\mathsf {h}} \in \mathbb {R}^{N}\), in a manner similar to how we store \(\varvec{\mathsf {w}}\). For any box \(\tau \) with children \(\alpha \) and \(\beta \), we store on \(\varvec{\mathsf {h}}\) the difference between the boundary fluxes, so that \(\varvec{\mathsf {h}}(I_\mathrm{gi}^{\tau }) = -\varvec{\mathsf {h}}_{3}^{\alpha } + \varvec{\mathsf {h}}_{3}^{\beta }\). In other words, as soon as the boundary fluxes have been computed for a box \(\alpha \), we add its contributions to the vector \(\varvec{\mathsf {h}}(I_\mathrm{ge}^{\alpha })\) with the appropriate signs and then delete it. This becomes notationally less clear, but is actually simpler to code.

5.4 Algorithmic complexity

In this section, we determine the asymptotic complexity of the direct solver. The analysis is very similar to the analysis seen in [6] for no body load. Let \(N_\mathrm{leaf} = 4q\) denote the number of Gaussian nodes on the boundary of a leaf box, and let \(p^2\) denote the number of Chebyshev nodes used in the leaf computation. In the asymptotic analysis, we set \(p=q+2\), so that \(p \sim q\). Let L denote the number of levels in the binary tree, so that there are \(2^L\) leaf boxes. Thus the total number of discretization nodes N is approximately \(2^Lq^{2}\).
In processing a leaf, the dominant cost involves matrix inversion (or factorization followed by triangular solve) and matrix-matrix multiplications. The largest matrices encountered are of size \(O(q^2) \times O(q^2)\), making the cost to process one leaf \(O(q^6)\). Since there are \(N/q^2\) leaf boxes, the total cost of pre-computing approximate DtN operators for all the bottom level is \(\sim (N/q^2)\times q^6\sim N\,q^4\).
Next, consider the process of merging two boxes, as described in Sect. 4. On level \(\ell \), there are \(2^{\ell }\) boxes, each of which has \(O(2^{-\ell /2}N^{0.5})\) nodes along its boundary (on level \(\ell =2\), there are 4 boxes that each have side length one half of the original side length; on level \(\ell =4\), there are 16 boxes that have side length one quarter of the original side length; etc.). The cost of executing a merge is dominated by the cost to perform matrix algebra (inversion, multiplication, etc.) on dense matrices of size \(2^{-\ell /2}N^{0.5}\times 2^{-\ell /2}N^{0.5}\). This makes the total cost for the merges in the upwards pass
$$\begin{aligned} \sum _{\ell =1}^{L} 2^\ell \times \left( 2^{-\ell /2}N^{0.5}\right) ^3 \sim \sum _{\ell =1}^{L} 2^\ell \times 2^{-3\ell /2}N^{1.5} \sim N^{1.5}\sum _{\ell =1}^{L} 2^{-\ell /2} \sim N^{1.5}. \end{aligned}$$
Finally, consider the cost of the solve stage (Algorithm 2). We first apply at each of the \(2^{L}\) leaves the operators \(\varvec{\mathsf {H}}_\mathrm{ge,ci}^{\tau }\), which are all of size \(4q \times (p-2)^{2}\), making the overall cost \(\sim 2^{L}q^{3} = N\,q\) since \(p\sim q\) and \(N\sim 2^{L}q^{2}\). In the upwards sweep, we apply at level \(\ell \) matrices of size \(O(2^{-\ell /2}N^{0.5}) \times O(2^{-\ell /2}N^{0.5})\) on \(2^{\ell }\) boxes, adding up to an overall cost of
$$\begin{aligned} \sum _{\ell =1}^{L}2^{\ell }\times \left( 2^{-\ell /2}\,N^{0.5}\right) ^{2} \sim \sum _{\ell =1}^{L}2^{\ell }\times 2^{-\ell }\,N \sim \sum _{\ell =1}^{L}N \sim NL\sim N\log N. \end{aligned}$$
The cost of the downwards sweep is the same. However, the application of the matrices \(\varvec{\mathsf {F}}_\mathrm{c,ci}^{\tau }\) at the leaves is more expensive since these are of size \(O(q^{2}) \times O(q^{2})\), which adds up to an overall cost of \(2^{L}\,q^{4} = N\,q^{2}\).
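The two level sums above are easy to check numerically. The following sketch (helper names are ours) evaluates the build-stage sum and the sweep sum directly, confirming that the first stays bounded by a fixed multiple of \(N^{1.5}\) regardless of the tree depth, while the second costs exactly N per level, i.e. grows like \(N\,L\):

```python
def build_cost(N, L):
    """Merge cost of the build stage: sum_l 2^l * (2^(-l/2) * sqrt(N))^3."""
    return sum(2 ** l * (2 ** (-l / 2) * N ** 0.5) ** 3 for l in range(1, L + 1))

def sweep_cost(N, L):
    """Cost of one upwards/downwards sweep: sum_l 2^l * (2^(-l/2) * sqrt(N))^2."""
    return sum(2 ** l * (2 ** (-l / 2) * N ** 0.5) ** 2 for l in range(1, L + 1))

N, L = 2 ** 20, 10
# The build sum is N^1.5 times a convergent geometric series in 2^(-l/2),
# so it is bounded independently of L; the sweep sum contributes N per level.
ratio_build = build_cost(N, L) / N ** 1.5
ratio_sweep = sweep_cost(N, L) / (N * L)
```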
The analysis of the asymptotic storage requirements perfectly mirrors the analysis of the flop count for the solve stage, since each matrix that is stored is used precisely once in a matrix-vector multiplication. In consequence, the amount of storage required is
$$\begin{aligned} R \sim N\,q^{2} + N\,\log N. \end{aligned}$$
(23)
Remark 3
(A storage efficient version) The storage required for all solution operators can become prohibitive when the local order q is high, due to the term \(N\,q^{2}\) in (23). One way to handle this problem is to not store the local solution operators for a leaf, but instead simply perform small dense solves each time the “solve stage” is executed. This makes the solve stage slower, obviously, but has the benefit of completely eliminating the \(Nq^{2}\) term in (23). In fact, in this modified version, the overall storage required is \(\sim NL \approx N\,\log _{2}(N/q^{2})\), so we see that the storage costs decrease as q increases (as should be expected since we do all leaf computations from scratch in this case). Figure 9 provides numerical results illustrating the memory requirements of the various approaches.

6 Local refinement

When solving a boundary value problem like (1) it is common to have a localized loss of regularity due to, e.g., corners on the boundary, a locally non-smooth body load or boundary condition, or a localized loss of regularity in the coefficient functions in the differential operator. A common approach to efficiently regain high accuracy without excessively increasing the number of degrees of freedom is to locally refine the mesh near the troublesome location. In this manuscript, we assume the location is known and given, and that we manually specify the degree of local refinement. The difficulty that arises is that upon refinement, the collocation nodes on neighboring patches do not necessarily match up. To remedy this, interpolation operators are introduced to transfer information between patches (the more difficult problem of determining how to automatically detect regions that require mesh refinement is a topic of current research).

6.1 Refinement criterion

Suppose we desire to refine our discretization at some point \(\hat{\varvec{x}}\) in the computational domain (the point \(\hat{\varvec{x}}\) can be either in the interior or on the boundary). Consider as an example the situation depicted in Fig. 5. For each level of refinement, we split any leaf box that contains \(\hat{\varvec{x}}\) and any “close” leaf boxes into a \(2 \times 2\) grid of equal-sized leaf boxes. In Fig. 5 we perform one level of refinement and find there are 6 leaf boxes “close” to \(\hat{\varvec{x}}\), which is represented by the cross. These 6 boxes are refined into smaller leaf boxes.
A leaf box \(\varOmega _{\tau }\) is close to \(\hat{\varvec{x}}\) if the distance \(d_{\tau }\) from \(\hat{\varvec{x}}\) to the box \(\varOmega _{\tau }\) satisfies \(d_{\tau } \le tl_{\tau }\), where \(t= \sqrt{2}\) and \(2l_{\tau }\) is the length of one side of the leaf box \(\varOmega _{\tau }\). In Fig. 5 we show circles of size \(tl_{\gamma }\) and \(tl_{\beta }\) at the points in \(\varOmega _{\gamma }\) and \(\varOmega _{\beta }\) closest to \(\hat{\varvec{x}}\) (in this case the boxes are all the same size so \(l_{\gamma } = l_{\beta }\)). We see \(\hat{\varvec{x}}\) is “close” to \(\varOmega _{\gamma }\), but not “close” to \(\varOmega _{\beta }\). Just as in Sect. 5.1, we place a \(p\times p\) tensor product grid of Chebyshev nodes on each new leaf and a set of q Gaussian (“Legendre”) interpolation nodes on each edge of each leaf. The vector \(\{\varvec{y}_{j}\}_{j=1}^{N}\) holds the locations of all Gaussian nodes across all leaves in the domain.
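The closeness test is cheap to evaluate. A minimal sketch (in Python; function names are ours), assuming each leaf is stored by its center and its half-side length \(l_{\tau }\):

```python
import numpy as np

def dist_to_box(xhat, center, l):
    """Distance from the point xhat to the closed square box with the given
    center and half-side length l (zero if xhat lies inside the box)."""
    d = np.maximum(np.abs(np.asarray(xhat, float) - np.asarray(center, float)) - l, 0.0)
    return float(np.hypot(d[0], d[1]))

def is_close(xhat, center, l, t=np.sqrt(2.0)):
    """Criterion of Sect. 6.1: the leaf is refined if d_tau <= t * l_tau."""
    return dist_to_box(xhat, center, l) <= t * l
```

For example, for the unit box centered at (0.5, 0.5) (half-side 0.5), a point at distance 0.2 from the box is “close”, while a point at distance 1 is not, since \(t\,l = \sqrt{2}/2 \approx 0.707\).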

6.2 Refined mesh

Notice that with the refined grid the nodes along common boundaries are no longer aligned. Figure 6 is an example of such a grid. This is a problem during the build stage of the method since the merge operation is performed by equating the Neumann data on the common boundary. We begin the discussion on how to address this problem by establishing some notation.
Define two boxes as neighbors if they are on the same level of the tree and they are adjacent. In the case that only one of two neighbors has been refined, such as \(\varOmega _\alpha \) and \(\varOmega _\beta \) in Fig. 6, special attention needs to be paid to the nodes on the common boundary. In order to merge boxes with different numbers of Gaussian nodes on the common edge, interpolation operators will be required. The next section describes this process in detail.
Consider the nodes on the common boundary between the two leaf boxes \(\varOmega _\alpha \) and \(\varOmega _\beta \). Let q denote the number of Gaussian nodes on one side of each leaf. Let \(\{J_{\alpha ,i}\}_{i=1}^{q}\) denote the index vector for the common boundary nodes from \(\varOmega _{\alpha }\) and \(\{J_{\beta ,i}\}_{i=1}^{2q}\) denote the index vector for the common boundary nodes from \(\varOmega _{\beta }\). That is, recalling that \(\varvec{\mathsf {y}}\) holds the locations of all Gaussian nodes in the domain, \(\varvec{\mathsf {y}}(J_{\alpha ,i})\) contains the q Gaussian nodes on the Eastern side of box \(\varOmega _{\alpha }\) and \(\varvec{\mathsf {y}}(J_{\beta ,i})\) contains the 2q nodes on the Western side of box \(\varOmega _{\beta }\).

6.3 Modifications to build stage

Once the grid with the Gaussian and Chebyshev nodes is constructed, as described in Sect. 6.2, the build stage starts with the construction of all leaf operators as described in Sect. 3. Then, for simplicity of presentation, boxes are merged from the lowest level moving up the tree. After merging the children of a refined parent, such as \(\varOmega _{\beta }\) in Fig. 6, the parent’s exterior nodes do not align with the exterior nodes of any neighbor that has not been refined.
Recalling the index notation used in Sect. 6.2, we form the interpolation matrix \(\varvec{\mathsf {P}}_\mathrm{up,W}\) mapping data on \(\varvec{\mathsf {y}}(J_{\alpha ,i})\) to data on \(\varvec{\mathsf {y}}(J_{\beta ,i})\) and the interpolation matrix \(\varvec{\mathsf {P}}_\mathrm{down,W}\) mapping data on \(\varvec{\mathsf {y}}(J_{\beta ,i})\) to data on \(\varvec{\mathsf {y}}(J_{\alpha ,i})\). Observe that when interpolating from two sets of q Gaussian nodes to a set of q Gaussian nodes, the interpolation must be done as two separate interpolations from q to q/2 nodes. The matrix \(\varvec{\mathsf {P}}_\mathrm{down,W}\) is thus a block diagonal matrix consisting of two \(q/2 \times q\) blocks (assuming q is divisible by 2).
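The interpolation matrices can be built from standard Lagrange interpolation on the Gaussian nodes. The sketch below (in Python; function names are ours) constructs \(\varvec{\mathsf {P}}_\mathrm{up,W}\) and the block diagonal \(\varvec{\mathsf {P}}_\mathrm{down,W}\) for one coarse edge facing two fine edges, on the reference interval \([-1,1]\):

```python
import numpy as np

def lagrange_matrix(src, tgt):
    """Matrix mapping samples at nodes `src` to values of the Lagrange
    interpolating polynomial at the points `tgt`."""
    src, tgt = np.asarray(src, float), np.asarray(tgt, float)
    P = np.ones((len(tgt), len(src)))
    for j in range(len(src)):
        for k in range(len(src)):
            if k != j:
                P[:, j] *= (tgt - src[k]) / (src[j] - src[k])
    return P

q = 8
x, _ = np.polynomial.legendre.leggauss(q)            # q Gauss nodes on [-1, 1]
coarse = x                                           # nodes on one coarse edge
fine = np.concatenate([(x - 1) / 2, (x + 1) / 2])    # q nodes on each half-edge

P_up = lagrange_matrix(coarse, fine)                 # coarse -> fine, 2q x q
# P_down is block diagonal: each half-edge's q fine nodes interpolate only
# to the q/2 coarse nodes that lie over that half (two q/2 x q blocks).
P_down = np.zeros((q, 2 * q))
P_down[: q // 2, :q] = lagrange_matrix(fine[:q], coarse[: q // 2])
P_down[q // 2 :, q:] = lagrange_matrix(fine[q:], coarse[q // 2 :])
```

Both maps are exact on polynomials of degree less than q, which is the sense in which no accuracy is lost in the transfer between the coarse and fine grids.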
For a refined parent, such as \(\varOmega _{\beta }\) in Fig. 6, we form the operators \(\varvec{\mathsf {T}}^{\beta }\), \(\varvec{\mathsf {S}}^{\beta }\), and \(\varvec{\mathsf {X}}^{\beta }\) and form the interpolation operators for every side of the parent, regardless of whether the exterior nodes align with the neighbor’s exterior nodes. Observe that in the case of \(\varOmega _{\beta }\) in Fig. 6 the Eastern and Northern sides of \(\varOmega _{\beta }\) will have \(\varvec{\mathsf {P}}_\mathrm{down} = \varvec{\mathsf {P}}_\mathrm{up}= \varvec{\mathsf {I}}\), where \(\varvec{\mathsf {I}}\) is the identity matrix.
Then interpolation operators mapping the entire boundary data between fine and coarse grids are given by block diagonal matrices \(\varvec{\mathsf {P}}_\mathrm{{up}}\) and \(\varvec{\mathsf {P}}_\mathrm{{down}}\) whose diagonal blocks are the interpolation operators for each edge. The interpolation operators for \(\varOmega _\beta \) are
$$\begin{aligned} \varvec{\mathsf {P}}^{\beta }_\mathrm{{up}}=\texttt {blkdiag}\left( \varvec{\mathsf {P}}^{\beta }_\mathrm{{up},S},\varvec{\mathsf {P}}^{\beta }_\mathrm{{up},E},\varvec{\mathsf {P}}^{\beta }_\mathrm{{up},N},\varvec{\mathsf {P}}^{\beta }_\mathrm{{up},W}\right) \end{aligned}$$
and
$$\begin{aligned} \varvec{\mathsf {P}}^{\beta }_\mathrm{{down}}=\texttt {blkdiag}\left( \varvec{\mathsf {P}}^{\beta }_\mathrm{{down},S},\varvec{\mathsf {P}}^{\beta }_\mathrm{{down},E},\varvec{\mathsf {P}}^{\beta }_\mathrm{{down},N},\varvec{\mathsf {P}}^{\beta }_\mathrm{{down},W}\right) . \end{aligned}$$
(The text \(\texttt {blkdiag}\) denotes the function that forms a block diagonal matrix from its arguments.) Then we form the new operators \(\varvec{\mathsf {T}}^{\beta }_\mathrm{new}\) and \(\varvec{\mathsf {S}}^{\beta }_\mathrm{new}\) for the parent box \(\varOmega _{\beta }\) as follows
$$\begin{aligned} \varvec{\mathsf {T}}^{\beta }_\mathrm{new} = \varvec{\mathsf {P}}^{\beta }_\mathrm{down} \varvec{\mathsf {T}}^{\beta } \varvec{\mathsf {P}}^{\beta }_\mathrm{up} \end{aligned}$$
and
$$\begin{aligned} \varvec{\mathsf {S}}^{\beta }_\mathrm{new} = \varvec{\mathsf {S}}^{\beta } \varvec{\mathsf {P}}^{\beta }_\mathrm{up}. \end{aligned}$$
Now \(\varvec{\mathsf {T}}^{\beta }_\mathrm{new}\) is a map defined on the same set of points as all of the neighbors of \(\varOmega _{\beta }\) and Neumann data can be equated on all sides.
Next, suppose a refined parent does not have a neighbor on one of its sides. Then on that side we use \(\varvec{\mathsf {P}}_\mathrm{down} = \varvec{\mathsf {P}}_\mathrm{up} = \varvec{\mathsf {I}}\). This can happen if the parent lies on the boundary of the domain \(\varOmega \). For example, suppose \(\varOmega _{\alpha }\) in Fig. 6 were also refined. Then \(\varOmega _{\alpha }\) would not have a neighbor on its Western side. Additionally, if multiple levels of refinement are performed, then a refined parent can lack a neighbor on one side. For example, suppose the Northwestern child of \(\varOmega _{\beta }\) in Fig. 6 were refined. Then the Northwestern child would not have a neighbor on its Western side, since box \(\varOmega _{\alpha }\) is on a different level of the tree.
It is easiest to form the interpolation operators for each side of \(\varOmega _{\beta }\) before performing any subsequent merge operations. The alternative would be to form an interpolation operator every time two boxes are merged and the nodes do not align.

6.4 Modifications to solve stage

On the upwards pass of the solve stage, the fluxes of the particular solutions must be tabulated on a common set of nodes before the particular solution on the shared interface can be computed. This is easily achieved by applying the already computed interpolation operator \(\varvec{\mathsf {P}}_\mathrm{down}\) to obtain \(\varvec{\mathsf {h}}^{\beta }_\mathrm{new} = \varvec{\mathsf {P}}_\mathrm{down} \varvec{\mathsf {h}}^{\beta }_\mathrm{old}\).
In the downwards pass of the solve stage, the application of the solution operators results in the approximate solution at the coarse nodes on the Western and Southern sides of \(\varOmega _{\beta }\). The solution operator \(\varvec{\mathsf {S}}^{\beta }_\mathrm{new}\) now maps the solution on the coarse nodes on the Western and Southern sides of \(\varOmega _{\beta }\) (and the dense nodes on the Eastern and Northern sides) to the solution on the interior of \(\varOmega _{\beta }\). However, we also need the solution on the dense nodes on the Western and Southern sides of \(\varOmega _{\beta }\). Let \(\varvec{\mathsf {u}}_{ge,\mathrm {new}}\) denote the solution on the boundary of \(\varOmega _{\beta }\) with the coarse nodes on the Western and Southern edges. Then the approximate solution on the dense nodes is given by \(\varvec{\mathsf {u}}_{ge,\mathrm {old}} = \varvec{\mathsf {P}}_\mathrm{up}\varvec{\mathsf {u}}_{ge,\mathrm {new}}\).

6.5 Impact of refinement on the asymptotic complexity

We demonstrated in Sect. 5.4 that the asymptotic complexity of the build stage is \(O(N^{3/2})\) for the case of a uniform balanced tree. It turns out that such a tree is the most adversarial case from the point of view of the number of flops required, just as for nested dissection and multifrontal solvers. While we will not carry out a detailed analysis of every possible tree, it is perhaps illustrative to consider what happens in the extreme case where we start with a coarse grid consisting of \(4\times 4\) patches, each holding a \(q\times q\) tensor product grid. Then we recursively refine the middle \(2\times 2\) patches, as shown in Fig. 7. In this case, each refinement adds roughly \(4q^{2}\) nodes, so that with L levels in the tree, the total number of nodes satisfies \(N \sim Lq^{2}\). During the build stage, the processing of each level in the tree requires dense operations to be carried out on matrices of size \(O(q^2) \times O(q^2)\), at cost \(O(q^{6})\). This leads to a total cost of \(T \sim L\,q^{6} \sim N\,q^{4}\) for the case of local refinement around a single point. In the general case, asymptotic complexity between O(N) and \(O(N^{3/2})\), depending on the structure of the tree, will be observed.

7 Numerical experiments

In this section, we present the results of numerical experiments that illustrate the performance of the proposed scheme. Section 7.1 reports on the computational cost and memory requirements. Sections 7.2–7.5 report on the accuracy of the proposed solution technique for a variety of problems where local mesh refinement is required. Finally, Sect. 7.6 illustrates the use of the proposed method in the acceleration of an implicit time stepping scheme for solving a parabolic partial differential equation.
For each experiment, the error is calculated by comparing the approximate solution with a reference solution \(\varvec{\mathsf {u}}_\mathrm{ref}\) constructed on a highly over-resolved grid. Errors are measured in the \(\ell ^{\infty }\)-norm on all Chebyshev nodes on leaf boundaries.
In all of the experiments, each leaf is discretized using a \(p\times p\) tensor product mesh of Chebyshev nodes. The number of Legendre nodes per leaf edge is set to \(q = p-1\). In all experiments except the one described in Sect. 7.5, the computational domain is the square \(\varOmega = [0,1]^{2}\) discretized into \(n\times n\) leaf boxes, making the total number of degrees of freedom roughly \(N\approx p^{2}\times n^{2}\) (to be precise, \(N = n^{2} \times (p-1)^{2}+2n \times (p-1) + 1\)).
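The quoted node count is simply the size of the single global Chebyshev grid that results when nodes shared by neighboring leaves are counted once, i.e. \(N = (n(p-1)+1)^{2}\). A two-line check (function name ours):

```python
def dofs(n, p):
    """Total Chebyshev nodes for an n x n grid of leaves, each carrying a
    p x p tensor product grid, with shared edge nodes counted only once."""
    return n ** 2 * (p - 1) ** 2 + 2 * n * (p - 1) + 1

# The expression collapses to a perfect square: (n(p-1) + 1)^2.
```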
The proposed method was implemented in Matlab and all experiments were run on a laptop computer with a 4 core Intel i7-3632QM CPU running at 2.20 GHz with 12 GB of RAM.

7.1 Computational speed

The experiments in this section illustrate the computational complexity and memory requirements of the direct solver. Recall that the asymptotic complexity of the method scales as nested dissection or multifrontal methods, with execution times scaling as \(O(N^{3/2})\) and \(O(N \log N)\) for the “build” and “solve” stages, respectively. The asymptotic memory requirement is \(O(N \log N)\).
The computational complexity and memory requirements of the proposed method depend only on the domain and the computational mesh; the choice of PDE is irrelevant. In the experiments reported here, we used \(\varOmega = [0,1]^{2}\) with a uniform mesh.
Figure 8 reports the time in seconds for the (a) build and (b) solve stages of the proposed solution technique when there is a body load (BL), when the leaf computation is done on the fly as described in Remark 3 (BL(econ)), and when there is no body load (NBL). Results for two different orders of discretization (\(q = 8\) and 16) are shown. Notice that, as expected, the constant scaling factor for both stages is larger for the higher order discretization. (It may be of interest that our numerical experiments in many cases demonstrate linear scaling for the build stage, despite its \(O(N^{3/2})\) asymptotic algorithmic complexity. The reason is that even for \(N \sim 10^{6}\), we have hardly yet entered the regime where the \(O(N^{3/2})\) term starts to dominate.)
Figure 9 reports on memory requirements. Letting R denote total memory used, we plot R / N versus the number of discretization points N, where R is measured in terms of number of floating point numbers. We see that storing the solution operators on the leaves is quite costly in terms of memory requirements. The trade-off to be considered here is whether the main priority is to conserve memory, or to maximize the speed of the solve stage, cf. Remark 3. As an illustration, we see that for a problem with \(q=8\) and \(n=128\), for a total of \(10^6\) unknowns, the solve stage takes 1.7 s with the solution operators stored versus 9.8 s for performing the local solves on the fly. In situations where the solution is only desired in prescribed local regions of the geometry, computing the operators on the fly is ideal.
Remark 4
(Complexity for a locally refined tree) We claim in Sect. 6.5 that for a locally refined tree, the complexity of the scheme is no worse (and sometimes better) than the \(O(N^{3/2})\) complexity obtained for a balanced uniform tree. To substantiate this claim, we timed an experiment involving a locally refined tree, as shown in Fig. 7. The results are presented in Fig. 10, for two different orders of discretization (\(q = 16\) and 32). We use larger numbers for q in this section so that we obtain a significant increase in the number of nodes. If we use \(q = 8\) then each refinement only adds a few hundred nodes and it is hard to see any significant scaling without doing an unreasonable number of refinements. For reference, we include a comparison to the case of \(q = 16\) with a body load and no refinement.
Remark 5
When the underlying BVP that is discretized involves a constant coefficient operator, many of the leaf solution operators are identical. This observation can be used to greatly reduce storage requirements while maintaining very high speed in the solve stage. This potential acceleration was not exploited in the numerical experiments reported.

7.2 Variable coefficients

In this section, the proposed scheme is applied to the variable coefficient Helmholtz problem
$$\begin{aligned} -\Delta u - \kappa ^{2}(1-c(\varvec{x}))u = g,\quad \varvec{x}\in \varOmega , \end{aligned}$$
where \(\varOmega = [0,1] \times [0,1]\) and where c is a “scattering potential.” The body load is taken to be a Gaussian given by \(g = \exp (- \alpha |\varvec{x}-\hat{\varvec{x}}|^{2})\) with \(\alpha = 300\) and \(\hat{\varvec{x}} = [1/4, 3/4]\), while the scattering potential is a sum of Gaussians \(c(\varvec{x}) = \frac{1}{2}\exp (- \alpha _{2} |\varvec{x}-\hat{\varvec{x}}_{2}|^{2}) + \frac{1}{2}\exp (- \alpha _{3} |\varvec{x}-\hat{\varvec{x}}_{3}|^{2})\) with \(\alpha _{2} = \alpha _{3} = 200\), \(\hat{\varvec{x}}_{2} = [7/20, 6/10]\), and \(\hat{\varvec{x}}_{3} = [6/10, 9/20]\). We set \(\kappa = 40\), making the domain \(6.4 \times 6.4\) wavelengths in size.
Figure 11 reports the \(l^\infty \) error versus the number of discretization points N. We get no accuracy for \(q=4\), but as q is increased, the errors rapidly decrease.

7.3 Concentrated body load

In this section, we consider a low frequency (\(\kappa = 20\)) Helmholtz boundary value problem
$$\begin{aligned} -\Delta u - \kappa ^{2} u = g,\qquad \varvec{x}\in \varOmega , \end{aligned}$$
with \(\varOmega = [0,1] \times [0,1]\) and a very concentrated Gaussian for the body load, \(g = \exp (- \alpha |\varvec{x}-\hat{\varvec{x}}|^{2})\) with \(\alpha = 3000\). In this case, we chose the Dirichlet boundary data to equal the solution to the free space equation \(-\Delta u - \kappa ^{2} u = g\) with a radiation condition at infinity. In other words, u is the convolution between g and the free space fundamental solution. We computed the boundary data and the reference solution by numerically evaluating this convolution to very high accuracy.
To test the refinement strategy, we build a tree first with a uniform grid, i.e. \(n\times n\) leaf boxes then add \(n_\mathrm{ref}\) levels of refinement around the point \(\hat{\mathbf {x}}\). Figure 12 reports the \(l^\infty \) norm of the error versus \(n_\mathrm{ref}\) for four choices of uniform starting discretization. When \(n = 4\) one level of refinement (i.e. 28 leaf boxes) results in approximately the same accuracy as when \(n = 8\) and no levels of refinement (i.e. 64 leaf boxes).

7.4 Discontinuous body load

In this section, we consider a Poisson boundary value problem on \(\varOmega = [0,1]^2\) with an indicator function body load g that has support on \([1/4, 1/2] \times [1/4, 1/2]\). Observe that the lines of discontinuity of g coincide with edges of leaves in the discretization. Figure 13 reports the \(\ell ^\infty \) error versus the number of discretization points N with uniform refinement for four different orders of discretization. Note that the approximate solution and its first derivative are continuous across the boundaries of the leaves (even on the boundaries where the jump in the body load occurs), since the algorithm enforces these continuity conditions by construction.
Remark 6
Applying the scheme to a problem where a discontinuity in the body load does not align with the leaf boundaries results in a very low accuracy approximation to the solution. For a problem analogous to the one described in this section, we observed slow convergence and attained no better than two or three digits of accuracy on the most finely resolved mesh.

7.5 Tunnel

This section reports on the performance of the solution technique when applied to the Helmholtz Dirichlet boundary value problem
$$\begin{aligned} -\Delta u - \kappa ^{2} u&= g,\qquad \varvec{x}\in \varOmega ,\\ u&= f \qquad \varvec{x}\in \partial \varOmega \end{aligned}$$
with \(\kappa = 60\) where the domain \(\varOmega \) is a “tunnel” as illustrated in Fig. 14. The body load is taken to be a Gaussian \(g = \exp (- \alpha |\varvec{x}-\hat{\varvec{x}}|^{2})\) with \(\alpha = 300\) and \(\hat{\varvec{x}} = [1, 3/4]\). The Dirichlet boundary data is given by
$$\begin{aligned} f(\varvec{x})= \left\{ \begin{array}{ll} 0 &{} \quad \mathrm{for} \ x_1 \ne \pm 3\\ \frac{1}{100}\sin \left( 2\pi (x_{2}-3)\right) &{} \quad \mathrm{for} \ x_1 = 3\\ \frac{1}{100}\sin \left( \pi (x_{2}-3)\right) &{} \quad \mathrm{for} \ x_1 = -3. \end{array}\right. \end{aligned}$$
Note that \(f(\varvec{x})\) is continuous on \(\partial \varOmega \) and with this choice of wave number \(\kappa \) the domain \(\varOmega \) is about 10 wavelengths wide and 115 wavelengths long. The presence of the re-entrant corners results in a solution that has strong singularities which require local refinement in order for the method to achieve high accuracy.
Figure 15 reports the \(l^\infty \) error versus the number of refinements into the corners for three choices of coarse grid. We use \(q = 16\) for all examples, and h denotes the width and height of each leaf box. When \(h= 1/4\), the discretization is only sufficient to resolve the Helmholtz equation with \(\kappa = 60\) to within 1% of the exact solution. When \(h = 1/8\), the solution technique stalls at 5 digits of accuracy, independent of the number of refinement levels.
Remark 7
(Symmetries) This problem is rich in symmetries that can be used to accelerate the build stage. In our implementation, we chose to exploit the fact that the tunnel is made up of four L-shaped pieces glued together. The DtN operator and corresponding solution operators were constructed for one L-shape. Then creating the solver for the entire geometry involved simply gluing the 4 L-shaped geometries together via three merge operations.
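The gluing referred to in the remark can be illustrated in a one-dimensional analogue. The sketch below is our own simplification (scalar DtN maps for \(-u''=0\) on intervals, not the paper's 2D operators): a per-piece DtN map is computed once, and two pieces are merged by enforcing continuity of u and u' at the shared interface. The merged map agrees with the DtN map computed directly on the union, which is the property the symmetry trick relies on.

```python
import numpy as np

def dtn_interval(L):
    """DtN map for -u'' = 0 on an interval of length L: maps boundary values
    (u_left, u_right) to derivatives (u'_left, u'_right). Solutions are
    linear, so u' = (u_right - u_left) / L at both endpoints."""
    return np.array([[-1.0, 1.0], [-1.0, 1.0]]) / L

def merge(T1, T2):
    """Glue two intervals sharing an interface (right end of piece 1 = left
    end of piece 2) by enforcing flux continuity at the interface point."""
    # flux continuity: T1[1,0]*u_a + T1[1,1]*u_m = T2[0,0]*u_m + T2[0,1]*u_b
    denom = T1[1, 1] - T2[0, 0]
    s_a = -T1[1, 0] / denom          # interface value u_m = s_a*u_a + s_b*u_b
    s_b = T2[0, 1] / denom
    return np.array([
        [T1[0, 0] + T1[0, 1] * s_a, T1[0, 1] * s_b],
        [T2[1, 0] * s_a,            T2[1, 1] + T2[1, 0] * s_b],
    ])

Tp = dtn_interval(1.0)                         # computed once for the repeated piece
Tglued = merge(merge(Tp, Tp), merge(Tp, Tp))   # three merges glue four copies
```

As in the remark, gluing four identical pieces takes only three merges (two pairwise merges, then one final merge), with the per-piece operator built a single time.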

7.6 A parabolic problem

Our final numerical example involves a convection-diffusion initial value problem on \(\varOmega = [0,1]^2\) given by
$$\begin{aligned} \left( \epsilon \Delta - \frac{\partial }{\partial x_{1}} \right) u(\varvec{x},t)&= \frac{\partial u}{\partial t},&\quad \varvec{x}\in \varOmega , \ t>0\\ u(\varvec{x},0)&= \exp \left( - \alpha |\varvec{x}-\hat{\varvec{x}}|^{2}\right) ,&\quad \varvec{x}\in \varOmega . \end{aligned}$$
We imposed zero Neumann boundary conditions on the south and north boundaries (\(x_{2} = 0,1\)) and periodic boundary conditions on the west and east boundaries (\(x_{1} = 0,1\)). These boundary conditions correspond to fluid flowing through a periodic channel where no fluid can exit through the top or bottom of the channel. To obtain a convection-dominated problem, we chose \(\epsilon = 1/200\). Finally, the parameters in the initial condition were chosen to be \(\alpha = 50\) and \(\hat{\varvec{x}} = [1/4, 1/4]\).
Applying the Crank–Nicolson time stepping scheme with a time step size k results in having to solve the following elliptic problem at each time step:
$$\begin{aligned} \left( \frac{1}{k} I - \frac{1}{2} A \right) u_{n+1} = \left( \frac{1}{k} I + \frac{1}{2} A \right) u_{n}, \end{aligned}$$
(24)
where \(A = \epsilon \Delta - \partial /\partial x_{1}\) is our partial differential operator.
Observe that the algorithm does not change for this problem. The build stage execution time and memory requirement are identical to those seen in Sect. 7.1. The execution time for the solve stage for each individual time step is nearly identical to the solve stage execution time shown in Sect. 7.1. The only new step in the solve stage is the need to evaluate \((I/k + A/2)u_{n}\) at each time step.
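The reuse of the precomputed solution operator across time steps can be sketched as follows. For illustration only, we replace the paper's 2D spectral discretization by a plain one-dimensional periodic finite-difference stand-in for \(A = \epsilon \Delta - \partial /\partial x_{1}\); the point is that the operator \((I/k - A/2)^{-1}(I/k + A/2)\) is formed once, after which each time step costs a single matrix application.

```python
import numpy as np

n, k, eps = 200, 1e-3, 1.0 / 200
h = 1.0 / n
x = np.arange(n) * h

# periodic finite-difference operators (illustrative stand-in for the
# paper's multidomain spectral discretization)
I = np.eye(n)
up, dn = np.roll(I, -1, axis=0), np.roll(I, 1, axis=0)   # shifts u_{i+1}, u_{i-1}
D1 = (up - dn) / (2 * h)
D2 = (up - 2 * I + dn) / h**2
A = eps * D2 - D1

# Crank-Nicolson: (I/k - A/2) u_{n+1} = (I/k + A/2) u_n.
# Precompute the full solution operator once; reuse it at every step.
B = np.linalg.inv(I / k - A / 2) @ (I / k + A / 2)

u = np.exp(-50 * (x - 0.25) ** 2)      # Gaussian initial condition
mass0 = u.sum() * h
for _ in range(500):
    u = B @ u                          # one time step = one matvec
# for this periodic scheme the total mass is conserved up to roundoff
```

In a production setting one would store a factorization rather than an explicit inverse, but the amortization argument is the same: the expensive build is paid once, and each of the many time steps is cheap.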
Figure 16 reports the \(l^\infty \) error vs. the time step size k at three different times \(t = 0.025,\ 0.1,\) and 0.5. We use 16 leaf boxes per side with \(q=16\) to ensure that the spatial resolution error is far smaller than the time-stepping error. Note that even with a low order time stepping scheme, high accuracy can still be attained, since our fast time stepper allows us to use a very short time step without incurring an unduly long solution time.

8 Concluding remarks

We have described an algorithm for solving non-homogeneous linear elliptic PDEs in two dimensions based on a multidomain spectral collocation discretization. The discretization is designed explicitly to be combined with a nested dissection type direct solver. Its primary advantage over existing methods is that it enables the use of very high order local discretizations without jeopardizing the computational efficiency of the direct solver. The scheme is an evolution of previously published methods [5, 6, 11]. The novelty in this work is that the scheme has been extended to allow for problems involving body loads, and for local refinement.
The scheme is particularly well suited for executing the elliptic solve required when parabolic problems are solved using implicit time-stepping techniques in situations where the domain is fixed, so that the same elliptic solve is performed in every time step. In this environment, the cost of computing an explicit solution operator is amortized over many time steps, and the operator can also be recycled when the same equation is solved for different initial conditions.
The ease with which the method incorporates high order local discretizations, together with the very efficient implicit time-stepping it allows, appears to make it particularly well suited for solving the Navier–Stokes equations at low Reynolds numbers. Such a solver is currently under development and will be reported in future publications. Other extensions currently under way include the development of adaptive refinement criteria (as opposed to the supervised adaptivity used in this work), and the extension to problems in three dimensions, analogous to the work in [7] for homogeneous equations.

Acknowledgements

The research reported was supported by DARPA, under the Contract N66001-13-1-4050, and by the NSF, under the Contracts DMS-1407340 and DMS-1620472.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
1. Causley, M., Christlieb, A., Ong, B., Van Groningen, L.: Method of lines transpose: an implicit solution to the wave equation. Math. Comput. 83, 2763–2786 (2014)
2. Davis, T.: Direct Methods for Sparse Linear Systems, vol. 2. SIAM, Philadelphia (2006)
3. Duff, I.S., Erisman, A.M., Reid, J.K.: Direct Methods for Sparse Matrices. Oxford University Press, Oxford (1989)
4.
5. Gillman, A., Barnett, A., Martinsson, P.G.: A spectrally accurate direct solution technique for frequency-domain scattering problems with variable media. BIT Numer. Math. 55(1), 141–170 (2015)
6. Gillman, A., Martinsson, P.: A direct solver with \(O(N)\) complexity for variable coefficient elliptic PDEs discretized via a high-order composite spectral collocation method. SIAM J. Sci. Comput. 36(4), A2023–A2046 (2014). arXiv:1307.2665
7. Hao, S., Martinsson, P.G.: A direct solver for elliptic PDEs in three dimensions based on hierarchical merging of Poincaré–Steklov operators. J. Comput. Appl. Math. 308, 419–434 (2016)
8. Khoromskij, B., Wittum, G.: Numerical Solution of Elliptic Differential Equations by Reduction to the Interface, vol. 36. Springer, Berlin (2004)
9. Kopriva, D.A.: A staggered-grid multidomain spectral method for the compressible Navier–Stokes equations. J. Comput. Phys. 143(1), 125–158 (1998)
10. Martinsson, P.G.: A composite spectral scheme for variable coefficient Helmholtz problems (2012). arXiv:1206.4136
11. Martinsson, P.G.: A direct solver for variable coefficient elliptic PDEs discretized via a composite spectral collocation method. J. Comput. Phys. 242, 460–479 (2013)
12. Pfeiffer, H.P., Kidder, L.E., Scheel, M.A., Teukolsky, S.A.: A multidomain spectral method for solving elliptic equations. Comput. Phys. Commun. 152(3), 253–273 (2003)
13. Trefethen, L.N.: Spectral Methods in MATLAB. SIAM, Philadelphia (2000)
14. Yang, B., Hesthaven, J.S.: Multidomain pseudospectral computation of Maxwell's equations in 3-D general curvilinear coordinates. Appl. Numer. Math. 33(1–4), 281–289 (2000)
DOI: https://doi.org/10.1007/s10543-018-0714-0