We let
\(f(t_n)\) be the exact solution, and
\(f^{(n)}\) the numerical solution, at time
\(t_n\). We introduce some notation (see [15]):
\(\varPi : f \rightarrow (f_{i,j})\) is the discretization (sampling) operator on a uniform 2D grid, and
\(\mathcal {T}\) (resp.
\(\widetilde{\mathcal {T}}\)) is the numerical (resp. exact) transport operator in direction
\({\mathbf {b}}\), over one time step
\(\Delta t\). The (global) error then reads
$$\begin{aligned} e^{(n+1)}= & {} \varPi f(t_{n+1})-f^{(n+1)} = \varPi \widetilde{\mathcal {T}} f(t_n) - \mathcal {T}\left( \varPi f(t_n)-e^{(n)}\right) \\= & {} \left( \varPi \widetilde{\mathcal {T}}-\mathcal {T}\varPi \right) f(t_n) + \mathcal {T} e^{(n)}, \end{aligned}$$
where we identify in
\((\varPi \widetilde{\mathcal {T}}-\mathcal {T}\varPi )f(t_n)\) the “truncation error” introduced by the numerical scheme between time
\(t_n\) and
\(t_n+\Delta t\). Since the scheme is proven to be unconditionally stable, the error cannot grow in the
\(L^2\)-norm when transported by the numerical scheme, i.e.,
\(\left\| \mathcal {T}e^{(n)}\right\| _2 \le \Vert e^{(n)}\Vert _2\). The triangle inequality then yields
$$\begin{aligned} \left\| e^{(n+1)} \right\| _2 \le \left\| (\varPi \widetilde{\mathcal {T}}-\mathcal {T}\varPi ) f(t_n) \right\| _2 + \left\| e^{(n)} \right\| _2, \end{aligned}$$
and if we proceed recursively back to time
\(t_0\), where
\(e^{(0)}=0\) by construction, we obtain the upper bound
$$\begin{aligned} \left\| e^{(n)} \right\| _2 \ \le \ \sum _{k=0}^{n-1} \left\| (\varPi \widetilde{\mathcal {T}}-\mathcal {T}\varPi ) f(t_k) \right\| _2, \end{aligned}$$
(3.23)
that is, the norm of the (global) error at time
\(t_n\) cannot be larger than the sum of the norms of the previous
\(n\) truncation errors. Here we have used the discrete
\(L^2\)-norm, defined as
$$\begin{aligned} \left\| f \right\| _2 = \left[ \frac{1}{N_\theta N_\varphi } \sum _{i_\theta =1}^{N_\theta } \sum _{i_\varphi =1}^{N_\varphi } \left( f_{i_\theta ,i_\varphi } \right) ^2 \right] ^{\frac{1}{2}}. \end{aligned}$$
(3.24)
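As a quick sanity check of definition (3.24), one can evaluate the discrete \(L^2\)-norm of a sampled harmonic; the grid sizes and mode numbers below are illustrative choices, not values from the text.

```python
import numpy as np

# Illustrative grid and mode numbers (invented for this check).
N_theta, N_phi = 32, 16
m_theta, n_phi = 3, 2

theta = 2 * np.pi * np.arange(1, N_theta + 1) / N_theta
phi = 2 * np.pi * np.arange(1, N_phi + 1) / N_phi
TH, PH = np.meshgrid(theta, phi, indexing="ij")

# For f = cos(m_theta*theta + n_phi*phi), the squared samples average to 1/2
# over whole periods on a uniform grid, so the discrete L2-norm (3.24)
# equals 1/sqrt(2).
f = np.cos(m_theta * TH + n_phi * PH)
norm2 = np.sqrt(np.mean(f ** 2))
print(norm2)  # ~0.7071
```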
The upper bound (3.23) provides us with an error estimate at time
\(t_n\), provided an upper bound for the truncation error is available. In analogy with the previous section, we now compute the truncation error for the harmonic initial condition
\(f_0(\theta ,\varphi ) = \exp (i(n_\varphi \varphi +m_\theta \theta ))\), for which the exact solution is simply
$$\begin{aligned} f(t,\theta ,\varphi ) = \exp (i(n_\varphi (\varphi -b_\varphi t)+m_\theta (\theta -b_\theta t))). \end{aligned}$$
Under this assumption, the local truncation error for our field-aligned semi-Lagrangian scheme can be decomposed into two parts, as
$$\begin{aligned} \left[ \left( \varPi \widetilde{\mathcal {T}}-\mathcal {T}\varPi \right) f(t_n)\right] _{i_\theta ,i_\varphi } = A_1+A_2, \end{aligned}$$
where
\(A_1\) is the approximation error introduced by Lagrange interpolation in direction
\({\mathbf {b}}\),
$$\begin{aligned} A_1 = f(t_{n},\theta _{i_\theta }-b_\theta \Delta t,\varphi _{i_\varphi }-b_\varphi \Delta t) -\sum _{k=-d_b}^{d_b+1}L^{d_b}_k(\alpha _\varphi ) f(t_n,\theta _{i_\theta }-b_\theta \Delta t_k,\varphi _{i_\varphi +r_\varphi +k}), \end{aligned}$$
and
\(A_2\) is the approximation error of Lagrange interpolation along
\(\theta \), which is then interpolated along
\({\mathbf {b}}\):
$$\begin{aligned} A_2 = \sum _{k=-d_b}^{d_b+1}L^{d_b}_k(\alpha _\varphi )\left( f(t_n,\theta _{i_\theta }-b_\theta \Delta t_k,\varphi _{i_\varphi +r_\varphi +k})- \sum _{\ell =-d_\theta }^{d_{\theta }+1} L^{d_\theta }_\ell (\alpha _{\theta ,k}) f(t_n,\theta _{i_\theta +r_{\theta ,k}+\ell },\varphi _{i_\varphi +r_\varphi +k}) \right) . \end{aligned}$$
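The weights \(L^{d}_k\) appearing in these sums are standard Lagrange basis polynomials on the integer stencil \(\{-d,\dots ,d+1\}\). A minimal sketch (the degree and offset below are arbitrary illustrative values) checks that they reproduce constants and the identity exactly:

```python
def lagrange_basis(d, k, alpha):
    """Lagrange basis L_k^d(alpha) on the integer stencil {-d, ..., d+1}."""
    val = 1.0
    for l in range(-d, d + 2):
        if l != k:
            val *= (alpha - l) / (k - l)
    return val

# Illustrative degree and offset (not values from the text).
d, alpha = 2, 0.37

# Partition of unity: sum_k L_k(alpha) = 1 (exact reproduction of constants).
s0 = sum(lagrange_basis(d, k, alpha) for k in range(-d, d + 2))
# Reproduction of x -> x: sum_k k*L_k(alpha) = alpha.
s1 = sum(k * lagrange_basis(d, k, alpha) for k in range(-d, d + 2))
print(s0, s1)
```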
Here we recall the following definitions:
$$\begin{aligned} -b_\varphi \Delta t_{\phantom {k}}&= \Delta \varphi \,(r_\varphi + \alpha _\varphi ) \quad \text { with } r_\varphi \in {\mathbb {Z}} \text { and } \alpha _\varphi \in {\mathbb {R}}_{[0,1)},\\ -b_\varphi \Delta t_k&= \Delta \varphi \,(r_\varphi + k) \quad \text { with } \Delta t_k\in {\mathbb {R}} \text { and } k=-d_b,\dots ,d_b+1,\\ -b_\theta \Delta t_k&= \Delta \theta \,(r_{\theta ,k} + \alpha _{\theta ,k}) \quad \text { with } r_{\theta ,k}\in {\mathbb {Z}} \text { and } \alpha _{\theta ,k}\in {\mathbb {R}}_{[0,1)}. \end{aligned}$$
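The integer/fractional decomposition above is straightforward to implement with a floor; the grid and the values of \(b_\varphi \) and \(\Delta t\) below are invented for illustration.

```python
import math

# Sketch of the displacement decomposition; b_phi, dt, N_phi, d_b invented.
N_phi, d_b = 64, 2
d_phi = 2 * math.pi / N_phi
b_phi, dt = 1.3, 0.05

x = -b_phi * dt / d_phi        # displacement measured in grid spacings
r_phi = math.floor(x)          # integer part: r_phi in Z
alpha_phi = x - r_phi          # fractional part: alpha_phi in [0, 1)

# Intermediate times Delta t_k solving -b_phi*Delta t_k = Delta phi*(r_phi + k).
dt_k = [-d_phi * (r_phi + k) / b_phi for k in range(-d_b, d_b + 2)]

# The decomposition reconstructs the displacement exactly.
print(abs(-b_phi * dt - d_phi * (r_phi + alpha_phi)))  # ~0.0
```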
Furthermore, in the following calculation we will write
\(\theta _{i_\theta } = 2\pi i_\theta /N_\theta \) and
\(\varphi _{i_\varphi } = 2\pi i_\varphi /N_\varphi \). We first compute
\(A_2\): proceeding as in the previous section, we write the interpolation error in integral form and obtain
$$\begin{aligned} A_2= & {} \left( i\frac{m_\theta }{N_\theta }\right) ^{2d_\theta +2}\frac{(2\pi )^{2d_\theta +2}}{(2d_\theta +1)!}f(t_n,\theta _{i_\theta },\varphi _{i_\varphi +r_\varphi }) \sum _{k=-d_b}^{d_b+1}L^{d_b}_k(\alpha _\varphi )\\&e^{2\pi i\left( \frac{m_\theta }{N_\theta } r_{\theta ,k}+k\frac{n_\varphi }{N_\varphi }\right) }\prod _{\ell =-d_\theta }^{d_\theta +1}\left( \alpha _{\theta ,k}-\ell \right) \int _{0}^{1}B_{2d_\theta +2,\alpha _{\theta ,k}}(\sigma )e^{2\pi i(-d_\theta +(2d_\theta +1)\sigma )\frac{m_\theta }{N_\theta }}d\sigma . \end{aligned}$$
We then have
$$\begin{aligned} |A_2|\le & {} \left( \frac{|m_\theta |}{N_\theta }\right) ^{2d_\theta +2} \frac{(2\pi )^{2d_\theta +2}}{(2d_\theta +1)!}\int _{0}^{1} \left| \sum _{k=-d_b}^{d_b+1}L^{d_b}_k(\alpha _\varphi )\right. \\&\left. \quad e^{2\pi i\left( \frac{m_\theta }{N_\theta } r_{\theta ,k} +k\frac{n_\varphi }{N_\varphi }\right) } \prod _{\ell =-d_\theta }^{d_\theta +1}\left( \alpha _{\theta ,k}-\ell \right) B_{2d_\theta +2,\alpha _{\theta ,k}}(\sigma )\right| d\sigma . \end{aligned}$$
Applying the triangle inequality to the sum over \(k\) and bounding the remaining integral, we obtain
$$\begin{aligned} |A_2|\le \left( \frac{|m_\theta |}{N_\theta }\right) ^{2d_\theta +2} \frac{(2\pi )^{2d_\theta +2}}{(2d_\theta +2)!} \sum _{k=-d_b}^{d_b+1}|L^{d_b}_k(\alpha _\varphi )| \prod _{\ell =-d_\theta }^{d_\theta +1}\left| \alpha _{\theta ,k}-\ell \right| . \end{aligned}$$
As in the analysis for 1D interpolation, we can bound the product term and therefore write
$$\begin{aligned} |A_2| \le \left( \frac{\pi |m_\theta |}{N_\theta }\right) ^{2d_\theta +2} \frac{4}{\sqrt{\pi (d_\theta +1)}} \sum _{k=-d_b}^{d_b+1} |L^{d_b}_k(\alpha _\varphi )|\alpha _{\theta ,k}(1-\alpha _{\theta ,k}). \end{aligned}$$
(3.25)
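The passage from the factorial form to (3.25) (and likewise to (3.26) below) rests on an estimate of the product term by \(4\,\alpha (1-\alpha )/\sqrt{\pi (d+1)}\). A quick numerical spot-check over a grid of \(\alpha \) values and small degrees (not a proof) can confirm it:

```python
import math

# Spot-check of the product-term estimate behind (3.25) and (3.26):
# for alpha in [0, 1) and stencil parameter d,
#   (2*pi)^(2d+2)/(2d+2)! * prod_{l=-d}^{d+1} |alpha - l|
#     <=  pi^(2d+2) * 4*alpha*(1 - alpha) / sqrt(pi*(d + 1)).
ok = True
for d in range(0, 5):
    for j in range(1, 200):
        alpha = j / 200.0
        prod = 1.0
        for l in range(-d, d + 2):
            prod *= abs(alpha - l)
        lhs = (2 * math.pi) ** (2 * d + 2) / math.factorial(2 * d + 2) * prod
        rhs = math.pi ** (2 * d + 2) * 4 * alpha * (1 - alpha) \
              / math.sqrt(math.pi * (d + 1))
        ok = ok and lhs <= rhs
print(ok)  # True for all sampled (d, alpha)
```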
For
\(A_1\) we have, writing
\(n_b = m_\theta \frac{b_\theta }{b_\varphi }+n_\varphi \),
$$\begin{aligned} A_1= & {} \left( i\frac{n_b}{N_\varphi }\right) ^{2d_b+2}\frac{(2\pi )^{2d_b+2}}{(2d_b+1)!} f\left( t_n,\theta _{i_\theta }+\frac{b_\theta }{b_\varphi }r_\varphi ,\varphi _{i_\varphi +r_\varphi }\right) \\&\prod _{\ell =-d_b}^{d_b+1}\left( \alpha _\varphi -\ell \right) \int _{0}^{1}B_{2d_b+2,\alpha _\varphi }(\sigma )e^{2\pi i(-d_b+(2d_b+1)\sigma )\frac{n_b}{N_\varphi }}d\sigma , \end{aligned}$$
which leads to
$$\begin{aligned} |A_1|\le \left( \frac{|n_b|}{N_\varphi }\right) ^{2d_b+2}\frac{(2\pi )^{2d_b+2}}{(2d_b+2)!} \prod _{\ell =-d_b}^{d_b+1}\left| \alpha _\varphi -\ell \right| , \end{aligned}$$
and therefore
$$\begin{aligned} |A_1| \le \left( \frac{\pi |n_b|}{N_\varphi }\right) ^{2d_b+2} \frac{4\alpha _\varphi (1-\alpha _\varphi )}{\sqrt{\pi (d_b+1)}}. \end{aligned}$$
(3.26)
We now derive an upper bound for the
\(L^2\)-norm of the global error at time
\(t_n\). We have
$$\begin{aligned} \begin{aligned} \left\| e^{(n)}\right\| _2&\le \sum _{k=0}^{n-1} \left[ \frac{1}{N_\theta N_\varphi } \sum _{i_\theta =1}^{N_\theta } \sum _{i_\varphi =1}^{N_\varphi } \left( [A_1+A_2]^{(k)}_{i_\theta ,i_\varphi } \right) ^2 \right] ^{\frac{1}{2}} \le \sum _{k=0}^{n-1} \max _{i_\theta ,i_\varphi } \left| [A_1+A_2]^{(k)}_{i_\theta ,i_\varphi } \right| \\&\le n \max _{i_\theta ,i_\varphi ,k} \left| [A_1]^{(k)}_{i_\theta ,i_\varphi } \right| + n \max _{i_\theta ,i_\varphi ,k} \left| [A_2]^{(k)}_{i_\theta ,i_\varphi } \right| . \end{aligned} \end{aligned}$$
Our estimates for
\(|A_1|\) and
\(|A_2|\) are independent of the grid indices
\(i_\theta \) and
\(i_\varphi \), and therefore they also bound the maximum over the grid. Moreover, these estimates hold at every time step, because they are invariant under the rigid translation that the exact solution undergoes in time. Accordingly, our upper bound for the global error of the field-aligned semi-Lagrangian scheme is simply
\(\Vert e^{(n)}\Vert _2 \le n|A_1| + n|A_2|\), with
\(|A_1|\) bounded by (3.26) and \(|A_2|\) bounded by (3.25):
$$\begin{aligned} \left\| e^{(n)}\right\| _2\le & {} n \left( \frac{\pi |m_\theta |}{N_\theta }\right) ^{2d_\theta +2} \frac{4\sum _{k=-d_b}^{d_b+1}|L^{d_b}_k(\alpha _\varphi )| \alpha _{\theta ,k}(1-\alpha _{\theta ,k})}{\sqrt{\pi (d_\theta +1)}} \nonumber \\&\quad +\, n\left( \frac{\pi |n_b|}{N_\varphi }\right) ^{2d_b+2} \frac{4\alpha _\varphi (1-\alpha _\varphi )}{\sqrt{\pi (d_b+1)}}. \end{aligned}$$
(3.27)
We notice that for sufficiently small values of
\(b_\theta \) we have
\(r_{\theta ,k}=0\) and
\(\alpha _{\theta ,k}(1-\alpha _{\theta ,k})\propto |b_\theta |\). Therefore, in the limit as
\(b_\theta \rightarrow 0\) the first error term goes to zero and we recover the classical error bound for 1D semi-Lagrangian schemes with
\(n_b=n_\varphi \). In the following discussion we will assume that
\(b_\theta \ne 0\).
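In this limit the scheme reduces to 1D semi-Lagrangian advection in \(\varphi \). The following sketch (grid size, mode number, degree, velocity, and time step are all invented for illustration) advects a harmonic with degree-\((2d+1)\) Lagrange interpolation and checks the measured \(L^2\) error against the bound \(n\,|A_1|\) with \(n_b = n_\varphi \):

```python
import cmath
import math

# Illustration of the b_theta -> 0 limit: pure 1D advection in phi, solved
# with a semi-Lagrangian step using degree-(2d+1) Lagrange interpolation.
# Grid size, mode number, degree, velocity, and time step are all invented.
N, n_phi, d = 64, 5, 1
b, n_steps = 1.0, 20
dphi = 2 * math.pi / N
dt = 0.3 * dphi / b                 # so that -b*dt = dphi*(-1 + 0.7)

x = -b * dt / dphi
r = math.floor(x)                   # integer shift
alpha = x - r                       # fractional offset in [0, 1)

def lagrange(k, a):
    v = 1.0
    for l in range(-d, d + 2):
        if l != k:
            v *= (a - l) / (k - l)
    return v

w = [lagrange(k, alpha) for k in range(-d, d + 2)]
f = [cmath.exp(1j * n_phi * 2 * math.pi * i / N) for i in range(N)]

for _ in range(n_steps):            # semi-Lagrangian update, periodic grid
    f = [sum(w[k + d] * f[(i + r + k) % N] for k in range(-d, d + 2))
         for i in range(N)]

exact = [cmath.exp(1j * n_phi * (2 * math.pi * i / N - b * n_steps * dt))
         for i in range(N)]
err = math.sqrt(sum(abs(u - v) ** 2 for u, v in zip(f, exact)) / N)

# Global error bound n*|A_1| from (3.26)/(3.27) with n_b = n_phi:
bound = n_steps * (math.pi * n_phi / N) ** (2 * d + 2) \
        * 4 * alpha * (1 - alpha) / math.sqrt(math.pi * (d + 1))
print(err, bound)  # measured error stays below the bound
```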