
01-09-2021 | Issue 3/2021 Open Access

# Analysis of the spectral symbol associated to discretization schemes of linear self-adjoint differential operators

Journal: Calcolo, Issue 3/2021
Author: Davide Bianchi
Important notes: Partially supported by INdAM-GNAMPA and INdAM-GNCS.

## Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## 1 Introduction

If $$\mathscr {L}\,^{(n)}$$ is a matrix that discretizes a linear self-adjoint differential operator $$\mathscr {L}$$, obtained by a discretization scheme like Finite Differences (FD), Finite Elements (FE), Isogeometric Galerkin Analysis (IgA), etc., several problems require that the spectrum of $$\mathscr {L}\,^{(n)}$$ converges uniformly to the (point) spectrum of the operator $$\mathscr {L}$$. More precisely, let us set the notation: for any fixed $$k,j \in {\mathbb {N}}\setminus \{0\}$$, indicate with $$\lambda _{k}$$ and $$\lambda _{-j}$$ the k-th nonnegative and the j-th negative real eigenvalue of a given operator, respectively. Given a $$d_n\times d_n$$ matrix $$\mathscr {L}\,^{(n)}$$ with $$d_n$$ real eigenvalues, denote with $$d^+_n$$ the sum of the null and positive indices of inertia and with $$d^-_n$$ the negative index of inertia, i.e.,
\begin{aligned} d^+_n:= \left| \left\{ \lambda \left( \mathscr {L}\,^{(n)}\right) \, : \, \lambda \left( \mathscr {L}\,^{(n)}\right) \ge 0 \right\} \right| , \qquad d^-_n:= \left| \left\{ \lambda \left( \mathscr {L}\,^{(n)}\right) \, : \, \lambda \left( \mathscr {L}\,^{(n)}\right) < 0 \right\} \right| . \end{aligned}
Clearly, $$d_n = d^-_n + d^+_n$$. Finally, sort the eigenvalues in nondecreasing order, that is,
\begin{aligned} \ldots \le \lambda _{-j}^{(n)} \le \cdots \le \lambda _{-1}^{(n)}< 0 \le \lambda _1^{(n)} \le \cdots \le \lambda _k^{(n)} \le \ldots \end{aligned}
With this notation, the property of uniform spectral convergence we mentioned above translates into asking that
\begin{aligned} \mathscr {E}:= \limsup _{n \rightarrow \infty }\mathscr {E}_n=0, \qquad \mathscr {E}_n:=\max _{\begin{array}{c} k=-d^-_n,\ldots , d^+_n\\ k\ne 0 \end{array}}\left| \frac{\lambda _k^{(n)}}{\lambda _k}-1\right| , \end{aligned}
(1.1)
where n is the mesh fineness parameter, and $$\lambda _k^{(n)}$$ and $$\lambda _{k}$$ are the k-th eigenvalues of the discretized and the continuous operator, respectively. Typically, $$\mathscr {E}>0$$: the relative error estimates for the eigenvalues and eigenfunctions are good only for the lowest modes, that is, for $$|k|=1,\ldots , K_n$$, where $$K_n/d_n\rightarrow \sigma$$ as $$n\rightarrow \infty$$ with $$\sigma \ll 1$$. In general, a large portion of the eigenvalues, the so-called higher modes, are not approximations of their continuous counterparts in any meaningful sense. This may negatively affect the solutions obtained by discrete approximations of elliptic boundary-value problems, or of parabolic and hyperbolic initial-value problems: in these cases, all modes may contribute to the solution to some extent, and inaccuracies in the approximation of the higher modes cannot always be ignored. In this regard, see for example the spectral-gap problem of the one-dimensional wave equation for the uniform observability of control waves [8, 20, 28, 35] or structural engineering problems [27, Sects. 3–6].
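As a concrete illustration of (1.1) (our sketch, not part of the paper's experiments), take $$\mathscr {L}=-\mathrm {d}^2/\mathrm {d}x^2$$ on (0, 1) with Dirichlet BCs, whose eigenvalues are $$\lambda _k=k^2\pi ^2$$, discretized by the classical second-order central FD scheme. Both spectra are known in closed form, and the maximum relative error is attained at the highest mode, approaching $$1-4/\pi ^2\approx 0.595$$:

```python
import numpy as np

# Second-order central FD for -u'' on (0,1), Dirichlet BCs, d_n interior nodes.
d_n = 1000
h = 1.0 / (d_n + 1)
k = np.arange(1, d_n + 1)

lam_disc = (4.0 / h**2) * np.sin(k * np.pi * h / 2.0)**2  # closed-form FD eigenvalues
lam_cont = (k * np.pi)**2                                  # eigenvalues of the continuous operator

E_n = np.max(np.abs(lam_disc / lam_cont - 1.0))
print(E_n)  # close to 1 - 4/pi^2: the higher modes are poorly approximated
```

The relative error is monotone in k here, so the maximum sits at $$k=d_n$$; refining the mesh alone does not drive $$\mathscr {E}_n$$ to zero.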
In this setting, the theory of Generalized Locally Toeplitz (GLT) sequences provides the necessary tools to understand whether the methods used to discretize the operator $$\mathscr {L}$$ are effective in approximating the whole spectrum. The GLT theory originated from the seminal work of P. Tilli on Locally Toeplitz sequences [ 48] and was later developed by S. Serra-Capizzano in [ 41, 42]. It was devised to compute and analyze the spectral distribution of matrices arising from the numerical discretization of integral equations and differential equations. Let $$\hat{\mathscr {L}}\,^{(n)}$$ be the discrete operator normalized by an appropriate power of $$d_n$$, which depends on the dimension of the underlying space and on the maximum order of derivatives involved. It usually happens that the sequence of matrices $$\left\{ \hat{\mathscr {L}\,}^{(n)} \right\} _{n}$$ enjoys an asymptotic spectral distribution as $$n\rightarrow \infty$$, i.e., as the mesh goes to zero. More precisely, for any test function $$F(t) \in C_c({\mathbb {C}})$$ it holds that
\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{d_n} \sum _{\begin{array}{c} k=-d^-_n\\ k\ne 0 \end{array}}^{d^+_n}F\left( \lambda _k\left( \hat{\mathscr {L}\,}^{(n)}\right) \right) = \frac{1}{m(D)}\int _D F(\omega ({\varvec{y}}))m(d{\varvec{y}}), \end{aligned}
where $$\lambda _k\left( \hat{\mathscr {L}\,}^{(n)}\right) , k=-d^-_n,\ldots ,-1,1,\ldots ,d^+_n$$ are the eigenvalues of the normalized operator $$\hat{\mathscr {L}\,}^{(n)}$$ and $$\omega : D\subset {\mathbb {R}}^d \rightarrow {\mathbb {C}}$$ is referred to as the spectral symbol of the sequence $$\left\{ \hat{\mathscr {L}\,}^{(n)} \right\} _{n}$$, see [49, Eq. (3.2)] in relation to Toeplitz matrices. The GLT theory allows one to compute the spectral symbol $$\omega$$ related to $$\mathscr {L}\,^{(n)}$$, especially if the numerical method employed to produce $$\mathscr {L}\,^{(n)}$$ belongs to the family of the so-called local methods, such as FD methods, FE methods and collocation methods with locally supported basis functions.
In several recent papers it was suggested to use a sampling of the spectral symbol to approximate the spectrum of the discrete matrix operators $$\hat{\mathscr {L}\,}^{(n)}$$ and $$\mathscr {L}\,^{(n)}$$. Unfortunately, this approach is not successful in general, as we will show with an example. The main reference is the paper by T. J. R. Hughes and coauthors [24], which reviews the state of the art of the symbol-based analysis of the eigenvalue distribution carried out in the framework of the isogeometric Galerkin approximation (IgA). Inspired by the latter work [24], we show that the symbol $$\omega$$ can measure the maximum spectral relative error $$\mathscr {E}$$ defined in (1.1), and that it suggests a suitable (non-uniform) grid which, if coupled with an increasing refinement of the order of accuracy of the scheme, guarantees $$\mathscr {E}=0$$.
The paper is organized as follows.
• In Sect. 2, the spectral symbol $$\omega$$ and the monotone rearrangement $$\omega ^*$$ are introduced.
• In Sect. 3 we present an asymptotic result, Theorem 3.1, which connects the eigenvalue distribution of a matrix sequence and the monotone rearrangement of its spectral symbol. It is the main tool for the results in Sect. 4. In particular, under suitable regularity assumptions on $$\omega$$, thanks to Theorem 3.2 and Theorem 3.3 we prove that
\begin{aligned} \mathscr {E}=\sup _{\begin{array}{c} x\in (0,1)\\ x:\zeta ^{*}(x)\ne 0 \end{array}}\left| \frac{\omega ^{*}(x)}{\zeta ^{*}(x)} -1\right| , \end{aligned}
(1.2)
where $$\zeta$$ is the (normalized) Weyl distribution function of the eigenvalues of $$\mathscr {L}\,$$ and $$\omega$$ is the spectral symbol associated to the numerical scheme applied to discretize $$\mathscr {L}\,$$.
• Section 4 is devoted entirely to numerical experiments and applications. The validity of (1.2) is shown in Table 2 and Fig. 2. Moreover, in Sect. 4.1.1 we provide an example showing that an accurate approximation of the eigenvalues of a differential operator cannot, in general, be obtained by just uniformly sampling the spectral symbol $$\omega$$. In Sects. 4.1.2 and 4.1.3 we generalize the aforementioned results to the case of central FD and IgA methods of higher order. By means of a suitable non-uniform grid, suggested by the spectral symbol and combined with an increasing order of approximation, we obtain (1.1), namely $$\mathscr {E}=0$$. Finally, in Sect. 4.2, we describe a simple application to the general d-dimensional Laplacian.
• Section 5 draws conclusions and outlines directions for future research.
Because the results presented in Sects. 3 and 4 are rather technical, all the proofs are collected in the appendix to improve the fluency of the reading. The appendix is organized as follows.
• Section A collects the proofs of the results presented in Sect. 3.
• Section B provides rigorous proofs for some of the numerical evidence observed in Sect. 4.
• Section C provides the FD numerical scheme used in Sects. 4.1.1 and 4.1.2.
• Section D lists some known results about the IgA numerical scheme used in Sect. 4.1.3.

## 2 Spectral symbol and monotone rearrangement

In this section we provide the definitions of spectral symbol of a sequence of matrices and its monotone rearrangement, which are the main tools we will use throughout this paper to study the asymptotic spectral distribution of sequences of discretization matrices. We will use the notation $$X^{(n)},Y^{(n)}$$ to denote general $$d_n\times d_n$$ square matrices, while $$T^{(n)}$$ will be used for Toeplitz matrices, i.e., matrices with constant elements along their diagonals: $$\left( T^{(n)}\right) _{i,j}= t_{i-j}$$ for all $$i,j=1,\ldots ,d_n$$, and with
\begin{aligned} \mathbf{t }=\left[ t_{-d_n+1}, \ldots , t_0, \ldots , t_{d_n-1}\right] \in {\mathbb {C}}^{2d_n-1}.\end{aligned}
If $$t_k$$ is the k-th Fourier coefficient of a complex integrable function f defined over the interval $$[-\pi , \pi ]$$, then $$T^{(n)}= T^{(n)}(f)$$ is said to be the Toeplitz matrix generated by f.
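For instance, the Toeplitz matrix generated by $$f(\theta )=2-2\cos \theta$$, whose only nonzero Fourier coefficients are $$t_0=2$$ and $$t_{\pm 1}=-1$$, is the familiar tridiagonal matrix of the second-order FD discretization of $$-u''$$. A minimal sketch (ours, not from the paper) building it directly from the rule $$\left( T^{(n)}\right) _{i,j}=t_{i-j}$$:

```python
import numpy as np

d_n = 6
# Fourier coefficients of f(theta) = 2 - 2*cos(theta): t_0 = 2, t_{+-1} = -1.
t = {0: 2.0, 1: -1.0, -1: -1.0}  # all other t_k vanish

# Build (T)_{i,j} = t_{i-j}: entries are constant along each diagonal.
T = np.array([[t.get(i - j, 0.0) for j in range(d_n)] for i in range(d_n)])
print(T)
```

Since f is real-valued, $$t_{-k}=\overline{t_k}$$ and the resulting matrix is Hermitian (here real symmetric).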
With the symbol $$\left\{ X^{(n)}\right\} _n$$ we will denote a sequence of square matrices with increasing dimensions $$d_n\times d_n$$, i.e., such that $$d_n \rightarrow \infty$$ as $$n \rightarrow \infty$$. When we compare the spectra of two sequences of matrices, we will assume that they have the same dimension. The eigenvalues will be sorted in nondecreasing order, that is,
\begin{aligned} \lambda _{-d_n^-}^{(n)}\le \cdots \le \lambda _{-j}^{(n)} \le \cdots \le \lambda _{-1}^{(n)}< 0 \le \lambda _1^{(n)} \le \cdots \le \lambda _k^{(n)} \le \cdots \le \lambda _{d_n^+}^{(n)}, \end{aligned}
where $$d_n^-$$ is the negative index of inertia and $$d_n^+$$ is the sum of the null and positive indices of inertia. Clearly, $$d_n= d_n^- + d_n^+$$. If the eigenvalues do not depend on the parameter n, then we will omit the superscript ( n).
We will consider all the Euclidean spaces equipped with the usual Lebesgue measure $$m(\cdot )$$. We will not specify the dimension of $$m(\cdot )$$ by a subscript since it will be clear from the context.

### 2.1 Spectral symbol

The following definition of spectral symbol has been slightly modified in accordance with our notation and purposes.
Definition 2.1
(Spectral symbol) Let $$\left\{ X^{(n)}\right\} _n$$ be a sequence of matrices and let $$\omega :D\subseteq {\mathbb {R}}^d\rightarrow {\mathbb {K}}$$ ( $${\mathbb {K}}={\mathbb {R}}$$ or $${\mathbb {C}}$$) be a measurable function, where D is a measurable set with $$0<m(D)<\infty$$, such that the Lebesgue integral
\begin{aligned} \int _{D} \omega ({{\varvec{y}}})m(d{{\varvec{y}}}) \end{aligned}
exists, finite or infinite. We say that $$\{X^{(n)}\}_n$$ is distributed like $$\omega$$ in the sense of the eigenvalues, in symbols $$\{X^{(n)}\}_n \sim _\lambda \omega$$, if
\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{d_n}\sum _{\begin{array}{c} k=-d^-_n\\ k\ne 0 \end{array}}^{d^+_n}F\left( \lambda _k\left( X^{(n)}\right) \right) =\frac{1}{m(D)}\int _{D} F\left( \omega ({{\varvec{y}}})\right) m(d{{\varvec{y}}}),\qquad \forall F\in C_c(\mathbb K). \end{aligned}
(2.1)
We will call $$\omega$$ the (spectral) symbol of $$\{X^{(n)}\}_n$$.
Relation (2.1) is satisfied, for example, by Hermitian Toeplitz matrices generated by real-valued functions $$\omega \in \text {L}^1([-\pi ,\pi ])$$, i.e., $$\left\{ T^{(n)}(\omega ) \right\} _n\sim _\lambda \omega$$, see [26, 49, 50]. For a general overview on Toeplitz operators and spectral symbols, see [10, 11]. It is interesting to highlight that matrices with a Toeplitz-like structure naturally arise when problems with a translation-invariance property, such as linear differential operators with constant coefficients, are discretized over a uniform grid.
Remark 1
If D is compact, $$\omega$$ continuous, and $$\lambda _k \in R_\omega = \left[ \min \omega , \max \omega \right]$$ for every $$k\in {\mathbb {N}}$$, then taking $$F(t)=t\chi _{R_\omega }(t)$$, with $$\chi _{R_\omega }$$ a $$C^\infty$$ cut-off such that $$\chi _{R_\omega }(t)\equiv 1$$ on $$R_\omega$$, gives
\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{d_n} \sum _{\begin{array}{c} k=-d^-_n\\ k\ne 0 \end{array}}^{d^+_n}\lambda _k\left( X^{(n)}\right) = \frac{1}{m(D)}\int _{D} \omega ({{\varvec{y}}})m(d{{\varvec{y}}}). \end{aligned}
(2.2)
Since the Riemann sum over equispaced points converges to the integral on the right-hand side of the above formula, (2.2) could suggest that the eigenvalues $$\lambda _k\left( X^{(n)}\right)$$ can be approximated by a pointwise evaluation of the symbol $$\omega ({{\varvec{y}}})$$ over an equispaced grid of D, for $$n\rightarrow \infty$$, except for at most $$o(d_n)$$ outliers, see Definition 2.3. This is mostly the content of [22, Remark 3.2] and [24, Sect. 2.2].
We remark that in the special case of Toeplitz matrices generated by regular enough functions $$\omega$$,
\begin{aligned} \lambda _k\left( T^{(n)}(\omega )\right) = \omega \left( \frac{k\pi }{d_n+1}\right) + O(d_n^{-1}) \qquad \text{ for } \text{ every } k=1,\ldots ,d_n, \end{aligned}
see [ 51] and [ 9]. In some sense, this justifies the informal meaning given above to the spectral symbol.
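For the symbol $$\omega (\theta )=2-2\cos \theta$$, which generates the tridiagonal FD matrix of $$-u''$$, the sampling formula above happens to be exact. The following sketch (ours, not part of the paper's experiments) checks it numerically:

```python
import numpy as np

d_n = 50
omega = lambda theta: 2.0 - 2.0 * np.cos(theta)

# Tridiagonal Toeplitz matrix T^(n)(omega): 2 on the diagonal, -1 off-diagonal.
T = 2.0 * np.eye(d_n) - np.eye(d_n, k=1) - np.eye(d_n, k=-1)
eigs = np.sort(np.linalg.eigvalsh(T))

# Equispaced sampling of the symbol at theta_k = k*pi/(d_n + 1).
k = np.arange(1, d_n + 1)
samples = omega(k * np.pi / (d_n + 1))

err = np.max(np.abs(eigs - samples))
print(err)  # machine precision here; O(1/d_n) for general regular symbols
```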
Unfortunately, the discretization of a linear differential operator does not always possess a Toeplitz-like structure. Nevertheless, the GLT theory provides tools to prove the validity of (2.1) for more general matrix sequences. For a review of the GLT theory and its applications we mainly refer to [22, 23], and to [5, 6]. We conclude this subsection with two definitions.
Definition 2.2
(Essential range) Let $$\omega$$ be a real valued measurable function and define the set $$R_\omega \subset {\mathbb {R}}$$ as
\begin{aligned} t\in R_\omega \quad \Longleftrightarrow \quad m\left( \left\{ {\varvec{y}} \in D \, : \, \left| \omega ({\varvec{y}}) - t\right| < \epsilon \right\} \right)>0 \quad \forall \epsilon >0. \end{aligned}
We call $$R_\omega$$ the essential range of $$\omega$$. Notice that $$R_\omega$$ is closed.
Definition 2.3
(Outliers) Given a matrix sequence $$\{X^{(n)}\}$$ such that $$\left\{ X^{(n)}\right\} _{n} \sim _{\lambda } \omega$$, if $$\lambda _k\left( X^{(n)}\right) \notin R_\omega$$ we call it an outlier.

### 2.2 Monotone rearrangement

Dealing with a univariate and monotone spectral symbol $$\omega ({{\varvec{y}}}) = \omega (\theta )$$ has several advantages. Unfortunately, in general $$\omega$$ is multivariate or not monotone. Nevertheless, it is possible to consider a rearrangement $$\omega ^*: (0,1) \rightarrow (\inf R_\omega , \sup R_\omega )$$ such that $$\omega ^*$$ is univariate, monotone nondecreasing and still satisfies the limit relation ( 2.1). This can be achieved in the following way.
Definition 2.4
Let $$\omega : D \subset {\mathbb {R}}^d \rightarrow {\mathbb {R}}$$ be measurable. Define
\begin{aligned} \omega ^* : (0,1)\rightarrow R_\omega , \qquad \omega ^*(x) = \inf \left\{ t \in R_\omega \,:\, \phi _\omega (t)> x\right\} \end{aligned}
(2.3)
where
\begin{aligned} \phi _\omega : {\mathbb {R}}\rightarrow [0,1], \qquad \phi _\omega (t) := \frac{1}{m(D)}m \left( \left\{ {{\varvec{y}}}\in D \, : \, \omega ({{\varvec{y}}}) \le t \right\} \right) , \end{aligned}
(2.4)
and where, in case of bounded $$R_\omega$$, we consider the extension $$\omega ^*: [0,1] \rightarrow R_\omega$$.
In analysis, $$\omega ^*$$ is called the monotone rearrangement (see [44, pg. 189]), while in probability theory it is called the (generalized) inverse distribution function (see [47, pg. 260]). For “historical” reasons we will use the analysts’ name; see [18, Definition 3.1 and Theorem 3.3] and [39], where the monotone rearrangement was first introduced in the context of spectral symbol functions. Clearly, $$\omega ^*$$ is a.e. unique, univariate, and monotone increasing, which makes it a good choice for an equispaced sampling. On the other hand, $$\omega ^*$$ may not have an analytical expression, or it may be infeasible to compute; therefore, an approximation $$\omega ^*_r$$ is often needed. In Algorithm 1 we summarize the steps presented in [22, Example 10.2] and [24, Sect. 3] to approximate the eigenvalues $$\lambda _k\left( X^{(n)}\right)$$ by means of an equispaced sampling of the rearranged symbol $$\omega ^*$$ (or its approximated version $$\omega ^*_r$$). For the sake of clarity and without loss of generality, we suppose $$D=[0,1]\times [-\pi ,\pi ]$$ and $$\omega$$ continuous. As a standard result in the approximation of monotone rearrangements, it holds that $$\Vert \omega ^*_{r(n)} - \omega ^* \Vert _\infty \rightarrow 0$$ as $$n\rightarrow \infty$$, see [13, 45]. For a detailed survey about monotone rearrangements we refer to the recent work by G. Talenti [46].
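The core of such an approximation can be sketched as follows (our illustration, using a simple equispaced sampling of D; the cited algorithm may differ in details): evaluate $$\omega$$ on a grid of D and sort the values in nondecreasing order. For $$\omega (\theta )=2-2\cos \theta$$ on $$D=[-\pi ,\pi ]$$ the rearrangement is known in closed form, $$\omega ^*(x)=2-2\cos (\pi x)$$, which provides a check:

```python
import numpy as np

omega = lambda theta: 2.0 - 2.0 * np.cos(theta)

# Approximate monotone rearrangement: sample omega on an equispaced grid of
# D = [-pi, pi], then sort the values in nondecreasing order.
r = 10_000
theta = np.linspace(-np.pi, np.pi, r)
omega_r = np.sort(omega(theta))          # omega*_r evaluated at x_j ~ j/(r-1)

# Exact rearrangement for this symbol: omega*(x) = 2 - 2*cos(pi*x).
x = np.linspace(0.0, 1.0, r)
err = np.max(np.abs(omega_r - (2.0 - 2.0 * np.cos(np.pi * x))))
print(err)  # sup-norm error, vanishing as r grows
```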

## 3 Asymptotic spectral distribution

In this section we present the main results about the asymptotic spectral distribution of the eigenvalues of a matrix sequence with a given spectral symbol. Since the proofs are rather technical, they are collected in Sect. A so as not to break the flow of the reading.
Theorem 3.1 connects the monotone rearrangement of the symbol function to the asymptotic eigenvalue distribution of a matrix sequence. Theorem 3.2 provides an analytic way to measure the maximum spectral relative error between two sequences of matrices, see Definition 3.1. Finally, Theorem 3.3 exploits the previous results to give an effective measure for the maximum spectral relative error that characterizes a discretization scheme, when approximating a linear self-adjoint differential operator.

### 3.1 Monotone rearrangement and spectral distribution

Let $$\{X^{(n)}\}_n$$ be a matrix sequence such that $$\{X^{(n)}\}_n \sim _\lambda \omega$$ and define the (discrete) eigenvalues counting function $$N(X^{(n)},\cdot ) : {\mathbb {R}}\rightarrow \{0,1,\ldots ,d_n\}$$,
\begin{aligned} N\left( X^{(n)},t\right) := \left| \left\{ k=-d^-_n,\ldots ,-1, 1,\ldots ,d^+_n \, : \, \lambda _k \left( X^{(n)}\right) \le t \right\} \right| . \end{aligned}
Since the monotone rearrangement takes as argument values in [0, 1], it is helpful to define a new indexing $${\hat{k}}$$ associated to $$X^{(n)}$$ such that $${\hat{k}}/d_n \in [0,1]$$. Let us set
\begin{aligned}&\vartheta _{X^{(n)}} : \left\{ - d^-_n\left( X^{(n)}\right) , \ldots ,-1,1,\ldots , d^+_n\left( X^{(n)}\right) \right\} \rightarrow \{1,\ldots , d_n\},\nonumber \\&{\hat{k}}=\vartheta _{X^{(n)}}(k):= {\left\{ \begin{array}{ll} k + d^-_n\left( X^{(n)}\right) +1 &{} \text{ if } k <0,\\ k + d^-_n\left( X^{(n)}\right) &{} \text{ if } k >0. \end{array}\right. } \end{aligned}
(3.1)
We have the following asymptotic relations.
Theorem 3.1
(Discrete Weyl’s law) Let $$\{X^{(n)}\}_n$$ be a matrix sequence such that $$\{X^{(n)}\}_n \sim _\lambda \omega$$. Then
\begin{aligned} \left\{ X^{(n)}\right\} _n \sim _\lambda \omega ^*(x), \quad x\in (0,1), \end{aligned}
(3.2)
and
\begin{aligned} \lim _{n \rightarrow \infty }\frac{N\left( X^{(n)},t\right) }{d_n} = \phi _\omega (t) \quad \text{ for } \text{ every } \text{ point } \text{ of } \text{ continuity } \text{ of } \phi _{\omega }. \end{aligned}
(3.3)
Suppose now that $$\phi _{\omega }$$ is continuous or, equivalently, that
\begin{aligned} m\left( \left\{ {{\varvec{y}}}\,:\, \omega ({{\varvec{y}}})=t\right\} \right) =0 \quad \forall \, t \in {\mathbb {R}}. \end{aligned}
Let $${\hat{k}}={\hat{k}}(n)$$ as in ( 3.1) be such that the sequence $$\{{\hat{k}}(n)/d_n\}_n$$ converges from the left to $$x_0 \in (0,1)$$, as $$n\rightarrow \infty$$. Then, there exists the limit for the sequence $$\left\{ \lambda _{{\hat{k}}(n)}\left( X^{(n)}\right) \right\} _n$$ and it holds that
\begin{aligned} \lim _{x\rightarrow x_0^-} \omega ^*\left( x\right) =\sup \left\{ t \in R_\omega \, : \, t\le \lim _{n \rightarrow \infty } \lambda _{{\hat{k}}(n)}\left( X^{(n)}\right) \right\} . \end{aligned}
(3.4)
In particular, if $$\omega ^*$$ is continuous at $$x_0$$,
\begin{aligned} \omega ^*\left( x_0\right) =\lim _{n \rightarrow \infty } \lambda _{{\hat{k}}(n)}\left( X^{(n)}\right) . \end{aligned}
(3.5)
Finally, if $$\lambda _{{\hat{k}}(n)}\left( X^{(n)}\right) \ge \inf \left( R_\omega \right)$$ ( $$\le \sup (R_\omega )$$) eventually, then ( 3.5) holds for $$x_0=0$$ ( $$x_0=1$$) as well.
Proof
See Sect. A. $$\square$$
In some sense, the limit relation (2.2) can be viewed as the strong law of large numbers for specially chosen sequences of dependent complex/real-valued random variables $$\left\{ \lambda _{k(n)}\left( X^{(n)}\right) \right\} _n$$. See [26] for the link between the spectrum of Hermitian matrices and equally distributed sequences (in the sense of Weyl) as defined in [31, Definition 7.1], and [34] for a recent survey about equidistribution from a probabilistic point of view.
Corollary 3.1
With the same notation as in Theorem 3.1 and assuming $$\phi _{\omega }$$ continuous, it holds that
\begin{aligned} \lim _{n \rightarrow \infty }\frac{\left| \left\{ k=-d^-_n,\ldots ,-1,1,\ldots ,d^+_n \, : \, \lambda _k \left( X^{(n)}\right) \notin R_\omega \right\} \right| }{d_n} = 0, \end{aligned}
that is, the number of possible outliers is $$o(d_n)$$. Moreover,
\begin{aligned} \lim _{n \rightarrow \infty }\frac{\left| \left\{ k=-d^-_n,\ldots ,-1,1,\ldots ,d^+_n \, : \, \lambda _k \left( X^{(n)}\right) \le t,\, \lambda _k \left( X^{(n)}\right) \in R_\omega \right\} \right| }{d_n} = \phi _\omega (t). \end{aligned}
Proof
See Sect. A. $$\square$$
Corollary 3.2
With the same notation as in Theorem 3.1, assume that $$\phi _{\omega }$$ is continuous and $$\omega ^*$$ is absolutely continuous. Let $$\omega _n^*$$ be the (pseudo) inverse function of $$\frac{N\left( X^{(n)}, \cdot \right) }{d_n}$$; then $$\left\| \omega _n^* - \omega ^*\right\| _\infty \rightarrow 0$$ as $$n\rightarrow \infty$$. Let $$h : {\mathbb {N}}\rightarrow {\mathbb {R}}$$ be such that
$$\lim _{n \rightarrow \infty }h(d_n)=\infty , \qquad \lim _{n \rightarrow \infty }\frac{\left\| \omega _n^* - \omega ^*\right\| _\infty }{\frac{1}{h(d_n)}}=0.$$
Let $$g: (\inf R_\omega ,\sup R_\omega ) \rightarrow {\mathbb {R}}$$ be a differentiable real function and let $$\{\hat{k}(n)\}_n$$ be a sequence of integers such that
(i)
$$\frac{\hat{k}(n)}{d_n} \rightarrow x_0 \in [0,1]$$;

(ii)
$$\lambda _{\hat{k}(n)+1}\left( X^{(n)}\right) > \lambda _{\hat{k}(n)}\left( X^{(n)}\right) \in (\inf R_\omega ,\sup R_\omega )$$ eventually for $$n \rightarrow \infty$$.

Then
$$\lim _{n\rightarrow \infty } h(d_n)\left[ g\left( \lambda _{\hat{k}(n)+\left\lceil \frac{d_n}{h(d_n)}\right\rceil }\left( X^{(n)}\right) \right) - g\left( \lambda _{\hat{k}(n)}\left( X^{(n)}\right) \right) \right] = \lim _{x\rightarrow x_0} \left( g(\omega ^*(x))\right) '$$
almost everywhere.
Proof
See Sect. A. $$\square$$
Corollary 3.3
Let $$\phi _{\omega }$$ and $$\omega ^*$$ be continuous, and suppose that $$R_\omega$$ is bounded. Then, if eventually there are no outliers, the absolute error between a uniform sampling of $$\omega ^*$$ and the eigenvalues of $$X^{(n)}$$ converges to zero, namely
\begin{aligned} \max _{\begin{array}{c} k=-d^-_n,\ldots ,d^+_n\\ k\ne 0 \end{array}} \left\{ \left| \lambda _{k}\left( X^{(n)}\right) - \omega ^*\left( \frac{{\hat{k}}}{d_n+1}\right) \right| \right\} \rightarrow 0 \qquad \text{ as } n\rightarrow \infty . \end{aligned}
Proof
See Sect. A. $$\square$$
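Corollary 3.3 can be visualized on a model Toeplitz sequence (our sketch; for this sequence the sampling happens to be exact, while in general the absolute error only vanishes asymptotically). For $$T^{(n)}$$ generated by $$\omega (\theta )=2\cos \theta$$ one has $$\omega ^*(x)=-2\cos (\pi x)$$, and the spectrum contains both negative and positive modes, so the common index $${\hat{k}}$$ of (3.1) is exercised:

```python
import numpy as np

d_n = 50
# Toeplitz matrix generated by omega(theta) = 2*cos(theta): t_{+-1} = 1, t_0 = 0.
T = np.eye(d_n, k=1) + np.eye(d_n, k=-1)
eigs = np.sort(np.linalg.eigvalsh(T))     # nondecreasing: negative then positive modes

# Monotone rearrangement of 2*cos(theta) on [-pi, pi]:
# phi(t) = 1 - arccos(t/2)/pi, hence omega*(x) = -2*cos(pi*x).
k_hat = np.arange(1, d_n + 1)
sampling = -2.0 * np.cos(np.pi * k_hat / (d_n + 1))

err = np.max(np.abs(eigs - sampling))
print(err)  # uniform sampling of omega* versus the sorted eigenvalues
```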

### 3.2 Local and maximum spectral relative errors

Exploiting the results obtained in the preceding subsection, we can now prescribe a way to measure the maximum relative error between two sequences of eigenvalues. Given two sequences of matrices $$\{X^{(n)}\}_n, \{Y^{(n)}\}_n$$ of the same dimension $$d_n$$, we will use the notation $${\hat{k}}$$ as in (3.1) to have an index ranging over a common set, $${\hat{k}}\in \left\{ 1,\ldots , d_n \right\}$$, for the eigenvalues associated with both $$X^{(n)}$$ and $$Y^{(n)}$$. Moreover, we define
\begin{aligned} \vartheta := \vartheta _{X^{(n)}}^{-1}\circ \vartheta _{Y^{(n)}}, \end{aligned}
(3.6)
where $$\vartheta _{(\cdot )}$$ was introduced in ( 3.1) as well. Let us observe that, if $$\lambda _j\left( Y^{(n)}\right)$$ is the j-th eigenvalue associated to $$Y^{(n)}$$ and $$\lambda _i\left( X^{(n)}\right)$$ is the i-th eigenvalue associated to $$X^{(n)}$$, then $$\vartheta (j)=i$$ if and only if $${\hat{j}}={\hat{i}}$$. See Fig. 1.
Definition 3.1
(Local and maximum spectral relative errors) Given two sequences of matrices $$\{X^{(n)}\}_n, \{Y^{(n)}\}_n$$ of the same dimension $$d_n$$, define the sequence
\begin{aligned} \mathscr {E}_n\left( X^{(n)}, Y^{(n)}\right) := \max _{{\hat{k}}=1,\ldots ,d_n}\left\{ \delta _{{\hat{k}}}^{(n)}\right\} , \end{aligned}
where
\begin{aligned} \delta _{{\hat{k}}}^{(n)}:={\left\{ \begin{array}{ll}\left| \frac{\lambda _{{\hat{k}}}\left( X^{(n)}\right) }{\lambda _{{\hat{k}}}\left( Y^{(n)}\right) }-1\right| &{} \text{ if } \lambda _{{\hat{k}}}\left( Y^{(n)}\right) \ne 0,\\ 0 &{} \text{ if } \lambda _{{\hat{k}}}\left( Y^{(n)}\right) =\lambda _{{\hat{k}}}\left( X^{(n)}\right) = 0,\\ \infty &{} \text{ if } \lambda _{{\hat{k}}}\left( X^{(n)}\right) \ne \lambda _{{\hat{k}}}\left( Y^{(n)}\right) =0. \end{array}\right. } \end{aligned}
We call $$\delta _{{\hat{k}}}^{(n)}$$ the local spectral relative error, and we call
\begin{aligned} \mathscr {E}\left( X^{(n)}, Y^{(n)}\right) :=\limsup _{n\rightarrow \infty } \mathscr {E}_n\left( X^{(n)}, Y^{(n)}\right) \end{aligned}
the maximum spectral relative error (MSRE). When both matrix sequences $$\{X^{(n)}\}_n$$, $$\{Y^{(n)}\}_n$$ are clear from the context, with a slight abuse of notation we will simply write $$\mathscr {E}$$; this applies in particular when we compare the eigenvalues of a discretized operator $$\mathscr {L}\,^{(n)}$$ with the eigenvalues of the continuous operator $$\mathscr {L}\,$$, whose first $$d_n$$ eigenvalues can be arranged as the diagonal entries of a diagonal matrix.
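Definition 3.1 translates directly into code. The helper below (hypothetical names, our sketch) computes the local errors $$\delta _{{\hat{k}}}^{(n)}$$ and $$\mathscr {E}_n$$ from two spectra already sorted and aligned via the common index $${\hat{k}}$$, following the three cases of the definition:

```python
import numpy as np

def local_spectral_relative_errors(lam_X, lam_Y):
    """delta_khat of Definition 3.1, for eigenvalues already sorted and
    aligned via the common index khat = 1, ..., d_n."""
    delta = np.empty(len(lam_Y))
    for i, (lx, ly) in enumerate(zip(lam_X, lam_Y)):
        if ly != 0:
            delta[i] = abs(lx / ly - 1.0)     # generic case
        elif lx == 0:
            delta[i] = 0.0                    # both eigenvalues vanish
        else:
            delta[i] = np.inf                 # only the reference eigenvalue vanishes
    return delta

lam_Y = np.array([1.0, 4.0, 9.0])    # e.g. first eigenvalues of a continuous operator
lam_X = np.array([0.9, 4.0, 10.0])   # e.g. eigenvalues of a discretization
delta = local_spectral_relative_errors(lam_X, lam_Y)
E_n = delta.max()                    # E_n(X, Y) = max over khat
print(delta, E_n)
```

Taking the limsup of $$\mathscr {E}_n$$ over n then gives the MSRE $$\mathscr {E}$$.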
For brevity, let us set $$\kappa ^-:= \inf _{n}\left\{ d_n^-\left( Y^{(n)}\right) \right\} , \kappa ^+:= \sup _{n}\left\{ d_n^+\left( Y^{(n)}\right) \right\}$$. Moreover, for any fixed $$x_0 \in [0,1]$$ let us define
\begin{aligned}&F:= \left\{ \left\{ a_{m}\right\} _{m} \, : \, \exists \lim _{m \rightarrow \infty } a_{m} \; \text{ finite } \text{ or } \text{ infinite } \right\} ,\\&G_{x_0}:= \left\{ \left\{ \delta _{\hat{k}(n_j)}^{(n_j)} \right\} _{n_j} \, : \, \lim _{n_j\rightarrow \infty } \frac{\hat{k}(n_j)}{d_{n_j}}= x_0 \; \text{ eventually } \text{ by } \text{ passing } \text{ to } \text{ a } \text{ subsequence } \right\} ,\\&H_{x_0}:=\left\{ \lim _{n_j \rightarrow \infty } \delta _{\hat{k}(n_j)}^{(n_j)} \, : \, \left\{ \delta _{\hat{k}(n_j)}^{(n_j)}\right\} _{n_j} \in F\cap G_{x_0} \right\} , \quad \sigma _{k}=\limsup _{n \rightarrow \infty }\left| \frac{\lambda _{\vartheta (k)}\left( X^{(n)}\right) }{\lambda _{k}\left( Y^{(n)}\right) }-1\right| . \end{aligned}
Theorem 3.2
Fix two sequences of matrices such that $$\{X^{(n)}\}_n \sim _\lambda \omega _1$$ and $$\{Y^{(n)}\}_n \sim _{\lambda } \omega _2$$. If
$$m\left( \left\{ {{\varvec{y}}}\,:\, \omega _1({{\varvec{y}}})=t\right\} \right) = m\left( \left\{ \varvec{z}\,:\, \omega _2(\varvec{z})=t\right\} \right) = 0 \quad \forall \, t \in {\mathbb {R}},$$
then
\begin{aligned} \mathscr {E}\left( X^{(n)},Y^{(n)}\right) \ge \max \left\{ \sup _{\begin{array}{c} x\in (0,1);\\ x:\omega _2^*(x)\ne 0 \end{array}} \left| \frac{\omega _1^*(x)}{\omega _2^*(x)} -1 \right| ; \sigma \right\} , \end{aligned}
(3.7)
where
\begin{aligned}&\sigma :={\left\{ \begin{array}{ll} 0 &{} \text{ if } \omega _2^*(x)\ne 0 \, \forall \, x\in [0,1],\\ \sup H_{x_0} &{} \text{ if } \exists \, x_0\in [0,1] \text{ such } \text{ that } \omega _2^*(x_0)=\omega _1^*(x_0)=0,\\ +\infty &{} \text{ if } \exists \, x_0\in [0,1] \text{ such } \text{ that } \omega _2^*(x_0)=0\ne \omega _1^*(x_0). \end{array}\right. } \end{aligned}
If $$\omega _2^*(x_0)=0$$ and
\begin{aligned} \limsup _{n \rightarrow \infty }\left| \left| \frac{\lambda _{\vartheta (k(n))}\left( X^{(n)}\right) }{\lambda _{k(n)}\left( Y^{(n)}\right) }-1\right| - \sigma _{k(n)} \right| =0 \quad \text{ for } \text{ every } k(n) \text{ such } \text{ that } \lim _{n \rightarrow \infty }\frac{k(n)}{d_n}=0, \end{aligned}
then
$$\sup H_{x_0}=\sup _{\begin{array}{c} k\in [\kappa ^-,\kappa ^+] \\ k\ne 0 \end{array}} \{\sigma _{k}\}.$$
Moreover, if $$\omega ^*_1, \omega ^*_2$$ are continuous, $$R_{\omega _1}$$ and $$R_{\omega _2}$$ are bounded and both the sequences do not have outliers, then equality holds in ( 3.7).
Proof
See Sect. A. $$\square$$

### 3.3 Linear self-adjoint differential operators and eigenvalue distribution

The asymptotic distribution of the eigenvalues of partial differential operators on general manifolds has been widely studied and developed, see for example [32, 38] and all the references therein. The topic is too vast to cover properly here; therefore, we concentrate our examples on a couple of cases: Sturm-Liouville operators in the one-dimensional case and elliptic self-adjoint operators in the multi-dimensional case, see Sect. 4.1. Nevertheless, the tools presented in this section can be applied to study how well a discretization scheme preserves the discrete spectrum of many classes of self-adjoint operators. The approach is the following: given an operator $$\mathscr {L}\,$$ and its discretized version $$\mathscr {L}\,^{(n)}$$, if $$\{h(d_n)\mathscr {L}\,^{(n)}\}_n \sim _{\lambda } \omega$$ for a function $$h : {\mathbb {N}}\rightarrow {\mathbb {R}}$$ (which depends on the dimension of the underlying space and the maximum order of derivatives involved), then study the limit of
\begin{aligned} \zeta _n(t)=\frac{N^{(n)}\left( \mathscr {L},t\right) }{d_n}:=\frac{\left| \left\{ k=-d^-_n,\ldots, -1, 1, \ldots, d^+_n \, : \, h(d_n)\lambda _k\left( \mathscr {L}\right) \le t\right\} \right| }{d_n}. \end{aligned}
(3.8)
We have the following result.
Theorem 3.3
Let $$\mathscr {L}$$ be a self-adjoint linear operator and let $$\mathscr {L}\,^{(n)}$$ be a $$d_n\times d_n$$ matrix obtained from $$\mathscr {L}$$ by a discretization scheme. Let $$d^-_n, d^+_n$$ be the negative and nonnegative indices of inertia of $$\mathscr {L}\,^{(n)}$$. Define
\begin{aligned} X^{(n)}:= \mathscr {L}\,^{(n)}, \qquad Y^{(n)}:=\mathop {\text {diag}}_{\begin{array}{c} k=-d^-_n,\ldots ,d^+_n\\ k\ne 0 \end{array}} \left\{ \lambda _{k}\left( \mathscr {L}\right) \right\} . \end{aligned}
Suppose that:
(i)
$$\left| \frac{\lambda _{\vartheta (k)}\left( \mathscr {L}^{(n)}\right) }{\lambda _{k}\left( \mathscr {L}\right) } -1\right| \rightarrow 0$$ as $$n \rightarrow \infty$$ for every $$k=k(n)$$ such that $$\frac{k(n)}{d_n}\rightarrow 0$$, where $$\vartheta$$ is defined in ( 3.6);

(ii)
$$\{h(d_n)\mathscr {L}\,^{(n)}\}_n \sim _{\lambda } \omega$$ for some fixed $$h : {\mathbb {N}}\rightarrow {\mathbb {R}}$$;

(iii)
$$m\left( \left\{ {{\varvec{y}}}\, : \, \omega ({{\varvec{y}}})=t \right\} \right) =0$$ for every $$t \in {\mathbb {R}}$$, and $$R_\omega$$ bounded;

(iv)
$$\lim _{n \rightarrow \infty } \zeta _n(t) = \zeta (t)$$, where $$\zeta _n$$ is defined in ( 3.8) and $$\zeta : {\mathbb {R}}\rightarrow [0,1]$$ is continuous and surjective.

Then
\begin{aligned} \mathscr {E}=\mathscr {E}\left( X^{(n)}, Y^{(n)}\right) \ge \sup _{\begin{array}{c} x\in [0,1]\\ x\,:\, \zeta ^*(x)\ne 0 \end{array}} \left| \frac{\omega ^*(x)}{\zeta ^*(x)} -1\right| . \end{aligned}
Moreover, if $$\omega ^*$$, $$\zeta ^*$$ are continuous and $$h(d_n)\lambda _k\left( \mathscr {L}\,^{(n)} \right) \in R_\omega$$ eventually, then equality holds.
Proof
See Sect. A. $$\square$$
The (normalized) Weyl function $$\zeta$$ is known for many kinds of differential operators; see the next section for a couple of examples.

## 4 Numerical experiments

We write $$\mathscr {L}\,^{(n,\eta )}$$ to denote a $$d_n\times d_n$$ square matrix which is the discretization of a linear differential operator $$\mathscr {L}$$ by means of a numerical scheme of approximation order $$\eta$$. When the approximation order $$\eta$$ is clear from the context, we omit it. If the discretized operator $$\mathscr {L}\,^{(n)}$$ is normalized by a constant depending on the mesh fineness parameter n, we denote it by $$\hat{\mathscr {L}\,}^{(n)}$$. We use the subscripts dir and BCs to denote a (discretized) linear differential operator with Dirichlet or generic boundary conditions, respectively. When it is necessary to highlight the dependency of the differential operator on a variable coefficient $$p({{\varvec{x}}})$$, we write it as a subscript. So, for example, the normalized discretization of a diffusion operator with Dirichlet BCs by means of the IgA scheme of order $$\eta$$ is denoted by
\begin{aligned} \hat{\mathscr {L}\,}^{(n,\eta )}_{\text {dir},p({{\varvec{x}}})}. \end{aligned}
All the computations are performed in MATLAB R2018b running on a desktop PC with an Intel i5-4460H CPU @3.20 GHz and 8 GB of RAM.

### 4.1 Application to Euler-Cauchy differential operator

We begin our analysis with a toy-model example. The main objectives of this subsection are:
• to give numerical evidence of Theorem 3.3, i.e., that the monotone rearrangement $$\omega ^*(x)$$ of the spectral symbol $$\omega ({{\varvec{y}}})$$ measures the maximum spectral relative error $$\mathscr {E}$$. See Sects. 4.1.1, 4.1.2 and 4.1.3;
• to disprove that, in general, a uniform sampling of the spectral symbol $$\omega ({{\varvec{y}}})$$ can provide an accurate approximation of the eigenvalues of the normalized and unnormalized discrete operators $$\hat{\mathscr {L}\,}^{(n)}$$ and $$\mathscr {L}\,^{(n)}$$, respectively. See Sect. 4.1.1;
• to show that a discretization scheme can lead to $$\mathscr {E}=0$$ if coupled with a suitable (possibly non-uniform) grid and an increasing order of approximation. See Sects. 4.1.2 and 4.1.3.
Let us fix $$\alpha >0$$ and let us consider the following self-adjoint operator with Dirichlet BCs,
\begin{aligned} \mathscr {L}_{\text {dir},\alpha x^2}[u](x) := -\left( \alpha x^2 u'(x)\right) ', \qquad \text {dom}\left( \mathscr {L}_{\text {dir},\alpha x^2}\right) =W^{1,2}_0(1,\text {e}^{\sqrt{\alpha }}). \end{aligned}
(4.1)
The formal equation is an Euler-Cauchy differential equation and by means of the Liouville transformation
\begin{aligned} y(x)= \int _1^x \left( \sqrt{\alpha }t\right) ^{-1}dt, \end{aligned}
(4.2)
the operator ( 4.1) has the same spectrum as
\begin{aligned} \varDelta _{\text {dir},\alpha }[v](y):= -v''(y) + \frac{\alpha }{4}v(y), \qquad \text {dom}\left( \varDelta _{\text {dir},\alpha }\right) =W^{1,2}_0(0,1), \end{aligned}
which is a self-adjoint operator in Schrödinger form with constant potential $$V(y)\equiv \alpha /4$$. For a general review see, for example, [ 17, 21, 52]. It is clear that
\begin{aligned} \lambda _k \left( \mathscr {L}_{\text {dir},\alpha x^2}\right) = \lambda _k\left( \varDelta _{\text {dir},\alpha } \right) = \lambda _k\left( \varDelta _{\text {dir}} \right) +\frac{\alpha }{4} = k^2 \pi ^2 +\frac{\alpha }{4}\quad \text{ for } \text{ every } k\ge 1, \end{aligned}
where
\begin{aligned} \varDelta _{\text {dir}}[v](y):= -v''(y), \qquad \text {dom}\left( \varDelta _{\text {dir}}\right) =W^{1,2}_0(0,1). \end{aligned}
For later reference, notice that
\begin{aligned} \lim _{\alpha \rightarrow 0} \lambda _k \left( \mathscr {L}_{\text {dir},\alpha x^2}\right) = \lambda _k\left( \varDelta _{\text {dir}} \right) = k^2 \pi ^2 \quad \text{ for } \text{ every } k\ge 1, \end{aligned}
(4.3)
In other words, the diffusion coefficient $$p(x)=\alpha x^2$$ produces a constant shift by $$\alpha /4$$ of the eigenvalues of the unperturbed Laplacian operator with Dirichlet BCs, i.e., $$\varDelta _{\text {dir}}$$.
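As a quick sanity check of the shifted eigenvalues $$k^2\pi ^2+\alpha /4$$, one can discretize ( 4.1) with a conservative 3-point FD stencil and compare the smallest discrete eigenvalues with the exact ones. The following Python sketch is illustrative only; the stencil and grid choices are our assumptions, not the paper's code:

```python
import numpy as np

alpha, n = 1.0, 500
a, b = 1.0, np.exp(np.sqrt(alpha))
h = (b - a) / (n + 1)
x = a + h * np.arange(n + 2)                 # nodes x_0, ..., x_{n+1}
p_mid = alpha * (x[:-1] + h / 2) ** 2        # diffusion coefficient at midpoints x_{j+1/2}
# conservative 3-point stencil for -(p u')' on the interior nodes (Dirichlet BCs)
L = (np.diag(p_mid[:-1] + p_mid[1:])
     - np.diag(p_mid[1:-1], 1) - np.diag(p_mid[1:-1], -1)) / h**2
lam = np.sort(np.linalg.eigvalsh(L))
for k in (1, 2, 3):
    print(lam[k - 1], (k * np.pi) ** 2 + alpha / 4)   # discrete vs exact eigenvalue
```

For fixed small k the relative error decays as the mesh is refined, consistent with the uniform spectral convergence discussed in the Introduction.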
We introduce the following definition.
Definition 4.1
(Numerical and analytic spectral relative errors) Let $$\mathscr {L}_{\text {dir},\alpha x^2}^{(n)}$$ be the discrete differential operator obtained from ( 4.1) by means of a generic numerical discretization method. If
\begin{aligned} \left\{ (d_n+1)^{-2}\;\;\mathscr {L}_{\text {dir},\alpha x^2}^{(n)}\right\} _n \sim _{\lambda } \omega _\alpha (x,\theta ), \qquad (x,\theta ) \in [1,\text {e}^{\sqrt{\alpha }}]\times [-\pi ,\pi ], \end{aligned}
then fix $$n,n',r \in {\mathbb {N}}$$, with $$n'\gg n$$ and $$r=r(n)\ge n$$ and compute the following quantities
\begin{aligned} \mathbf{err }_{\alpha ,k}^{(n)}= \left| \frac{\lambda _k\left( \mathscr {L}_{\text {dir},\alpha x^2}^{(n)}\right) }{\lambda _k\left( \mathscr {L}_{\text {dir},\alpha x^2}^{(n')}\right) } -1\right| , \qquad \mathbf{err }_{\alpha ,r,k}^{*,(n)}= \left| \frac{(d_n+1)^2\omega ^{*,(n)}_{\alpha ,r,k}}{\lambda _k\left( \mathscr {L}_{\text {dir},\alpha x^2}^{(n')}\right) } -1 \right| \end{aligned}
for $$k=1,\ldots , d_n$$. Specifically, $$\omega ^{*,(n)}_{\alpha ,r,k}= \omega ^*_{\alpha ,r}\left( \frac{k}{d_n+1}\right)$$, where $$\omega ^*_{\alpha ,r}$$ is the (approximated) monotone rearrangement of the spectral symbol $$\omega _\alpha$$ obtained by the procedure described in Algorithm 1. We call $$\mathbf{err }_{\alpha ,k}^{(n)}$$ the numerical spectral relative error and $$\mathbf{err }_{\alpha ,r,k}^{*,(n)}$$ the analytic spectral relative error. The difference with respect to Definition 3.1 is that here the comparison sequence consists of the eigenvalues of the same discrete operator on a finer mesh: supposedly we do not know the exact eigenvalues of the continuous operator, but we do know that $$\lambda _k\left( \mathscr {L}_{\text {dir},\alpha x^2}^{(n')}\right)$$ converges to the exact eigenvalue $$\lambda _k\left( \mathscr {L}_{\text {dir},\alpha x^2}\right)$$ as $$n'\rightarrow \infty$$. We say that $$\omega _\alpha$$ spectrally approximates the discrete differential operator $$\mathscr {L}_{\text {dir},\alpha x^2}^{(n)}$$ if
\begin{aligned} \limsup _{n \rightarrow \infty }\; \left( \mathbf{err }_{\alpha ,r,k}^{*,(n)}\right) =0 \qquad \text{ for } \text{ every } \text{ fixed } k. \end{aligned}

#### 4.1.1 Approximation by 3-points central FD method on uniform grid

In our example, if we apply the standard central 3-points FD scheme, see [ 43], then the sequence of the normalized discretization matrices $$\hat{\mathscr {L}}_{\text {dir},\alpha x^2}^{(n)}= (n+1)^{-2}\mathscr {L}_{\text {dir},\alpha x^2}^{(n)}$$ of the operator ( 4.1) has spectral symbol
\begin{aligned} \omega _\alpha (x,\theta )= \frac{\alpha x^2}{\left( \text {e}^{\sqrt{\alpha }}-1\right) ^2}4\sin ^2\left( \frac{\theta }{2}\right) , \qquad \text{ with }\quad D=[1,\text {e}^{\sqrt{\alpha }}]\times [0,\pi ], \end{aligned}
see [ 22, Theorem 10.5]. Working with this toy-model problem and the 3-points central FD scheme gives us a further advantage: we can compute the monotone rearrangement $$\omega _\alpha ^*$$ analytically, or at least obtain a finer approximation than $$\omega ^*_{\alpha ,r}$$, one which does not depend on the extra parameter r and is less computationally expensive. Indeed, from Eq. ( 2.4) we have that
\begin{aligned} \phi _{\omega _\alpha } : \left[ 0,\frac{4\alpha \text {e}^{2\sqrt{\alpha }}}{\left( \text {e}^{\sqrt{\alpha }}-1\right) ^2}\right] \rightarrow \left[ 0,1\right] , \qquad \end{aligned}
(4.4)
where
\begin{aligned} \phi _{\omega _\alpha }(t) = \frac{1}{\pi \left( \text {e}^{\sqrt{\alpha }}-1\right) } m \left(\left\{ (x,\theta ) \in [1,\text {e}^{\sqrt{\alpha }}]\times [0,\pi ] \, : \, \frac{4\alpha x^2\sin ^2(\theta /2)}{\left( \text {e}^{\sqrt{\alpha }}-1\right) ^2} \le t \right\}\right) . \end{aligned}
After some analytical manipulation, $$\phi _{\omega _\alpha }$$ can be expressed explicitly
\begin{aligned} \phi _{\omega _\alpha }(t) = \frac{1}{\pi \left( \text {e}^{\sqrt{\alpha }}-1\right) } \cdot {\left\{ \begin{array}{ll} \varPhi \left( t,\text {e}^{\sqrt{\alpha }}\right) - \varPhi \left( t,1\right) &{} \text{ if } t \in \left[ 0, \frac{4\alpha }{\left( \text {e}^{\sqrt{\alpha }}-1\right) ^2}\right] ,\\ \pi \left( \frac{\left( \text {e}^{\sqrt{\alpha }}-1\right) \sqrt{t}}{2\sqrt{\alpha }}-1\right) + \left[ \varPhi \left( t,\text {e}^{\sqrt{\alpha }}\right) - \varPhi \left( t,\frac{\left( \text {e}^{\sqrt{\alpha }}-1\right) \sqrt{t}}{2\sqrt{\alpha }}\right) \right] &{} \text{ if } t \in \left[ \frac{4\alpha }{\left( \text {e}^{\sqrt{\alpha }}-1\right) ^2}, \frac{4\alpha \text {e}^{2\sqrt{\alpha }}}{\left( \text {e}^{\sqrt{\alpha }}-1\right) ^2}\right] , \end{array}\right. } \end{aligned}
where
\begin{aligned} \varPhi (t,x) = \frac{\left( \text {e}^{\sqrt{\alpha }}-1\right) \sqrt{t}\log \left( 2\alpha x\left( \sqrt{1-\frac{\left( \text {e}^{\sqrt{\alpha }}-1\right) ^2}{4\alpha x^2}}\right) +1\right) }{\sqrt{\alpha }} + 2x\arcsin \left( \frac{\left( \text {e}^{\sqrt{\alpha }}-1\right) \sqrt{t}}{2\sqrt{\alpha }x}\right) . \end{aligned}
Since we have an analytical expression for $$\phi _{\omega _\alpha }(t)$$, it is possible to compute a numerical approximation of its generalized inverse $$\omega _\alpha ^*$$ over the uniform grid $$\left\{ \frac{k}{n+1} \right\} _{k=1}^n$$, for example by means of a Newton method. This approximation of the monotone rearrangement does not depend on the extra parameter r; therefore, when we compute the analytic spectral relative error with respect to $$\omega _\alpha ^*$$ we write $$\mathbf{err }_{\alpha ,k}^{*,(n)}$$ without the subscript r.
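Even without the closed form of $$\phi _{\omega _\alpha }$$, a monotone rearrangement can be approximated in the spirit of Algorithm 1 by sampling the symbol on a fine grid of D and sorting the values; the quantiles of the sorted samples then approximate $$\omega ^*_{\alpha ,r}$$ on a uniform grid of [0, 1]. The sketch below is our own illustration (the exact procedure of Algorithm 1 may differ):

```python
import numpy as np

alpha, r = 1.0, 800
b = np.exp(np.sqrt(alpha))
X, T = np.meshgrid(np.linspace(1.0, b, r), np.linspace(0.0, np.pi, r))
# spectral symbol omega_alpha(x, theta) on an r x r grid of D
omega = alpha * X**2 / (b - 1.0) ** 2 * 4 * np.sin(T / 2) ** 2
samples = np.sort(omega.ravel())   # sorted samples ~ monotone rearrangement

def omega_star(s):
    """Approximate monotone rearrangement omega*_{alpha,r} at s in (0, 1]."""
    return samples[min(int(s * samples.size), samples.size - 1)]

print(omega_star(0.5), omega_star(1.0))   # omega_star(1.0) is the maximum of the symbol
```

At $$s=1$$ this returns the maximum $$4\alpha \text {e}^{2\sqrt{\alpha }}/(\text {e}^{\sqrt{\alpha }}-1)^2$$ of the symbol, in agreement with the range of $$\phi _{\omega _\alpha }$$ in ( 4.4).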
In Fig. 2 one can check that an equispaced sampling of $$(n+1)^2\omega ^*_{\alpha ,r}$$ asymptotically distributes exactly as the eigenvalues of the unnormalized discrete operator $$\mathscr {L}_{\text {dir},\alpha x^2}^{(n)}$$. Indeed, $$\phi _{\omega _\alpha }$$ is continuous and strictly increasing, which implies that $$\omega _\alpha ^*$$ is continuous: relation ( 3.5) then applies. Moreover, in accordance with Eq. ( 4.3), we observe that
\begin{aligned}&\lim _{\alpha \rightarrow 0} \phi _{\omega _\alpha }(t) = \frac{2}{\pi }\arcsin \left( \frac{\sqrt{t}}{2}\right)&t\in [0,4],\\&\lim _{\alpha \rightarrow 0}\omega _\alpha ^* (x) = 4\sin ^2\left( \frac{\pi x}{2}\right)&x \in [0,1], \end{aligned}
which means that the monotone rearrangement $$\omega _\alpha ^*$$ converges to the spectral symbol $$\omega$$ as $$\alpha \rightarrow 0$$, that is, to the spectral symbol which characterizes the differential operator $$\varDelta _{\text {dir}}$$ discretized by means of a 3-points FD scheme. The eigenvalues of $$(n+1)^{-2}\left( \varDelta _{\text {dir}}^{(n)}\right)$$ are the exact sampling of $$\omega (\theta )=4\sin ^2\left( \theta /2\right)$$ over the uniform grid $$\left\{ \frac{k\pi }{n+1} \right\} _{k=1}^n$$, see [ 43, p. 154]. This asymptotic behaviour reflects what we already observed in ( 4.3).
All these remarks would suggest that $$\omega _\alpha ^*$$, or equivalently $$\omega ^*_{\alpha ,r}$$, spectrally approximates the normalized discrete operator $$\hat{\mathscr {L}}_{\text {dir},\alpha x^2}^{(n)}$$. Unfortunately, this conjecture is partially disproved by Fig. 3, which compares the graphs of the numerical spectral relative error $$\mathbf{err }_{\alpha ,k}^{(n)}$$ and the analytic spectral relative error $$\mathbf{err }_{\alpha ,r,k}^{*,(n)}$$ for several increasing values of the parameter r. We observe a discrepancy between the analytic prediction $$\mathbf{err }_{\alpha ,r,k}^{*,(n)}$$ and the numerical relative error $$\mathbf{err }_{\alpha ,k}^{(n)}$$ for small $$k\ll n$$.
In particular, the maximum discrepancy is achieved at $$k=1$$ for every r. The discrepancy apparently decreases as the number of grid points r increases, as also observed in [ 24, Figure 48] for some test problems in the setting of Galerkin discretization by linear $$C^0$$ B-splines. In that same paper, some plausible hypotheses and suggestions were advanced:
• The discrepancy could depend on the fact that $$\omega ^*_{\alpha ,r}$$ was used instead of $$\omega _{\alpha }^*$$; in that case it should tend to zero in the limit $$r\rightarrow \infty$$, since $$\omega ^*_{\alpha ,r} \rightarrow \omega _{\alpha }^*$$.
• Numerical instability of the analytic relative error $$\mathbf{err }_{\alpha ,r,k}^{*,(n)}$$ for small eigenvalues, [ 24, Remark 3.1].
• A change of the sampling grid into an “almost uniform” grid; see [ 24, Remark 3.2] for details.
The problem is that these hypotheses, which stem from numerical observations, cannot be validated: the decay to zero of the observed discrepancy as r increases is only apparent. What actually happens is that, for every fixed k,
\begin{aligned} \left| \mathbf{err }_{\alpha ,r,k}^{*,(n)}\right| \rightarrow c_{\alpha ,k}= \frac{\frac{\alpha }{4}}{k^2\pi ^2+\frac{\alpha }{4}}>0 \quad \text{ as } r,n \rightarrow \infty , \end{aligned}
(4.5)
with $$c_{\alpha ,k}$$ independent of n and $$n'$$. This is the content of Proposition B.1 and Remark 2 in Sect. B, with $$p(x)=\alpha x^2, w(x)\equiv 1, q(x)\equiv 0$$. We thus have a lower bound for the analytic spectral relative error which cannot be removed by refining the grid. Of course, $$c_{\alpha ,k} \rightarrow 0$$ as k increases. These remarks are summarized in Table 1.
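The lower bound ( 4.5) is elementary to evaluate; the following snippet (a trivial check, not the paper's code) reproduces some of the constants $$c_{\alpha ,k}$$ reported in Table 1:

```python
import numpy as np

def c(alpha, k):
    """Asymptotic lower bound (4.5) for the analytic spectral relative error."""
    return (alpha / 4) / ((k * np.pi) ** 2 + alpha / 4)

print(round(c(0.1, 1), 4))    # 0.0025, first entry of Table 1
print(round(c(1.0, 1), 4))    # 0.0247
print(round(c(5.0, 10), 4))   # 0.0013
```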
Table 1
For every fixed k and $$\alpha$$, the analytic relative error $$\mathbf{err }_{\alpha ,k}^{*,(n)}$$ converges to the lower bound $$c_{\alpha ,k}$$ as n increases, where $$c_{\alpha ,k}$$ is given in ( 4.5)

| | $$\vert \mathbf{err} ^{*,(n)}_{\alpha ,k}/c_{\alpha ,k} -1\vert$$ | $$n=10^2$$ | $$n=10^3$$ | $$n=10^4$$ |
| --- | --- | --- | --- | --- |
| $$\alpha =0.1$$ | $$c_{\alpha ,1}=0.0025$$ | 0.0326 | 3.3223e-04 | 3.3283e-06 |
| | $$c_{\alpha ,5}=1.0131e-04$$ | 20.3811 | 0.2076 | 0.0021 |
| | $$c_{\alpha ,10}=2.5330e-05$$ | 325.3811 | 3.3222 | 0.0333 |
| $$\alpha =1$$ | $$c_{\alpha ,1}=0.0247$$ | 0.0041 | 4.1363e-05 | 4.1438e-07 |
| | $$c_{\alpha ,5}=0.0010$$ | 2.5395 | 0.0259 | 2.5899e-04 |
| | $$c_{\alpha ,10}=2.5324e-04$$ | 40.6422 | 0.4136 | 0.0041 |
| $$\alpha =2$$ | $$c_{\alpha ,1}=0.0482$$ | 0.0026 | 2.6120e-05 | 2.6167e-07 |
| | $$c_{\alpha ,5}=0.0020$$ | 1.6056 | 0.0163 | 1.6354e-04 |
| | $$c_{\alpha ,10}=5.0635e-04$$ | 25.7979 | 0.2612 | 0.0026 |
| $$\alpha =5$$ | $$c_{\alpha ,1}=0.1124$$ | 0.0020 | 2.0008e-05 | 2.0044e-07 |
| | $$c_{\alpha ,5}=0.0050$$ | 1.2389 | 0.0125 | 1.2528e-04 |
| | $$c_{\alpha ,10}=0.0013$$ | 20.4017 | 0.2002 | 1.2528e-04 |
Observe that $$\mathbf{err }_{\alpha ,k}^{*,(n)}$$ appears to be monotonically decreasing with order $$O(n^{-2})$$. The approximation of $$\omega ^*_\alpha$$ is obtained by evaluating $$\phi ^{-1}_{\omega _\alpha }$$ from ( 4.4) by means of the fzero() function of MATLAB R2018b
The problem lies in the incorrect informal interpretation given to the limit relation in Definition 2.1, suggested by Remark 1. Indeed, the asymptotic equality ( 2.1) tells us that
\begin{aligned} \left( \frac{k(n)}{n}, \lambda _{k(n)}\left( \hat{\mathscr {L}}_{\text {dir},\alpha x^2}^{(n)}\right) \right) \rightarrow \left( x, \omega ^*_\alpha (x) \right) \qquad \text{ as } n\rightarrow \infty \end{aligned}
for every k(n) such that $$k(n)/n\rightarrow x \in [0,1]$$, see Theorem 3.1. Therefore, since $$\omega ^*_\alpha$$ is continuous and $$R_\omega$$ is bounded, it follows that
\begin{aligned} \left\| \omega ^{*,(n)}_{\alpha ,r,k}-\lambda _k\left( \hat{\mathscr {L}}\,_{\text {dir},\alpha x^2}^{(n)}\right) \right\| _\infty \rightarrow 0 \quad \text{ as } n\rightarrow \infty , \end{aligned}
by Corollary 3.3, as observed for example in [ 22, Example 10.2 p. 198]. On the contrary, a uniform sampling of the symbol $$\omega$$ does not necessarily provide an accurate approximation of the eigenvalues of the operator $$\hat{\mathscr {L}}\,_{\text {dir},\alpha x^2}^{(n)}$$ in the sense of the relative error. The uniform sampling of the symbol works perfectly only for specific subclasses of discretization schemes and operators; it fails in general.
As a last remark, an “almost uniform” grid does not exist either, not even in the asymptotic sense described in [ 24, Remark 3.2]. Knowing the exact sampling grid which guarantees that $$\omega ^*_\alpha$$ spectrally approximates the discrete differential operator is equivalent to knowing the eigenvalue distribution of the original differential operator.
What we can do instead is to apply Theorem 3.3. With reference to ( 3.8), it is easy to prove that
\begin{aligned} \zeta _n(t)= \frac{\left| \left\{ k=1,\ldots ,n \, : \, \frac{\lambda _k\left( \mathscr {L}_{\text {dir},\alpha x^2}\right) }{n^2} \le t \right\} \right| }{n}\rightarrow \zeta (t)=\frac{\sqrt{t}}{\pi }, \end{aligned}
that is,
\begin{aligned} \zeta ^*(x)= \pi ^2x^2. \end{aligned}
(4.6)
For every fixed k, $$\lambda _{k}\left( \mathscr {L}\,^{(n)}_{\text {dir},\alpha x^2}\right)$$ converges to $$\lambda _{k}\left( \mathscr {L}_{\text {dir},\alpha x^2}\right)$$, see [ 43]; by the continuity of $$\phi _{\omega _\alpha }$$ and by Eq. ( 4.6), items (i)–(iv) of Theorem 3.3 are then satisfied. Finally, since $$\mathscr {L}_{\text {dir},\alpha x^2}^{(n)}$$ has no outliers by [ 40, Theorem 2.2], we conclude that
\begin{aligned} \mathscr {E}= \max _{x\in [0,1]} \left| \frac{\omega ^*_\alpha (x)}{x^2\pi ^2} -1\right| >0. \end{aligned}
(4.7)
Fig. 4 and Table 2 numerically validate ( 4.7).
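The right-hand side of ( 4.7) can be estimated numerically by approximating $$\omega ^*_\alpha$$ with sorted samples of the symbol and comparing against $$\zeta ^*(x)=\pi ^2x^2$$ on a uniform grid of (0, 1]. The Python sketch below is a rough illustration under our own sampling choices (in particular, we stay away from $$x=0$$, where the finite sampling of the symbol's zero set pollutes the ratio):

```python
import numpy as np

alpha, r = 1.0, 1000
b = np.exp(np.sqrt(alpha))
X, T = np.meshgrid(np.linspace(1.0, b, r), np.linspace(0.0, np.pi, r))
# sorted samples of the symbol approximate omega*_alpha on a uniform grid of (0, 1]
omega_star = np.sort((alpha * X**2 / (b - 1.0) ** 2 * 4 * np.sin(T / 2) ** 2).ravel())
x = np.arange(1, omega_star.size + 1) / omega_star.size
mask = x >= 0.05   # discard the region near x = 0 (sampling artefact of the zero set)
E = np.max(np.abs(omega_star[mask] / (np.pi**2 * x[mask] ** 2) - 1.0))
print(E > 0)       # True: the maximum spectral relative error (4.7) is strictly positive
```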
Table 2
In this table we check numerically the validity of Theorem 3.3 for different values of $$\alpha$$ and n

| | | $$n=10^2$$ | $$n=10^3$$ | $$n=5\cdot 10^3$$ |
| --- | --- | --- | --- | --- |
| $$\alpha =0.5$$ | $$\left\vert \frac{\max \vert \lambda ^{(n)}_k/\lambda _k\vert }{\max \vert \omega _{\alpha }^*(x)/x^2\pi ^2\vert }-1\right\vert$$ | 0.0104 | 0.0010 | 2.0853e-04 |
| | $$k_0/n$$ | 0.7900 | 0.7880 | 0.7878 |
| $$\alpha =1$$ | $$\left\vert \frac{\max \vert \lambda ^{(n)}_k/\lambda _k\vert }{\max \vert \omega _{\alpha }^*(x)/x^2\pi ^2\vert }-1\right\vert$$ | 0.0158 | 0.0016 | 3.1754e-04 |
| | $$k_0/n$$ | 0.6700 | 0.6680 | 0.6676 |
| $$\alpha =1.2$$ | $$\left\vert \frac{\max \vert \lambda ^{(n)}_k/\lambda _k\vert }{\max \vert \omega _{\alpha }^*(x)/x^2\pi ^2\vert }-1\right\vert$$ | 0.0180 | 0.0018 | 3.6226e-04 |
| | $$k_0/n$$ | 0.6400 | 0.6310 | 0.6302 |
| $$\alpha =3$$ | $$\left\vert \frac{\max \vert \lambda ^{(n)}_k/\lambda _k\vert }{\max \vert \omega _{\alpha }^*(x)/x^2\pi ^2\vert }-1\right\vert$$ | 0.0518 | 0.0097 | 0.0032 |
| | $$k_0/n$$ | 1 | 1 | 1 |
For every $$\alpha$$, as n increases the relative error between $$\max _{k=1,\ldots ,n}\left| \lambda _k\left( \mathscr {L}\,_{\text {dir},\alpha x^2}^{(n)}\right) /\lambda _k\left( \mathscr {L}_{\text {dir},\alpha x^2}\right) \right|$$ and $$\max _{x\in [0,1]}\left| \omega _{\alpha }^*(x)/x^2\pi ^2\right|$$ decreases, validating ( 4.7). The table also reports the ratio $$k_0/n$$, where $$k_0$$ is the index of the eigenvalue achieving the maximum relative error between $$\lambda _k\left( \mathscr {L}\,_{\text {dir},\alpha x^2}^{(n)}\right)$$ and $$\lambda _k\left( \mathscr {L}_{\text {dir},\alpha x^2}\right)$$. Notice that $$k_0/n$$ tends to a fixed value in (0, 1] as n increases. The approximation of $$\omega _{\alpha }^*$$ is obtained by evaluating $$\phi ^{-1}_{\omega _\alpha }$$ from ( 4.4) by means of the fzero() function of MATLAB R2018b

#### 4.1.2 Discretization by $$(2\eta +1)$$-points central FD method on non-uniform grid

Clearly, everything said in Sect. 4.1.1 remains valid even if we increase the order of accuracy of the FD method. In this subsection we make use of the FD scheme presented in Appendix C. The theoretical analysis of the scheme requires a proper standalone treatment, which we leave to future work; some of its properties are listed as conjectures in Conjecture C.1. Nevertheless, we introduce it now since it is helpful to the flow of our discussion and provides (numerical) evidence for our conclusions.
First of all, we can say that the spectral symbol $$\omega _{\alpha ,\eta }$$ in ( C.2) does not spectrally approximate the discrete differential operator $$\mathscr {L}_{\text {dir},\alpha x^2}^{(n,\eta )}$$, in the sense of the relative error, for any $$\eta \ge 1$$, as we have already seen in the previous subsection for $$\eta =1$$.
What is interesting instead is to change the sampling grid and increase the order of accuracy $$\eta$$ of the FD discretization method. Indeed, as observed in ( 4.7), it is not possible to achieve $$\mathscr {E}=0$$ if $$\omega _{\alpha ,\eta }^*(x) \ne x^2\pi ^2$$. From ( C.2), for every $$\eta \ge 1$$ it is easy to check that $$\max _{(x,\theta )\in [a,b]\times [0,\pi ]}\omega _{\alpha ,\eta }(x,\theta )= \omega _{\alpha ,\eta }^*(1)\ne \pi ^2$$, so there is no improvement from just increasing the order of accuracy $$\eta$$. On the other hand, observe that if we fix a new sampling grid $$\left\{ {\bar{x}}_j\right\} _{j=1}^n = \left\{ \tau (x_j)\right\} _{j=1}^n$$, with $$\tau : [1,\text {e}^{\sqrt{\alpha }}]\rightarrow [1,\text {e}^{\sqrt{\alpha }}]$$ a diffeomorphism, then
• from Corollary C.1 and ( C.2),
\begin{aligned} \lim _{\eta \rightarrow \infty }\omega _{\alpha ,\eta }(x,\theta ) = \frac{\alpha \tau (x)^2}{\left( \tau '(x)\right) ^2\left( \text {e}^{\sqrt{\alpha }}-1\right) ^2} \theta ^2 \qquad \text{ for } \text{ every } (x,\theta ) \in [1,\text {e}^{\sqrt{\alpha }}]\times [0,\pi ]; \end{aligned}
• defining $$\tau _2(y) = \text {e}^{\sqrt{\alpha }y}$$, then $$\frac{\alpha \tau _2(y)^2}{\left( \tau _2'(y)\right) ^2}\equiv 1$$.
In some sense, the spectral symbol $$\omega _{\alpha ,\eta }$$ suggests replacing the uniform grid $$\left\{ x_j\right\} _{j=1}^n \subset [1,\text {e}^{\sqrt{\alpha }}]$$ by means of the diffeomorphism induced by the Liouville transformation. Indeed, from ( 4.2) we have that
\begin{aligned} y(x)= \frac{\log (x)}{\sqrt{\alpha }} \quad \text{ for } x \in [1,\text {e}^{\sqrt{\alpha }}], \qquad x(y)= \text {e}^{\sqrt{\alpha }y} \quad \text{ for } y \in [0,1], \end{aligned}
and therefore we can compose a $$C^\infty$$-diffeomorphism $$\tau : [1,\text {e}^{\sqrt{\alpha }}]\rightarrow [1,\text {e}^{\sqrt{\alpha }}]$$ such that
\begin{aligned} \tau : [1,\text {e}^{\sqrt{\alpha }}] \xrightarrow []{\tau _1} [0,1] \xrightarrow []{\tau _2} [1,\text {e}^{\sqrt{\alpha }}], \qquad \tau _1(x) = \frac{1}{\text {e}^{\sqrt{\alpha }}-1}\left( x-1\right) , \qquad \tau _2(y)= \text {e}^{\sqrt{\alpha }y}. \end{aligned}
(4.8)
The new non-uniform grid is then given by
\begin{aligned} \left\{ {\bar{x}}_j\right\} _{j=1}^n = \left\{ \tau (x_j)\right\} _{j=1}^n, \end{aligned}
(4.9)
and it holds that
\begin{aligned} \lim _{\eta \rightarrow \infty }\omega _{\alpha ,\eta }^*(x) =\pi ^2 x^2 \qquad x \in [0,1], \end{aligned}
which is exactly ( 4.6). Since the discretization scheme is convergent, for every fixed k the local spectral relative error $$\delta _k^{(n)}$$ (see Definition 3.1) converges to zero as $$n \rightarrow \infty$$, and by [ 40, Theorem 2.2] it would hold that $$\lambda _k \left( \mathscr {L}_{\text {dir},\alpha \tau (x)^2}^{(n,\eta )}\right) \in R_{\omega _{\alpha ,\eta }}$$ for every $$k=1,\ldots , n$$ and every fixed $$\eta \ge 1$$. From these remarks, if Conjecture C.1 were proven, then we could apply Theorem 3.3 and it would follow that
\begin{aligned} \mathscr {E}=\mathscr {E}(\eta ) = \sup _{x\in (0,1]}\left| \frac{\omega _{\alpha ,\eta }^*(x)}{\pi ^2 x^2} -1 \right| \rightarrow 0 \qquad \text{ as } \eta \rightarrow \infty . \end{aligned}
See Fig. 5 and Table 3.
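The grid map ( 4.8)–( 4.9) is straightforward to implement: compose the affine map $$\tau _1$$ with the exponential map $$\tau _2$$ to turn a uniform grid on $$[1,\text {e}^{\sqrt{\alpha }}]$$ into the Liouville-adapted non-uniform grid. A minimal Python sketch (our own, not the paper's code):

```python
import numpy as np

alpha, n = 1.0, 8
b = np.exp(np.sqrt(alpha))
tau1 = lambda x: (x - 1.0) / (b - 1.0)         # [1, e^sqrt(alpha)] -> [0, 1]
tau2 = lambda y: np.exp(np.sqrt(alpha) * y)    # [0, 1] -> [1, e^sqrt(alpha)]
tau = lambda x: tau2(tau1(x))                  # the diffeomorphism (4.8)
x_uniform = np.linspace(1.0, b, n + 2)[1:-1]   # interior uniform nodes
x_bar = tau(x_uniform)                         # non-uniform grid (4.9)
print(tau(1.0), tau(b))                        # endpoints are fixed by tau
print(x_bar)                                   # interior nodes shifted toward the left endpoint
```

Since $$\tau _2$$ is convex, the image nodes are clustered near the left endpoint, where $$\tau '$$ is small.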
Table 3
Comparison between the maximum spectral relative errors of the discrete differential operators $$\mathscr {L}_{\text {dir},\alpha \tau (x)^2}^{(n,\eta )}$$ obtained by means of $$(2\eta +1)$$-points central FD discretization on uniform ( $$\tau (x)=x$$) and non-uniform grids ( $$\tau$$ as in ( 4.8)), for increasing values of n and $$\eta$$

| $$\mathscr {E}_n=\mathscr {E}_n(\eta )$$ | $$\eta =1$$, $$n=10^2$$ | $$\eta =10$$, $$n=10^3$$ | $$\eta =15$$, $$n=15\cdot 10^2$$ |
| --- | --- | --- | --- |
| Uniform grid | 0.3155 | 0.9057 | 1.0101 |
| Non-uniform grid | 0.5867 | 0.2210 | 0.1819 |
The parameter $$\alpha$$ is fixed at $$\alpha =1$$. We observe that in the uniform grid case the maximum increases as n and $$\eta$$ increase. On the contrary, in the non-uniform grid case given by ( 4.9), the maximum decreases as both n and $$\eta$$ increase

#### 4.1.3 IgA discretization by B-spline of degree $$\eta$$ and smoothness $$C^{\eta -1}$$

In this subsection we continue our analysis in the IgA framework; see Appendix D, where we collect some known results useful for our analysis. Here we just report the numerical results of the tests, which again confirm what was observed in Sects. 4.1.1 and 4.1.2. The only difference is that we removed the largest eigenvalues of the discrete operator $$\mathscr {L}_{\text {dir},\alpha \tau (x)^2}^{(n,\eta )}$$: the IgA discretization suffers from a fixed number of outliers which depends on the degree $$\eta$$ and is independent of n, see [ 15, Chapter 5.1.2 p. 153]. Hence, we consider only the eigenvalues $$\lambda _k\left( \mathscr {L}_{\text {dir},\alpha \tau (x)^2}^{(n,\eta )}\right)$$ which belong to $$R_{\omega _{\alpha ,\eta }}$$.
In Fig. 6 we compare the distributions of the discrete eigenvalues $$\lambda _k\left( \mathscr {L}_{\text {dir},\alpha \tau (x)^2}^{(n,\eta )}\right)$$ and the exact eigenvalues $$\lambda _k\left( \mathscr {L}_{\text {dir},\alpha x^2}\right)$$, for different values of $$\eta$$ on uniform ( $$\tau (x)=x$$) and non-uniform grids ( $$\tau$$ as in ( 4.8)). They agree with the numerics of Table 4: if the sampling grid is given by ( 4.9), the maximum spectral relative error $$\mathscr {E}$$ decreases as the order of approximation increases.
Table 4
Comparison between the maximum spectral relative errors of the discrete differential operators $$\mathscr {L}_{\text {dir},\alpha \tau (x)^2}^{(n,\eta )}$$ obtained by means of IgA discretization of order $$\eta$$ on uniform ( $$\tau (x)=x$$) and non-uniform grids ( $$\tau$$ as in ( 4.8)), for increasing values of $$\eta$$ and n

| $$\mathscr {E}_n=\mathscr {E}_n(\eta )$$ | $$\eta =1$$, $$n=10^2$$ | $$\eta =5$$, $$n=5\cdot 10^2$$ | $$\eta =10$$, $$n=10^3$$ |
| --- | --- | --- | --- |
| Uniform grid | 1.7653 | 1.1971 | 1.2005 |
| Non-uniform grid | 0.4433 | 0.0513 | 0.0268 |
The parameter $$\alpha$$ is fixed at $$\alpha =1$$. We observe that in the non-uniform grid case given by ( 4.9), the maximum spectral relative error decreases as n and $$\eta$$ increase. Notice that we did not take the outliers into consideration, see Definition 2.3

### 4.2 d-dimensional Dirichlet Laplacian

The results of Sect. 3 can be generalized to study the maximum spectral relative error between more general linear self-adjoint differential operators and the numerical scheme implemented for their discretization. Let us sketch the ideas with a plain example.
Let $$\varDelta : W_0^{1,2}\left( [0,1]^d\right) \rightarrow \text {L}^2\left( [0,1]^d\right)$$ be the second order elliptic operator defined by the formal equation
\begin{aligned} \varDelta [u]({{\varvec{x}}}) := - \sum _{j=1}^d\frac{\partial ^2 u}{\partial x_j^2}({{\varvec{x}}}), \qquad {{\varvec{x}}} \in [0,1]^d \subset {\mathbb {R}}^d. \end{aligned}
It is well-known that it has empty essential spectrum and that
\begin{aligned} \lambda _k\left( \varDelta \right)&= \lambda _{k_1\cdots k_d}\left( \varDelta \right) =\sum _{j=1}^d \pi ^2k_j^2 \quad \text{ for } k_1,\ldots ,k_d\ge 1,\\ \zeta (t)&=\lim _{n \rightarrow \infty }\frac{\left| \left\{ k=1,\ldots ,n^d \, : \, \frac{\lambda _k\left( \varDelta \right) }{(n+1)^2}\le t\right\} \right| }{n^d} = c_d\left( \frac{t}{4\pi ^2}\right) ^{\frac{d}{2}}, \quad t \in [0,4\pi ^2/(c_d)^{2/d}] , \end{aligned}
where $$c_d$$ is the volume of the unit ball in $${\mathbb {R}}^d$$, see [ 32]. Define
\begin{aligned} Y^{(n)}:= (n+1)^{-2}\mathop {\text {diag}}_{k=1,\ldots ,n^d}\left\{ \lambda _{k}\left( \varDelta \right) \right\} . \end{aligned}
Then it holds that $$\left\{ Y^{(n)}\right\} _n \sim _{\lambda } \zeta ^*(x) = 4\pi ^2\left( \frac{x}{c_d}\right) ^{2/d}$$, $$x\in [0,1]$$.
On the other hand, a discretization of the operator $$\varDelta$$ by means of separation of variables and the classic equispaced $$(2d+1)$$-points FD scheme, leads to
\begin{aligned} \varDelta ^{(n)}= (n+1)^2\sum _{k=1}^d \underbrace{I^{(n)}\otimes \cdots \otimes I^{(n)}}_{k-1}\otimes T^{(n)}\left( 2-2\cos (\theta _k)\right) \otimes \underbrace{I^{(n)}\otimes \cdots \otimes I^{(n)}}_{d-k}, \end{aligned}
for $$\theta _k \in [0,\pi ]$$, where $$I^{(n)}$$ is the identity matrix and $$\otimes$$ is the Kronecker product, and all the matrices are of dimension $$d_n = n^d$$. It holds that
\begin{aligned} \left\{ (n+1)^{-2}\varDelta ^{(n)}\right\} _n=\left\{ {\hat{\varDelta }}^{(n)}\right\} _n \sim _\lambda \omega \left( {\varvec{\theta }}\right) = \sum _{k=1}^d (2-2\cos (\theta _k)) \end{aligned}
where
\begin{aligned} {\varvec{\theta }}=(\theta _1,\ldots ,\theta _d)\in [0,\pi ]^d, \end{aligned}
see [ 23, Chapter 7.3]. Since the discretization scheme is convergent, for every fixed k the local spectral relative error $$\delta _k^{(n)}$$ (see Definition 3.1) converges to zero as $$n \rightarrow \infty$$, and by [ 40, Theorem 2.2] it holds that $$\lambda _k \left( \varDelta ^{(n)}\right) \in R_{\omega }$$ for every $$k=1,\ldots ,n^d$$. Moreover, it is immediate to check that both $$\omega ^*$$ and $$\zeta ^*$$ are continuous; therefore we can apply Theorem 3.3 to get the maximum spectral relative error:
\begin{aligned} \mathscr {E}=\sup _{x\in (0,1]} \left| \frac{\omega ^*(x)}{\zeta ^*(x)} -1\right| \ge \left| \frac{\omega ^*(1)}{\zeta ^*(1)} -1\right| = \left| \frac{4d}{\frac{4\pi ^2}{(c_d)^{2/d}}} -1\right| >0. \end{aligned}
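The Kronecker-sum construction and the lower bound at $$x=1$$ are easy to check numerically; the Python sketch below does so for $$d=2$$, where $$c_d$$ is the area $$\pi$$ of the unit disk (an illustration of the construction above, not the paper's code):

```python
import numpy as np

n, d = 20, 2
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # T^(n)(2 - 2 cos(theta))
I = np.eye(n)
Delta_hat = np.kron(T, I) + np.kron(I, T)              # (n+1)^{-2} Delta^(n) for d = 2
lam = np.linalg.eigvalsh(Delta_hat)
c_d = np.pi                                            # volume of the unit ball in R^2
print(lam.max(), 4 * d)                                # spectrum stays below omega*(1) = 4d
print(abs(4 * d / (4 * np.pi**2 / c_d ** (2 / d)) - 1))  # lower bound for E, about 0.36
```

The eigenvalues of the Kronecker sum are exactly the sums $$\left( 2-2\cos \frac{k_1\pi }{n+1}\right) +\left( 2-2\cos \frac{k_2\pi }{n+1}\right)$$, which is how the symbol $$\omega ({\varvec{\theta }})$$ above arises.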

## 5 Conclusions

Given a differential operator $$\mathscr {L}$$ discretized by means of a numerical scheme, the knowledge of the spectral symbol $$\omega$$ provides a way to measure how far the spectrum of the discretized operator is from a uniform approximation of the whole spectrum of the original differential operator $$\mathscr {L}$$. Moreover, the symbol itself gives insights about how to build a discretization scheme such that the spectrum of the discrete operator converges uniformly to the spectrum of the continuous operator, i.e., such that $$\mathscr {E}=0$$, see ( 1.1). This is crucial in some engineering applications, and our results improve those in the recent review paper by T. J. R. Hughes and coauthors.
In light of the proposed analysis, it becomes a priority to devise new specific discretization schemes with mesh-dependent order of approximation which guarantee a good balance between convergence to zero of the maximum spectral relative error and computational costs.
Finally, other applications of the spectral symbol related to the computation of the maximum spectral relative error have recently been addressed in the graph setting, for the approximation of PDEs, see [ 1].

## Proofs of Section 3

In this section we provide the rigorous proofs of the results that we presented in Sect. 3. Before going further in the details, we need some preliminary definitions and notations from measure theory, see [ 29].
Given a probability space $$(S, \mathscr {S}, \nu )$$ and a measurable function $$g:S\rightarrow {\mathbb {R}}$$, the distribution function $$\phi _g : {\mathbb {R}}\rightarrow [0,1]$$ associated to g is defined by
\begin{aligned} \phi _g (t) := \nu \left( \left\{ s \in S \, : \, g(s)\le t \right\} \right) . \end{aligned}
It holds that $$\phi _g$$ is right-continuous, proper and with nonnegative increments. Moreover, any function $$\phi : {\mathbb {R}}\rightarrow [0,1]$$ satisfying those stated properties determines a probability measure $$\mu$$ on $${\mathbb {R}}$$ such that
\begin{aligned} \mu ((-\infty ,t])=\phi (t), \end{aligned}
and vice-versa, see [ 29, Theorem 4.25]. In this case, we call $$\phi$$ the distribution associated to $$\mu$$.
Definition A.1
(Vague convergence) A sequence $$\{\mu _n\}_n$$ of locally-finite measures on $${\mathbb {R}}$$ is said to converge vaguely to a measure $$\mu$$ if and only if
\begin{aligned} \lim _{n \rightarrow \infty } \int F(t) \mu _n(dt) = \int F(t) \mu (dt), \quad \forall \, F \in C_c({\mathbb {R}}). \end{aligned}
(A.1)
In this case, we write $$\mu _n \xrightarrow {v} \mu$$.
If $$\{\mu _n\}_n$$ and $$\mu$$ are probability measures, that is, if $$\mu _n({\mathbb {R}})=\mu ({\mathbb {R}})=1$$, and $$\{\phi _n\}_n$$ and $$\phi$$ are their distributions, respectively, then
\begin{aligned} \mu _n \xrightarrow {v} \mu \quad \text{ if } \text{ and } \text{ only } \text{ if }\quad \phi _{n}(t) \rightarrow \phi (t) \end{aligned}
(A.2)
for every $$t\in {\mathbb {R}}$$ which is a point of continuity of $$\phi$$, see [ 14, Theorem 4.4.1].
Definition A.1 and Definition 2.1 are connected. Let $$\delta _x : \mathscr {B} \rightarrow \{0,1\}$$ be the atomic (Dirac) measure on the Borel sets $$\mathscr {B}$$ of $${\mathbb {R}}$$, defined by
\begin{aligned} \delta _x (B):={\left\{ \begin{array}{ll} 1 &{} \text{ if } x \in B,\\ 0 &{} \text{ if } x \notin B, \end{array}\right. } \end{aligned}
and define
\begin{aligned} \mu _n(\cdot ):=\frac{1}{d_n}\sum _{\begin{array}{c} k=-d^-_n\\ k\ne 0 \end{array}}^{d^+_n} \delta _{\lambda _k\left( X^{(n)}\right) }(\cdot ). \end{aligned}
(A.3)
Clearly, $$\mu _n(\cdot )$$ is a probability measure and for every measurable function $$F: {\mathbb {R}}\rightarrow {\mathbb {R}}$$ it holds that
\begin{aligned} \int F(t)\mu _n(dt) = \frac{1}{d_n}\sum _{\begin{array}{c} k=-d^-_n\\ k\ne 0 \end{array}}^{d^+_n}F\left( \lambda _k\left( X^{(n)}\right) \right) . \end{aligned}
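For a concrete feel of the empirical measure $$\mu _n$$ and of the convergence $$\phi _n \rightarrow \phi _\omega$$, the following Python sketch (an added numerical aside, not part of the paper's argument) takes the model second-order Finite Difference matrix tridiag$$(-1,2,-1)$$, whose symbol is $$\omega (\theta )=2-2\cos \theta$$ and whose distribution function has the closed form $$\phi _\omega (t)=\arccos (1-t/2)/\pi$$ on [0, 4]:

```python
import numpy as np

def empirical_distribution(eigs, t):
    # phi_n(t) = N(X^{(n)}, t) / d_n: fraction of eigenvalues <= t
    return np.count_nonzero(eigs <= t) / eigs.size

def phi_omega(t):
    # distribution function of omega(theta) = 2 - 2*cos(theta) on [0, pi]:
    # phi_omega(t) = arccos(1 - t/2) / pi for t in [0, 4]
    return np.arccos(1.0 - t / 2.0) / np.pi

n = 200
# tridiag(-1, 2, -1): unscaled FD discretization of -u'' with Dirichlet BCs
X = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
eigs = np.linalg.eigvalsh(X)

ts = np.linspace(0.0, 4.0, 101)
gap = max(abs(empirical_distribution(eigs, t) - phi_omega(t)) for t in ts)
print(gap)  # small, and shrinking as n grows: phi_n -> phi_omega
```

The observed gap is of order $$1/n$$, consistent with the vague convergence $$\mu _n \xrightarrow {v} \mu$$ stated in Lemma A.1 below.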
Lemma A.1
Let $$\{X^{(n)}\}_n$$ be a matrix sequence and $$\omega : D \subset {\mathbb {R}}^M \rightarrow {\mathbb {R}}$$ be a measurable function, D a measurable set with $$0<m(D)<\infty$$, and such that the Lebesgue integral $$\int _D \omega ({{\varvec{y}}}) m(d{{\varvec{y}}})$$ exists, finite or infinite. Then $$\{X^{(n)}\}_n \sim _\lambda \omega$$ if and only if $$\mu _n \xrightarrow {v} \mu$$, where $$\mu$$ is the measure on $${\mathbb {R}}$$ determined by the distribution function $$\phi _\omega$$ associated to $$\omega$$ and defined in ( 2.4), i.e., such that $$\mu ((-\infty ,t])=\phi _\omega (t)$$.
Proof
It is an immediate consequence of ( A.1) and ( 2.1), and the fact that
\begin{aligned}&\int F(t)\mu (dt) =\frac{1}{m(D)}\int _{D}F(\omega ({{\varvec{y}}}))m(d{{\varvec{y}}}) \end{aligned}
for every measurable function F, see ( 2.4) and [ 29, Lemma 1.24]. $$\square$$
The next lemma is a slight generalization of a well-known result by Pólya, [ 36, Satz I] (see also [ 47, Exercise 6.4.1]).
Lemma A.2
Let $$\phi _n, \phi : (a,b)\subseteq {\mathbb {R}}\rightarrow {\mathbb {R}}$$, where $$-\infty \le a < b \le +\infty$$. Suppose that $$\phi _n$$ and $$\phi$$ are nonnegative, monotone nondecreasing and that
(i)
$$\lim _{t\rightarrow a}\phi _n(t)=a_n$$ and $$\lim _{n \rightarrow \infty }a_n= \lim _{t\rightarrow a}\phi (t)=0$$;

(ii)
$$\lim _{t\rightarrow b}\phi _n(t)=b_n$$ and $$\lim _{n \rightarrow \infty }b_n= \lim _{t\rightarrow b}\phi (t)=c\in (0,+\infty )$$;

(iii)
$$\lim _{n \rightarrow \infty }\phi _n(t) = \phi (t)$$ for every t in a dense subset I of [a, b].

If $$\phi$$ is continuous then $$\phi _n$$ converges uniformly to $$\phi$$.
Proof
For the convenience of the reader we report here the proof. Fix $$\epsilon >0$$. Clearly, there exists $$\delta _1=\delta _1(\epsilon )>0$$ such that $$0\le \phi (t)< \epsilon$$ and $$c-\epsilon < \phi (t) \le c$$ for every $$t\in (a,a+\delta _1)$$ and $$(b-\delta _1,b)$$, respectively, with $$a+\delta _1<b-\delta _1$$. Choose $$t' \in (a,a+\delta _1)\cap I$$, $$t''\in (b-\delta _1,b)\cap I$$. By the continuity of $$\phi$$ and the compactness of $$[t', t'']$$, which together give uniform continuity, there exists $$\delta _2=\delta _2(\epsilon )>0$$ such that $$0\le \phi (t_{i}) - \phi (t_j) < \epsilon$$ for every $$t_i,t_j$$ in $$[t',t'']$$ such that $$0< t_{i} - t_j < \delta _2$$. Choose $$k=k(\epsilon )$$ and a set of points $$\{t_i\}_{i=0}^{k+1}$$ such that $$t_0=t'$$, $$t_{k+1}=t''$$, $$t_i \in (t',t'')\cap I$$ for $$i=1,\ldots ,k$$ and
\begin{aligned} 0<t_{i+1} - t_i < \delta _2 \quad \text{ for } i=0,\ldots ,k. \end{aligned}
By hypothesis, $$\lim _{n \rightarrow \infty }a_n= 0$$, $$\lim _{n \rightarrow \infty }b_n= c$$ and $$\lim _{n \rightarrow \infty }\phi _n(t_i) = \phi (t_i)$$ for every $$i=0,\ldots ,k+1$$, and therefore there exists $$N=N(\epsilon )$$ such that
\begin{aligned} 0\le a_n< \epsilon , \qquad 0\le |b_n -c|< \epsilon , \qquad 0\le |\phi _n(t_i) - \phi (t_i)|<\epsilon \end{aligned}
for every $$n> N$$ and for every $$i=0,\ldots ,k+1$$. By monotonicity, for every $$t \in (a,b)$$ it holds that
\begin{aligned}&\phi _n(t) - \phi (t) \\&\quad \le {\left\{ \begin{array}{ll} (\phi _n(t_{0})- \phi (t_0)) + \phi (t_0)< 2\epsilon &{} \text{ if } t \in (a,t_{0}),\\ \left( \phi _n(t_{i+1}) - \phi (t_{i+1})\right) - \left( \phi (t_{i+1})-\phi (t_i) \right)< 2\epsilon &{} \text{ if } t \in [t_i,t_{i+1}], \, i=0,\ldots ,k,\\ b_n - \phi (t_{k+1})< c + \epsilon - \phi (t_{k+1})< 2\epsilon &{} \text{ if } t \in (t_{k+1},b), \end{array}\right. } \end{aligned}
for every $$n>N$$. By a symmetric argument it can be shown that $$\phi (t) - \phi _n(t) < 2\epsilon$$, and therefore
\begin{aligned} \sup _{t\in [a,b]} |\phi _n(t) - \phi (t)| < 2\epsilon \qquad \forall \, n>N. \end{aligned}
$$\square$$
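Lemma A.2 can be visualized on a toy example (an added sketch, not from the paper): the monotone step functions $$\phi _n(t)=\lfloor nt \rfloor /n$$ converge pointwise to the continuous $$\phi (t)=t$$ on (0, 1), and, as the lemma predicts, the convergence is in fact uniform, with sup-gap at most 1/ n:

```python
import math

def phi(t):
    # continuous limit distribution, extended monotonically outside (0, 1)
    return min(max(t, 0.0), 1.0)

def phi_n(t, n):
    # monotone nondecreasing step approximants, pointwise convergent to phi
    return min(max(math.floor(n * t) / n, 0.0), 1.0)

for n in (10, 100, 1000):
    ts = [i / 5000 for i in range(5001)]
    gap = max(abs(phi_n(t, n) - phi(t)) for t in ts)
    print(n, gap)  # gap <= 1/n: the convergence is uniform
```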
We are ready to provide the proofs of Theorem 3.1 and of Corollaries 3.1, 3.2 and 3.3.
Theorem  3.1.
Proof
Let us observe that $$\omega ^*$$ is monotone increasing, right-continuous, and its (possibly nonempty) set of points of discontinuity (jumps) is at most countable. Moreover, the function $$\phi _\omega$$ defined in ( 2.4) is the distribution function associated to both $$\omega$$ and $$\omega ^*$$, that is,
\begin{aligned} \frac{1}{m(D)}\, m\left( \left\{ {\varvec{y}} \in D \, : \, \omega ({\varvec{y}}) \le t \right\} \right) = \phi _{\omega }(t) = \phi _{\omega ^*}(t) = m\left( \left\{ x \in (0,1) \, : \, \omega ^*(x) \le t \right\} \right) . \end{aligned}
The proof of the last statement follows immediately by adapting [ 47, Lemma 6.3.10] to our definition of nondecreasing monotone rearrangement. Therefore, if we indicate with $$\mu$$ the measure determined by $$\phi _{\omega }$$, by [ 29, Lemma 1.24] it holds that
\begin{aligned} \frac{1}{m(D)}\int _{D} F\left( \omega ({{\varvec{y}}})\right) m(d{{\varvec{y}}})= \int F(t)\mu (dt)= \int _0^1 F\left( \omega ^*(x)\right) m(dx) \end{aligned}
for every measurable function F. The above identity plus ( 2.1) gives ( 3.2). Define now
\begin{aligned} \phi _n (t) := \frac{N\left( X^{(n)},t\right) }{d_n}. \end{aligned}
Clearly, $$\phi _n(t)=\mu _n((-\infty ,t])$$ for every $$t\in {\mathbb {R}}$$, where $$\mu _n$$ is the probability measure defined in ( A.3). By Lemma A.1 we have that $$\mu _n \xrightarrow {v} \mu$$, and by the equivalence ( A.2) we get ( 3.3). Suppose now that $$\phi _\omega$$ is continuous. Without loss of generality and for the sake of simplicity, assume $$d^-_n\left( X^{(n)}\right) =0$$, so that $${\hat{k}}=k$$, and define $$\lambda _{k(n)}:= \lambda _{k(n)}\left( X^{(n)}\right)$$. Applying Lemma A.2, it holds that $$\phi _n \rightarrow \phi _{\omega }$$ uniformly. On the other hand, it holds that
\begin{aligned} x_0=\lim _{n \rightarrow \infty } \frac{k(n)}{d_n}= \lim _{n \rightarrow \infty }\frac{N\left( X^{(n)},\lambda _{k(n)}\right) }{d_n} = \lim _{n \rightarrow \infty } \phi _n\left( \lambda _{k(n)}\right) . \end{aligned}
Therefore, for every $$\epsilon >0$$ there exist $$N_1=N_1(\epsilon ),N_2=N_2(\epsilon ) \in {\mathbb {N}}$$ such that
\begin{aligned}&\sup _{t \in {\mathbb {R}}} \left| \phi _n\left( t\right) - \phi _{\omega }\left( t\right) \right|< \epsilon \quad \forall \, n>N_1, \\&\left| \phi _n\left( \lambda _{k(n)}\right) - x_0 \right| < \epsilon \quad \forall \, n>N_2. \end{aligned}
It follows easily that
\begin{aligned} \lim _{n \rightarrow \infty }\phi _{\omega }\left( \lambda _{k(n)}\right) =\phi _{\omega }\left( \lim _{n \rightarrow \infty }\lambda _{k(n)}\right) = x_0. \end{aligned}
Since $$x_0 \in (0,1)$$, then by ( 2.4) it holds that
\begin{aligned} \lim _{n \rightarrow \infty }\lambda _{k(n)}=t_0 \in (\inf R_{\omega }, \sup R_{\omega }). \end{aligned}
Finally, by the above relation and by ( 2.3), we can conclude that
\begin{aligned} \lim _{x\rightarrow x_0^-} \omega ^*\left( x\right) =\lim _{n \rightarrow \infty }\omega ^*\left( \frac{k(n)}{d_n}\right) = \sup _{t \in R_{\omega }} \left\{ t\le t_0 \right\} , \end{aligned}
which is ( 3.4). Let us observe now that $$x_0$$ is a jump discontinuity point for $$\omega ^*$$ if and only if there exist $$t_1<t_2 \in R_\omega$$ such that $$R_\omega \subseteq (\inf R_\omega , t_1] \cup [t_2,\sup R_\omega )$$, and $$\phi _{\omega }(t)=x_0$$ if and only if $$t\in [t_1,t_2]$$. Therefore, if $$\omega ^*$$ is continuous in $$x_0$$, then $$t_0=t_1=t_2 \in R_\omega$$ and we have ( 3.5). $$\square$$
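Equation ( 3.5) can be checked numerically on the model FD matrix tridiag$$(-1,2,-1)$$ (an added illustration, assuming its closed-form eigenvalues $$\lambda _k=4\sin ^2(k\pi /(2(n+1)))$$ and rearranged symbol $$\omega ^*(x)=4\sin ^2(\pi x/2)$$): if $$k(n)/d_n\rightarrow x_0$$, then $$\lambda _{k(n)}\rightarrow \omega ^*(x_0)$$.

```python
import math

def omega_star(x):
    # monotone rearrangement of omega(theta) = 2 - 2*cos(theta)
    return 4.0 * math.sin(math.pi * x / 2.0) ** 2

def fd_eigenvalue(k, n):
    # k-th eigenvalue of tridiag(-1, 2, -1) of size n (closed form)
    return 4.0 * math.sin(k * math.pi / (2.0 * (n + 1))) ** 2

x0 = 0.5
for n in (100, 1000, 10000):
    k = round(x0 * n)  # so that k(n)/d_n -> x0
    print(n, abs(fd_eigenvalue(k, n) - omega_star(x0)))  # -> 0
```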
Corollary  3.1.
Proof
It is immediate from ( 3.3). Let us observe that
\begin{aligned} 0\le \lim _{n \rightarrow \infty } \frac{\left| \left\{ k \, : \, \lambda _k \left( X^{(n)}\right) < \inf R_\omega \right\} \right| }{d_n} \le \lim _{n \rightarrow \infty } \frac{N(X^{(n)},\inf R_\omega )}{d_n}= \phi _\omega \left( \inf R_\omega \right) =0. \end{aligned}
Moreover, since
\begin{aligned} \frac{\left| \left\{ k \, : \, \lambda _k \left( X^{(n)}\right) \notin R_\omega \right\} \right| }{d_n}&= \frac{\left| \left\{ k \, : \, \lambda _k \left( X^{(n)}\right) \in {\mathbb {R}}\right\} \right| }{d_n} - \frac{\left| \left\{ k \, : \, \lambda _k \left( X^{(n)}\right) \in R_\omega \right\} \right| }{d_n}\\&= 1 - \frac{\left| \left\{ k \, : \, \lambda _k \left( X^{(n)}\right) \in R_\omega \right\} \right| }{d_n}\\&= 1 - \frac{N(X^{(n)},\sup R_\omega )}{d_n}+\frac{\left| \left\{ k\, : \, \lambda _k \left( X^{(n)}\right) < \inf R_\omega \right\} \right| }{d_n}, \end{aligned}
then passing to the limit we get
\begin{aligned} \lim _{n \rightarrow \infty } \frac{\left| \left\{ k \, : \, \lambda _k \left( X^{(n)}\right) \notin R_\omega \right\} \right| }{d_n}&= 1- \phi _\omega \left( \sup R_\omega \right) =0. \end{aligned}
The second part of the claim follows instead from ( 3.3) and the easy fact that
\begin{aligned} \left| \left\{ k \, : \, \lambda _k \left( X^{(n)}\right) \le t,\, \lambda _k \left( X^{(n)}\right) \in R_\omega \right\} \right|&= \left| \left\{ k \, : \, \lambda _k \left( X^{(n)}\right) \le t \right\} \right| \\&\quad - \left| \left\{ k \, : \, \lambda _k \left( X^{(n)}\right) \le t,\, \lambda _k \left( X^{(n)}\right) \notin R_\omega \right\} \right| , \end{aligned}
where
\begin{aligned} \left| \left\{ k \, : \, \lambda _k \left( X^{(n)}\right) \le t,\, \lambda _k \left( X^{(n)}\right) \notin R_\omega \right\} \right| \le \left| \left\{ k \, : \, \lambda _k \left( X^{(n)}\right) \notin R_\omega \right\} \right| = o(d_n). \end{aligned}
$$\square$$
Corollary  3.2.
Proof
Since $$\omega ^*$$ is absolutely continuous, it is differentiable almost everywhere. Let $$x_0 \in [0,1]$$ be such that $$(\omega ^*)'_{|x=x_0}$$ exists and is finite. It is clear that $$\omega ^*_n$$ is monotone (piecewise constant) and right-continuous for any n, that $$\omega ^*_n(\hat{m}/d_n)= \lambda _{\hat{m}}\left( X^{(n)}\right)$$ for any $$\hat{m} \in \{1,\ldots ,d_n\}$$, and that $$\omega ^*_n \rightarrow \omega ^*$$ uniformly as $$n\rightarrow \infty$$, because of ( 3.5) and Lemma A.2. Therefore, by hypothesis we have that
\begin{aligned} \lim _{n \rightarrow \infty }\frac{g\left( \lambda _{\hat{k}(n)+\left\lceil \frac{d_n}{h(d_n)}\right\rceil }\left( X^{(n)}\right) \right) - g\left( \lambda _{\hat{k}(n)}\left( X^{(n)}\right) \right) }{\frac{1}{h(d_n)}}&= \lim _{n \rightarrow \infty }\frac{g\left( \omega _n^*\left( \frac{\hat{k}(n)}{d_n} +\frac{1}{h(d_n)} \right) \right) - g\left( \omega _n^*\left( \frac{\hat{k}(n)}{d_n}\right) \right) }{\frac{1}{h(d_n)}}\\&= g'(\omega ^*(x_0))\lim _{n \rightarrow \infty }\frac{\omega ^*_n\left( \frac{\hat{k}(n)}{d_n} +\frac{1}{h(d_n)} \right) - \omega ^*_n\left( \frac{\hat{k}(n)}{d_n}\right) }{\frac{1}{h(d_n)}}\\&= g'(\omega ^*(x_0))\lim _{n \rightarrow \infty }\frac{\omega ^*\left( \frac{\hat{k}(n)}{d_n} +\frac{1}{h(d_n)} \right) - \omega ^*\left( \frac{\hat{k}(n)}{d_n}\right) }{\frac{1}{h(d_n)}}\\&= g'(\omega ^*(x_0))\lim _{n \rightarrow \infty } \omega '\left( \xi _n\right) , \; \xi _n \in \left[ \frac{\hat{k}(n)}{d_n},\frac{\hat{k}(n)}{d_n}+\frac{1}{h(d_n)}\right] \\&=g'(\omega ^*(x_0))(\omega ^*)'(x_0). \end{aligned}
Since $$m \left( \left\{ x \in [0,1] \, : \, \not \exists \left( \omega ^*\right) '(x) \text{ or } \omega ^*(x)=0 \right\} \right) =0$$, we conclude. $$\square$$
Corollary  3.3.
Proof
Let us observe that, since there are no outliers, eventually $$\min (R_\omega )\le \lambda _{k}\left( X^{(n)}\right) \le \max (R_\omega )$$ for every k. Without loss of generality and for the sake of simplicity, let $$d^-_n=0$$. Suppose the claim is false. Then there exists a sequence $$k=k(n)$$ such that $$|\lambda _{k(n)}\left( X^{(n)}\right) - \omega ^*\left( k(n)/(d_n+1)\right) | >\epsilon$$ for some $$\epsilon >0$$. On the other hand, by the boundedness of $$k(n)/(d_n+1)$$ and by Theorem 3.1, passing to a subsequence we can assume that $$k(n)/(d_{n}+1)$$ converges to a point $$x_0$$ in [0, 1] and that ( 3.5) holds. If $$\omega ^*\left( k(n)/(d_n+1)\right) =0$$ eventually, then $$|\lambda _{k(n)}\left( X^{(n)}\right) - \omega ^*\left( k(n)/(d_n+1)\right) |\rightarrow 0$$, which is a contradiction. At the same time, if $$\omega ^*\left( k(n)/(d_n+1)\right)$$ is not eventually equal to zero, then by passing again to a subsequence we can assume that $$\omega ^*\left( k(n)/(d_n+1)\right) \ne 0$$, and by the boundedness of $$R_\omega$$
\begin{aligned} \left| \lambda _{k(n)}\left( X^{(n)}\right) - \omega ^*\left( \frac{k(n)}{d_n+1}\right) \right| \le c\left| \frac{\lambda _{k(n)}\left( X^{(n)}\right) }{ \omega ^*\left( \frac{k(n)}{d_n+1}\right) } - 1\right| \rightarrow 0, \end{aligned}
which is again a contradiction. $$\square$$
Before introducing the proofs of Theorem 3.2 and Theorem 3.3, we need a couple of lemmas.
Lemma A.3
With the same notation as in Theorem 3.2, if $$\phi _{\omega _2}$$ is continuous and $$\omega _2^*(x_0)=0$$ then
\begin{aligned} \frac{{\hat{k}}(n)}{d_n}\rightarrow x_0 \quad \text{ if } \text{ and } \text{ only } \text{ if } \quad \frac{k(n)}{d_n}\rightarrow 0. \end{aligned}
Proof
Notice that
\begin{aligned} \frac{d_n^-\left( Y^{(n)}\right) }{d_n}= \mu _n\left( (-\infty ,0)\right) \end{aligned}
where $$\mu _n$$ is the probability measure defined in ( A.3). Since $$\phi _{\omega _2}$$ is continuous, applying [ 14, Theorem 4.3.2]
\begin{aligned} \lim _{n \rightarrow \infty }\frac{d_n^-\left( Y^{(n)}\right) }{d_n}= \phi _{\omega _2}(0). \end{aligned}
By definition, $$\phi _{\omega _2}$$ is the generalized inverse of $$\omega _2^*$$, and therefore $$\omega _2^*(x_0)=0$$ implies $$\phi _{\omega _2}(0)=x_0$$. From this result and ( 3.1) we get that
\begin{aligned} \lim _{n \rightarrow \infty } \frac{{\hat{k}}(n)}{d_n}= & {} \lim _{n \rightarrow \infty } \frac{k(n)}{d_n} + \lim _{n \rightarrow \infty }\frac{d_n^-(Y^{(n)})}{d_n} + \lim _{n \rightarrow \infty }\frac{\mathbb {1}_{(-\infty ,0)}(k(n))}{d_n}\\= & {} \lim _{n \rightarrow \infty } \frac{k(n)}{d_n} + \phi _{\omega _2}(0) =\lim _{n \rightarrow \infty } \frac{k(n)}{d_n} + x_0, \end{aligned}
and we conclude. $$\square$$
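The identity $$\lim _{n}d_n^-\left( Y^{(n)}\right) /d_n=\phi _{\omega _2}(0)$$ used above can be checked on a sign-indefinite example (an added illustration, assuming the matrix tridiag(1, 0, 1), whose symbol $$\omega (\theta )=2\cos \theta$$ gives $$\phi _\omega (0)=1/2$$):

```python
import math

n = 999
# closed-form eigenvalues of tridiag(1, 0, 1): 2*cos(k*pi/(n+1)), k = 1..n;
# the symbol 2*cos(theta) is negative on half of [0, pi]
eigs = [2.0 * math.cos(k * math.pi / (n + 1)) for k in range(1, n + 1)]
neg_fraction = sum(e < 0 for e in eigs) / n
print(neg_fraction)  # -> phi_omega(0) = 1/2 as n -> infinity
```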
Lemma A.4
With the same hypotheses and notation as in Theorem 3.2, it holds that
\begin{aligned} \mathscr {E}= \sup \left\{ \lim _{n_j \rightarrow \infty } \delta _{\hat{k}(n_j)}^{(n_j)} \, : \, \left\{ \delta _{\hat{k}(n_j)}^{(n_j)}\right\} _{n_j} \in F \right\} , \end{aligned}
(A.4)
or equivalently
\begin{aligned} \mathscr {E}&= \max \left\{ \sup \left\{ \lim _{n_j \rightarrow \infty } \delta _{\hat{k}(n_j)}^{(n_j)} \, : \, \left\{ \delta _{\hat{k}(n_j)}^{(n_j)}\right\} _{n_j} \in F\cap G_{x_0}^c \right\} ; \nonumber \right. \\&\left. \qquad \sup \left\{ \lim _{n_j \rightarrow \infty } \delta _{\hat{k}(n_j)}^{(n_j)} \, : \, \left\{ \delta _{\hat{k}(n_j)}^{(n_j)}\right\} _{n_j} \in F\cap G_{x_0} \right\} \right\} \nonumber \\&=\max \left\{ \sup \left\{ \lim _{n_j \rightarrow \infty } \delta _{\hat{k}(n_j)}^{(n_j)} \, : \, \left\{ \delta _{\hat{k}(n_j)}^{(n_j)}\right\} _{n_j} \in F\cap G_{x_0}^c \right\} ; \sup H_{x_0} \right\} . \end{aligned}
(A.5)
In particular, if $$\omega _2^*(x_0)=0$$, then we have that
\begin{aligned} \sup H_{x_0} \ge \sup _{\begin{array}{c} k\in [\kappa ^-,\kappa ^+]\\ k\ne 0 \end{array}}\left\{ \sigma _k \right\} , \end{aligned}
and if
$$\limsup _{n \rightarrow \infty }\left| \left| \frac{\lambda _{\vartheta (k(n))}\left( X^{(n)}\right) }{\lambda _{k(n)}\left( Y^{(n)}\right) }-1\right| - \sigma _{k(n)} \right| =0 \quad \text{ for } \text{ every } k(n) \text{ such } \text{ that } \lim _{n \rightarrow \infty }\frac{k(n)}{d_n}=0,$$
then
\begin{aligned} \sup H_{x_0}=\sup _{\begin{array}{c} k\in [\kappa ^-,\kappa ^+] \\ k\ne 0 \end{array}} \{\sigma _{k}\}. \end{aligned}
(A.6)
Proof
The first part follows from the definition. Indeed,
\begin{aligned} \mathscr {E}= \limsup _n \mathscr {E}_n&= \limsup _n \max _{\hat{k}=1,\ldots ,d_n}\left\{ \delta ^{(n)}_{\hat{k}} \right\} \\&= \sup \left\{ \lim _{n_j \rightarrow \infty } \max _{\hat{k}=1,\ldots ,d_{n_j}}\left\{ \delta ^{(n_j)}_{\hat{k}} \right\} \, : \, \left\{ \max _{\hat{k}=1,\ldots ,d_{n_j}}\left\{ \delta ^{(n_j)}_{\hat{k}} \right\} \right\} _{n_j} \in F \right\} \\&=\sup (A). \end{aligned}
If we call $$B:= \left\{ \lim _{n_j \rightarrow \infty } \delta _{\hat{k}(n_j)}^{(n_j)} \, : \, \left\{ \delta _{\hat{k}(n_j)}^{(n_j)}\right\} _{n_j} \in F \right\}$$, then it is easy to check that $$A\subset B$$, and then $$\sup (A)\le \sup (B)$$. On the other hand, for every element $$b\in B$$ there exists an increasing sequence of indices $$\{n_{j_m}\}\subset \{n_j\}$$ such that
$$b=\lim _{n_j \rightarrow \infty }\delta _{\hat{k}(n_j)}^{(n_j)} \le \limsup _{n_j\rightarrow \infty } \max _{\hat{k}=1,\ldots ,d_{n_j}}\left\{ \delta ^{(n_j)}_{\hat{k}} \right\} = \lim _{n_{j_m} \rightarrow \infty } \max _{\hat{k}=1,\ldots ,d_{n_{j_m}}}\left\{ \delta ^{(n_{j_m})}_{\hat{k}} \right\} .$$
Therefore, for every $$b\in B$$ there exists $$a \in A$$ such that $$b\le a$$, and then it follows that $$\sup (B)\le \sup (A)$$, that is, $$\sup (B)=\sup (A)$$ which is exactly ( A.4). Identity ( A.5) follows trivially by the definition of $$H_{x_0}$$. By Lemma A.3, if $$\omega _2^*(x_0)=0$$ we have that
$$G_{x_0}= \left\{ \left\{ \delta _{\hat{k}(n_j)}^{(n_j)} \right\} _{n_j} \, : \, \lim _{n_j\rightarrow \infty } \frac{k(n_j)}{d_{n_j}}= 0 \; \text{ eventually } \text{ by } \text{ passing } \text{ to } \text{ a } \text{ subsequence } \right\} ,$$
where k( n) is the sub-index associated to $$\lambda _{\hat{k}(n)}\left( Y^{(n)}\right)$$, that is, $$k(n)= \vartheta ^{-1}_{Y^{(n)}}(\hat{k}(n))$$. Without loss of generality, we will suppose hereafter that $$\lim _{n_j\rightarrow \infty } \frac{k(n_j)}{d_{n_j}}= 0$$ without passing to a subsequence, and that $$d_{n}^-\left( Y^{(n)}\right) =d_{n}^-\left( X^{(n)}\right)$$, that is, $$\vartheta (k)=k$$, so that we can avoid passing to the index $$\hat{k}$$. Because of this last assumption, we have that
$$\left| \frac{\lambda _{\vartheta _{Y\rightarrow X}(k)}\left( X^{(n)}\right) }{\lambda _{k}\left( Y^{(n)}\right) }-1\right| = \left| \frac{\lambda _{k}\left( X^{(n)}\right) }{\lambda _{k}\left( Y^{(n)}\right) }-1\right| = \delta _{k}^{(n)}, \quad k\in \left\{ d_{n}^-\left( Y^{(n)}\right) ,\ldots , d_{n}^+\left( Y^{(n)}\right) \right\} .$$
Let us define now
$$\limsup _{n\rightarrow \infty }\delta _{k}^{(n)}= \sigma _{k} \in [0, \infty ].$$
Then there exists a subsequence $$\{n_j\}$$ such that
$$\sigma _{k}= \lim _{n_j\rightarrow \infty }\delta _{k}^{(n_j)}.$$
So, $$\left\{ \delta _{k}^{(n_j)}\right\} \in F$$ and clearly $$\frac{k}{d_{n_j}}\rightarrow 0$$, consequently $$\sigma _{k} \in H_{x_0}$$. Therefore $$\sup _{\begin{array}{c} k\in [\kappa ^-,\kappa ^+] \\ k\ne 0 \end{array}} \{\sigma _{k}\} \le \sup H_{x_0}$$. On the other hand, assuming that
$$\limsup _{n \rightarrow \infty }\left| \delta _{k(n)}^{(n)} - \sigma _{k(n)} \right| =0 \quad \text{ for } \text{ every } k(n) \text{ such } \text{ that } \lim _{n \rightarrow \infty }\frac{k(n)}{d_n}=0,$$
then, given an element $$\delta _{k(n)}^{(n)} \in H_{x_0}$$, there exists a subsequence such that
$$\lim _{j \rightarrow \infty }\left| \delta _{k(n_j)}^{(n_j)} - \sigma _{k(n_j)} \right| =0.$$
Therefore,
\begin{aligned} \lim _{n \rightarrow \infty } \delta _{k(n)}^{(n)}&\le \lim _{j \rightarrow \infty }\left| \delta _{k(n_j)}^{(n_j)} - \sigma _{k(n_j)}\right| + \sigma _{k(n_j)}\\&\le \lim _{j \rightarrow \infty }\left| \delta _{k(n_j)}^{(n_j)} - \sigma _{k(n_j)}\right| + \sup _{\begin{array}{c} k\in [\kappa ^-,\kappa ^+] \\ k\ne 0 \end{array}} \{\sigma _{k}\}\\&= \sup _{\begin{array}{c} k\in [\kappa ^-,\kappa ^+] \\ k\ne 0 \end{array}} \{\sigma _{k}\}, \end{aligned}
and it follows that $$\sup H_{x_0} \le \sup _{\begin{array}{c} k\in [\kappa ^-,\kappa ^+] \\ k\ne 0 \end{array}} \{\sigma _{k}\}$$. $$\square$$
Theorem  3.2.
Proof
Let us begin observing that, by ( A.4),
$$\mathscr {E}= \sup \left\{ \lim _{n_j \rightarrow \infty } \delta _{\hat{k}(n_j)}^{(n_j)} \, : \, \left\{ \delta _{\hat{k}(n_j)}^{(n_j)}\right\} _{n_j} \in F \right\} ,$$
and that the set J of points where $$\omega _1^*$$ or $$\omega _2^*$$ is discontinuous is at most countable, the possible discontinuity points being jumps. Moreover, due to the strict monotonicity of $$\omega _2^*$$, there can exist at most one point $$x_0\in (0,1)$$ such that $$\omega _2^*(x_0)=0$$. Fix $$x_0\in (0,1)$$ such that both $$\omega _1^*$$ and $$\omega _2^*$$ are continuous at $$x_0$$, and such that $$\omega ^*_{2}(x_0)\ne 0$$. Let $$\left\{ \hat{k}(n_j)\right\}$$ be a sequence such that $$\hat{k}(n_j)/d_{n_j}\rightarrow x_0^-$$ as $$n_j \rightarrow \infty$$. Then
$$\lim _{n_j \rightarrow \infty } \delta _{\hat{k}(n_j)}^{(n_j)}= \left| \frac{\omega _1^*(x_0)}{\omega _2^*(x_0)} -1\right| <\infty$$
due to ( 3.4). This proves that
$$\mathscr {E} \ge \sup _{\begin{array}{c} x\in (0,1)\setminus J;\\ x:\omega _2^*(x)\ne 0 \end{array}} \left| \frac{\omega _1^*(x)}{\omega _2^*(x)} -1 \right| =\sup _{\begin{array}{c} x\in (0,1);\\ x:\omega _2^*(x)\ne 0 \end{array}} \left| \frac{\omega _1^*(x)}{\omega _2^*(x)} -1 \right| = \max \left\{ \sup _{\begin{array}{c} x\in (0,1);\\ x:\omega _2^*(x)\ne 0 \end{array}} \left| \frac{\omega _1^*(x)}{\omega _2^*(x)} -1 \right| ; 0\right\} .$$
Let us now assume that there exists $$x_0 \in [0,1]$$ such that $$\omega _2^*(x_0)=0$$. If $$\omega _1^*(x_0)\ne 0$$, then by the right-continuity of $$\omega _2^*$$,
$$\sup _{\begin{array}{c} x\in (0,1);\\ x:\omega _2^*(x)\ne 0 \end{array}} \left| \frac{\omega _1^*(x)}{\omega _2^*(x)} -1 \right| =+\infty = \sigma .$$
On the contrary, if there exists $$x_0 \in [0,1]$$ such that $$\omega _2^*(x_0)=0= \omega _1^*(x_0)$$, then by ( A.5) it holds that
$$\mathscr {E} \ge \sup \left\{ \lim _{n_j \rightarrow \infty } \delta _{\hat{k}(n_j)}^{(n_j)} \, : \, \left\{ \delta _{\hat{k}(n_j)}^{(n_j)}\right\} _{n_j} \in F\cap G_{x_0} \right\} = \sup H_{x_0} = \sigma .$$
Therefore, we can conclude that
$$\mathscr {E} \ge \max \left\{ \sup _{\begin{array}{c} x\in (0,1);\\ x:\omega _2^*(x)\ne 0 \end{array}} \left| \frac{\omega _1^*(x)}{\omega _2^*(x)} -1 \right| ; \sigma \right\} ,$$
which is exactly ( 3.7).
Let us now suppose that both $$\omega _1^*, \omega _2^*$$ are continuous, that $$\{X^{(n)}\}_n, \{Y^{(n)}\}_n$$ do not have outliers and that $$R_{\omega _1}, R_{\omega _2}$$ are bounded, that is, compact. If we prove the reverse inequality in ( 3.7), then we can conclude. By our assumptions it follows easily that
\begin{aligned} \lambda _{{\hat{k}}}\left( X^{(n)}\right) \in R_{\omega _1}= \omega _1^*([0,1]), \qquad \lambda _{{\hat{k}}}\left( Y^{(n)}\right) \in R_{\omega _2}= \omega _2^*([0,1]) \end{aligned}
for every n and $${\hat{k}}= 1,\ldots , d_n$$, that is, for every n and for every $${\hat{k}}$$ there exist $$x_1,x_2\in [0,1]$$ such that
\begin{aligned} \lambda _{{\hat{k}}}\left( X^{(n)}\right) = \omega _1^*(x_1), \qquad \lambda _{{\hat{k}}}\left( Y^{(n)}\right) = \omega _2^*(x_2). \end{aligned}
Notice that if there exists $$x_0 \in [0,1]$$ such that $$\omega _2^*(x_0)=0\ne \omega _1^*(x_0)$$ then
\begin{aligned} \mathscr {E} = +\infty = \sup _{\begin{array}{c} x\in (0,1);\\ x:\omega _2^*(x)\ne 0 \end{array}} \left| \frac{\omega _1^*(x)}{\omega _2^*(x)} -1 \right| = \sigma . \end{aligned}
Therefore, hereafter we will suppose that either $$\omega _2^*(x)\ne 0$$ for every $$x \in [0,1]$$ or that there exists $$x_0 \in [0,1]$$ such that $$\omega _2^*(x_0)=0=\omega _1^*(x_0)$$. Fix a sequence $$\left\{ \delta _{{\hat{k}}(n_j)}^{(n_j)}\right\} _{n_j} \in F$$. By the boundedness of $${\hat{k}}(n_{j})/d_{n_{j}}$$, there exists a subsequence $$\{n_{j_m}\}$$ such that
\begin{aligned} \lim _{n_{j_m} \rightarrow \infty } \frac{{\hat{k}}(n_{j_m})}{d_{n_{j_m}}} = x_0 \in [0,1]. \end{aligned}
Then by Theorem 3.1 it happens that
\begin{aligned} {\left\{ \begin{array}{ll} \lim _{n_{j_m} \rightarrow \infty } \lambda _{{\hat{k}}(n_{j_m})}\left( X^{(n_{j_m})}\right) = \omega _1^*(x_0) \in (-\infty ,+\infty ),\\ \lim _{n_{j_m} \rightarrow \infty } \lambda _{{\hat{k}}(n_{j_m})}\left( Y^{(n_{j_m})}\right) = \omega _2^*(x_0)\in (-\infty ,+\infty ). \end{array}\right. } \end{aligned}
If $$\omega ^*_{2}(x_0)\ne 0$$, then $$\lim _{n_j \rightarrow \infty } \delta _{{\hat{k}}(n_j)}^{(n_j)}$$ is finite, in particular
\begin{aligned} \lim _{n_j \rightarrow \infty } \delta _{{\hat{k}}(n_j)}^{(n_j)} = \lim _{n_{j_m} \rightarrow \infty }\left| \frac{\lambda _{{\hat{k}}(n_{j_m})}\left( X^{(n_{j_m})}\right) }{\lambda _{{\hat{k}}(n_{j_m})}\left( Y^{(n_{j_m})}\right) }- 1 \right| = \left| \frac{\omega _1^*(x_0)}{\omega _2^*(x_0)} -1 \right| . \end{aligned}
If, instead, $$x_0$$ is such that $$\omega _2^*(x_0)=0= \omega _1^*(x_0)$$, then by ( A.6) it holds that
\begin{aligned} \lim _{n_j \rightarrow \infty } \delta _{{\hat{k}}(n_j)}^{(n_j)}\le \sup _{\begin{array}{c} k\in [\kappa ^-,\kappa ^+]\\ k\ne 0 \end{array}}\left\{ \limsup _{n \rightarrow \infty }\left| \frac{\lambda _{\vartheta (k)}\left( X^{(n)}\right) }{\lambda _{k}\left( Y^{(n)}\right) }-1\right| \right\} = \sigma . \end{aligned}
Combining all the results obtained till now, we can conclude that
\begin{aligned} \mathscr {E} \le \max \left\{ \sup _{\begin{array}{c} x\in (0,1);\\ x:\omega _2^*(x)\ne 0 \end{array}} \left| \frac{\omega _1^*(x)}{\omega _2^*(x)} -1 \right| ; \sigma \right\} . \end{aligned}
$$\square$$
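To make the quantity $$\mathscr {E}$$ of Theorem 3.2 tangible, the following is a numerical aside under model assumptions, not part of the proof: comparing the scaled FD eigenvalues $$(n+1)^2\,4\sin ^2(k\pi /(2(n+1)))$$ with the exact eigenvalues $$k^2\pi ^2$$ of the Dirichlet Laplacian on (0, 1), the maximum spectral relative error approaches $$1-4/\pi ^2\approx 0.5947$$, attained at the upper end of the spectrum.

```python
import math

def delta(k, n):
    # relative error between the k-th eigenvalue of the second-order FD
    # discretization of -u'' on (0,1) with Dirichlet BCs and the exact
    # eigenvalue k^2 * pi^2
    fd = (n + 1) ** 2 * 4.0 * math.sin(k * math.pi / (2.0 * (n + 1))) ** 2
    exact = (k * math.pi) ** 2
    return abs(fd / exact - 1.0)

n = 2000
E_n = max(delta(k, n) for k in range(1, n + 1))
print(E_n, 1.0 - 4.0 / math.pi ** 2)  # E_n -> 1 - 4/pi^2 ~ 0.5947
```

The maximum is attained at $$k=n$$, since $$\delta _k = 1-(\sin x/x)^2$$ with $$x=k\pi /(2(n+1))$$ is increasing in k.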
Theorem  3.3.
Proof
Clearly, if we set
\begin{aligned} {\hat{X}}^{(n)}:= h(d_n)X^{(n)}, \qquad {\hat{Y}}^{(n)}:= h(d_n)Y^{(n)}, \end{aligned}
then
\begin{aligned} \mathscr {E}\left( X^{(n)}, Y^{(n)}\right) = \mathscr {E}\left( {\hat{X}}^{(n)}, {\hat{Y}}^{(n)}\right) . \end{aligned}
By Item (iv), Lemma A.1 and Theorem 3.1, it follows immediately that $$\left\{ {\hat{Y}}^{(n)}\right\} \sim _\lambda \zeta ^*$$, that $$\lambda _k\left( {\hat{Y}}^{(n)} \right) \in R_{\zeta ^*}= \overline{\zeta ^*([0,1])}$$ for every $$k=-d^-_n,\ldots ,-1,1,\ldots ,d^+_n$$, and that therefore the sequence $$\left\{ {\hat{Y}}^{(n)}\right\}$$ satisfies the hypotheses of Theorem 3.2. By items (ii)-(iii), we have that $$\left\{ {\hat{X}}^{(n)}\right\} \sim _\lambda \omega$$ and it also satisfies the hypotheses of Theorem 3.2. Therefore,
\begin{aligned} \mathscr {E}\left( {\hat{X}}^{(n)}, {\hat{Y}}^{(n)}\right) \ge \max \left\{ \sup _{\begin{array}{c} x\in [0,1]\\ x\,:\, \zeta ^*(x)\ne 0 \end{array}} \left| \frac{\omega ^*(x)}{\zeta ^*(x)} -1\right| ; \sigma \right\} . \end{aligned}
By Item (i), it follows that $$\sigma =0$$ and we conclude. $$\square$$

## Proofs of Section 4

Proposition B.1
Let us consider a Sturm-Liouville operator defined by the formal equation
\begin{aligned} \mathscr {L}_{\text {BCs},p(x),q(x),w(x)}[u](x):= \frac{1}{w(x)}\left[ -\partial _x\left( p(x)\partial _xu(x)\right) +q(x)u(x)\right] , \qquad x\in (a,b)\subset {\mathbb {R}}, \end{aligned}
such that
(i)
$$p, p',w,w',q,(pw)',(pw)'' \in C([a,b])$$;

(ii)
$$p,w>0$$;

(iii)
regular BCs.

Discretize the above self-adjoint linear differential operator $$\mathscr {L}_{\text {BCs},p(x),q(x),w(x)}$$ by means of a numerical scheme, and let $$\mathscr {L}_{\text {BCs},p(x),q(x),w(x)}^{(n,\eta )}$$ be the corresponding discrete operator, where n is the mesh fineness parameter and $$\eta$$ is the order of approximation of the numerical scheme. Define $$B=\int _a^b \sqrt{\frac{w(x)}{p(x)}}m(dx)$$. If:
(a)
for every fixed $$k\in {\mathbb {N}}$$,
\begin{aligned} \lim _{n \rightarrow \infty } \lambda _k\left( \mathscr {L}_{\text {BCs},p(x),q(x),w(x)}^{(n,\eta )}\right) = \lambda _k\left( \mathscr {L}_{\text {BCs},p(x),q(x),w(x)}\right) ; \end{aligned}

(b)
there exists $$\omega : [a,b]\times [-\pi ,\pi ] \rightarrow {\mathbb {R}}$$, $$\omega \in \text {L}^1\left( [a,b]\times [-\pi ,\pi ]\right)$$ such that
\begin{aligned}&\left\{ (d_n+1)^{-2}\mathscr {L}_{\text {BCs},p(x),q(x),w(x)}^{(n,\eta )}\right\} _n\\&\quad =\left\{ \hat{\mathscr {L}}_{\text {BCs},p(x),q(x),w(x)}^{(n,\eta )}\right\} _n \sim _{\lambda }\omega (x,\theta ) \quad (x,\theta ) \in [a,b]\times [-\pi ,\pi ]; \end{aligned}

(c)
for every $$\eta$$, the monotone rearrangement $$\omega ^*_\eta$$, as defined in ( 2.3), is such that
\begin{aligned} \omega ^*_\eta (x) \sim \frac{x^2\pi ^2}{B^2}, \qquad \text{ as } x\rightarrow 0. \end{aligned}

Then, for every fixed $$k\in {\mathbb {N}}$$
\begin{aligned}&\lim _{n \rightarrow \infty } \left| \frac{(d_n+1)^2\omega ^*_\eta \left( \frac{k}{d_n+1}\right) }{\lambda _k\left( \mathscr {L}_{\text {BCs},p(x),q(x),w(x)}^{(n,\eta )} \right) } -1 \right| \\&\quad = \lim _{n \rightarrow \infty } \left| \frac{(d_n+1)^2\omega ^*_\eta \left( \frac{k}{d_n+1}\right) }{\lambda _k\left( \mathscr {L}_{\text {BCs},V}\right) } -1 \right| =c_k\ge 0, \qquad \lim _{k \rightarrow \infty } c_k =0, \end{aligned}
with $$c_k$$ independent of $$\eta$$, and where $$\mathscr {L}_{\text {BCs},V}$$ is the differential operator associated to the normal form of $$\mathscr {L}_{\text {BCs},p(x),q(x),w(x)}$$ by the Liouville transform $$y(x):=\int _a^x\sqrt{\frac{w(t)}{p(t)}}m(dt)$$. In particular,
\begin{aligned} \lim _{n \rightarrow \infty }\mathbf{err }_{\alpha ,r,k}^{*,(n)}= c_k, \end{aligned}
where $$\mathbf{err }_{\alpha ,r,k}^{*,(n)}$$ is the analytic spectral relative error of Definition 4.1.
Proof
By standard theory it holds that
\begin{aligned} \lambda _k\left( \mathscr {L}_{\text {BCs},p(x),q(x),w(x)} \right) = \lambda _k\left( \mathscr {L}_{\text {BCs},V(y)}\right) \qquad \text{ and } \qquad \lambda _k\left( \mathscr {L}_{\text {dir},V\equiv 0}\right) = \lambda _k\left( \varDelta _{\text {dir}}\right) =\frac{k^2\pi ^2}{B^2}. \end{aligned}
Therefore, from item (c)
\begin{aligned} \lim _{n \rightarrow \infty } (d_n+1)^2\omega ^*_\eta \left( \frac{k}{d_n+1}\right) = \frac{k^2\pi ^2}{B^2} = \lambda _k\left( \varDelta _{\text {dir}}\right) . \end{aligned}
(B.1)
It is then immediate to prove that if
\begin{aligned} \lambda _k\left( \mathscr {L}_{\text {BCs},V(y)}\right) \ne \frac{k^2\pi ^2}{B^2} = \lambda _k\left( \varDelta _{\text {dir}}\right) , \end{aligned}
then
\begin{aligned} \lim _{n \rightarrow \infty } \left| \frac{(d_n+1)^2\omega ^*_\eta \left( \frac{k}{d_n+1}\right) }{\lambda _k\left( \mathscr {L}_{\text {BCs},p(x),q(x),w(x)}^{(n,\eta )} \right) } -1 \right|&= \lim _{n \rightarrow \infty } \left| \frac{(d_n+1)^2\omega ^*_\eta \left( \frac{k}{d_n+1}\right) }{\lambda _k\left( \mathscr {L}_{\text {BCs},p(x),q(x),w(x)} \right) } -1 \right| \\&=\lim _{n \rightarrow \infty } \left| \frac{\frac{k^2\pi ^2}{B^2}}{\lambda _k\left( \mathscr {L}_{\text {BCs},V(y)} \right) } -1 \right| \\&=c_k>0. \end{aligned}
Moreover,
\begin{aligned} \lim _{n \rightarrow \infty }\lambda _k\left( \mathscr {L}_{\text {BCs},p(x),q(x),w(x)}^{(n,\eta )} \right)= & {} \lambda _k\left( \mathscr {L}_{\text {BCs},p(x),q(x),w(x)} \right) \\= & {} \lambda _k\left( \mathscr {L}_{\text {BCs},V(y)}\right) \sim \frac{k^2\pi ^2}{B^2} \quad \text{ for } k \rightarrow \infty , \end{aligned}
and then $$c_k \rightarrow 0$$ as $$k \rightarrow \infty$$. $$\square$$
Corollary B.1
If
\begin{aligned} \omega _\eta (x,\theta )= \frac{p(x)}{w(x)(b-a)^2}f_\eta (\theta ), \qquad (x,\theta ) \in [a,b]\times [0,\pi ], \end{aligned}
with $$f_\eta (\theta )$$ nonnegative, nondecreasing and such that $$f_\eta (\theta ) \sim \theta ^2$$ as $$\theta \rightarrow 0$$, then item (c) of Proposition B.1 is satisfied.
Proof
From (2.4) and (2.3), for all $$t \in \left[ 0, t_0\right]$$, with
\begin{aligned} t_0 = (b-a)^{-2}\max _{[a,b]}\left( p(y)/w(y)\right) \sup _{[0,\pi ]}\left( f_\eta (\theta )\right) , \end{aligned}
we have that
\begin{aligned} \phi _{\omega _\eta } (t) = \frac{1}{\pi (b-a)} \int _a^b \left( \int _0^\pi \mathbb {1}_{\varOmega _y(t)} (\theta ) d\theta \right) dy, \end{aligned}
where
\begin{aligned} \varOmega _y(t):=\left\{ \theta \in [0,\pi ] \, : \, 0\le f_\eta (\theta )\le \frac{(b-a)^2w(y)}{p(y)}t\right\} . \end{aligned}
By the monotonicity of $$f_\eta$$, it holds that $$\varOmega _y(t)=[0,\theta _y(t)]$$, with $$\theta _y(t):=\sup \{\theta \in [0,\pi ] \, : \, f_\eta (\theta )\le \frac{(b-a)^2w(y)}{p(y)}t\}$$, and then $$\theta _y(t) \rightarrow 0$$ as $$t \rightarrow 0$$. Hence, by the boundedness of $$w(y)/p(y)$$, for every $$\epsilon >0$$ there exists $$\delta _\epsilon >0$$, independent of y, such that for every $$t\in [0,\min \{t_0,\delta _\epsilon \}]$$ it holds that $$(1-\epsilon )\theta ^2< f_\eta (\theta )< (1+\epsilon )\theta ^2$$ for every $$\theta \in \varOmega _y(t)$$, and
\begin{aligned} \frac{1}{\pi (b-a)} \int _a^b m\left( \varOmega _y^{+}(t) \right) m(dy) \le \phi _{\omega _\eta } (t) \le \frac{1}{\pi (b-a)} \int _a^b m\left( \varOmega _y^{-}(t) \right) m(dy), \end{aligned}
where
\begin{aligned}&\varOmega _y^{+}(t):=\left\{ \theta \in [0,\pi ] \, : \, \theta ^2\le \frac{(b-a)^2}{1+\epsilon }\frac{w(y)}{p(y)}t\right\} , \\&\varOmega _y^{-}(t):=\left\{ \theta \in [0,\pi ] \, : \, \theta ^2\le \frac{(b-a)^2}{1-\epsilon }\frac{w(y)}{p(y)}t\right\} . \end{aligned}
Then it holds that
\begin{aligned} \frac{B}{\pi \sqrt{1+\epsilon }}\sqrt{t} \le \phi _{\omega _\eta }(t) \le \frac{B}{\pi \sqrt{1-\epsilon }}\sqrt{t}, \qquad B=\int _a^b \sqrt{\frac{w(y)}{p(y)}}dy. \end{aligned}
By definition (2.3), $$t\rightarrow 0$$ as $$x\rightarrow 0$$, and then
\begin{aligned} (1-\epsilon )\frac{x^2\pi ^2}{B^2}\le \omega ^*_\eta (x) \le (1+\epsilon )\frac{x^2\pi ^2}{B^2} \qquad \text{ for } x \text{ small } \text{ enough }, \end{aligned}
and the thesis follows. $$\square$$
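The bound $$\phi _{\omega _\eta }(t)\approx \frac{B}{\pi }\sqrt{t}$$ for small t can also be checked numerically (our own sketch, with the hypothetical choices $$p(x)=1+x$$, $$w\equiv 1$$, $$(a,b)=(0,1)$$ and the limiting profile $$f(\theta )=\theta ^2$$), estimating $$\phi _{\omega }(t)$$ as the fraction of a uniform grid on $$[a,b]\times [0,\pi ]$$ where the symbol stays below t:

```python
import numpy as np

# Sketch (our illustration): p(x) = 1 + x, w = 1, (a, b) = (0, 1),
# limiting profile f(theta) = theta^2, so omega(x, theta) = (1 + x) * theta^2
# and B = int_0^1 (1 + x)^(-1/2) dx = 2*(sqrt(2) - 1).
x = np.linspace(0.0, 1.0, 1001)
theta = np.linspace(0.0, np.pi, 2001)
X, T = np.meshgrid(x, theta, indexing="ij")
omega = (1.0 + X) * T ** 2

def phi(t):
    # phi_omega(t) = |{(x, theta): omega <= t}| / (pi * (b - a)),
    # i.e. the fraction of the uniform grid below level t
    return np.mean(omega <= t)

B = 2.0 * (np.sqrt(2.0) - 1.0)
for t in (0.05, 0.1, 0.2):
    print(t, phi(t), B / np.pi * np.sqrt(t))   # the two columns agree
```

The agreement for small t reflects exactly the two-sided estimate on $$\phi _{\omega _\eta }$$ proved above.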
Remark 2
The matrix methods of Sects. C and D satisfy the hypotheses of Proposition B.1; see Conjecture C.1 and Corollary C.1, and Theorem D.1 and Corollary D.1. Therefore, in general, a uniform sampling of their spectral symbols does not provide an accurate approximation of the eigenvalues $$\lambda _k\left( \mathscr {L}_{\text {BCs},p(x),q(x),w(x)}^{(n,\eta )}\right)$$ and $$\lambda _k\left( \mathscr {L}_{\text {BCs},p(x),q(x),w(x)}\right)$$, in the sense of the relative error. See Sects. 4.1.1, 4.1.2 and 4.1.3 for numerical examples. On the other hand,
\begin{aligned} \left\| \omega ^*_\eta \left( \frac{k}{n+1}\right) - \lambda _k\left( \hat{\mathscr {L}}_{\text {BCs},p(x),q(x),w(x)}^{(n,\eta )}\right) \right\| _\infty \rightarrow 0 \qquad \text{ as } n\rightarrow \infty , \end{aligned}
if we exclude the outliers, see Corollary 3.3 and Fig. 2.

## $$(2\eta +1)$$-point central FD discretization

In this section we present a high-order central FD discretization scheme which is a slight modification of the standard ones available in the literature. Since it is used only in the numerical Sect. 4.1.2, and since the main focus of this paper is the analytical computation of the MSRE, see Theorem 3.2 and Theorem 3.3, we do not provide proofs of the main properties of this scheme, in order to keep the length of the paper contained. Although the main properties are stated as conjectures, with the rigorous theoretical analysis left for possible future work, we give some hints and comments on how to prove them. Nevertheless, we wish to point out that Sect. 4.1.2 provides strong numerical evidence for the validity of Conjecture C.1.
Let us consider the following one dimensional self-adjoint operator
\begin{aligned} \mathscr {L}_{\text {dir},p(x)}[u](x):= -\left( p(x)u'(x)\right) ', \qquad \text {dom}\left( \mathscr {L}_{\text {dir},p(x)}\right) =W^{1,2}_0((a,b)), \end{aligned}
(C.1)
with $$p\in C^2([a,b])$$ and $$p>0$$ (see [17, 21]). For a general review of FD methods we refer to [43]. Fix $$\eta ,n\in {\mathbb {N}}$$ with $$\eta \ge 1$$. Given a standard equispaced grid $$\mathbf{x }=\left\{ x_j \right\} _{j=1-\eta }^{n+\eta }\subseteq \left[ {\bar{a}},{\bar{b}}\right]$$, with
\begin{aligned} {\bar{a}}=a-(b-a)\frac{\eta -1}{n+1}<a,\quad {\bar{b}}=b + (b-a)\frac{\eta -1}{n+1}>b,\quad x_j = a + (b-a)\frac{j}{n+1}, \end{aligned}
consider a $$C^1$$-diffeomorphism $$\tau : [a,b]\rightarrow [a,b]$$ such that $$\tau '(x)\ne 0$$, $$\tau (a)=a, \tau (b)=b$$, and its piecewise $$C^1$$-extension $${\bar{\tau }}: \left[ {\bar{a}}, {\bar{b}}\right] \rightarrow \left[ {\bar{a}}, {\bar{b}}\right]$$ such that
\begin{aligned} {\bar{\tau }}(x) ={\left\{ \begin{array}{ll} x &{} \text{ if } x \le a,\\ \tau (x) &{}\text{ if } x \in (a,b)\\ x &{} \text{ if } x\ge b. \end{array}\right. } \end{aligned}
By means of the piecewise $$C^1$$-diffeomorphism $${\bar{\tau }}$$ we obtain a new (extended) grid $${\bar{\mathbf{x }}}=\left\{ {\bar{x}}_j \right\} _{j=1-\eta }^{n+\eta }\subset [{\bar{a}},{\bar{b}}]$$, not necessarily equispaced, with $${\bar{x}}_j = {\bar{\tau }}(x_j)$$. Combining the high-order central FD schemes presented in [2, 3, 33], we obtain the following matrix operator as an approximation of (C.1):
\begin{aligned} \mathscr {L}^{(n,\eta )}_{\text {dir},p({\bar{x}})} = \left[ \begin{array}{lllllllllll} l_{1,1} &{} \cdots &{} l_{1,1+\eta } &{} 0 &{} \cdots &{} &{} &{} &{} &{} &{}0\\ l_{2,1}&{} l_{2,2} &{} \cdots &{} l_{2,2+\eta } &{}0&{}\cdots &{} &{} &{} &{} &{}0\\ &{} &{} \ddots &{} &{} \ddots &{} &{} &{} &{} &{} &{}\\ &{} &{} &{} &{} &{} &{} &{} &{} &{} &{} \\ &{} &{}\ddots &{} &{}\ddots &{} &{}\ddots &{} &{} &{} &{} \\ 0&{}\cdots &{} 0 &{} l_{k,k-\eta } &{} \cdots &{} l_{k,k} &{}\cdots &{} l_{k,k+\eta }&{} 0&{} \cdots &{}0\\ &{} &{} &{} &{}\ddots &{} &{}\ddots &{} \ddots &{} &{} &{} \\ &{} &{} &{} &{} &{}&{} &{} &{} &{}&{}\\ &{} &{} &{} &{} &{} &{} &{} &{} &{}&{}\\ &{} &{} &{} &{} &{}&{} &{} &{} &{}&{}\\ &{}&{}&{}&{} &{} &{} &{} &{} &{} \cdots &{}l_{n,n} \end{array}\right] \in {\mathbb {R}}^{n\times n} , \end{aligned}
where, if we define the $$C^0$$-extension of $$p(x)$$ to $$[{\bar{a}},{\bar{b}}]$$ as
\begin{aligned} {\bar{p}}(x) = {\left\{ \begin{array}{ll} p(a) &{} \text{ for } x\le a,\\ p(x) &{} \text{ for } x \in (a,b)\\ p(b) &{} \text{ for } x\ge b, \end{array}\right. } \end{aligned}
and the element $$l_{i,j}$$ as
\begin{aligned} l_{i,j}=\frac{2{\bar{p}}\left( \frac{{\bar{x}}_j + {\bar{x}}_i}{2}\right) \sum _{\begin{array}{c} m=i-\eta \\ m\ne i,j \end{array}}^{i+\eta } \prod _{\begin{array}{c} k=i-\eta \\ k\ne i,j,m \end{array}}^{i+\eta }\left( {\bar{x}}_k - {\bar{x}}_i\right) }{\left( {\bar{x}}_j-{\bar{x}}_i\right) \prod _{\begin{array}{c} k=i-\eta \\ k\ne i,j \end{array}}^{i+\eta }\left( {\bar{x}}_k - {\bar{x}}_j\right) }, \qquad \text{ for } i\ne j,\; {\left\{ \begin{array}{ll} i= 1,\ldots ,n,\\ j=i-\eta ,\ldots , i+\eta , \end{array}\right. } \end{aligned}
then the generic matrix element of $$\mathscr {L}^{(n,\eta )}_{\text {dir},p({\bar{x}})}$$ is given by
\begin{aligned} \left( \mathscr {L}^{(n,\eta )}_{\text {dir},p({\bar{x}})}\right) _{i,j} = {\left\{ \begin{array}{ll}l_{i,j} &{} \text{ for } i\ne j,\; |i-j|\le \eta ,\; i,j=1,\ldots ,n,\\ -\sum _{\begin{array}{c} k=i-\eta \\ k\ne i \end{array}}^{i+\eta } l_{i,k} &{} \text{ for } i=j,\; i=1,\ldots ,n,\\ 0 &{} \text{ for } i\ne j,\; |i-j|>\eta ,\; i,j=1,\ldots ,n. \end{array}\right. } \end{aligned}
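The entries above can be assembled directly from their definition. The following sketch (our own illustration, not code from the paper) builds $$\mathscr {L}^{(n,\eta )}_{\text {dir},p({\bar{x}})}$$; for $$\eta =1$$, $$\tau (x)=x$$ and $$p\equiv 1$$ it reproduces the classical three-point stencil $$(n+1)^2\cdot \mathrm {tridiag}(-1,2,-1)$$.

```python
import numpy as np

def assemble_fd(n, eta, p, a=0.0, b=1.0, tau=lambda x: x):
    # Extended uniform grid x_j, j = 1 - eta, ..., n + eta, mapped by the
    # piecewise extension tau-bar (identity outside (a, b)).
    j = np.arange(1 - eta, n + eta + 1)
    x = a + (b - a) * j / (n + 1)
    inside = (x > a) & (x < b)
    xbar = np.where(inside, tau(np.clip(x, a, b)), x)
    pbar = lambda t: p(min(max(t, a), b))   # C^0 extension of p
    off = eta - 1                            # grid index j -> array index j + off
    L = np.zeros((n, n))
    for i in range(1, n + 1):
        xi = xbar[i + off]
        for jj in range(i - eta, i + eta + 1):
            if jj == i:
                continue
            xj = xbar[jj + off]
            num = sum(
                np.prod([xbar[k + off] - xi
                         for k in range(i - eta, i + eta + 1)
                         if k not in (i, jj, m)])
                for m in range(i - eta, i + eta + 1) if m not in (i, jj)
            )
            den = (xj - xi) * np.prod([xbar[k + off] - xj
                                       for k in range(i - eta, i + eta + 1)
                                       if k not in (i, jj)])
            l_ij = 2.0 * pbar(0.5 * (xi + xj)) * num / den
            if 1 <= jj <= n:
                L[i - 1, jj - 1] = l_ij
            L[i - 1, i - 1] -= l_ij          # diagonal: minus the row sum
    return L

# eta = 1, p = 1: classical (n+1)^2 * tridiag(-1, 2, -1)
L = assemble_fd(5, 1, lambda x: 1.0)
print(np.round(L).astype(int))
```

For $$\eta =2$$ on a uniform grid with $$p\equiv 1$$, the interior rows carry the classical fourth-order stencil $$\left( \frac{1}{12},-\frac{4}{3},\frac{5}{2},-\frac{4}{3},\frac{1}{12}\right) h^{-2}$$, and the matrix stays symmetric.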
The extended functions $${\bar{\tau }},{\bar{p}}$$ on $$[{\bar{a}},{\bar{b}}]\supset [a,b]$$ serve to correctly implement the BCs. With an abuse of notation, we call $$\eta$$ the order of approximation of the central FD method. We now conjecture some of the properties of this scheme, and then provide a sketch of the proofs.
Conjecture C.1
In the above assumptions, for $$\eta \ge 1$$ it holds that
(i)
the eigenvalues $$\lambda _k\left( \mathscr {L}^{(n,\eta )}_{\text {dir},p({\bar{x}})}\right)$$ are real for every k and
\begin{aligned} \lim _{n \rightarrow \infty }\lambda _k\left( \mathscr {L}^{(n,\eta )}_{\text {dir},p({\bar{x}})}\right) = \lambda _k\left( \mathscr {L}_{\text {dir},p(x)}\right) \qquad \text{ for } \text{ every } \text{ fixed } k. \end{aligned}

(ii)
\begin{aligned} \left\{ (n+1)^{-2}\mathscr {L}^{(n,\eta )}_{\text {dir},p({\bar{x}})} \right\} _n=\left\{ \hat{\mathscr {L}}^{(n,\eta )}_{\text {dir},p({\bar{x}})} \right\} _n \sim _{\lambda } \omega _{\eta } \left( x,\theta \right) \qquad (x,\theta ) \in [a,b]\times [0,\pi ], \end{aligned}
(C.2)
where
\begin{aligned} \omega _{\eta } \left( x,\theta \right) =\frac{p\left( \tau (x)\right) }{\left( \tau '(x)\right) ^2(b-a)^2}f_{\eta }(\theta ), \end{aligned}
and
\begin{aligned} f_\eta (\theta ) = d_{\eta ,0} + 2\sum _{k=1}^\eta d_{\eta ,k}\cos (k\theta ), \qquad d_{\eta ,k}= {\left\{ \begin{array}{ll} (-1)^{k} \frac{\eta !\eta !}{(\eta -k)!(\eta +k)!}\frac{2}{k^2} &{} \text{ for } k=1,\cdots \eta ,\\ -2\sum _{j=1}^\eta d_{\eta ,j} &{}\text{ for } k=0. \end{array}\right. } \end{aligned}
(C.3)

(iii)
$$\lambda _k\left( \hat{\mathscr {L}}^{(n,\eta )}_{\text {dir},p({\bar{x}})}\right) \in R_{\omega _{\eta }}$$ for every $$k$$ and $$n$$.

Item (i) can be proved by a straightforward generalization of standard techniques, see [12, Theorem 1] and [16, 25]. Regarding item (ii), we recall that the “hat” means that the matrix operator has been normalized by $$(n+1)^2$$. Let us preliminarily observe that in the case $$\tau (x)=x$$ and $$p(x)\equiv 1$$, $$\hat{\mathscr {L}}^{(n,\eta )}_{\text {dir}}$$ is a symmetric Toeplitz matrix defined by
\begin{aligned} \left( \hat{\mathscr {L}}^{(n,\eta )}_{\text {dir}}\right) _{i,j} = \frac{d_{\eta ,|i-j|}}{(b-a)^{2}} \quad \text{ or } \text{ equivalently }\quad \mathscr {L}^{(n,\eta )}_{\text {dir}}= (b-a)^{-2}T_n\left( f_\eta \right) , \end{aligned}
where $$d_{\eta ,k}$$ are the coefficients of the trigonometric polynomial $$f_\eta$$ in (C.3), see [33, Corollary 2.2] and [30, Eq. (27)]. By standard results on the eigenvalue distribution of Toeplitz matrices (see for example [22, item T4, p. 168]) it holds that
\begin{aligned} \left\{ \hat{\mathscr {L}}^{(n,\eta )}_{\text {dir}}\right\} _n \sim _{\lambda } \frac{f_\eta (\theta )}{(b-a)^2}, \qquad \theta \in [0,\pi ]. \end{aligned}
(C.4)
The strategy to prove (ii) is the following:
• show that $$\hat{\mathscr {L}}^{(n,\eta )}_{\text {dir},p({\bar{x}})} = X_n + Y_n$$, with $$X_n$$ symmetric and such that $$\Vert X_n\Vert ,\Vert Y_n\Vert \le C$$, $$n^{-1}\Vert Y_n\Vert _1\rightarrow 0$$ as $$n \rightarrow \infty$$, where $$\Vert \cdot \Vert$$ and $$\Vert \cdot \Vert _1$$ are the spectral norm and the Schatten 1-norm, respectively;
• show that $$\{X_n\}_n \sim _{\text {GLT}} \omega _{\eta } \left( x,\theta \right)$$, see [ 22, Definition 8.1];
• conclude that $$\{\hat{\mathscr {L}}^{(n,\eta )}_{\text {dir},p({\bar{x}})}\}_n\sim _{\lambda } \omega _{\eta } \left( x,\theta \right)$$ by [22, item GLT 2, p. 170].
Finally, item (iii) can be recovered from [ 40, Theorem 2.2].
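As a sanity check (our own sketch), the coefficients (C.3) can be computed directly: for $$\eta =2$$ they give the classical fourth-order stencil values $$\left( \frac{5}{2},-\frac{4}{3},\frac{1}{12}\right)$$, and for $$\eta =1$$, where $$T_n(f_1)=\mathrm {tridiag}(-1,2,-1)$$, the eigenvalues are exactly the uniform samples of the symbol.

```python
import numpy as np
from math import factorial

def d_coeffs(eta):
    # Fourier coefficients d_{eta,k} of f_eta in (C.3)
    d = np.zeros(eta + 1)
    for k in range(1, eta + 1):
        d[k] = (-1) ** k * (factorial(eta) ** 2
                / (factorial(eta - k) * factorial(eta + k))) * 2.0 / k ** 2
    d[0] = -2.0 * d[1:].sum()
    return d

print(d_coeffs(2))   # [ 5/2, -4/3, 1/12 ]: the 4th-order central stencil

# eta = 1: T_n(f_1) = tridiag(-1, 2, -1), whose eigenvalues are exactly
# the samples f_1(j*pi/(n+1)) of f_1(theta) = 2 - 2*cos(theta)
n = 50
d = d_coeffs(1)
T = d[0] * np.eye(n) + d[1] * (np.eye(n, k=1) + np.eye(n, k=-1))
ev = np.linalg.eigvalsh(T)                       # ascending order
samples = 2.0 - 2.0 * np.cos(np.arange(1, n + 1) * np.pi / (n + 1))
print(np.max(np.abs(ev - samples)))              # ~ machine precision
```

The exact eigenvalue formula for the tridiagonal Toeplitz matrix is what makes the $$\eta =1$$ case a convenient test of the distribution (C.4).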
Remark 3
Notice that the spectral symbol $$\omega _\eta (x,\theta )$$ consists of the product of two functions:
• the function $$\frac{p\left( \tau (x)\right) }{\left( \tau '(x)\right) ^2(b-a)^2}$$, which involves the diffusion coefficient $$p(x)$$ and the diffeomorphism $$\tau (x)$$: both are intrinsic to the differential operator (C.1) itself and depend on the spatial variable x;
• $$f_\eta (\theta )$$, which is intrinsic to the discretization method and depends on the spectral variable $$\theta$$.
Let us now state some analytical properties of the symbol function $$f_\eta$$ in (C.3). We want to highlight that it shares the same properties as the symbol function associated to the IgA scheme, see Corollary D.1. This is quite interesting, since it seems to suggest that, whatever the (consistent) numerical approximation scheme, the symbol converges to the inverse Weyl distribution function of the continuous operator as the order of approximation increases.
Corollary C.1
For every fixed $$\eta$$, the function $$f_\eta$$ is differentiable, nonnegative, monotone increasing on $$[0,\pi ]$$ and it holds that
\begin{aligned} f_\eta (\theta )\sim \theta ^2 \quad \text{ as } \theta \rightarrow 0, \qquad \lim _{\eta \rightarrow \infty }\sup _{\theta \in [0,\pi ]}\left| f_\eta (\theta ) - \theta ^2\right| =0. \end{aligned}
Proof
The function $$f_\eta (\theta )$$ is clearly $$C^\infty ([0,\pi ])$$. Let us first prove that $$f_\eta (\theta )\sim \theta ^2$$ as $$\theta \rightarrow 0$$ for every fixed $$\eta$$, and that it is monotone and nonnegative on $$[0,\pi ]$$. By the Taylor expansion at $$\theta =0$$ we get
\begin{aligned} f_\eta (\theta )&= d_{\eta ,0} + \sum _{\begin{array}{c} k=-\eta \\ k\ne 0 \end{array}}^\eta d_{\eta ,k}\cos (k\theta )\\&= d_{\eta ,0} + \sum _{\begin{array}{c} k=-\eta \\ k\ne 0 \end{array}}^\eta d_{\eta ,k}\left( 1 - \frac{(k\theta )^2}{2}\right) + o(\theta ^3)\\&= -\theta ^2\sum _{\begin{array}{c} k=-\eta \\ k\ne 0 \end{array}}^\eta (-1)^{k} \frac{\eta !\eta !}{(\eta -k)!(\eta +k)!} + o(\theta ^3)\\&= -\theta ^2 \left[ (-1)^\eta \frac{\eta !\eta !}{(2\eta )!}\sum _{\begin{array}{c} m=0\\ m\ne \eta \end{array}}^{2\eta } (-1)^{m}\left( {\begin{array}{c}2\eta \\ m\end{array}}\right) \right] + o(\theta ^3)\\&= \theta ^2 + o(\theta ^3). \end{aligned}
Moreover, let us observe that
\begin{aligned} f_\eta (\theta )= d_{\eta ,0}+ 2 \sum _{k=1}^\eta \left| d_{\eta ,k}\right| \cos (k(-\theta +\pi )),\quad f_\eta '(\theta )= 2 \sum _{k=1}^\eta k\left| d_{\eta ,k}\right| \sin (k(-\theta +\pi )). \end{aligned}
Define then
\begin{aligned} g (\psi ) = \sum _{k=1}^\eta a_{k} \sin (k\psi ), \qquad \text{ with } \psi =-\theta +\pi \in [0,\pi ], \qquad \text{ and } a_{k} = 2k\left| d_{\eta ,k}\right| . \end{aligned}
It is immediate to check that $$a_1 \ge a_2 \ge \ldots \ge a_\eta > 0$$ and that
\begin{aligned} (2k)a_{2k} \le (2k-1)a_{2k-1} \quad \forall k \ge 1. \end{aligned}
By [4, Theorem 1] we can conclude that $$g(\psi )>0$$ on $$(0,\pi )$$ and then $$f_\eta '(\theta )>0$$ on $$(0,\pi )$$. Since $$f_\eta (0)=0$$, we deduce that $$f_\eta (\theta )\ge 0$$ on $$[0,\pi ]$$. The second part of the thesis is an immediate consequence of the identities in (C.3). Indeed,
\begin{aligned} |d_{\eta ,k}|\le \frac{2}{k^2}, \qquad \sum _{k=1}^\infty \frac{2}{k^2} < \infty , \end{aligned}
and then, by dominated convergence, it is easy to prove that for every fixed $$\theta \in [0,\pi ]$$ it holds that
\begin{aligned} \lim _{\eta \rightarrow \infty } f_\eta (\theta )&= \lim _{\eta \rightarrow \infty }\left( -2 \sum _{k=1}^\eta d_{\eta ,k}\right) + 2\lim _{\eta \rightarrow \infty }\left( \sum _{k=1}^\eta d_{\eta ,k}\cos (k\theta ) \right) \\&= \frac{\pi ^2}{3} + 4\sum _{k=1}^\infty \frac{(-1)^k}{k^2}\cos (k\theta )\\&= \theta ^2, \end{aligned}
since $$\left\{ \pi ^2/3\right\} \cup \left\{ (-1)^k4/k^2\right\} _{k\ge 1}$$ are the Fourier coefficients of $$\theta ^2$$ on $$[0,\pi ]$$. The convergence is uniform by Lemma A.2. $$\square$$
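Both limits in Corollary C.1 can be observed numerically (a sketch of ours): the ratio $$f_\eta (\theta )/\theta ^2$$ tends to 1 as $$\theta \rightarrow 0$$ for fixed $$\eta$$, and the sup-norm distance between $$f_\eta$$ and $$\theta ^2$$ on $$[0,\pi ]$$ shrinks as $$\eta$$ grows.

```python
import numpy as np
from math import factorial

def f_eta(theta, eta):
    # Trigonometric polynomial (C.3)
    d = np.zeros(eta + 1)
    for k in range(1, eta + 1):
        d[k] = (-1) ** k * (factorial(eta) ** 2
                / (factorial(eta - k) * factorial(eta + k))) * 2.0 / k ** 2
    d[0] = -2.0 * d[1:].sum()
    kk = np.arange(1, eta + 1)
    theta = np.atleast_1d(np.asarray(theta, dtype=float))
    return d[0] + 2.0 * (d[1:, None] * np.cos(kk[:, None] * theta)).sum(axis=0)

# f_eta(theta) ~ theta^2 as theta -> 0, for every fixed eta
print(f_eta(1e-3, 3)[0] / 1e-6)                  # ~ 1.0

# sup_[0,pi] |f_eta - theta^2| decreases as eta grows
grid = np.linspace(0.0, np.pi, 401)
sup_err = {eta: np.max(np.abs(f_eta(grid, eta) - grid ** 2))
           for eta in (4, 16, 64)}
print(sup_err)                                   # decreasing in eta
```

The slow uniform convergence in $$\eta$$ is expected: the coefficients $$d_{\eta ,k}$$ approach the Fourier coefficients $$(-1)^k 4/k^2$$ of $$\theta ^2$$ only gradually.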

## Isogeometric Galerkin discretization by B-splines of degree $$\eta$$ and smoothness $$C^{\eta -1}$$

For a general review of the IgA discretization method we refer the reader to [15, 27]; hence we skip the introductory material and directly present some known results.
Theorem D.1
Let (a, b) be discretized by a uniform mesh $$\left\{ x_j \right\} _{j=1}^n$$ of step-size $$(n+1)^{-1}$$ and let $$\tau : [a,b]\rightarrow [a,b]$$ be a $$C^1$$-diffeomorphism such that $$\tau (a)=a,\tau (b)=b$$ and $$\tau '(x)\ne 0$$ for every $$x \in [a,b]$$. Let $${\bar{x}}:=\tau (x)$$, $$\eta \ge 1$$, and let us indicate with $$\mathscr {L}^{(n,\eta )}_{\text {dir},p({\bar{x}})}$$ the discrete operator obtained from (C.1) by an IgA discretization scheme with B-splines of degree $$\eta$$ and smoothness $$C^{\eta -1}$$. Then
(i)
the eigenvalues $$\lambda _k\left( \mathscr {L}^{(n,\eta )}_{\text {dir},p({\bar{x}})}\right)$$ are real for every fixed k and
\begin{aligned} \lim _{n \rightarrow \infty }\lambda _k\left( \mathscr {L}^{(n,\eta )}_{\text {dir},p({\bar{x}})}\right) = \lambda _k\left( \mathscr {L}_{\text {dir},p(x)}\right) ; \end{aligned}

(ii)
it holds that
\begin{aligned} \left\{ (n+1)^{-2}\mathscr {L}^{(n,\eta )}_{\text {dir},p({\bar{x}})}\right\} _n=\left\{ \hat{\mathscr {L}}^{(n,\eta )}_{\text {dir},p({\bar{x}})}\right\} _n\sim _{\lambda }\omega _{\eta }(x,\theta ), \qquad (x,\theta ) \in [a,b]\times [0,\pi ], \end{aligned}
where
\begin{aligned} \omega _\eta ( x,\theta ) =\frac{p(\tau (x))}{\left( \tau '(x)\right) ^2(b-a)^2}\,f_\eta (\theta ), \end{aligned}

For example, for $$\eta =1,2$$, $$f_\eta (\theta )$$ has the following analytical expressions:
\begin{aligned} f_1(\theta )=\frac{6(1-\cos \theta )}{2+\cos \theta },\qquad f_2(\theta )=\frac{20(3-2\cos \theta -\cos (2\theta ))}{33+26\cos \theta +\cos (2\theta )}. \end{aligned}
Proof
For item (i) see [ 7, 37]. For item (ii), see [ 22, Theorem 10.15]. $$\square$$
Regarding the spectral symbol $$\omega _{\eta }$$, see again Remark 3. We have an analogue of Corollary C.1.
Corollary D.1
For every fixed $$\eta$$, the function $$f_\eta$$ is differentiable, nonnegative, monotone increasing on $$[0,\pi ]$$ and it holds that
\begin{aligned} f_\eta (\theta )\sim \theta ^2 \quad \text{ as } \theta \rightarrow 0, \qquad \lim _{\eta \rightarrow \infty }\sup _{\theta \in [0,\pi ]}\left| f_\eta (\theta ) - \theta ^2\right| =0. \end{aligned}
Proof
See [ 21, Theorem 1, Theorem 2 and Lemma 1]. $$\square$$
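The properties in Corollary D.1 can be verified numerically for the closed forms of $$f_1,f_2$$ given after Theorem D.1 (a sketch of ours, not part of the cited proof):

```python
import numpy as np

# Closed forms of the IgA symbols for eta = 1, 2 (see Theorem D.1)
f1 = lambda t: 6.0 * (1.0 - np.cos(t)) / (2.0 + np.cos(t))
f2 = lambda t: (20.0 * (3.0 - 2.0 * np.cos(t) - np.cos(2.0 * t))
                / (33.0 + 26.0 * np.cos(t) + np.cos(2.0 * t)))

theta = np.linspace(0.0, np.pi, 2001)
for f in (f1, f2):
    vals = f(theta)
    print(np.all(vals >= -1e-12))            # nonnegative on [0, pi]
    print(np.all(np.diff(vals) >= -1e-12))   # nondecreasing on [0, pi]
    print(f(1e-4) / 1e-8)                    # ~ 1.0: f(theta) ~ theta^2 at 0
```

A Taylor expansion confirms the asymptotics: for $$f_1$$, numerator and denominator behave as $$3\theta ^2$$ and 3, and for $$f_2$$ as $$60\theta ^2$$ and 60, so both ratios reduce to $$\theta ^2$$ as $$\theta \rightarrow 0$$.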
