We consider quadrature rules and polynomial interpolation on the cubic curve.
3.1 Quadrature Rules
First we recall Gauss quadrature for a weight function \(w\) defined on the real line. Let \(\varPi _N^{(x)}\) denote the space of univariate polynomials of degree at most \(N\) in the variable \(x\). Let \(x_{k,N}\), \(1\le k \le N\), be the zeros of the orthogonal polynomial \(p_N(w)\) of degree \(N\). These zeros are the nodes of the \(N\)-point Gaussian quadrature rule, which is exact for polynomials of degree \(2N-1\),
$$\begin{aligned} \int _{{\mathbb {R}}}f(x) w(x) \mathrm {d}x = \sum _{k=1}^N \lambda _{k,N} f(x_{k,N}), \quad \forall f\in \varPi _{2N-1}^{(x)}, \end{aligned}$$
where \({\lambda }_{k,N}\) are the Gaussian quadrature weights. Let \({\gamma }\) be a cubic curve. We denote by \(\varPi _n({\gamma })\) the space of polynomials of degree at most \(n\) restricted to the curve \({\gamma }\). By Theorem 1,
$$\begin{aligned} \varPi _{2m+1}({\gamma }) = \varPi _{3m+1}^{(x)} \oplus y\varPi _{3m}^{(x)} \quad \hbox {and}\quad \varPi _{2m}({\gamma }) = \varPi _{3m}^{(x)} \oplus y \varPi _{3m-2}^{(x)}. \end{aligned}$$
(18)
From Proposition 1 it follows that
$$\begin{aligned} \dim \varPi _0({\gamma }) =1 \quad \hbox {and}\quad \dim \varPi _n({\gamma }) = 3 n, \quad n \ge 1. \end{aligned}$$
The quadrature (19) on the cubic curve is an analogue of the Gaussian quadrature rule on the real line. We now consider polynomial interpolation based on the nodes of this quadrature rule.
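The exactness of the univariate \(N\)-point rule recalled above is easy to check numerically. The following sketch uses the Legendre case \(w = 1\) on \([-1, 1]\) (a choice made purely for illustration; any classical weight would do) via NumPy's `leggauss`:

```python
import numpy as np

# N-point Gauss-Legendre rule (weight w = 1 on [-1, 1]); its nodes are the
# zeros of the Legendre polynomial P_N, and the rule is exact for all
# polynomials of degree <= 2N - 1.
N = 5
nodes, weights = np.polynomial.legendre.leggauss(N)

# Check exactness on the monomial x^8 (degree 8 <= 2*5 - 1 = 9):
# the exact integral over [-1, 1] is 2/9.
quad = np.sum(weights * nodes**8)
exact = 2.0 / 9.0
print(abs(quad - exact))  # agrees to machine precision
```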
3.2 Lagrange Interpolation
First we recall the univariate Lagrange interpolation polynomial on the zeros \(x_{k,N}\), \(1 \le k \le N\), of \(p_N(w)\), denoted by \(L_N (w; f)\), which is the unique polynomial of degree at most \(N-1\) that satisfies
$$\begin{aligned} L_N (w; f, x_{k,N}) = f(x_{k,N}), \quad 1 \le k \le N, \end{aligned}$$
for any continuous function \(f\). It is well known that \(L_N(w; f)\) is given by
$$\begin{aligned} L_N (w;f,x) = \sum _{k=1}^N f(x_{k,N})\ell _k(x), \quad \ell _k(x) = \frac{p_N(w;x)}{(x-x_{k,N})\, p_N'(w;x_{k,N})}. \end{aligned}$$
(21)
By the Christoffel–Darboux formula, we can also write \(\ell _k\) as
$$\begin{aligned} \ell _k(x) = \frac{K_N(w; x,x_{k,N})}{K_N(w; x_{k,N},x_{k,N})}, \quad K_N(w;x,y) = \sum _{k=0}^{N-1} \frac{p_k(w;x)p_k(w;y)}{h_k(w)}. \end{aligned}$$
(22)
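The basis formula in (21) can be sketched directly. The snippet below (a minimal illustration, again assuming the Legendre weight \(w = 1\)) builds \(\ell _k\) from \(p_N\) and its derivative and verifies that the interpolant reproduces a polynomial of degree \(N-1\):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

# Lagrange basis on the zeros of the degree-N Legendre polynomial, via
#   ell_k(x) = p_N(x) / ((x - x_k) p_N'(x_k)),
# i.e. formula (21) specialised to w = 1.
N = 5
x_nodes, _ = leggauss(N)              # zeros of P_N
p_N = Legendre.basis(N)
dp_N = p_N.deriv()

def ell(k, x):
    return p_N(x) / ((x - x_nodes[k]) * dp_N(x_nodes[k]))

def L(f, x):
    # interpolant L_N(w; f, x) = sum_k f(x_k) ell_k(x)
    return sum(f(xk) * ell(k, x) for k, xk in enumerate(x_nodes))

# L_N reproduces any polynomial of degree <= N - 1 = 4 exactly.
q = lambda t: t**4 - 2 * t + 1
err = abs(L(q, 0.3) - q(0.3))
print(err)  # ~ machine precision
```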
3.3 Interpolation via Quadrature
For computational purposes, it is convenient to express the interpolant defined in Theorem 4 as a truncated expansion in the orthogonal polynomial basis on \({\gamma }\). We recall a classical result for univariate interpolants, according to which (21) can be expressed as an expansion in the orthogonal polynomials \(p_n(w)\),
$$\begin{aligned} L_N(w;f,x) = \sum _{k=0}^{N-1} a_{k,N}p_k(w;x), \end{aligned}$$
(24)
where
$$\begin{aligned} a_{k,N} = \frac{{\langle }f, p_k(w) {\rangle }_N}{{\langle }p_k(w), p_k(w) {\rangle }_N} = \frac{{\langle }f, p_k(w) {\rangle }_N}{h_k(w)}, \quad 0 \le k \le N-1, \end{aligned}$$
and \({\langle }\cdot , \cdot {\rangle }_{N}\) denotes the discretised univariate inner product \({\langle }\cdot , \cdot {\rangle }_{w}\) based on the \(N\)-point Gaussian quadrature rule:
$$\begin{aligned} {\langle }f(x),g(x){\rangle }_N := \sum _{k=1}^N {\lambda }_{k,N} f(x_{k,N})g (x_{k,N}), \end{aligned}$$
where, as before, \(x_{k,N}\) and \({\lambda }_{k,N}\) are the Gaussian quadrature nodes (the zeros of \(p_{N}(w)\)) and weights, respectively.
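The coefficient formula for (24) can be sketched as follows (for illustration we again assume the Legendre weight \(w = 1\), for which \(h_k = 2/(2k+1)\)); since the resulting expansion equals the interpolant, it matches \(f\) at every Gauss node:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

# Expansion coefficients of the interpolant L_N(w; f) in the Legendre
# basis:  a_k = <f, P_k>_N / h_k  with  h_k = 2 / (2k + 1).
N = 8
x_nodes, lam = leggauss(N)

def discrete_inner(f_vals, g_vals):
    # <f, g>_N based on the N-point Gauss rule
    return np.sum(lam * f_vals * g_vals)

f = lambda x: np.exp(x)
f_vals = f(x_nodes)
a = np.array([discrete_inner(f_vals, Legendre.basis(k)(x_nodes)) * (2*k + 1) / 2
              for k in range(N)])

# The expansion sum_k a_k P_k interpolates f at the Gauss nodes.
L_vals = Legendre(a)(x_nodes)
print(np.max(np.abs(L_vals - f_vals)))  # ~ machine precision
```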
The following bivariate analogue of this univariate result shows that, to construct an interpolant expanded in an orthogonal basis, we require a system of \(M\) functions that are orthogonal with respect to an \(M\)-point discrete inner product.
We first consider the inner product \({\langle }\cdot , \cdot {\rangle }_{N,{\gamma }}\) coming from the discretisation of \({\langle }\cdot , \cdot {\rangle }_{{\gamma },w}\) via Gauss quadrature,
$$\begin{aligned} {\langle }f,g {\rangle }_{N,{\gamma }}&:=\sum _{k=1}^N \lambda _{k,N}\left[ f(x_{k,N},y_{k,N})g(x_{k,N},y_{k,N}) \right. \nonumber \\&\quad \left. + f(x_{k,N},-y_{k,N})g(x_{k,N},-y_{k,N}) \right] , \end{aligned}$$
(25)
where \(x_{k,N}\) and \(\lambda _{k,N}\) are the \(N\)-point Gauss quadrature nodes and weights and \(y_{k,N} = \sqrt{\phi (x_{k,N})}\). From Proposition 2, we require a system of \(2N\) orthogonal functions with respect to \({\langle }\cdot , \cdot {\rangle }_{N,{\gamma }}\) to construct an interpolant expanded in an orthogonal basis. The next result shows that only \(2N-1\) functions from the orthogonal basis on \({\gamma }\) given in Theorem 1 are orthogonal with respect to \({\langle }\cdot , \cdot {\rangle }_{N,{\gamma }}\).
We conclude that it is not possible to construct an interpolant via Gauss quadrature in the manner of Proposition 2. It is nevertheless possible to construct an interpolant à la Proposition 2 if \({\langle }\cdot , \cdot {\rangle }_{{\gamma },w}\) is discretised via Gauss–Radau or Gauss–Lobatto quadrature [6].
As shown in the proof of Theorem 4, the set of \(2N\) functions \(\lbrace p_k(w), y p_k(\phi w) \rbrace _{k = 0}^{N-1}\) evaluated at the \(2N\) nodes \((x_{j,N},\pm y_{j,N})\), \(j = 1, \ldots , N\), is linearly independent. Hence, the interpolant in Theorem 4 can be represented in the form
$$\begin{aligned} {{\mathscr {L}}}_n (w; f, x,y) = \sum _{k = 0}^{N-1} a_{k,N} p_k(w;x) + y\sum _{k = 0}^{N-1} b_{k,N} p_k(\phi w;x), \end{aligned}$$
(30)
and the coefficients \(a_{k,N}\) and \(b_{k,N}\) can be obtained by solving a \(2N \times 2N\) Vandermonde-like linear system whose columns consist of \(p_k(w)\) and \(y p_k(\phi w)\) evaluated at \((x_{j,N},\pm y_{j,N})\) for \(j = 1, \ldots , N\). However, since this procedure requires \(\mathscr {O}(N^3)\) operations, we introduce an alternative method with which the coefficients can be obtained with a complexity of either \(\mathscr {O}(N^2)\) (for a general weight \(w\)) or \(\mathscr {O}(N \log N)\) (for the Chebyshev weight).
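The Vandermonde-like system can be sketched numerically. The example below is a loose illustration only: it assumes a hypothetical cubic \(y^2 = \phi (x)\) with \(\phi (x) = (1 - x^2)(2 - x) \ge 0\) on \([-1,1]\) and the Legendre weight \(w = 1\), and it replaces the non-classical polynomials \(p_k(\phi w)\) by monomials \(x^k\); since both families span the same space, the \(2N \times 2N\) matrix remains nonsingular, although the resulting coefficients are in a different basis than (30).

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

# Hypothetical cubic curve y^2 = phi(x), with phi >= 0 on [-1, 1].
phi = lambda x: (1 - x**2) * (2 - x)

N = 6
x_g, _ = leggauss(N)                  # Gauss-Legendre nodes (w = 1)
y_g = np.sqrt(phi(x_g))

# 2N interpolation nodes (x_j, +/- y_j).
X = np.concatenate([x_g, x_g])
Y = np.concatenate([y_g, -y_g])

# Columns: P_k(x) for k < N, then y * x^k for k < N.  (The monomials stand
# in for the non-classical p_k(phi w); they span the same space, so the
# system is still solvable.)
V = np.hstack([np.column_stack([Legendre.basis(k)(X) for k in range(N)]),
               np.column_stack([Y * X**k for k in range(N)])])

f = lambda x, y: np.exp(x) + x * y    # a function on the curve
coeffs = np.linalg.solve(V, f(X, Y))

# The resulting interpolant matches f at all 2N nodes.
residual = np.max(np.abs(V @ coeffs - f(X, Y)))
print(residual)
```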
We now introduce a new inner product and orthogonal basis with respect to which it is possible to construct an interpolant with Gaussian quadrature. Although the inner product \({\langle }\cdot , \cdot {\rangle }_{{\gamma },w}\) arises from the natural parametrisation of (4), the new inner product gives rise to a new orthogonal basis that is more efficient for computational purposes than the basis in Theorem 1, as we shall demonstrate. We define the inner product \(\left[ \cdot ,\cdot \right] _{{\gamma },w}\) as
$$\begin{aligned} \left[ f,g \right] _{{\gamma },w} := \int _{\varOmega _{\gamma }} \left[ f_e\left( x \right) g_e\left( x \right) + f_o\left( x \right) g_o\left( x \right) \right] w(x) \mathrm {d}x , \end{aligned}$$
(31)
where \(f_e, g_e\) and \(f_o, g_o\) are, respectively, the even and odd parts of the functions \(f(x,y)\) and \(g(x,y)\) on \({\gamma }\), as defined in (10). Note that the odd parts of functions on \({\gamma }\) have removable singularities at points \((x,y)\) where \(y = \sqrt{\phi (x)} = 0\).
In the inner product space defined by \(\left[ \cdot ,\cdot \right] _{{\gamma },w}\), the orthogonal polynomial basis on \({\gamma }\) is exactly the same as in Theorem 1 (where the inner product space is equipped with \({\langle }\cdot ,\cdot {\rangle }_{{\gamma },w}\)), except that the \(Y_{n,i}\) that are of the form \(yp_k(\phi w)\) are replaced by \(yp_k( w)\). That is, whereas the orthogonal polynomial basis in Theorem 1 is constructed from \(p_n(w)\) and \(p_n(\phi w)\), the orthogonal basis with respect to \(\left[ \cdot ,\cdot \right] _{{\gamma },w}\) is constructed solely out of \(p_n(w)\). Hence, in the inner product space with \(\left[ \cdot ,\cdot \right] _{{\gamma },w}\), one only requires univariate classical orthogonal polynomials.
The results in Theorem 2 and Corollary 1 also hold for the orthogonal basis with respect to \(\left[ \cdot ,\cdot \right] _{{\gamma },w}\), mutatis mutandis. A notable difference between the inner products \({\langle }\cdot , \cdot {\rangle }_{{\gamma },w}\) and \(\left[ \cdot ,\cdot \right] _{{\gamma },w}\) is that, unlike the Jacobi operators for \({\langle }\cdot , \cdot {\rangle }_{{\gamma },w}\) in Sect. 2.4, the latter inner product gives rise to a non-symmetric Jacobi operator for multiplication by \(y\). This is because multiplication by \(y\) is not self-adjoint with respect to \(\left[ \cdot , \cdot \right] _{{\gamma },w}\). That is,
$$\begin{aligned} \left[ yf(x), yg(x) \right] _{{\gamma },w} \ne \left[ y^2 f(x), g(x) \right] _{{\gamma },w}, \end{aligned}$$
because \(\left[ yf(x), yg(x) \right] _{{\gamma },w} = {\langle }f(x), g(x) {\rangle }_{w}\), whereas \(\left[ y^2 f(x), g(x) \right] _{{\gamma },w} = \left[ \phi f(x), g(x) \right] _{{\gamma },w} = {\langle }\phi f(x), g(x) {\rangle }_{w}\).
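The failure of self-adjointness can be observed numerically. The sketch below (illustrative only, assuming the Legendre weight \(w = 1\) and the same hypothetical cubic \(\phi (x) = (1 - x^2)(2 - x)\) as before) discretises \(\left[ \cdot , \cdot \right] _{{\gamma },w}\) via Gauss quadrature and compares \(\left[ yf, yg \right] \), \({\langle }f, g {\rangle }\) and \(\left[ y^2 f, g \right] \):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Even/odd decomposition on the curve y^2 = phi(x):
#   f(x, y) = f_e(x) + y f_o(x),
#   f_e(x) = (f(x, y) + f(x, -y)) / 2,  f_o(x) = (f(x, y) - f(x, -y)) / (2y).
phi = lambda x: (1 - x**2) * (2 - x)   # hypothetical cubic, phi > 0 inside
N = 8
x_g, lam = leggauss(N)
y_g = np.sqrt(phi(x_g))

def bracket(f, g):
    # discretised [f, g] = sum_k lam_k (f_e g_e + f_o g_o)(x_k)
    fe = (f(x_g, y_g) + f(x_g, -y_g)) / 2
    fo = (f(x_g, y_g) - f(x_g, -y_g)) / (2 * y_g)
    ge = (g(x_g, y_g) + g(x_g, -y_g)) / 2
    go = (g(x_g, y_g) - g(x_g, -y_g)) / (2 * y_g)
    return np.sum(lam * (fe * ge + fo * go))

f1 = lambda x: x                       # univariate factors
g1 = lambda x: 1 + x
lhs = bracket(lambda x, y: y * f1(x), lambda x, y: y * g1(x))
univ = np.sum(lam * f1(x_g) * g1(x_g))            # <f1, g1>_N
rhs = bracket(lambda x, y: y**2 * f1(x), lambda x, y: g1(x))
print(lhs - univ, lhs - rhs)  # first difference ~ 0, second is not
```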
3.4 Interpolation via Quadrature with Respect to \(\left[ \cdot ,\cdot \right] _{{\gamma },w}\)
Discretising \(\left[ \cdot , \cdot \right] _{{\gamma },w}\) via Gauss quadrature, we obtain
$$\begin{aligned}{}[f,g]_N := \sum _{k=1}^N {\lambda }_{k,N} \left[ f_e(x_{k,N})g_e (x_{k,N}) + f_o(x_{k,N})g_o (x_{k,N})\right] . \end{aligned}$$
We shall need the following result.
Note that if \(w\) is chosen to be the Chebyshev weight, one can compute the coefficients of the interpolant, \(a_{k,N}\) and \(b_{k,N}\), in \(\mathscr {O}(N \log N)\) operations using the fast cosine transform. A fast transform is also available for the uniform Legendre weight \(w = 1\) [9], and potentially for any Jacobi weight [8]. Since fast transforms are not available for the non-classical polynomials \(p_n(\phi w)\), \(\mathscr {O}(N^2)\) operations are required to compute the coefficients of the interpolant obtained via (Gauss–Radau or Gauss–Lobatto) quadrature. The inner product \(\left[ \cdot , \cdot \right] _{{\gamma },w}\) is therefore preferable to \({\langle }\cdot , \cdot {\rangle }_{{\gamma },w}\) for computing interpolants via quadrature.
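The Chebyshev case can be sketched concretely. At the Chebyshev–Gauss nodes \(x_j = \cos ((2j+1)\pi /(2N))\), the discrete coefficients \({\langle }f, T_k {\rangle }_N / h_k\) reduce to a type-II discrete cosine transform (the scaling below matches SciPy's DCT normalisation; this is an illustrative sketch, not the method of the text):

```python
import numpy as np
from scipy.fft import dct

# O(N log N) computation of the discrete Chebyshev coefficients
# a_k = <f, T_k>_N / h_k at the Chebyshev-Gauss nodes via a type-II DCT,
# using lam_j = pi/N, h_0 = pi, h_k = pi/2 for k >= 1.
N = 16
theta = (2 * np.arange(N) + 1) * np.pi / (2 * N)
x = np.cos(theta)                     # Chebyshev-Gauss nodes

f = lambda x: 2 * x**2 - 1            # = T_2, so a_2 = 1 and all others 0
c = dct(f(x), type=2) / N             # gives a_k for k >= 1
c[0] /= 2                             # the k = 0 coefficient is halved
print(np.round(c, 12))
```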