
Sparse Differential Resultant for Laurent Differential Polynomials

By: Wei Li, Chun-Ming Yuan, Xiao-Shan Gao

Published in: Foundations of Computational Mathematics | Issue 2/2015 (Open Access)


Abstract

In this paper, we first introduce the concept of Laurent differentially essential systems and give a criterion for a Laurent differential polynomial system to be Laurent differentially essential in terms of its support matrix. Then, the sparse differential resultant for a Laurent differentially essential system is defined, and its basic properties are proved. In particular, order and degree bounds for the sparse differential resultant are given. Based on these bounds, an algorithm to compute the sparse differential resultant is proposed, which is single exponential in terms of the Jacobi number and the size of the system.

Notes

Communicated by Elizabeth Mansfield.


DOI: 10.1007/s10208-015-9249-9

Received: 11 September 2012

Accepted: 8 January 2015

Published: 18 February 2015


Keywords

Sparse differential resultant · Jacobi number · Poisson product formula · Differential toric variety · BKK bound · Single exponential algorithm

Mathematics Subject Classification

Primary 12H05, 68W30; Secondary 14M25, 14Q99

1 Introduction

The multivariate resultant, which gives conditions for an overdetermined system of polynomial equations to have common solutions, is a basic concept in algebraic geometry [13, 19, 27, 45]. In recent years, the multivariate resultant has emerged as one of the most powerful computational tools in elimination theory due to its ability to eliminate several variables simultaneously without introducing many extraneous solutions. Many algorithms with best complexity bounds for problems such as polynomial equation solving and first-order quantifier elimination are strongly based on the multivariate resultant [4, 5, 15, 16, 26, 38].

In the theory of multivariate resultants, polynomials are assumed to involve all the monomials with degrees up to a given bound. In practical problems, most polynomials are sparse in that they only contain certain fixed monomials. For such sparse polynomials, the multivariate resultant often becomes identically zero and cannot provide any useful information.

As a major advance in algebraic geometry and elimination theory, the concept of sparse resultant was introduced by Gelfand, Kapranov, Sturmfels, and Zelevinsky [19, 45]. The degree of the sparse resultant is the Bernstein–Kushnirenko–Khovanskii (BKK) bound [2] instead of the Bézout bound [19, 37, 46], which makes the computation of the sparse resultant more efficient. The concept of sparse resultants originated from the work of Gelfand et al. [18] on generalized hypergeometric functions, where the central concept of \(\mathcal {A}\)-discriminant was studied. Kapranov et al. [28] introduced the concept of \(\mathcal {A}\)-resultant. Sturmfels further introduced the general mixed sparse resultant and gave a single exponential algorithm to compute it [45, 46]. Canny and Emiris showed that the sparse resultant is a factor of the determinant of a Macaulay style matrix and gave an efficient algorithm to compute the sparse resultant based on this matrix representation [14, 15]. D’Andrea further proved that the sparse resultant is the quotient of two Macaulay style determinants [11]. The representation given in [11] is used to develop efficient algorithms for computing sparse resultants [16].

Using the analog between ordinary differential operators and univariate polynomials, the differential resultant for two linear ordinary differential operators was implicitly given by Ore [36] and then studied by Berkovich and Tsirulik [1] using Sylvester style matrices. The subresultant theory was first studied by Chardin [7] for two differential operators and then by Li [35] and Hong [24] for the more general Ore polynomials.

For nonlinear differential polynomials, it is more difficult to define and study the differential resultant. The differential resultant for two nonlinear differential polynomials in one variable was defined by Ritt [41, p. 47]. In [50, p. 46], Zwillinger proposed to define the differential resultant of two differential polynomials as the determinant of a matrix following the idea of algebraic multivariate resultants, but did not give details. General differential resultants were defined by Carrà-Ferro [6] using Macaulay’s definition of algebraic resultants. However, the treatment in [6] is not complete. For instance, the differential resultant for two generic differential polynomials with positive orders and degrees greater than one is always identically zero under the definition in [6]. In [48], Yang, Zeng, and Zhang used the idea of algebraic Dixon resultant to compute the differential resultant. Although efficient, this approach is not complete, because it is not proved that the differential resultant can always be computed in this way. Differential resultants for linear ordinary differential polynomials were studied by Rueda–Sendra [43, 44]. In [17], a rigorous definition for the differential resultant of \(n+1\) differential polynomials in \(n\) variables was first presented and its properties were proved.

This paper, together with its preliminary version [34], initiates the study of the sparse differential resultant which is an extension of the sparse resultant and the differential resultant. In [34], we studied the sparse differential resultant for a system of differential polynomials with nonvanishing degree zero terms. For more general systems, our first observation is that the sparse differential resultant is closely connected with non-polynomial solutions of algebraic differential equations, that is, solutions with nonvanishing derivatives to any order. As a consequence, the sparse differential resultant should be more naturally defined for Laurent differential polynomials. This is similar to the algebraic sparse resultant [19, 46], where nonzero solutions of Laurent polynomials are considered.

Consider \(n+1\) Laurent differential polynomials in \(n\) differential variables \({\mathbb {Y}}=\{y_1,\ldots ,y_n\}\)
$$\begin{aligned} {\mathbb {P}}_i=\sum \limits _{k=0}^{l_i}u_{ik} M_{ik}\,\, (i=0,\ldots ,n), \end{aligned}$$
(1)
defined over sets of Laurent monomials \({\mathcal A}_i=\{M_{i0},\ldots ,M_{il_i}\}\) in \({\mathbb {Y}}\), where \(u_{ik}\) are differential indeterminates over \(\mathbb {Q}\). Let \({\mathbf {u}}_i=(u_{i0},u_{i1},\ldots ,u_{il_i})\) be the coefficient vector of \({\mathbb {P}}_i\,(i=0,\ldots ,n)\). For each \(i\), there exists a unique Laurent monomial \(M_i\) such that \({\mathbb {P}}_i^\text {N}=M_i{\mathbb {P}}_i\) is an irreducible differential polynomial in \({\mathbb {Y}}\) and \({\mathbf {u}}_i\), which is called the norm form of \({\mathbb {P}}_i\). Let \(s_i={\mathrm{ord}}({\mathbb {P}}_i,{\mathbb {Y}})\) and denote \(M_{ik}/M_{i0}=\prod _{j=1}^n\prod _{l=0}^{s_i}(y_j^{(l)})^{t_{ikjl}}\), where \(y_j^{(l)}\) is the \(l\)th derivative of \(y_j\) and \(t_{ikjl}\in \mathbb {Z}\). Let \(o=\mathrm{max}_{i=0}^n {\mathrm{ord}}({\mathbb {P}}_i,{\mathbb {Y}})\) and \(\alpha _{ik}\) the exponent vector of the monomial \(M_{ik}\) in \({\mathbb {Y}}^{[o]}\), that is, \(M_{ik}= ({\mathbb {Y}}^{[o]})^{\alpha _{ik}}\), where \({\mathbb {Y}}^{[o]}\) is the set \(\{y_j^{(l)}:\,1\le j\le n, 0\le l\le o\}\).

The concept of a Laurent differentially essential system is introduced, which is a necessary and sufficient condition for the existence of the sparse differential resultant. \({\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\) are called Laurent differentially essential if \(\mathcal {I}_{{\mathbf {u}}}=\mathcal {I}_{{\mathbb {Y}}^{\pm },{\mathbf {u}}}\cap {\mathbb Q}\{{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}\) is a prime differential ideal of codimension one, where \(\mathcal {I}_{{\mathbb {Y}}^{\pm },{\mathbf {u}}}=[{\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n]\) is the differential ideal generated in the \({\mathbb {Y}}\)-Laurent differential polynomial ring \(\mathbb {Q}\{{\mathbb {Y}}^{\pm };{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}\). This concept is similar to (but weaker than) the concept of essential supports introduced by Sturmfels [46]. We have the following criteria for a Laurent differential polynomial system to be Laurent differentially essential.

Theorem 1.1

For \({\mathbb {P}}_i\) given in (1), let \(d_{ij} = \sum \nolimits _{k=0}^{l_i} u_{ik} \sum \nolimits _{l=0}^{s_i} t_{ikjl}x_j^{l}\,\) \((i=0,\ldots ,n; j=1,\ldots ,n)\) where \(x_j\) are algebraic indeterminates. Denote
$$\begin{aligned} \mathrm{D}_{\mathbb {P}}=\left( \begin{array}{cccccccc} d_{01} &{} \,d_{02} &{} \,\ldots &{} \,d_{0n} \\ d_{11} &{} \,d_{12} &{} \,\ldots &{} \,d_{1n} \\ &{} &{} \ddots &{} \\ d_{n1} &{} \,d_{n2} &{} \,\ldots &{} \,d_{nn} \end{array}\right) \end{aligned}$$
to be the symbolic support matrix of (1). Then, the following assertions hold.
1)

The differential transcendence degree of \({\mathbb Q}\langle {\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\rangle \langle \frac{{\mathbb {P}}_0}{M_{00}},\ldots ,\frac{{\mathbb {P}}_n}{M_{n0}}\rangle \) over \({\mathbb Q}\langle {\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\rangle \) is equal to \(\mathrm{rank}(\mathrm{{D}}_{\mathbb {P}})\).

2)

Let \(\mathcal {I}_{{\mathbb {Y}}^{\pm },{\mathbf {u}}}=[{\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n]\subset \mathbb {Q}\{{\mathbb {Y}}^{\pm };{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}\). Then \(\mathcal {I}_{{\mathbf {u}}}=\mathcal {I}_{{\mathbb {Y}}^{\pm },{\mathbf {u}}}\cap {\mathbb Q}\{{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}\) is a prime differential ideal of codimension \(n+1-\mathrm{rank}(\mathrm{{D}}_{{\mathbb {P}}})\). So \(\{{\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\}\) is Laurent differentially essential if and only if \(\mathrm{rank}(\mathrm{{D}}_{\mathbb {P}})=n\).

3)

\(\{{\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\}\) is Laurent differentially essential if and only if there exist \(k_i\,(i=0,\ldots ,n)\) with \(1\le k_i\le l_i\) such that \(\mathrm{rank}(\mathrm{{D}}_{k_0,\ldots ,k_n})=n\) where \(\mathrm{{D}}_{k_0,\ldots ,k_n}\) is the symbolic support matrix for the Laurent differential monomials \(M_{0k_0}/M_{00},\ldots , M_{nk_n}/M_{n0}\).

With the above theorem, computing the differential transcendence degree of certain differential polynomials is reduced to computing the rank of their symbolic support matrix. Similar to the case of linear equations, this result provides a useful tool to study generic differential polynomials. As an application of the above result, the differential dimension conjecture [42, p. 178] for a class of generic differential polynomials is proved.
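To make the rank criterion in Theorem 1.1 concrete, the following minimal sketch builds the symbolic support matrix \(\mathrm{D}_{\mathbb {P}}\) for a small hypothetical system and computes its rank with SymPy. The dictionary encoding of the supports, the variable names, and the example system are illustrative choices, not notation from the paper.

```python
import sympy as sp

def symbolic_support_matrix(supports, n):
    """supports[i] lists the exponent vectors alpha_{ik} of P_i, each encoded as a
    dict {(j, l): exponent} over the derivatives y_j^{(l)} (j = 1..n).  Returns the
    (n+1) x n symbolic support matrix D_P of Theorem 1.1, whose (i, j) entry is
    d_{ij} = sum_k u_{ik} * sum_l t_{ikjl} * x_j**l, with t taken from M_{ik}/M_{i0}."""
    x = [sp.Symbol(f'x{j}') for j in range(1, n + 1)]
    rows = []
    for i, S in enumerate(supports):
        u = [sp.Symbol(f'u{i}{k}') for k in range(len(S))]
        row = []
        for j in range(1, n + 1):
            d = sum(u[k] * sum((S[k].get((j, l), 0) - S[0].get((j, l), 0)) * x[j - 1]**l
                               for (jj, l) in set(S[k]) | set(S[0]) if jj == j)
                    for k in range(len(S)))
            row.append(sp.expand(d))
        rows.append(row)
    return sp.Matrix(rows)

# Hypothetical system with n = 2:
#   P_0 = u00*y1 + u01*y2',  P_1 = u10 + u11*y1*y2,  P_2 = u20*y2 + u21*y1'.
supports = [
    [{(1, 0): 1}, {(2, 1): 1}],              # monomials y1 and y2'
    [{},          {(1, 0): 1, (2, 0): 1}],   # monomials 1 and y1*y2
    [{(2, 0): 1}, {(1, 1): 1}],              # monomials y2 and y1'
]
D = symbolic_support_matrix(supports, n=2)
print(D)         # rows: [-u01, u01*x2], [u11, u11], [u21*x1, -u21]
print(D.rank())  # 2 = n, so by Theorem 1.1 the system is Laurent differentially essential
```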

Before introducing properties of the sparse differential resultant, the concept of Jacobi number is given below. Let \({\mathbb G}=\{g_1,\ldots ,g_n\}\) be \(n\) differential polynomials in \({\mathbb {Y}}\). Let \(s_{ij} = {\mathrm{ord}}(g_i,y_j)\) be the order of \(g_i\) in \(y_j\) if \(y_j\) occurs effectively in \(g_i\) and \(s_{ij} = -\infty \) otherwise. Then the Jacobi bound, or the Jacobi number, of \({\mathbb G}\), denoted by \(\mathrm{Jac}({\mathbb G})\), is the maximum of the sums over all the diagonals of \(S=(s_{ij})\). Equivalently,
$$\begin{aligned}\mathrm{Jac}({\mathbb G}) = \mathrm{max}_\sigma \sum _{i=1}^n s_{i\sigma (i)},\end{aligned}$$
where \(\sigma \) runs over all permutations of \(\{1,\ldots ,n\}\). Jacobi's problem conjectures that the order of every zero-dimensional component of \({\mathbb G}\) is bounded by the Jacobi number of \({\mathbb G}\) [40].
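As a quick illustration (not taken from the paper), the Jacobi number of a small order matrix can be computed by brute force over all permutations; for larger systems one would solve an assignment problem instead. The function name and the sample matrix below are hypothetical.

```python
from itertools import permutations

NEG_INF = float('-inf')

def jacobi_number(S):
    """Jacobi number of the order matrix S = (s_ij), where s_ij = ord(g_i, y_j)
    and s_ij = -inf if y_j does not occur in g_i: the maximum of
    sum_i s_{i, sigma(i)} over all permutations sigma of {1, ..., n}."""
    n = len(S)
    return max(sum(S[i][sigma[i]] for i in range(n))
               for sigma in permutations(range(n)))

# ord(g_1, y_1) = 2, ord(g_1, y_2) = 0; y_1 has order 1 in g_2 and y_2 is absent.
print(jacobi_number([[2, 0], [1, NEG_INF]]))   # 1, attained by the diagonal s_12 + s_21
```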

If \({\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\) in (1) are Laurent differentially essential, then \(\mathcal {I}_{\mathbf {u}}\) defined in Theorem 1.1 is a prime differential ideal of codimension one. Hence, there exists an irreducible differential polynomial \({\mathbf {R}}\in {\mathbb Q}\{{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}\) such that \(\mathcal {I}_{\mathbf {u}}=\mathrm{sat}({\mathbf {R}})\), and \({\mathbf {R}}\) is defined to be the sparse differential resultant of \({\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\). Properties of the sparse differential resultant are summarized in the following theorem.

Theorem 1.2

The sparse differential resultant \({\mathbf {R}}({\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n)\in {\mathbb Q}\{{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}\) of \({\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\) has the following properties.
1)

Let \(\mathcal {Z}({\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n)\) be the set of all specializations of the coefficients \(u_{ik}\) of \({\mathbb {P}}_i\) under which \({\mathbb {P}}_i=0\,(i=0,\ldots ,n)\) have a common non-polynomial solution and \(\overline{\mathcal {Z}({\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n)}\) the Kolchin differential closure of \(\mathcal {Z}({\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n)\). Then \(\overline{\mathcal {Z}({\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n)}={\mathbb {V}}\big (\mathrm{sat}({\mathbf {R}})\big )\).

2)

\({\mathbf {R}}({\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n)\) is differentially homogenous in each \({\mathbf {u}}_i\,(i=0,\ldots ,n)\).

3)
(Poisson product formula) Let \(h_0={\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_0)\ge 0\). Then \(t_0=\mathrm{deg}({\mathbf {R}},\) \(u_{00}^{(h_0)})\ge 1\) and there exist differential fields \((\mathbb {Q}_\tau ,\delta _\tau )\) and \(\xi _{\tau k}\in \mathbb {Q}_\tau \) for \(\tau =1,\ldots ,t_0\) and \(k=1,\ldots ,l_0\) such that
$$\begin{aligned} {\mathbf {R}}=A\prod _{\tau =1}^{t_0} \left( u_{00}+\sum \limits _{k=1}^{l_0} u_{0k}\xi _{\tau k}\right) ^{(h_0)},\end{aligned}$$
where \(A\) is a polynomial in \(\mathbb {Q}\langle {\mathbf {u}}_1,\ldots ,{\mathbf {u}}_n\rangle [{\mathbf {u}}_0^{[h_0]}\backslash u_{00}^{(h_0)}]\). Furthermore, if 1) every \(n\) of the \({\mathbb {P}}_i\,(i=0,\ldots ,n)\) form a differentially independent set over \({\mathbb Q}\langle {\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\rangle \) and 2) for each \(j=1,\ldots ,n\), \(\mathbf e _j\in \hbox {Span}_{\mathbb {Z}}\{\alpha _{ik}-\alpha _{i0}:k=1,\ldots ,l_i;i=0,\ldots ,n\}\), then there exist \(\eta _{\tau k}\in {\mathbb Q}_\tau \)  \((\tau =1,\ldots ,t_0;\,k=1,\ldots ,n)\) such that
$$\begin{aligned} {\mathbf {R}}&= A\prod _{\tau =1}^{t_0}\bigg [\frac{{\mathbb {P}}_0(\eta _\tau )}{M_{00}(\eta _\tau )}\bigg ]^{(h_0)}, \end{aligned}$$
where \(\eta _\tau =(\eta _{\tau 1},\ldots ,\eta _{\tau n})\) and \(\mathbf e _j\) is the exponent vector of \(y_j\). Moreover, \(\eta _\tau \,(\tau =1,\ldots ,t_0)\) are generic points of the prime differential ideal \([{\mathbb {P}}_1^\text {N},\ldots ,{\mathbb {P}}_n^\text {N}]\hbox {:}\mathbb {m}\) in \(\mathbb {Q}\langle {\mathbf {u}}_1,\ldots ,{\mathbf {u}}_n\rangle \{{\mathbb {Y}}\}\), where \(\mathbb {m}\) is the set of all differential monomials in \({\mathbb {Y}}\).
4)

Assume that \({\mathbb {P}}_i\,(i=0,\ldots ,n)\) have the same monomial set \({\mathcal A}={\mathcal A}_i\,(i=0,\ldots ,n)\). The differential toric variety \(X_{\mathcal A}\) associated with \({\mathcal A}\) is defined and is shown to be an irreducible projective differential variety of dimension \(n\). Furthermore, the differential Chow form [17, 34] of \(X_{\mathcal A}\) is \({\mathbf {R}}\).

5)

\(h_i = {\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_i)\le \text {J}_i=\mathrm{Jac}(\widehat{{\mathbb {P}}}_{\hat{i}})\) for \(i=0,\ldots ,n\), where \(\widehat{{\mathbb {P}}}_{\hat{i}}=\{{\mathbb {P}}_0^\text {N},\ldots ,{\mathbb {P}}_n^\text {N}\}\backslash \{{\mathbb {P}}_i^\text {N}\}\).

6)

\(\mathrm{deg}({\mathbf {R}})\le \prod _{i=0}^n (m_i+1)^{h_i+1}\le (m+1)^{\sum _{i=0}^n(\text {J}_i+1)}=(m+1)^{\text { J}+n+1}\), where \(m_i=\mathrm{deg}({\mathbb {P}}_i^\text {N},{\mathbb {Y}})\), \(m=\mathrm{max}_i\{m_i\}\), and \(\text { J} = \sum _{i=0}^n \text { J}_i\).

7)
Let \({\mathrm{ord}}({\mathbb {P}}_i^\text {N},y_j)=e_{ij}\) and \(N_{i0}=M_i M_{i0}\). Then \({\mathbf {R}}\) has the following representation
$$\begin{aligned} \prod _{i=0}^n N_{i0}^{(h_i+1)\mathrm{deg}({\mathbf {R}})}\cdot {\mathbf {R}}=\sum _{i=0}^n\sum _{j=0}^{h_i}G_{ij}\big ({\mathbb {P}}_{i}^\text {N}\big )^{(j)} \end{aligned}$$
where \(G_{ij}\in {\mathbb Q}[{\mathbf {u}}_0^{[h_0]},\ldots ,{\mathbf {u}}_n^{[h_n]},y_1^{[t_1]},\ldots ,y_n^{[t_n]}]\) with \(t_j=\mathrm{max}_{i=0}^n\{h_i+e_{ij}\}\) such that \(\mathrm{deg}(G_{ij}({\mathbb {P}}_{i}^{\text {N}})^{(j)})\le [m+1+\sum _{i=0}^n(h_i+1)\mathrm{deg}(N_{i0})]\mathrm{deg}({\mathbf {R}})\).

Although similar to the properties of algebraic sparse resultants, each property given above is an essential extension of its algebraic counterpart. For instance, considerable effort is needed to establish the Poisson product formula. Property 5) is unique to the differential case and reflects the sparseness of the system in a certain sense.
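For concreteness, the sketch below evaluates the order bound of property 5) and the degree bound of property 6) on hypothetical order and degree data for the norm forms; the numbers are invented purely for illustration, and the brute-force Jacobi computation mirrors the sketch given after the definition of the Jacobi number.

```python
from itertools import permutations

def jac(S):
    # Jacobi number of a square order matrix (brute force over all permutations)
    return max(sum(S[i][p[i]] for i in range(len(S))) for p in permutations(range(len(S))))

# Hypothetical data for n = 2:  orders[i][j] = ord(P_i^N, y_{j+1}),  degrees[i] = m_i.
orders  = [[1, 0], [2, 1], [0, 1]]
degrees = [2, 3, 2]

n = len(orders) - 1
J = [jac([row for k, row in enumerate(orders) if k != i]) for i in range(n + 1)]
m = max(degrees)
print(J)                            # [3, 2, 2]: J_i bounds h_i = ord(R, u_i) by property 5)
print((m + 1) ** (sum(J) + n + 1))  # 4**10 = 1048576 bounds deg(R) by property 6)
```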

Let \({\mathbb {P}}_i\,(i=0,\ldots ,n)\) in (1) be generic differential polynomials such that all monomials of order \(\le s_i\) and degree \(\le m_i\) appear effectively in \({\mathbb {P}}_i\), and let \({\mathbf {R}}({\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n)\) be the differential resultant of \({\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\). Then the following BKK-style degree bound holds:

Theorem 1.3

For each \(i\in \{0,1,\ldots ,n\}\),
$$\begin{aligned} \mathrm{deg}({\mathbf {R}},{\mathbf {u}}_i)\le \sum _{k=0}^{s-s_i}\mathcal {M}\big ((\mathcal {Q}_{jl})_{j\ne i,0\le l\le s-s_j},\mathcal {Q}_{i0},\ldots ,\mathcal {Q}_{i,k-1},\mathcal {Q}_{i,k+1},\ldots ,\mathcal {Q}_{i,s-s_i}\big ) \end{aligned}$$
where \(s=\sum _{i=0}^ns_i\), \(\mathcal {Q}_{jl}\) is the Newton polytope of \(({\mathbb {P}}_j)^{(l)}\) treated as a polynomial in \(y^{[s]}_1,\ldots ,y^{[s]}_n\) and \(\mathcal {M}(S)\) is the mixed volume of the polytopes in \(S\).
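The mixed volumes \(\mathcal {M}(\cdot )\) in Theorem 1.3 are taken in dimension \(n(s+1)\), where dedicated polytope software would be used. As a low-dimensional illustration only, the following sketch computes the normalized mixed volume of two planar Newton polytopes from the identity \(\mathrm{MV}(P,Q)=\mathrm{area}(P+Q)-\mathrm{area}(P)-\mathrm{area}(Q)\); all function names and the example polytopes are hypothetical.

```python
from itertools import product

def hull(points):
    """Convex hull of planar integer points (Andrew's monotone chain), in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = chain(pts), chain(pts[::-1])
    return lower[:-1] + upper[:-1]

def area2(points):
    """Twice the area of the convex hull of the given points (shoelace formula)."""
    v = hull(points)
    return abs(sum(v[i][0] * v[(i + 1) % len(v)][1] - v[(i + 1) % len(v)][0] * v[i][1]
                   for i in range(len(v))))

def mixed_volume_2d(P, Q):
    """Normalized mixed volume MV(P, Q) = area(P + Q) - area(P) - area(Q) in the plane."""
    PQ = [(p[0] + q[0], p[1] + q[1]) for p, q in product(P, Q)]
    return (area2(PQ) - area2(P) - area2(Q)) // 2

# Newton polytopes (exponent vectors) of f = a + b*x + c*x*y and g = d + e*y:
P = [(0, 0), (1, 0), (1, 1)]
Q = [(0, 0), (0, 1)]
print(mixed_volume_2d(P, Q))   # 1: a generic such pair (f, g) has one common toric zero
```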

In principle, the sparse differential resultant can be computed with characteristic set methods for differential polynomials via symbolic computation [3, 8, 25, 42, 47]. But in general, differential elimination procedures based on characteristic sets do not have an elementary complexity bound [20].

Based on the order and degree bounds given in properties 5)–7) of Theorem 1.2, a single exponential algorithm to compute the sparse differential resultant \({\mathbf {R}}\) is proposed. The idea of the algorithm is to search for \({\mathbf {R}}\) with its order and degree increasing incrementally, using linear algebra to find the coefficients of \({\mathbf {R}}\) for a given order and degree. The order and degree bounds serve as the termination condition. Precisely, we have

Theorem 1.4

The sparse differential resultant of \({\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\) can be computed with at most \(O\Big ( \big ((\text { J}+n+2)^{O(l \text { J}+l)}(m+1)^{O((l\text { J}+l)(\text { J}+n+2))}\big )/{n^{n}}\Big )\) \({\mathbb Q}\)-arithmetic operations, where \(l=\sum _{i=0}^n (l_i+1)\), \(m=\mathrm{max}_{i=0}^n m_i\), and \(\text { J}=\sum _{i=0}^n \text { J}_i\).

Since \(n<l\), the complexity of the algorithm is single exponential in terms of \(l\) and \(\text { J}\). The sparseness is reflected in the quantity \(l\), called the size of the system, and in the Jacobi number \(\text { J}\). Note that even for algebraic sparse resultants, the computational complexity is single exponential [15, 45]. This seems to be the first algorithm that eliminates several variables from nonlinear differential polynomials with single exponential complexity.
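The algorithm behind Theorem 1.4 is not reproduced here, but the object it computes can be checked by hand on a toy system. For \(n=1\), \({\mathbb {P}}_0=u_{00}+u_{01}y\) and \({\mathbb {P}}_1=u_{10}+u_{11}y'\), property 5) gives \(h_0\le \text {J}_0=1\) and \(h_1\le \text {J}_1=0\), so prolonging \({\mathbb {P}}_0\) once and eliminating \(y,y'\) algebraically already exposes the sparse differential resultant. The SymPy sketch below does this with a lexicographic Gröbner basis; it is a brute-force verification for tiny examples (with du00, du01 standing for \(u_{00}',u_{01}'\)), not the single exponential algorithm of Theorem 1.4.

```python
import sympy as sp

# Toy system (n = 1):  P_0 = u00 + u01*y,  P_1 = u10 + u11*y'.
y, yp = sp.symbols('y yp')                          # y and y'
u00, u01, u10, u11, du00, du01 = sp.symbols('u00 u01 u10 u11 du00 du01')

P0  = u00 + u01*y
dP0 = du00 + du01*y + u01*yp                        # delta(P_0), prolonged once (h_0 <= 1)
P1  = u10 + u11*yp                                  # not prolonged (h_1 <= 0)

# Lex Groebner basis with y, yp ranked highest; the elements free of y, yp
# generate the elimination ideal.
G = sp.groebner([P0, dP0, P1], y, yp, u00, u01, u10, u11, du00, du01, order='lex')
print([g for g in G.exprs if not g.has(y, yp)])
# one generator free of y, yp; up to sign and normalization it equals
# R = u01**2*u10 - u01*u11*du00 + u00*u11*du01
```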

As mentioned above, a preliminary version of this paper was reported at ISSAC 2011 [34], where the sparse differential resultant of differential polynomials with nonvanishing degree zero terms is studied. More precisely, in [34], differential polynomials of the form (1) are required to be such that all \(M_{ik}\) are differential monomials and \(M_{i0}=1\) for each \(i=0,\ldots ,n\). Properties 2), 3), 6), and 7) of Theorem 1.2, as well as Theorem 1.4, are proved there for that case. In this paper, we consider sparse differential resultants for general Laurent differential polynomial systems. Moreover, Theorem 1.1, properties 1), 4), and 5) of Theorem 1.2, and Theorem 1.3 are new here.

The rest of the paper is organized as follows. In Sect. 2, preliminary results are introduced. In Sect. 3, the sparse differential resultant for Laurent differentially essential systems is defined. In Sect. 4, Theorem 1.1 is proved. In Sect. 5, properties 1)–4) of Theorem 1.2 are proved. In Sect. 6, properties 5)–7) of Theorem 1.2, Theorem 1.3, and Theorem 1.4 are proved. In Sect. 7, the paper is concluded and several unsolved problems for the sparse differential resultant are proposed.

2 Preliminaries

In this section, some basic notations and preliminary results in differential algebra will be given. For more details about differential algebra, please refer to [3, 17, 29, 42].

2.1 Differential Polynomial Algebra and Kolchin Topology

Let \(\mathcal {F}\) be a fixed ordinary differential field of characteristic zero with a derivation operator \(\delta \). An element \(c\in \mathcal {F}\) such that \(\delta (c)=0\) is called a constant of \(\mathcal {F}.\) In this paper, unless otherwise indicated, \(\delta \) is kept fixed during any discussion and we use primes and exponents \((i)\) to indicate derivatives under \(\delta \). Let \(\Theta \) denote the free commutative semigroup with unit (written multiplicatively) generated by \(\delta \).

A typical example of a differential field is \(\mathbb {Q}(x)\), the field of rational functions in the variable \(x\) with \(\delta =\frac{d}{dx}\).

Let \(S\) be a subset of a differential field \(\mathcal {G}\) which contains \(\mathcal {F}\). We will denote, respectively, by \(\mathcal {F}[S]\), \(\mathcal {F}(S)\), \(\mathcal {F}\{S\}\), and \(\mathcal {F}\langle S\rangle \) the smallest subring, the smallest subfield, the smallest differential subring, and the smallest differential subfield of \(\mathcal {G}\) containing \(\mathcal {F}\) and \(S\). If we denote \(\Theta (S)\) to be the smallest subset of \(\mathcal {G}\) containing \(S\) and stable under \(\delta \), we have \(\mathcal {F}\{S\}=\mathcal {F}[\Theta (S)]\) and \(\mathcal {F}\langle S\rangle =\mathcal {F}(\Theta (S))\). A differential extension field \(\mathcal {G}\) of \(\mathcal {F}\) is said to be finitely generated if \(\mathcal {G}\) has a finite subset \(S\) such that \(\mathcal {G}=\mathcal {F}\langle S\rangle \).

A subset \(\Sigma \) of a differential extension field \(\mathcal {G}\) of \(\mathcal {F}\) is said to be differentially dependent over \(\mathcal {F}\) if the set \((\theta \alpha )_{\theta \in \Theta , \alpha \in \Sigma }\) is algebraically dependent over \(\mathcal {F}\), and otherwise, it is said to be differentially independent over \(\mathcal {F}\), or to be a family of differential indeterminates over \(\mathcal {F}\). In the case \(\Sigma \) consists of only one element \(\alpha \), we say that \(\alpha \) is differentially algebraic or differentially transcendental over \(\mathcal {F}\), respectively. A maximal subset \(\Omega \) of \(\mathcal {G}\) which is differentially independent over \(\mathcal {F}\) is said to be a differential transcendence basis of \(\mathcal {G}\) over \(\mathcal {F}\). We use \(\hbox {{d.tr.deg}}\,\mathcal {G}/\mathcal {F}\) (see [29, pp. 105–109]) to denote the differential transcendence degree of \(\mathcal {G}\) over \(\mathcal {F}\), which is the cardinal number of \(\Omega \). Considering \(\mathcal {F}\) and \(\mathcal {G}\) as purely algebraic fields, we denote the algebraic transcendence degree of \(\mathcal {G}\) over \(\mathcal {F}\) by \(\hbox {{tr.deg}}\,\mathcal {G}/\mathcal {F}\).

A homomorphism \(\varphi \) from a differential ring \((\mathcal {R},\delta )\) to a differential ring \((\mathcal {S},\delta _1)\) is a differential homomorphism if \(\varphi \circ \delta =\delta _1\circ \varphi \). If \(\mathcal {R}_0\) is a common differential subring of \(\mathcal {R}\) and \(\mathcal {S}\) and the homomorphism \(\varphi \) leaves every element of \(\mathcal {R}_0\) invariant, then \(\varphi \) is said to be a homomorphism over \(\mathcal {R}_0\). If, in addition, \(\mathcal {R}\) is an integral domain and \(\mathcal {S}\) is a differential field, \(\varphi \) is called a differential specialization of \(\mathcal {R}\) into \(\mathcal {S}\) over \(\mathcal {R}_0\). The following property of differential specializations will be needed in this paper; it can be proved similarly to [17, Theorem 2.16].

Lemma 2.1

Let \(P_{i}({\mathbb {U}}, {\mathbb {Y}})\in \mathcal {F}\langle {\mathbb {Y}}\rangle \{{\mathbb {U}}\}\) \((i=1, \ldots , m)\) where \({\mathbb {U}}\) and \({\mathbb {Y}}\) are sets of differential indeterminates. If \(\theta _{ij}\big (P_{i}({\mathbb {U}}, {\mathbb {Y}})\big )\,(i=1, \ldots , m; j=1,\ldots ,n_{i})\) are algebraically dependent over \(\mathcal {F}\langle {\mathbb {U}}\rangle \) for \(\theta _{ij}\in \Theta \), then for any differential specialization \({\mathbb {U}}^0\subset \mathcal {F}\) of \({\mathbb {U}}\) over \(\mathcal {F}\), \(\theta _{ij}\big (P_{i}({\mathbb {U}}^0, {\mathbb {Y}})\big )\) are algebraically dependent over \(\mathcal {F}\). In particular, if \(P_{i}({\mathbb {U}}, {\mathbb {Y}})\) \((i=1, \ldots , m)\) are differentially dependent over \(\mathcal {F}\langle {\mathbb {U}}\rangle \), then for any differential specialization \({\mathbb {U}}^0\subset \mathcal {F}\) of \({\mathbb {U}}\) over \({\mathcal {F}}\), \(P_{i}({\mathbb {U}}^0,{\mathbb {Y}})\) are differentially dependent over \(\mathcal {F}\).

A differential extension field \(\mathcal {E}\) of \(\mathcal {F}\) is called a universal differential extension field, if for any finitely generated differential extension field \(\mathcal {F}_1\subset \mathcal {E}\) of \(\mathcal {F}\) and any finitely generated differential extension field \(\mathcal {F}_2\) of \(\mathcal {F}_1\) not necessarily in \(\mathcal {E}\), \(\mathcal {F}_2\) can be embedded in \(\mathcal {E}\) over \(\mathcal {F}_1\), i.e., there exists a differential extension field \(\mathcal {F}_3\) in \(\mathcal {E}\) that is differentially isomorphic to \(\mathcal {F}_2\) over \(\mathcal {F}_1\). Such a differential universal extension field of \(\mathcal {F}\) always exists [29, Theorem 2, p. 134]. By definition, any finitely generated differential extension field of \(\mathcal {F}\) can be embedded over \(\mathcal {F}\) into \(\mathcal {E}\), and \(\mathcal {E}\) is a universal differential extension field of every finitely generated differential extension field of \(\mathcal {F}\). In particular, for any natural number \(n\), we can find in \(\mathcal {E}\) a subset of cardinality \(n\) whose elements are differentially independent over \(\mathcal {F}.\) Throughout the present paper, \(\mathcal {E}\) stands for a fixed universal differential extension field of \(\mathcal {F}\).

Now suppose \({\mathbb {Y}}=\{y_{1}, y_{2}, \ldots , y_{n}\}\) is a set of differential indeterminates over \(\mathcal {E}\). For any \(y\in {\mathbb {Y}}\), denote \(\delta ^ky\) by \(y^{(k)}.\) The elements of \(\mathcal {F}\{{\mathbb {Y}}\}=\mathcal {F}[y_j^{(k)}\,|\,j=1,\ldots ,n;k\in \mathbb {N}]\) are called differential polynomials over \(\mathcal {F}\) in \({\mathbb {Y}}\), and \(\mathcal {F}\{{\mathbb {Y}}\}\) itself is called the differential polynomial ring over \(\mathcal {F}\) in \({\mathbb {Y}}\). A differential polynomial ideal \(\mathcal {I}\) in \(\mathcal {F}\{{\mathbb {Y}}\}\) is an ordinary algebraic ideal which is closed under derivation, i.e., \(\delta (\mathcal {I})\subset \mathcal {I}\). And a prime (resp. radical) differential ideal is a differential ideal which is prime (resp. radical) as an ordinary algebraic polynomial ideal. For convenience, a prime differential ideal is assumed not to be the unit ideal in this paper.
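As a concrete model of \(\mathcal {F}\{{\mathbb {Y}}\}\) for a single differential indeterminate, the following sketch encodes \(y^{(k)}\) as the symbol yk and implements the derivation \(\delta \) through the Leibniz rule; the encoding and names are illustrative choices, not notation from the paper.

```python
import re
import sympy as sp

def delta(f):
    """Formal derivation on Q{y}, with y^{(k)} encoded as the symbol yk:
    delta(yk) = y(k+1), extended by linearity and the Leibniz rule."""
    f = sp.sympify(f)
    return sp.expand(sum(sp.diff(f, s) * sp.Symbol('y%d' % (int(s.name[1:]) + 1))
                         for s in f.free_symbols if re.fullmatch(r'y\d+', s.name)))

y0, y1 = sp.symbols('y0 y1')      # y and y'
f = y0**2 * y1 - 3 * y0           # a differential polynomial of order 1
print(delta(f))                   # equals 2*y0*y1**2 + y0**2*y2 - 3*y1
print(delta(delta(f)))            # the second derivative, again an element of [f]
```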

By a differential affine space, we mean any one of the sets \(\mathcal {E}^n\,(n\in \mathbb {N}).\) An element \(\eta =(\eta _1,\ldots ,\eta _n)\) of \(\mathcal {E}^n\) will be called a point. Let \(\Sigma \) be a subset of differential polynomials in \(\mathcal {F}\{{\mathbb {Y}}\}\). A point \(\eta =(\eta _{1},\ldots ,\eta _{n}) \in \mathcal {E}^n\) is called a differential zero of \(\Sigma \) if \(f(\eta )=0\) for any \(f \in \Sigma \). The set of differential zeros of \(\Sigma \) is denoted by \(\mathbb {V}(\Sigma )\), which is called a differential variety defined over \(\mathcal {F}\). When the base field is clear from the context, we simply call it a differential variety. The differential varieties in \(\mathcal {E}^n\) (resp. the differential varieties in \(\mathcal {E}^n\) that are defined over \(\mathcal {F}\)) are the closed sets in a topology called the Kolchin topology (resp. the Kolchin \(\mathcal {F}\)-topology).

For \(V\subset \mathcal {E}^n\), let \(\mathbb {I}(V)\) be the set of all differential polynomials in \(\mathcal {F}\{{\mathbb {Y}}\}\) that vanish at every point of \(V\). Clearly, \(\mathbb {I}(V)\) is a radical differential ideal in \(\mathcal {F}\{{\mathbb {Y}}\}\). By the differential Nullstellensatz, there exists a bijective correspondence between Kolchin \(\mathcal {F}\)-closed sets and radical differential ideals in \(\mathcal {F}\{{\mathbb {Y}}\}\). That is, for any differential variety \(V\) defined over \(\mathcal {F}\), \(\mathbb {V}(\mathbb {I}(V))=V\) and for any radical differential ideal \(\mathcal {I}\) in \(\mathcal {F}\{{\mathbb {Y}}\}\), \(\mathbb {I}(\mathbb {V}(\mathcal {I}))=\mathcal {I}\).

As in algebraic geometry, an \(\mathcal {F}\)-irreducible differential variety can be defined. And there is a bijective correspondence between \(\mathcal {F}\)-irreducible differential varieties and prime differential ideals in \(\mathcal {F}\{{\mathbb {Y}}\}\). A point \(\eta \in {\mathbb {V}}({\mathcal {I}})\) is called a generic point of a prime ideal \({\mathcal {I}}\subset \mathcal {F}\{{\mathbb {Y}}\}\), or of the irreducible variety \({\mathbb {V}}({\mathcal {I}})\), if for any polynomial \(P\in \mathcal {F}\{{\mathbb {Y}}\}\) we have \(P(\eta )=0 \Leftrightarrow P\in {\mathcal {I}}\). It is well known [42, p. 27] that a non-unit differential ideal is prime if and only if it has a generic point. Notice that irreducibility depends on the base field over which the polynomials are defined. In this paper, to emphasize the differential ring where differential ideals are generated, we use the notation \(\mathcal {I}_{\mathcal {F}\{{\mathbb {Y}}\}}\) or \((\mathcal {I})_{\mathcal {F}\{{\mathbb {Y}}\}}\) to mean that \(\mathcal {I}\) is a differential ideal in \(\mathcal {F}\{{\mathbb {Y}}\}\).

Let \(\mathcal {I}\) be a prime differential ideal in \(\mathcal {F}\{{\mathbb {Y}}\}\) and \(\xi =(\xi _{1},\ldots ,\xi _{n})\) a generic point of \(\mathcal {I}\) [29, p. 19]. The dimension of \(\mathcal {I}\) or of \(\mathbb {V}(\mathcal {I})\) is defined to be the differential transcendence degree of the differential extension field \(\mathcal {F}\langle \xi _{1},\ldots ,\xi _{n}\rangle \) over \(\mathcal {F}\), that is, \(\hbox {{dim}}(\mathcal {I})=\hbox {{d.tr.deg}}\, \mathcal {F}\langle \xi _{1},\ldots ,\xi _{n}\rangle /\mathcal {F}\).

We will conclude this section by introducing some basic concepts in projective differential algebraic geometry which will be used in Sect. 5.4. For more details, please refer to [31, 33]. And unless otherwise stated, in the whole paper, we only consider the affine differential case.

For each \(l\in \mathbb {N}\), consider a projective space \(\mathbf P (l)\) over \(\mathcal {E}\). By a differential projective space, we mean any one of the sets \(\mathbf P (l)\,(l\in \mathbb {N}).\) Denote \(z_0,z_1,\ldots ,z_l\) to be the homogenous coordinates and \({\mathbf {z}}=\{z_0,z_1,\ldots ,z_l\}\).

Definition 2.2

Let \(\mathcal {I}\) be a differential ideal of \(\mathcal {F}\{{\mathbf {z}}\}\) and \(\mathcal {I}\hbox {{:}}{\mathbf {z}}=\{f\in \mathcal {F}\{{\mathbf {z}}\}|\,z_jf\in \mathcal {I}, j=0,\ldots ,l\}\). Call \(\mathcal {I}\) a differentially homogenous differential ideal of \(\mathcal {F}\{{\mathbf {z}}\}\) if \(\mathcal {I}\hbox {{:}}{\mathbf {z}}=\mathcal {I}\) and for every \(P\in \mathcal {I}\) and a differential indeterminate \(\lambda \) over \(\mathcal {F}\{{\mathbf {z}}\}\), \(P(\lambda {\mathbf {z}})\in \mathcal {F}\{\lambda \}\mathcal {I}\) in \(\mathcal {F}\{\lambda ,{\mathbf {z}}\}\).

Consider a differential polynomial \(P\in \mathcal {F}\{{\mathbf {z}}\}\) and a point \(\alpha \in \mathbf P (l)\). Say that \(P\) vanishes at \(\alpha \) and that \(\alpha \) is a zero of \(P\), if \(P\) vanishes at \(\lambda \alpha \) for every \(\lambda \) in \(\mathcal {E}\). For a subset \(\fancyscript{M}\) of \(\mathbf P (l)\), let \(\mathbb {I}(\fancyscript{M})\) denote the set of all differential polynomials in \(\mathcal {F}\{{\mathbf {z}}\}\) that vanish at \(\fancyscript{M}\). Let \({\mathbb {V}}(S)\) denote the set of points of \(\mathbf P (l)\) that are zeros of the subset \(S\) of \(\mathcal {F}\{{\mathbf {z}}\}\). And a subset \(V\subset \mathbf P (l)\) is called a projective differential \(\mathcal {F}\) -variety if there exists \(S\subset \mathcal {F}\{{\mathbf {z}}\}\) such that \(V={\mathbb {V}}(S)\). There exists a one-to-one correspondence between projective differential varieties and radical differentially homogenous differential ideals. And a projective differential \(\mathcal {F}\)-variety \( V\) is \(\mathcal {F}\)-irreducible if and only if \(\mathbb {I}(V)\) is prime.

Let \(\mathcal {I}\) be a prime differentially homogenous ideal and \(\xi =(\xi _0,\xi _1,\ldots ,\xi _l)\) be a generic point of \(\mathcal {I}\) with \(\xi _0\ne 0\). Then the differential dimension of \({\mathbb {V}}(\mathcal {I})\) is defined to be the differential transcendence degree of \(\mathcal {F}\langle (\xi _0^{-1}\xi _k)_{1\le k\le l}\rangle \) over \(\mathcal {F}\).

2.2 Characteristic Sets of a Differential Polynomial System

Let \(f\) be a differential polynomial in \(\mathcal {F}\{{\mathbb {Y}}\}\). We define the order of \(f\) w.r.t. \(y_i\) to be the greatest number \(k\) such that \(y_{i}^{(k)}\) appears effectively in \(f\), which is denoted by \({\mathrm{ord}}(f,y_{i})\). And if \(y_{i}\) does not appear in \(f\), then we set \({\mathrm{ord}}(f,y_{i})=-\infty \). The order of \(f\) is defined to be \(\mathrm{max}_{i}\,{\mathrm{ord}}(f,y_{i})\), that is, \({\mathrm{ord}}(f)=\mathrm{max}_{i}\,{\mathrm{ord}}(f,y_{i})\).

A ranking \(\fancyscript{R}\) is a total order over \(\Theta ({\mathbb {Y}})\), which is compatible with the derivations over the alphabet:
1) \(\delta \theta y_{j} >\theta y_{j}\) for all derivatives \(\theta y_{j}\in \Theta ({\mathbb {Y}})\).

2) \(\theta _{1} y_{i} >\theta _{2} y_{j}\) \(\Longrightarrow \) \(\delta \theta _{1} y_{i} >\delta \theta _{2} y_{j}\) for \(\theta _{1} y_{i}, \theta _{2} y_{j}\in \Theta ({\mathbb {Y}})\).
By convention, \(1<\theta y_{j}\) for all \(\theta y_{j}\in \Theta ({\mathbb {Y}})\).
Two important kinds of rankings are the following:
1) Elimination ranking: \(y_{i} > y_{j}\) \(\Longrightarrow \) \(\delta ^{k} y_{i} >\delta ^{l} y_{j}\) for any \(k, l\ge 0\).

2) Orderly ranking: \(k>l\) \(\Longrightarrow \) \(\delta ^{k} y_{i} >\delta ^{l} y_{j}\) for any \(i, j \in \{1, 2, \ldots , n\}\).
Let \(\mathcal {F}\{{\mathbb {Y}}\}\) be endowed with a ranking \(\fancyscript{R}\) and \(f\) be a differential polynomial in \(\mathcal {F}\{{\mathbb {Y}}\}\). The greatest derivative w.r.t. \(\fancyscript{R}\) which appears effectively in \(f\) is called the leader of \(f\), denoted by \(u_{f}\) or \(\hbox {{ld}}(f)\). The two conditions mentioned above imply that the leader of \(\theta (f)\) is \(\theta u_{f}\) for \(\theta \in \Theta \). Let the degree of \(f\) in \(u_{f}\) be \(d\). As a univariate polynomial in \(u_{f}\), \(f\) can be rewritten as
$$\begin{aligned} f=I_{d} u_{f}^{d}+I_{d-1}u_{f}^{d-1}+\cdots +I_{0}. \end{aligned}$$
Then \(I_{d}\) is called the initial of \(f\) and is denoted by \(\mathrm{I}_{f}\). The partial derivative of \(f\) w.r.t. \(u_{f}\) is called the separant of \(f\), which will be denoted by \(\hbox {{S}}_{f}\). Clearly, \(\hbox {{S}}_{f}\) is the initial of any proper derivative of \(f\). The rank of \(f\) is \(u_{f}^{d}\) and is denoted by \(\hbox {{rk}}(f)\).
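A minimal sketch of these notions for one differential indeterminate under an orderly ranking, reusing the yk encoding of \(y^{(k)}\) from the sketch in Sect. 2.1 (an illustrative convention, not the paper's):

```python
import sympy as sp

y0, y1, y2 = sp.symbols('y0 y1 y2')
f = y1**2 * y2**3 + y0 * y2 + y1        # ord(f) = 2, so the leader u_f is y2

p = sp.Poly(f, y2)
d = p.degree()
initial  = p.coeff_monomial(y2**d)      # I_f = y1**2
separant = sp.diff(f, y2)               # S_f = 3*y1**2*y2**2 + y0
print(initial, separant, y2**d)         # rk(f) = y2**3
```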

Let \(f\) and \(g\) be two differential polynomials and \(\hbox {{rk}}(f)=u_{f}^{d}\). Then \(g\) is said to be partially reduced w.r.t. \(f\) if no proper derivatives of \(u_{f}\) appear in \(g\). And \(g\) is said to be reduced w.r.t. \(f\) if \(g\) is partially reduced w.r.t. \(f\) and \(\mathrm{deg}(g,u_{f})<d\). A set of differential polynomials \(\mathcal {A}\) is said to be an auto-reduced set if each polynomial of \(\mathcal {A}\) is reduced w.r.t. any other element of \(\mathcal {A}\). Every auto-reduced set is finite.

Let \(\mathcal {A}=A_{1},A_{2},\ldots ,A_{t}\) be an auto-reduced set and \(f\) an arbitrary differential polynomial. Then there exists an algorithm, called Ritt’s algorithm of reduction, which reduces \(f\) w.r.t. \(\mathcal {A}\) to a polynomial \(r\) that is reduced w.r.t. \(\mathcal {A}\), satisfying the relation
$$\begin{aligned} \prod _{i=1}^t\hbox {{S}}_{A_i}^{d_{i}}\mathrm{I}_{A_i}^{e_{i}} \cdot f \equiv r, \mathrm{{mod}}\, [\mathcal {A}],\end{aligned}$$
(2)
where \(d_{i}\) and \(e_{i}\) are nonnegative integers. The differential polynomial \(r\) is called the differential remainder of \(f\) w.r.t. \({\mathcal A}\).
Let \(\mathcal {A}\) be an auto-reduced set. Denote \(\hbox {{H}}_{\mathcal {A}}\) to be the set of all the initials and separants of \(\mathcal {A}\) and \(\hbox {{H}}_{\mathcal {A}}^\infty \) the minimal multiplicative set containing \(\hbox {{H}}_{\mathcal {A}}\). The saturation ideal of \({\mathcal A}\) is defined as
$$\begin{aligned} \mathrm{sat}({\mathcal A})=[\mathcal {A}]\hbox {{:}}\hbox {H}_{\mathcal {A}}^\infty = \{p\,|\,\exists \, h\in \hbox {H}_{\mathcal {A}}^\infty \,\,\text {s.t.}\,\, hp\in [\mathcal {A}]\}. \end{aligned}$$
An auto-reduced set \(\mathcal {C}\) contained in a differential polynomial set \(\mathcal {S}\) is said to be a characteristic set of \(\mathcal {S}\), if \(\mathcal {S}\) does not contain any nonzero element reduced w.r.t. \(\mathcal {C}\). A characteristic set \(\mathcal {C}\) of a differential ideal \(\mathcal {I}\) reduces all elements of \(\mathcal {I}\) to zero. If the ideal is prime, \(\mathcal {C}\) reduces only the elements of \(\mathcal {I}\) to zero and \(\mathcal {I}=\mathrm{sat}({\mathcal C})\) [29, Lemma 2, p. 167] is valid.
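As a small worked illustration (the example is hypothetical, not taken from the paper), Ritt reduction and membership in a saturation ideal can be checked with pseudo-remainders: for the auto-reduced set \(\mathcal {A}=\{A\}\) with \(A=y'^2-4y\) and \(f=y''-2\), reducing \(f\) first by \(\delta A\) and then by \(A\) gives remainder zero, so \(f\in \mathrm{sat}(A)\).

```python
import sympy as sp

y0, y1, y2 = sp.symbols('y0 y1 y2')   # y, y', y''
A  = y1**2 - 4*y0                     # leader y1, separant S_A = 2*y1
dA = 2*y1*y2 - 4*y1                   # delta(A): leader y2, initial S_A

f = y2 - 2
# Ritt reduction of f w.r.t. {A}: pseudo-divide by dA in y2, then by A in y1 (cf. (2)).
r1 = sp.prem(f, dA, y2)               # S_A*f - dA = 0
r  = sp.prem(r1, A, y1)
print(r1, r)                          # both 0, so f lies in sat(A) = [A]:H_A^infinity
```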

In terms of characteristic sets, the cardinal number of a characteristic set of \(\mathcal {I}\) is equal to the codimension of \(\mathcal {I}\), that is, \(n-\hbox {{dim}}(\mathcal {I})\). When \(\mathcal {I}\) is of codimension one, it has the following property.

Lemma 2.3

[42, p. 45] Let \(\mathcal {I}\) be a prime differential ideal of codimension one in \(\mathcal {F}\{{\mathbb {Y}}\}\). Then there exists an irreducible differential polynomial \(A\) such that \(\mathcal {I}=\mathrm{sat}(A)\) and \(\{A\}\) is the characteristic set of \(\mathcal {I}\) w.r.t. any ranking.

3 Sparse Differential Resultants for Laurent Differential Polynomials

In this section, the concepts of Laurent differential polynomial and Laurent differentially essential system are first introduced, and then the sparse differential resultant for a Laurent differentially essential system is defined.

3.1 Laurent Differential Polynomials

Let \({\mathcal {F}}\) be an ordinary differential field with a derivation operator \(\delta \) and \({\mathcal {F}}\{{\mathbb {Y}}\}\) the ring of differential polynomials in the differential indeterminates \({\mathbb {Y}}=\{y_1,\ldots ,y_n\}\). Let \(\mathcal {E}\) be a universal differential extension field of \(\mathcal {F}\). For any element \(e\in \mathcal {E}\), \(e^{[k]}\) denotes the set \(\{e^{(0)},\ldots ,e^{(k)}\}\).

The sparse differential resultant is closely related to Laurent differential polynomials, which will be defined below.

Definition 3.1

A Laurent differential monomial of order \(s\in {\mathbb N}\) is a Laurent monomial in variables \({\mathbb {Y}}^{[s]}=(y_i^{(k)})_{1\le i\le n;0\le k\le s}\). More precisely, it has the form \(\prod _{i=1}^n\prod _{k=0}^s(y_i^{(k)})^{m_{ik}}\), where \(m_{ik}\) are integers which can be negative. A Laurent differential polynomial is a finite linear combination of Laurent differential monomials with coefficients from \(\mathcal {E}\).

Clearly, the collection of all Laurent differential polynomials forms a commutative differential ring under the obvious sum and product operations and the usual derivation operator \(\delta \), where all Laurent differential monomials are invertible. We denote the differential ring of Laurent differential polynomials with coefficients in \(\mathcal {F}\) by \(\mathcal {F}\{y_1,y_1^{-1},\ldots ,y_n,y_n^{-1}\}\), or simply by \(\mathcal {F}\{{\mathbb {Y}}^{\pm }\}\).

Remark 3.2

\(\mathcal {F}\{{\mathbb {Y}}^{\pm }\}=\mathcal {F}\{y_1,y_1^{-1},\ldots ,y_n,y_n^{-1}\}\) is only a notation for Laurent differential polynomial ring. It is not equal to \(\mathcal {F}[y_{i}^{(k)},(y_{i}^{-1})^{(k)}\,|\,k\ge 0]\).

Denote by \(\mathcal {S}\) the set of all finitely generated differential ideals in \(\mathcal {F}\{{\mathbb {Y}}^{\pm }\}\). Let \(\mathbb {m}\) be the set of all differential monomials in \({\mathbb {Y}}\) and \(\mathcal {T}\) the set of all differential ideals in \(\mathcal {F}\{{\mathbb {Y}}\}\), each of which has the form
$$\begin{aligned} ([f_1,\ldots ,f_r]\hbox {{:}}\mathbb {m})_{\mathcal {F}\{{\mathbb {Y}}\}}=\{f\in \mathcal {F}\{{\mathbb {Y}}\}\big |\, \exists \, M\in \mathbb {m}, \,\text { s.t.}\,\,M\cdot f\in [f_1,\ldots ,f_r]\} \end{aligned}$$
for arbitrary \(f_i\in \mathcal {F}\{{\mathbb {Y}}\}\). Now we give a one-to-one correspondence between \(\mathcal {S}\) and \(\mathcal {T}\).
The maps \(\phi :\,\mathcal {S}\longrightarrow \mathcal {T}\) and \(\psi :\,\mathcal {T}\longrightarrow \mathcal {S}\) are defined as follows:
  • Given \(\mathcal {I}=[F_1,\ldots ,F_s]_{ \mathcal {F}\{{\mathbb {Y}}^{\pm }\}} \in \mathcal {S}\). Since each \(F_i\in \mathcal {F}\{{\mathbb {Y}}^{\pm }\}\), a vector \((M_1,\ldots ,M_s)\in \mathbb {m}^s \) can be chosen such that each \(M_iF_i\in \mathcal {F}\{{\mathbb {Y}}\}\). We then define \(\phi (\mathcal {I})\mathop {=}\limits ^{\triangle }([M_1F_1,\ldots ,M_sF_s]\hbox {{:}}\mathbb {m})_{\mathcal {F}\{{\mathbb {Y}}\}}\).

  • Given \(\mathcal {J}=([f_1,\ldots ,f_r]\hbox {{:}}\mathbb {m})_{_{\mathcal {F}\{{\mathbb {Y}}\}}}\in \mathcal {T}\), define \(\psi (\mathcal {J})=[f_1,\ldots ,f_r]_{\mathcal {F}\{{\mathbb {Y}}^{\pm }\}}\).

Lemma 3.3

The above maps \(\phi \) and \(\psi \) are well defined. Moreover, \(\phi \circ \psi =\text { id}_{\mathcal {T}}\) and \(\psi \circ \phi =\text { id}_{\mathcal {S}}\).

Proof

\(\psi \) is obviously well defined. To show that \(\phi \) is well defined, it suffices to show that given another \((N_1,\ldots ,N_s)\in \mathbb {m}^s\) with \(N_iF_i\in \mathcal {F}\{{\mathbb {Y}}\}\,(i=1,\ldots ,s)\), \(([M_1F_1,\ldots ,M_sF_s]\hbox {{:}}\mathbb {m})_{\mathcal {F}\{{\mathbb {Y}}\}}=([N_1F_1,\ldots ,N_sF_s]\hbox {{:}}\mathbb {m})_{\mathcal {F}\{{\mathbb {Y}}\}}\). This follows from the fact that \(N_iF_i\in ([M_1F_1,\ldots ,M_sF_s]\hbox {{:}}\mathbb {m})_{\mathcal {F}\{{\mathbb {Y}}\}}\) and \(M_iF_i\in ([N_1F_1,\ldots ,N_sF_s]\hbox {{:}}\mathbb {m})_{\mathcal {F}\{{\mathbb {Y}}\}}\). For each \(\mathcal {I}=[F_1,\ldots ,F_s]_{ \mathcal {F}\{{\mathbb {Y}}^{\pm }\}}\in \mathcal {S}\), \(\psi \circ \phi (\mathcal {I})=\psi (([M_1F_1,\ldots ,M_sF_s]\hbox {{:}}\mathbb {m})_{ \mathcal {F}\{{\mathbb {Y}}\}})=[M_1F_1,\ldots ,M_sF_s]_{ \mathcal {F}\{{\mathbb {Y}}^{\pm }\}}= \mathcal {I}\), where \(M_iF_i\in \mathcal {F}\{{\mathbb {Y}}\}\). So we have \(\psi \circ \phi =\text { id}_{\mathcal {S}}\). And for each \(\mathcal {J}=([f_1,\ldots ,f_r]\hbox {{:}}\mathbb {m})_{ \mathcal {F}\{{\mathbb {Y}}\}}\in \mathcal {T}\), \(\phi \circ \psi (\mathcal {J})=\phi ([f_1,\ldots ,f_r]_{ \mathcal {F}\{{\mathbb {Y}}^{\pm }\}})=\mathcal {J}\). Thus, \(\phi \circ \psi =\text { id}_{\mathcal {T}}\) follows. \(\square \)

From the above, for a finitely generated Laurent differential ideal \(\mathcal {I}=[F_1,\ldots ,F_s]_{\mathcal {F}\{{\mathbb {Y}}^{\pm }\}}\in \mathcal {S}\), although \(\phi (\mathcal {I})\) is unique, different vectors \((M_1,\ldots ,M_s)\in \mathbb {m}^s\) can be chosen to give different representations for \(\phi (\mathcal {I})\). Now the norm form for a Laurent differential polynomial is introduced to fix the choice of \((M_1,\ldots ,M_s)\in \mathbb {m}^s\) when we consider \(\phi (\mathcal {I})\).

Definition 3.4

For every Laurent differential polynomial \(F\in \mathcal {E}\{{\mathbb {Y}}^{\pm }\}\), there exists a unique Laurent differential monomial \(M\) such that (1) \(M\cdot F\in \mathcal {E}\{{\mathbb {Y}}\}\) and (2) for any Laurent differential monomial \(T\) with \(T\cdot F\in \mathcal {E}\{{\mathbb {Y}}\}\), \(T\cdot F\) is divisible by \(M\cdot F\) as differential polynomials. This \(M\cdot F\) is defined to be the norm form of \(F\), denoted by \(F^{\text {N}}\). The order of \(F^{\text {N}}\) is defined to be the effective order of F, denoted by \(\hbox {{Eord}}(F)\). Clearly, \(\hbox {{Eord}}(F)\le {\mathrm{ord}}(F)\). And the degree of \(F\) is defined to be the degree of \(F^{\text {N}}\), denoted by \(\mathrm{deg}(F)\).
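Computationally, the monomial \(M\) in Definition 3.4 clears, for every variable \(y_j^{(l)}\), the componentwise minimum of the exponents occurring in \(F\). The sketch below does this for a Laurent differential polynomial stored as a dictionary of exponent vectors; the encoding is an illustrative assumption, not the paper's notation.

```python
def norm_form(F):
    """F: dict mapping exponent tuples (over a fixed list of variables y_j^{(l)})
    to nonzero coefficients.  Returns (M, F_N) with F_N = M * F, where M is the
    unique Laurent differential monomial of Definition 3.4, given by its exponents."""
    exps = list(F.keys())
    nvars = len(exps[0])
    mins = tuple(min(e[v] for e in exps) for v in range(nvars))   # componentwise minimum
    M = tuple(-m for m in mins)
    F_N = {tuple(e[v] - mins[v] for v in range(nvars)): c for e, c in F.items()}
    return M, F_N

# Example: F = y^{-1}*(y')^2 - 3*y, with variables ordered as (y, y').
F = {(-1, 2): 1, (1, 0): -3}
print(norm_form(F))   # ((1, 0), {(0, 2): 1, (2, 0): -3}), i.e. M = y and F^N = y'^2 - 3*y^2
```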

In the following, we consider zeros for Laurent differential polynomials.

Definition 3.5

Let \(\mathcal {E}^{\wedge }=\mathcal {E}\backslash \{a\in \mathcal {E}\big |\,\exists k\in \mathbb {N}, \,\text { s.t.}\, a^{(k)}=0 \}\). Let \(F\) be a Laurent differential polynomial in \(\mathcal {F}\{{\mathbb {Y}}^{\pm }\}\). A point \((a_1,\ldots ,a_n)\in (\mathcal {E}^{\wedge })^n\) is called a non-polynomial differential zero of \(F\) if \(F(a_1,\ldots ,a_n)=0.\)

The reason why elements of \(\mathcal {E}^{\wedge }\) are taken as zeros of Laurent differential polynomials becomes apparent when the zero set of an ideal is defined. If \(F\in \mathcal {I}\), then \((y_i^{(k)})^{-1}F\in \mathcal {I}\) for any positive integer \(k\), and for \((y_i^{(k)})^{-1}F\) to be meaningful, we need to assume \(y_i^{(k)}\ne 0\). We will see later in Example 5.2 how non-polynomial solutions are naturally related to the sparse differential resultant.

3.2 Definition of Sparse Differential Resultant

In this section, the definition of the sparse differential resultant will be given. The study of sparse differential resultants becomes more transparent if one considers not individual differential polynomials but differential polynomials with indeterminate coefficients, so the sparse differential resultant will first be defined for Laurent differential polynomials with differential indeterminate coefficients. The sparse differential resultant of a given Laurent differential polynomial system with concrete coefficients is then the value which this generic resultant takes at the coefficients of the given system.

Let \(\mathcal {A}_i=\{M_{i0},M_{i1},\ldots ,M_{il_i}\}\) for \(i=0,\ldots ,n\), where \(M_{ik}=\prod _{j=1}^n\prod _{l=0}^{s_i}(y_j^{(l)})^{d_{ikjl}}\triangleq ({\mathbb {Y}}^{[s_i]})^{\alpha _{ik}}\) is a Laurent differential monomial of order \(s_i\) with exponent vector \(\alpha _{ik}=(d_{ikjl}\,|\,j=1,\ldots ,n;l=0,\ldots ,s_i)\in \mathbb {Z}^{n(s_i+1)}\) and for \(k_1\ne k_2\), \(\alpha _{ik_1}\ne \alpha _{ik_2}\). Here \({\mathbb {Y}}^{[s_i]}=\{y_{j}^{(l)}\,| j=1,\ldots ,n;l=0,\ldots ,s_i\}\). Consider \(n+1\) generic Laurent differential polynomials defined over \(\mathcal {A}_i\,(i=0,1,\ldots ,n)\):
$$\begin{aligned} {\mathbb {P}}_i=\sum \limits _{k=0}^{l_i}u_{ik} M_{ik}\,(i=0,\ldots ,n), \end{aligned}$$
(3)
where all the \(u_{ik}\) are differentially independent over \({\mathbb Q}\). The set of exponent vectors \({\mathbb {S}}_i =\{ \alpha _{ik}\,|\, k=0,\ldots ,l_i\}\) is called the support of \({\mathbb {P}}_i\). The number \(|{\mathbb {S}}_i| = l_i +1\) is called the size of \({\mathbb {P}}_i\). Note that \(s_i\) is the order of \({\mathbb {P}}_i\) and an exponent vector of \({\mathbb {P}}_i\) contains \(n(s_i+1)\) elements. Denote
$$\begin{aligned} {\mathbf {u}}_i=(u_{i0},u_{i1},\ldots ,u_{il_i})\,(i=0,\ldots ,n) \hbox { and } {\mathbf {u}}=\{u_{ik}\,|\,i=0,\ldots ,n;k=1,\ldots ,l_i\}. \end{aligned}$$
(4)
To avoid triviality, we always assume \(l_i\ge 1\,(i=0,\ldots ,n)\) in this paper.

Definition 3.6

A set of Laurent differential polynomials of the form (3) is said to be a Laurent differentially essential system if there exist \(k_i\,(i=0,\ldots ,n)\) with \(1\le k_i\le l_i\) such that \(\hbox {{d.tr.deg}}\,\mathbb {Q}\langle \frac{M_{0k_0}}{M_{00}},\frac{M_{1k_1}}{M_{10}},\ldots ,\frac{M_{nk_n}}{M_{n0}}\rangle /\mathbb {Q}=n.\) In this case, we also say that \(\mathcal {A}_0,\ldots ,\mathcal {A}_n\) or \({\mathbb {S}}_0,\ldots ,{\mathbb {S}}_n\) form a Laurent differentially essential system.

Although \(M_{i0}\) are used as denominators to define Laurent differentially essential systems, the following lemma shows that the definition does not depend on the choice of \(M_{i0}\).

Lemma 3.7

The following two conditions are equivalent.
1. There exist \(k_0,\ldots ,k_n\) with \(1\le k_i\le l_i\) such that \(\mathrm{d.tr.deg}\,\mathbb {Q}\langle \frac{M_{0k_0}}{M_{00}},\ldots ,\frac{M_{nk_n}}{M_{n0}}\rangle /\mathbb {Q}=n.\)

2. There exist pairs \((k_i,j_i)\,(i=0,\ldots ,n)\) with \(k_i\ne j_i\in \{0,\ldots ,l_i\}\) such that \(\mathrm{d.tr.deg}\,\mathbb {Q}\langle \frac{M_{0k_0}}{M_{0j_0}},\ldots ,\frac{M_{nk_n}}{M_{nj_n}}\rangle /\mathbb {Q}=n.\)

Proof

It is trivial that 1) implies 2). For the other direction, assume 2) holds. Without loss of generality, suppose \(\frac{M_{1k_1}}{M_{1j_1}},\ldots ,\frac{M_{nk_n}}{M_{nj_n}} \) are differentially independent over \(\mathbb {Q}\). We need to show that 1) holds. Suppose the contrary; then for any \(m_i\in \{1,\ldots ,l_{i}\}\), \(\frac{M_{1m_1}}{M_{10}},\ldots ,\frac{M_{nm_n}}{M_{n0}}\) are differentially dependent over \({\mathbb Q}\). We now claim \((*)\): if, for each \(i\in \{1,2\}\), \(a\) and \(b_i\) are differentially dependent over \(\mathbb {Q}\), then \(a\) and \(b_1/b_2\) are differentially dependent over \(\mathbb {Q}\). Indeed, if \(a\) is differentially algebraic over \(\mathbb {Q}\), then \((*)\) follows. If \(a\) is differentially transcendental over \(\mathbb {Q}\), then each \(b_i\) is differentially algebraic over \(\mathbb {Q}\langle a\rangle \). Thus, \(b_1/b_2\) is differentially algebraic over \(\mathbb {Q}\langle a\rangle \) [29, p. 102] and the claim is proved. Since \(\frac{M_{ik_i}}{M_{ij_i}}=\frac{M_{ik_i}}{M_{i0}}\big /\frac{M_{ij_i}}{M_{i0}}\), by claim \((*)\), \(\frac{M_{ik_i}}{M_{ij_i}}\,(i=1,\ldots ,n)\) are differentially dependent over \({\mathbb Q}\), which leads to a contradiction. \(\square \)

Suppose the norm form of \({\mathbb {P}}_i\) has the following form:
$$\begin{aligned} {\mathbb {P}}_i^\text {N}=M_i{\mathbb {P}}_i=\sum \limits _{k=0}^{l_i}u_{ik} N_{ik}\quad \,(i=0,\ldots ,n). \end{aligned}$$
(5)
Clearly, \(N_{ik}=M_iM_{ik}\), \(\frac{M_{ik}}{M_{i0}}=\frac{N_{ik}}{N_{i0}}\). Suppose \(\mathbb {m}\) is the set of all differential monomials in \({\mathbb {Y}}\). Let
$$\begin{aligned} \mathcal {I}_{{\mathbb {Y}}^{\pm },{\mathbf {u}}}&= ([{\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n])_{\mathbb {Q}\{{\mathbb {Y}}^{\pm };{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}}\end{aligned}$$
(6)
$$\begin{aligned} \mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}&= ([{\mathbb {P}}_0^\text {N},\ldots ,{\mathbb {P}}_n^\text {N}]\hbox {{:}}\mathbb {m})_{\mathbb {Q}\{{\mathbb {Y}};{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}}. \end{aligned}$$
(7)
By Lemma 3.3, \(\mathcal {I}_{{\mathbb {Y}}^{\pm },{\mathbf {u}}}\) corresponds to \(\mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\) in a unique way. Moreover, we have

Lemma 3.8

\(\mathcal {I}_{{\mathbb {Y}}^{\pm },{\mathbf {u}}}\cap \mathbb {Q}\{{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}=\mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\cap \mathbb {Q}\{{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}\).

Proof

It is obvious that the right elimination ideal is contained in the left one. For the other direction, let \(G\) be any element in the left ideal. Then there exist \(H_{ij}\in \mathbb {Q}\{{\mathbb {Y}}^{\pm };{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}\) such that \(G=\sum _{i,j}H_{ij}{\mathbb {P}}_i^{(j)}\). So \(G=\sum _{i,j}H_{ij}\big (\frac{{\mathbb {P}}_i^{\text {N}}}{M_i}\big )^{(j)}=\sum _{i,j}\widetilde{H}_{ij}\big ({\mathbb {P}}_i^{\text {N}}\big )^{(j)}\) with \(\widetilde{H}_{ij}\in \mathbb {Q}\{{\mathbb {Y}}^{\pm };{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}\). Thus, there exists an \(M\in \mathbb {m}\) such that \(MG\in [{\mathbb {P}}_0^{\text {N}},\ldots ,{\mathbb {P}}_n^{\text {N}}]_{\mathbb {Q}\{{\mathbb {Y}},{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}}\) and \(G\in \mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\cap \mathbb {Q}\{{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}\) follows. \(\square \)

By Lemma 3.8, we can safely define
$$\begin{aligned} \mathcal {I}_{\mathbf {u}}=\mathcal {I}_{{\mathbb {Y}}^{\pm },{\mathbf {u}}}\cap \mathbb {Q}\{{\mathbf {u}}_0,\ldots , {\mathbf {u}}_n\}=\mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\cap \mathbb {Q}\{{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}. \end{aligned}$$
(8)
Let \(\eta =(\eta _1,\ldots ,\eta _n)\) be a generic point of \([0]_{{\mathbb Q}\langle {\mathbf {u}}\rangle \{{\mathbb {Y}}\}}\), where \({\mathbf {u}}\) is defined in (4). Let
$$\begin{aligned} \zeta _i&= -\sum _{k=1}^{l_i}u_{ik}\frac{N_{ik} (\eta )}{N_{i0}(\eta )}\quad \,\,(i=0,1,\ldots ,n)\nonumber \\ \zeta&= (\zeta _0,u_{01},\ldots ,u_{0l_0};\ldots ; \zeta _n,u_{n1},\ldots ,u_{nl_n})\\ \theta&= (\eta ,\zeta )=(\eta ;\zeta _0,u_{01},\ldots ,u_{0l_0}; \ldots ;\zeta _n,u_{n1},\ldots ,u_{nl_n}).\nonumber \end{aligned}$$
(9)
In this paper, when talking about prime differential ideals, it is assumed that they are distinct from the unit differential ideal. The following result is the foundation for defining the sparse differential resultant.

Theorem 3.9

Let \({\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\) be Laurent differential polynomials defined in (3). Then the following assertions hold.
1)

\(\mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\) is a prime differential ideal in \({\mathbb Q}\{{\mathbb {Y}};{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}\) with \(\theta \) given in (9) as a generic point.

2)

The prime differential ideal \(\mathcal {I}_{\mathbf {u}}=\mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\cap \mathbb {Q}\{{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}\) is of codimension one if and only if \({\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\) form a Laurent differentially essential system.

Proof

To prove 1), it suffices to show that \(\theta =(\eta ;\zeta )\) is a generic point of \(\mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\). Clearly, \({\mathbb {P}}_i^{\text {N}}=M_i{\mathbb {P}}_i\) vanishes at \(\theta \) \((i=0,\ldots ,n)\). For any \(f\in \mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\), there exists an \(M\in \mathbb {m}\) such that \(Mf\in [{\mathbb {P}}_0^{\text {N}},{\mathbb {P}}_1^{\text {N}},\ldots ,{\mathbb {P}}_n^{\text {N}}]_{{\mathbb Q}\{{\mathbb {Y}},{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}}\). It follows that \(f(\theta )=0\). Conversely, let \(f\) be any differential polynomial in \({\mathbb Q}\{{\mathbb {Y}},{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}\) satisfying \(f(\theta )=0\). Clearly, \({\mathbb {P}}_0^{\text {N}},{\mathbb {P}}_1^{\text {N}},\ldots ,{\mathbb {P}}_n^{\text {N}}\) constitute an auto-reduced set with \(u_{i0}\) as leaders. Let \(f_1\) be the differential remainder of \(f\) w.r.t. this auto-reduced set. Since \({\mathbb {P}}_i^\text {N}\) is linear in \(u_{i0}\), \(f_1\) is free from \(u_{i0}\,(i=0,\ldots ,n)\). By (2), there exist \(k_i\ge 0\) such that \(\prod _{i=0}^n(N_{i0})^{k_i}\cdot f\equiv f_1,\mathrm{{mod}}\,[{\mathbb {P}}_0^{\text {N}},{\mathbb {P}}_1^{\text {N}},\ldots ,{\mathbb {P}}_n^{\text {N}}]\). Hence, \(f_1(\theta )=0\). Since \(f_1\in {\mathbb Q}\{{\mathbf {u}},{\mathbb {Y}}\}\), \(f_1(\theta )=f_1(\eta ,{\mathbf {u}})=0\) means \(f_1=0\). Thus, \(f\in \mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\). So \(\mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\) is a prime differential ideal with \(\theta \) as its generic point.

Consequently, \(\mathcal {I}_{\mathbf {u}}=\mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\cap {\mathbb Q}\{{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}\) is a prime differential ideal with a generic point \(\zeta =(\zeta _0,u_{01},\ldots ,u_{0l_0};\ldots ;\zeta _n,u_{n1},\ldots ,u_{nl_n})\). From (9), it is clear that \(\hbox {{d.tr.deg}}\,{\mathbb Q}\langle \zeta \rangle /{\mathbb Q}\) \(\le \sum _{i=0}^nl_i+n\). Suppose \({\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\) form a Laurent differentially essential system, that is, there exist pairs \((i_k,j_k)\)  \((k=1,\ldots ,n)\) with \(1\le j_k\le l_{i_k}\) and \(i_{k_1}\ne i_{k_2}\) \((k_1\ne k_2)\) such that \(\frac{N_{i_1j_1}}{N_{i_10}},\ldots ,\frac{N_{i_nj_n}}{N_{i_n0}}\) are differentially independent over \({\mathbb Q}\). Then by Lemma 2.1, \(\zeta _{i_1},\ldots ,\zeta _{i_n}\) are differentially independent over \({\mathbb Q}\langle {\mathbf {u}}\rangle \). For if not, by specializing \(u_{i_kj_k}\) to \(-1\) and the other \({\mathbf {u}}\) to \(0\), Lemma 2.1 guarantees that \(\frac{N_{i_1j_1}}{N_{i_10}},\ldots ,\frac{N_{i_nj_n}}{N_{i_n0}}\) are differentially dependent over \({\mathbb Q}\), a contradiction. Then it follows that \(\hbox {{d.tr.deg}}\,{\mathbb Q}\langle \zeta \rangle /{\mathbb Q}= \sum _{i=0}^nl_i+n\). Thus, \(\mathcal {I}_{\mathbf {u}}\) is of codimension 1.

Conversely, assume that \(\mathcal {I}_{\mathbf {u}}\) is of codimension 1. That is, \(\hbox {{d.tr.deg}}\,{\mathbb Q}\langle \zeta \rangle /{\mathbb Q}= \sum _{i=0}^nl_i+n\). We need to show that there exist pairs \((i_k,j_k)\)  \((k=1,\ldots ,n)\) with \(1\le j_k\le l_{i_k}\) and \(i_{k_1}\ne i_{k_2}\) \((k_1\ne k_2)\) such that \(\frac{N_{i_1j_1}}{N_{i_10}},\ldots ,\frac{N_{i_nj_n}}{N_{i_n0}}\) are differentially independent over \({\mathbb Q}\). Suppose the contrary, i.e., \(\frac{N_{i_1j_1}(\eta )}{N_{i_10}(\eta )},\ldots ,\frac{N_{i_nj_n}(\eta )}{N_{i_n0}(\eta )}\) are differentially dependent for any \(n\) different \(i_k\) and \(j_k\in \{1,\ldots ,l_{i_k}\}\). Since each \(\zeta _{i_k}\) is a linear combination of \(\frac{N_{i_kj_k}(\eta )}{N_{i_k0}(\eta )}\) \((j_k=1,\ldots ,l_{i_k})\), it follows that \(\zeta _{i_1},\ldots ,\zeta _{i_n}\) are differentially dependent over \({\mathbb Q}\langle {\mathbf {u}}\rangle \). So \(\hbox {{d.tr.deg}}\,{\mathbb Q}\langle \zeta \rangle /{\mathbb Q}< \sum _{i=0}^nl_i+n\), a contradiction. \(\square \)

Now suppose \(\{{\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\}\) is a Laurent differentially essential system. By Theorem 3.9, \(\mathcal {I}_{\mathbf {u}}\) is a prime differential ideal of codimension one. By Lemma 2.3, there exists an irreducible differential polynomial \({\mathbf {R}}({\mathbf {u}}_{0},\ldots ,{\mathbf {u}}_{n})\) \(\in {\mathbb Q}\{{\mathbf {u}}_{0},\ldots ,{\mathbf {u}}_{n}\}\) such that
$$\begin{aligned} \mathcal {I}_{{\mathbf {u}}}=\mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\cap {\mathbb Q}\{{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}= \mathrm{sat}({\mathbf {R}})\end{aligned}$$
(10)
where \(\mathrm{sat}({\mathbf {R}})\) is the saturation ideal of \({\mathbf {R}}\). More explicitly, \(\mathrm{sat}({\mathbf {R}})\) is the whole set of differential polynomials having zero differential remainders w.r.t. \({\mathbf {R}}\) under any ranking endowed on \({\mathbf {u}}_{0},\ldots ,{\mathbf {u}}_{n}\). So among all the differential polynomials in \(\mathcal {I}_{\mathbf {u}}\), \({\mathbf {R}}\) is of minimal order in each \({\mathbf {u}}_i\) provided that \({\mathbf {u}}_i\) effectively appears in \({\mathbf {R}}\).

Now the definition of sparse differential resultant is given as follows:

Definition 3.10

\({\mathbf {R}}({\mathbf {u}}_{0},\ldots ,{\mathbf {u}}_{n})\in {\mathbb Q}\{{\mathbf {u}}_{0},\ldots ,{\mathbf {u}}_{n}\}\) in (10) is defined to be the sparse differential resultant of the Laurent differentially essential system \({\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\), denoted by \(\hbox {Res}_{\mathcal {A}_0,\ldots ,\mathcal {A}_n}\) or \(\hbox {Res}_{{\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n}\). And when all the \(\mathcal {A}_i\) are equal to the same \(\mathcal {A}\), we simply denote it by \(\hbox {Res}_\mathcal {A}\).

From the proof of Theorem 3.9 and Eq. (10), \({\mathbf {R}}\) has the following useful properties.

Corollary 3.11

\(\mathcal {I}_{\mathbf {u}}=\mathrm{sat}({\mathbf {R}})\) is a prime differential ideal in \( {\mathbb Q}\{{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}\) with a generic zero \(\zeta \), where \(\zeta \) is defined in (9).

By changing variable order, \({\mathbf {R}}\) can be treated as a differential polynomial in \({\mathbf {u}},u_{00},\ldots ,u_{n0}\):
$$\begin{aligned} {\mathbf {R}}({\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n)= {\mathbf {R}}({\mathbf {u}};u_{00},\ldots ,u_{n0}), \end{aligned}$$
where \({\mathbf {u}}\) is given in (4). Then, we have the following more useful form of Corollary 3.11.

Corollary 3.12

\(\mathcal {I}_{\mathbf {u}}=\mathrm{sat}({\mathbf {R}})\) is a prime differential ideal in \({\mathbb Q}\{{\mathbf {u}},u_{00},\ldots ,u_{n0}\}\) with a generic zero \(\zeta =({\mathbf {u}},\zeta _0,\ldots ,\zeta _n)\), where \(\zeta _i\) is defined in (9).

Denote \({\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_i)\) to be the maximal order of \({\mathbf {R}}\) in \(u_{ik}\,(k=0,\ldots ,l_i)\), that is, \({\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_i)=\mathrm{max}_{k}\,{\mathrm{ord}}({\mathbf {R}},u_{ik})\). If \({\mathbf {u}}_i\) does not occur in \({\mathbf {R}}\), then set \({\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_i)=-\infty \).

Let \(h_i={\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_i)\). By Corollary 3.12, \({\mathbf {R}}({\mathbf {u}};\zeta _{0},\zeta _{1},\ldots ,\zeta _{n})=0\). Differentiating both sides of the equality \({\mathbf {R}}({\mathbf {u}};\zeta _{0},\zeta _{1},\ldots ,\zeta _{n})=0\) w.r.t. \(u_{ik}^{(h_i)}\,(k=1,\ldots ,l_i)\), we have
$$\begin{aligned} \overline{\frac{\partial {\mathbf {R}}}{\partial u_{ik}^{(h_i)}}}+ \overline{\frac{\partial {\mathbf {R}}}{\partial u_{i0}^{(h_i)}}}(-\frac{N_{ik}(\eta )}{N_{i0}(\eta )})=0 \end{aligned}$$
(11)
where \(\overline{\frac{\partial {\mathbf {R}}}{\partial u_{ik}^{(h_i)}}}=\frac{\partial {\mathbf {R}}}{\partial u_{ik}^{(h_i)}}({\mathbf {u}};\zeta _{0},\zeta _{1},\ldots ,\zeta _{n})\). Equation (11) is frequently used in the rest of the paper.

Corollary 3.13

For each \(i\), if \({\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_i)=h_i\ge 0\), then \({\mathrm{ord}}({\mathbf {R}},u_{ik})=h_i\,(k=0,\ldots ,l_i)\).

Proof

Firstly, we claim that \({\mathrm{ord}}({\mathbf {R}},u_{i0})=h_i\). For if not, suppose \({\mathrm{ord}}({\mathbf {R}},u_{ik})=h_i\ge 0\) for some \(k\ne 0\). By (11), \(\frac{\partial {\mathbf {R}}}{\partial u_{ik}^{(h_i)}}({\mathbf {u}};\zeta _0,\ldots ,\zeta _n)=0\), where \( \zeta _i\) are defined in (9). By Corollary 3.12, we have \(\frac{\partial {\mathbf {R}}}{\partial u_{ik}^{(h_i)}}\in \mathrm{sat}({\mathbf {R}})\), a contradiction since \({\mathbf {R}}\) is irreducible. Thus, \({\mathrm{ord}}({\mathbf {R}},u_{i0})=h_i\). For each \(k\ne 0\), \({\mathrm{ord}}({\mathbf {R}},u_{ik})\le h_i\). If \({\mathrm{ord}}({\mathbf {R}},u_{ik})< h_i\), by (11), we have \(\frac{\partial {\mathbf {R}}}{\partial u_{i0}^{(h_i)}}({\mathbf {u}};\zeta _0,\ldots ,\zeta _n)\cdot (-\frac{N_{ik}(\eta )}{N_{i0}(\eta )})=0\). So \(\frac{\partial {\mathbf {R}}}{\partial u_{i0}^{(h_i)}}({\mathbf {u}};\zeta _0,\ldots ,\zeta _n)=0\) and \(\frac{\partial {\mathbf {R}}}{\partial u_{i0}^{(h_i)}}\in \mathrm{sat}({\mathbf {R}})\), a contradiction. Thus, for each \(k=0,\ldots ,l_i\), \({\mathrm{ord}}({\mathbf {R}},u_{ik})=h_i\). \(\square \)

Corollary 3.14

For \(i=1,\ldots ,n\) and \(k\in {\mathbb N}\), \(y_i^{(k)}\not \in \mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\).

Proof

Assume the contrary, that \(y_i^{(k)}\in \mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\). Since \(\theta \) in (9) is a generic point of \(\mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\), we have \(\eta _i^{(k)}=0\), which contradicts the fact that \(\eta =(\eta _1,\ldots ,\eta _n)\) is a generic point of \([0]_{\mathbb {Q}\langle {\mathbf {u}}\rangle \{{\mathbb {Y}}\}}\). \(\square \)

Remark 3.15

Due to Lemma 3.8, the sparse differential resultant can also be defined as follows: \(\mathcal {I}_{{\mathbb {Y}}^{\pm },{\mathbf {u}}}\cap {\mathbb Q}\{{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}= \mathrm{sat}({\mathbf {R}})\). Although the sparse differential resultant is defined for Laurent differential polynomials \({\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\), it is more convenient to prove its properties using \({\mathbb {P}}_0^\text {N},\ldots ,{\mathbb {P}}_n^\text {N}\) instead of \({\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\), since \({\mathbb {P}}_i^\text {N}\) are differential polynomials, and we can thus use results from differential algebra freely.

Remark 3.16

The sparse differential resultant can be computed with characteristic set methods for differential polynomials [3, 8, 25, 42, 47], which is implemented in the diffalg package of Maple. In Sect. 6, we will give an algorithm to compute the sparse differential resultant, which has a better complexity bound.

We give five examples that will be used throughout the paper.

Example 3.17

Let \(n=2\) and \({\mathbb {P}}_i\) of the form
$$\begin{aligned} {\mathbb {P}}_i=u_{i0}y_1''+u_{i1}y_1'''+u_{i2}y_2'''\,(i=0,1,2). \end{aligned}$$
It is easy to show that \(y_1'''/y_1''\) and \(y'''_2/y''_1\) are differentially independent over \({\mathbb Q}\). Thus, \({\mathbb {P}}_0,{\mathbb {P}}_1,{\mathbb {P}}_2\) form a Laurent differentially essential system. The sparse differential resultant is
$$\begin{aligned}{\mathbf {R}}= \hbox {Res}_{{\mathbb {P}}_0,{\mathbb {P}}_1,{\mathbb {P}}_2}=\left| \begin{array}{l@{\quad }l@{\quad }l} u_{00}&{}u_{01}&{}u_{02}\\ u_{10}&{}u_{11}&{}u_{12}\\ u_{20}&{}u_{21}&{}u_{22} \end{array} \right| . \end{aligned}$$
Indeed, since \(y_2'''{\mathbf {R}}=(u_{10}u_{21}-u_{20}u_{11}){\mathbb {P}}_0-(u_{00}u_{21}-u_{20}u_{01}){\mathbb {P}}_1+(u_{00}u_{11}-u_{01}u_{10}){\mathbb {P}}_2\), \({\mathbf {R}}\) is an irreducible differential polynomial in \(([{\mathbb {P}}_0,{\mathbb {P}}_1,{\mathbb {P}}_2]\hbox {{:}}\mathbb {m})_{\mathbb {Q}\{{\mathbb {Y}};{\mathbf {u}}_0,{\mathbf {u}}_1,{\mathbf {u}}_2\}}\) with minimal order in each \({\mathbf {u}}_i\). Note that \({\mathbf {R}}\) does not belong to the differential ideal generated by \({\mathbb {P}}_i\) in \({\mathbb Q}\{{\mathbb {Y}};{\mathbf {u}}_0,{\mathbf {u}}_1,{\mathbf {u}}_2\}\) because each \({\mathbb {P}}_i\) is homogeneous in \(y_1'',y_1''',y_2'''\) and \({\mathbf {R}}\) does not involve \({\mathbb {Y}}\). That is why we use the ideal \(([{\mathbb {P}}_0,{\mathbb {P}}_1,{\mathbb {P}}_2]\hbox {{:}}\mathbb {m})_{\mathbb {Q}\{{\mathbb {Y}};{\mathbf {u}}_0,{\mathbf {u}}_1,{\mathbf {u}}_2\}}\) rather than \([{\mathbb {P}}_0,{\mathbb {P}}_1,{\mathbb {P}}_2]_{\mathbb {Q}\{{\mathbb {Y}};{\mathbf {u}}_0,{\mathbf {u}}_1,{\mathbf {u}}_2\}}\) in Theorem 3.9. Of course, \({\mathbf {R}}\) does belong to \([{\mathbb {P}}_0,{\mathbb {P}}_1,{\mathbb {P}}_2]_{\mathbb {Q}\{{\mathbb {Y}}^{\pm };{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}}\), for we have the expression \({\mathbf {R}}={(u_{10}u_{21}-u_{20}u_{11})}/{y_2'''}\cdot {\mathbb {P}}_0-(u_{00}u_{21}-u_{20}u_{01})/{y_2'''}\cdot {\mathbb {P}}_1+(u_{00}u_{11}-u_{01}u_{10})/{y_2'''}\cdot {\mathbb {P}}_2\).
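
As an illustration (not part of the original development), the identity above can be verified by a short sympy computation; the symbols \(a,b,c\) below are ad hoc stand-ins for \(y_1'',y_1''',y_2'''\).

```python
# Minimal sympy check of Example 3.17: with a, b, c standing for y1'', y1''', y2''',
# verify  y2'''*R = (u10*u21-u20*u11)*P0 - (u00*u21-u20*u01)*P1 + (u00*u11-u01*u10)*P2.
import sympy as sp

u00, u01, u02, u10, u11, u12, u20, u21, u22 = sp.symbols(
    'u00 u01 u02 u10 u11 u12 u20 u21 u22')
a, b, c = sp.symbols('a b c')   # stand-ins for y1'', y1''', y2'''

P0 = u00*a + u01*b + u02*c
P1 = u10*a + u11*b + u12*c
P2 = u20*a + u21*b + u22*c
R = sp.Matrix([[u00, u01, u02], [u10, u11, u12], [u20, u21, u22]]).det()

combo = (u10*u21 - u20*u11)*P0 - (u00*u21 - u20*u01)*P1 + (u00*u11 - u01*u10)*P2
assert sp.expand(combo - c*R) == 0
```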

The following example shows that for a Laurent differentially essential system, its sparse differential resultant may not involve the coefficients of some \({\mathbb {P}}_i\).

Example 3.18

Let \(n=2\) and \({\mathbb {P}}_i\) of the form
$$\begin{aligned} {\mathbb {P}}_0=u_{00}+u_{01}y_1y_1',\, {\mathbb {P}}_1=u_{10}+u_{11}y_1,\, {\mathbb {P}}_2=u_{20}+u_{21}y'_2. \end{aligned}$$
Clearly, \({\mathbb {P}}_0,{\mathbb {P}}_1,{\mathbb {P}}_2\) form a Laurent differentially essential system. And the sparse differential resultant of \({\mathbb {P}}_0,{\mathbb {P}}_1,{\mathbb {P}}_2\) is
$$\begin{aligned} {\mathbf {R}}=u_{01}u_{10}(u_{11}u_{10}'-u_{10}u_{11}')+u_{00}u_{11}^3, \end{aligned}$$
for \({\mathbf {R}}=u_{01}u_{11}u_{10}'{\mathbb {P}}_1+u_{01}u_{11}'{\mathbb {P}}_1^2-2u_{01}u_{10}u_{11}'{\mathbb {P}}_1+u_{11}^3{\mathbb {P}}_0 -u_{01}u_{11}{\mathbb {P}}_1{\mathbb {P}}_1'+u_{01}u_{10}u_{11}{\mathbb {P}}_1'\) and \({\mathbf {R}}\) is an irreducible differential polynomial with minimal order in \({\mathbf {u}}_0\). Note that \({\mathbf {R}}\) is free from the coefficients of \({\mathbb {P}}_2.\)
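
The representation of \({\mathbf {R}}\) given above can likewise be checked symbolically; in the following sketch (illustration only), dy1, du10, du11 are stand-in symbols for \(y_1',u_{10}',u_{11}'\).

```python
# Check the expression of R in Example 3.18 as a combination of P0, P1 and P1'.
import sympy as sp

u00, u01, u10, u11, du10, du11 = sp.symbols('u00 u01 u10 u11 du10 du11')
y1, dy1 = sp.symbols('y1 dy1')   # stand-ins for y1 and y1'

P0  = u00 + u01*y1*dy1
P1  = u10 + u11*y1
dP1 = du10 + du11*y1 + u11*dy1   # the derivative P1'

R = u01*u10*(u11*du10 - u10*du11) + u00*u11**3
combo = (u01*u11*du10*P1 + u01*du11*P1**2 - 2*u01*u10*du11*P1
         + u11**3*P0 - u01*u11*P1*dP1 + u01*u10*u11*dP1)
assert sp.expand(combo - R) == 0
```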

Example 3.19

Let \(\mathcal {A}_0=\{\mathbf{1 },y_1y_2\}\), \(\mathcal {A}_1=\{\mathbf{1 },y_1y'_2\}\), and \(\mathcal {A}_2=\{\mathbf{1 },y'_1y'_2\}\). It is easy to verify that \(\mathcal {A}_0,\mathcal {A}_1,\mathcal {A}_2\) form a Laurent differentially essential system. And \(\hbox {Res}_{{\mathcal {A}}_0,{\mathcal {A}}_1,{\mathcal {A}}_2}=u_{10}u_{01}u_{21}u_{11}u'_{00}-u_{10}u_{00}u_{11}u_{21}u'_{01} -u_{01}^2u_{21}u_{10}^2-u_{01}u_{00}u_{11}^2u_{20}.\)

Example 3.20

Let \(n=1\) and \(\mathcal {A}_0=\mathcal {A}_1=\{y_1^2,(y_1')^2,y_1y_1'\}\). Clearly, \(\mathcal {A}_0,\mathcal {A}_1\) form a Laurent differentially essential system and \(\hbox {Res}_{\mathcal {A}}=u_{11}^2u_{00}^2-2u_{01}u_{10}u_{11}u_{00}+u_{01}^2u_{10}^2- u_{12}u_{02}u_{11}u_{00}-u_{12}u_{02}u_{01}u_{10}+u_{12}^2u_{01}u_{00}+u_{10}u_{11}u_{02}^2\).

Example 3.21

Let \(n=1\) and \(\mathcal {A}_0=\mathcal {A}_1=\{y_1,y_1',y_1^2\}\). Clearly, \(\mathcal {A}_0,\mathcal {A}_1\) form a Laurent differentially essential system and \(\hbox {Res}_{\mathcal {A}}=-u_{12}u_{01}u_{00}u_{10} -u_{12}u_{01} ^2u'_{10}+u_{12}u_{01}u'_{11}u_{00} + u_{12}u_{01}u_{11}u'_{00}-u_{11}u_{02}u_{00}u_{10}+u_{11}u_{02}u'_{10}u_{01} + u_{02}u_{01}u_{10}^2-u_{11}^2u_{02}u'_{00}+u_{11}u_{02}u'_{01}u_{10} +u_{11}u_{00}^2u_{12}\) \( + u_{11}^2u'_{02}u_{00}-u_{11}u'_{02}u_{01}u_{10} -u_{11}u_{01}u'_{12}u_{00} + u_{01}^2u'_{12}u_{10} -u_{11}u'_{01}u_{12}u_{00}-u'_{11}u_{02}u_{01}u_{10}. \)

Remark 3.22

When all the \(\mathcal {A}_i\,(i=0,\ldots ,n)\) are sets of differential monomials, as in the above examples, we always consider \({\mathbb {P}}_i\) as Laurent differential polynomials unless explicitly mentioned otherwise. Occasionally in this paper we regard \({\mathbb {P}}_i\) as ordinary differential polynomials; this will be indicated explicitly.

We now define the sparse differential resultant for any set of specific Laurent differential polynomials over a Laurent differentially essential monomial system. For any finite set \(\mathcal {A}\) of Laurent differential monomials, denote by \(\mathcal {L}(\mathcal {A})\) the set of all Laurent differential polynomials of the form \(\sum _{M\in \mathcal {A}}a_{M}M\) where \(a_{M}\in \mathcal {E}\). Then \(\mathcal {L}(\mathcal {A})\) can be considered as the affine space \(\mathcal {E}^l\) or the projective space \(\mathbf P (l-1)\) over \(\mathcal {E}\) where \(l=|\mathcal {A}|\).

Definition 3.23

Let \(\mathcal {A}_i=\{M_{i0},M_{i1},\ldots ,M_{il_i}\}\,(i=0,\ldots ,n)\) be finite sets of Laurent differential monomials which form a Laurent differentially essential system. Consider \(n+1\) Laurent differential polynomials \((F_0,\ldots ,F_n)\in \prod _{i=0}^n\mathcal {L}(\mathcal {A}_i)\). The sparse differential resultant of \(F_0,\ldots ,F_n\), denoted as \(\hbox {Res}_{F_0,\ldots ,F_n}\), is obtained by replacing \({\mathbf {u}}_i\) by the corresponding coefficient vector of \(F_i\) in \(\hbox {Res}({\mathbf {u}}_{0},\ldots ,{\mathbf {u}}_{n})\) which is the sparse differential resultant of the \(n+1\) generic Laurent differential polynomials in (3).

We will show in Sect. 5.1 that the vanishing of the sparse differential resultant \(\hbox {Res}_{F_0,\ldots ,F_n}\) gives a necessary condition, and in a suitable sense also a sufficient one, for the overdetermined system \(F_i=0\,(i=0,\ldots ,n)\) to have a common non-polynomial solution.

4 Criterion for Laurent Differentially Essential System in Terms of Supports

Let \({\mathcal A}_i\,(i=0,\ldots ,n)\) be finite sets of Laurent differential monomials. According to Definition 3.6, in order to check whether they form a Laurent differentially essential system, we need to check whether there exist \(M_{ik_i}, M_{ij_i}\in {\mathcal A}_i (i=0,\ldots ,n)\) such that \(\hbox {{d.tr.deg}}\, {\mathbb Q}\langle M_{0k_0}/M_{0j_0},\) \(\ldots ,M_{nk_n}/M_{nj_n}\rangle /{\mathbb Q}=n\). This can be done with the differential characteristic set method via symbolic computation [3, 17, 25]. In this section, a criterion will be given to check whether a Laurent differential system is essential in terms of its supports, which is conceptually and computationally simpler than the naive approach based on the characteristic set method.

4.1 Laurent Differential Monomials in Reduced and T-shape Forms

In this section, the differential transcendence degree of a set of Laurent differential monomials over \(\mathbb {Q}\) is shown to be equal to the rank of a certain matrix. The idea is to transform a Laurent differential monomial set to a standard form called T-shape whose differential transcendence degree is easy to compute.

Let \(B_i = \prod _{j=1}^n\prod _{k=0}^{q_j} (y_{j}^{(k)})^{t_{ijk}}\,(i=1,\ldots ,m)\) be \(m\) Laurent differential monomials, where \(q_j\) is the maximal order of \(B_1,\ldots ,B_m\) in \(y_j\). Let \(x_1,\ldots ,x_n\) be new algebraic indeterminates and
$$\begin{aligned} d_{ij} = \sum _{k=0}^{q_j} t_{ijk}x_j^{k}\in \mathbb {Z}[x_j] \quad (i=1,\ldots ,m ,j=1,\ldots ,n). \end{aligned}$$
If \({\mathrm{ord}}(B_i,y_j) = -\infty \), then set \(d_{ij}=0\) and \(\mathrm{deg}(d_{ij},x_j) = -\infty \). The vector \((d_{i1},d_{i2},\) \(\ldots ,d_{in})\) is called the symbolic support vector of \(B_i\). The following \(m\times n\) matrix
$$\begin{aligned} \mathrm{{D}}=\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} d_{11} &{} \,d_{12} &{} \,\ldots &{} \,d_{1n} \\ d_{21} &{} \,d_{22} &{} \,\ldots &{} \,d_{2n} \\ \vdots &{} \,\vdots &{} \,\ddots &{} \,\vdots \\ d_{m1} &{} \,d_{m2} &{} \,\ldots &{} \,d_{mn} \end{array}\right) \end{aligned}$$
is called the symbolic support matrix of \(B_1,\ldots , B_m\).

Note that there is a one-to-one correspondence between Laurent differential monomials and their symbolic support vectors, so we will not distinguish these two concepts whenever there is no confusion. The same is true for a set of Laurent differential monomials and its symbolic support matrix.

Definition 4.1

A set of Laurent differential monomials \(B_1, B_2, \ldots , B_m\) or its symbolic support matrix \(\mathrm{{D}}\) is called reduced if for each \(i\le \min (m,n)\), \(-\infty \ne {\mathrm{ord}}(B_i,y_i) > {\mathrm{ord}}(B_{i+k},y_i)\), or equivalently \(-\infty \ne \mathrm{deg}(d_{ii},x_i)>\mathrm{deg}(d_{i+k,i},x_i)\), holds for all \(k>0\).

Note that a reduced symbolic support matrix is always of full rank. The reason is that the monomial \(\prod _{i=1}^{\min (m,n)}\) \(x_i^{{\mathrm{ord}}(B_i,y_i)}\) appears effectively in the \(\min (m,n)\)th leading principal minor of \(\mathrm{{D}}\) when it is expanded.

Example 4.2

Let \(B_1 = y_{1}^2y''_{1}y'_{2}y_4, B_2 = y_{1}^3(y'_{2})^2y_{3}(y'_{3})^2, B_3 = y'_{1}y'_{3}y'_4\). Then \(q_1 = 2, q_2 = 1, q_3 = 1, q_4=1\), and
$$\begin{aligned} \mathrm{{D}}=\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} x_1^2 + 2 &{} x_2 &{} 0 &{} 1 \\ 3 &{} 2x_2 &{} 2x_3+1 &{} 0 \\ x_1 &{} 0 &{} x_3 &{}x_4 \end{array}\right) \end{aligned}$$
is reduced and is of full row rank.
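
As an illustration (and not the Algorithm TSHAPE of the "Appendix"), the matrix \(\mathrm{{D}}\) above can be assembled from the exponent data and its rank certified by a nonzero \(3\times 3\) minor; the sympy sketch below does this, with all data structures being ad hoc choices.

```python
# Build the symbolic support matrix of Example 4.2 and certify that its rank is 3.
import sympy as sp

x = sp.symbols('x1 x2 x3 x4')

# each monomial: {variable index j (0-based): {order k: exponent t_ijk}}
B1 = {0: {0: 2, 2: 1}, 1: {1: 1}, 3: {0: 1}}       # y1^2 * y1'' * y2' * y4
B2 = {0: {0: 3}, 1: {1: 2}, 2: {0: 1, 1: 2}}       # y1^3 * (y2')^2 * y3 * (y3')^2
B3 = {0: {1: 1}, 2: {1: 1}, 3: {1: 1}}             # y1' * y3' * y4'

def support_vector(B, n=4):
    return [sum(t*x[j]**k for k, t in B.get(j, {}).items()) for j in range(n)]

D = sp.Matrix([support_vector(B) for B in (B1, B2, B3)])
print(D)                                  # matches the matrix displayed above
print(sp.expand(D[:, :3].det()))          # a nonzero 3x3 minor, so rank(D) = 3
```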

Before giving the property of reduced symbolic support matrices, the following simple result about the differential transcendence degree is needed.

Lemma 4.3

For \(\eta _1,\eta _2\) in an extension field of \(\mathbb {Q}\), \(\mathrm{d.tr.deg}\,\mathbb {Q}\langle \eta _1^{a_1},\eta _1^{a_2}\eta _2 \rangle /\mathbb {Q}=\mathrm{d.tr.deg}\,\mathbb {Q}\langle \eta _1,\) \(\eta _2 \rangle /\mathbb {Q}\), where \(a_1,a_2\) are nonzero rational numbers.

Proof

For any \(p\in \mathbb {Z}\setminus \{0\}\), we have
$$\begin{aligned} \begin{array}{lll} \hbox {{d.tr.deg}}\, \mathbb {Q}\langle \eta _1,\eta _2 \rangle /\mathbb {Q}&{} = &{} \hbox {{d.tr.deg}}\, \mathbb {Q}\langle \eta _1,\eta _2 \rangle /\mathbb {Q}\langle \eta _1^p,\eta _2 \rangle + \hbox {{d.tr.deg}}\, \mathbb {Q}\langle \eta _1^p,\eta _2 \rangle /\mathbb {Q}\\ &{} = &{} \hbox {{d.tr.deg}}\,\mathbb {Q}\langle \eta _1^p,\eta _2 \rangle /\mathbb {Q}. \end{array} \end{aligned}$$
So for \(a\in \mathbb {Q}\backslash \{0\}\), \(\hbox {{d.tr.deg}}\, \mathbb {Q}\langle \eta _1^a,\eta _2 \rangle /\mathbb {Q}=\hbox {{d.tr.deg}}\, \mathbb {Q}\langle \eta _1,\eta _2 \rangle /\mathbb {Q}\). Thus, \(\hbox {{d.tr.deg}}\, \mathbb {Q}\langle \eta _1^{a_1},\eta _1^{a_2}\eta _2 \rangle /\mathbb {Q}= \hbox {{d.tr.deg}}\, \mathbb {Q}\langle \eta _1^{a_2},\eta _1^{a_2}\eta _2\rangle /\mathbb {Q}= \hbox {{d.tr.deg}}\,\mathbb {Q}\langle \eta _1, \eta _2 \rangle /\mathbb {Q}\). \(\square \)

The differential transcendence degree of a set of reduced Laurent differential monomials is easy to compute.

Theorem 4.4

Let \(B_1, B_2, \ldots , B_m\) be a set of reduced Laurent differential monomials in \({\mathbb {Y}}\). Then \(\mathrm{d.tr.deg}\,{\mathbb Q}\langle B_1, B_2, \ldots , B_m\rangle /{\mathbb Q}=\min (m,n)\).

Proof

It suffices to prove the case \(m = n\) by the following two facts. In the case \(m>n\), we need only to prove that \(B_1,\ldots , B_n\) are differentially independent. And in the case \(m<n\), we can treat \(y_{m+1},\ldots ,y_n\) as parameters, then \(B_1, B_2, \ldots , B_m\) are still reduced Laurent differential monomials. So if we have proved the result for \(m=n\), \(\hbox {{d.tr.deg}}\, {\mathbb Q}\langle B_1, B_2, \ldots , B_m\rangle /{\mathbb Q}\ge \hbox {{d.tr.deg}}\, {\mathbb Q}\langle y_{m+1},\ldots ,y_n\rangle \langle B_1, B_2, \ldots , B_m\rangle /{\mathbb Q}\langle y_{m+1},\ldots ,y_n\rangle \) \( = m\) follows.

Since \(\{ B_1, B_2, \ldots , B_n\}\) is reduced, we have \(o_i = {\mathrm{ord}}(B_i,y_i)\ge 0\) for \(i\le n\). In this proof, a Laurent differential monomial will be treated as an algebraic Laurent monomial, or simply a monomial. Furthermore, the lex order between two monomials induced by the following variable order will be used.
$$\begin{aligned}&\boxed {y_{1} > y'_{1}> \cdots > y_{1}^{(o_1-1)}}\\&> \boxed {y_{2} > y'_{2}> \cdots > y_{2}^{(o_2-1)}}\\&> \cdots \\&> \boxed {y_{n} > y'_{n} > \cdots > y_{n}^{(o_n-1)} > y_{n}^{(o_n)} > y_{n}^{(o_n+1)} > \cdots }\\&> \boxed { y_{n-1}^{(o_{n-1})} > y_{n-1}^{(o_{n-1}+1)}> \cdots }\\&> \cdots \\&> \boxed {y_{1}^{(o_1)} > y_{1}^{(o_1+1)} > \cdots }. \end{aligned}$$
Under this ordering, we claim that the leading monomial of \(\delta ^t B_i\,( 1\le i \le n, t\in {\mathbb N})\) is \(LM_{it} = \frac{B_i y_{i}^{(o_i+t)}}{y_{i}^{(o_i)}}\). Here by leading monomial, we mean the largest monomial under this ordering that appears effectively in a polynomial. Let \(B_i = N_i (y_{i}^{(o_i)})^{a_{i}}\,(1\le i \le n)\). If \(N_i=1\), then the monomials of \(\delta ^t B_i\) are of the form \(\prod _{k=0}^t(y_{i}^{(o_i+k)})^{s_k}\), where \(s_0,\ldots ,s_t\) are nonnegative integers such that \(\sum _{k=0}^t s_k = a_{i}\) and \(\sum _{k=1}^t ks_k = t\). Among these monomials, if \(s_k>0\) for some \(1\le k\le t-1\), then \(s_0\) is strictly less than \(a_{i}-1\) and \(\prod _{k=0}^t(y_{i}^{(o_i+k)})^{s_k}<(y_{i}^{(o_i)})^{a_{i}-1}y_{i}^{(o_i+t)}=\frac{B_iy_{i}^{(o_i+t)}}{y_{i}^{(o_i)}}\) follows. Hence, in the case \(N_i=1\), the claim holds. Now suppose \(N_i\ne 1\); then it is a product of variables with lex order larger than \(y_i^{(o_i)}\). Then \(\delta ^{t}B_i=\sum _{k=0}^t {t\atopwithdelims ()k}\delta ^{k}N_i\delta ^{t-k}(y_{i}^{(o_i)})^{a_{i}}\). If \(k=0\), then similar to the case \(N_i=1\), we can show that the largest monomial in \( N_i\delta ^{t} (y_{i}^{(o_i)})^{a_{i}}\) is \(N_i(y_{i}^{(o_i)})^{a_{i}-1}y_{i}^{(o_i+t)}\). For each \(k>0\), \(\delta ^{k}N_i<N_i\) and \(\delta ^{k}N_i\delta ^{t-k}(y_{i}^{(o_i)})^{a_{i}}<N_i(y_{i}^{(o_i)})^{a_{i}-1}y_{i}^{(o_i+t)}=\frac{B_iy_{i}^{(o_i+t)}}{y_{i}^{(o_i)}}\). Hence, the leading monomial of \(\delta ^t B_i\) is \(N_i(y_{i}^{(o_i)})^{a_{i}-1}y_{i}^{(o_i+t)} = \frac{B_iy_{i}^{(o_i+t)}}{y_{i}^{(o_i)}}\).
We claim that these leading monomials \(LM_{it} = \frac{B_iy_{i}^{(o_i+t)}}{y_i^{(o_i)}}\,( i=1,\ldots ,m; t\ge 0)\) are algebraically independent over \(\mathbb {Q}\). We prove this claim by showing that the algebraic transcendence degree of these monomials is the same as the number of monomials for any fixed \(t\). Let \(Y_i = [y_{i} , y_{i}' , \ldots , y_{i}^{(o_i-1)}] \), \(Y_i^\star = [ y_{i}^{(o_i+t+1)}, \ldots , y_{i}^{(q_i+t)}]\), \(B_{it} = [B_i, LM_{i1},\ldots , LM_{it}]\) for \(1\le i \le n\). Let \(\widetilde{B}_{it} = [(y_{i}^{(o_i)})^{a_{i}}, (y_{i}^{(o_i)})^{a_{i}-1}y_{i}^{(o_i+1)}, \ldots , (y_{i}^{(o_i)})^{a_{i}-1}y_{i}^{(o_i+t)}]\) for \(1\le i \le n\). Then, by Lemma 4.3, we have
$$\begin{aligned} \begin{array}{lll} n(t+1) &{}\ge &{} \hbox {{tr.deg}}\,{\mathbb Q}(B_{1t},B_{2t},\ldots ,B_{nt})/{\mathbb Q}\\ &{}\ge &{} \hbox {{tr.deg}}\,{\mathbb Q}_1(B_{1t},B_{2t},\ldots ,B_{nt})/\mathbb {Q}_1 \\ &{}=&{} \hbox {{tr.deg}}\,{\mathbb Q}_1(\widetilde{B}_{1t},\widetilde{B}_{2t},\ldots ,\widetilde{B}_{nt})/\mathbb {Q}_1\\ &{}=&{} n(t+1) \end{array} \end{aligned}$$
where \(\mathbb {Q}_1={\mathbb Q}(Y_1,\ldots ,Y_n, Y_1^\star , \ldots , Y_n^\star )\). Hence, this claim is proved.

Now, we prove that \(B_1,\ldots ,B_n\) are differentially independent over \(\mathbb {Q}\). Suppose the contrary; then there exists a nonzero differential polynomial \(P\in \mathbb {Q}\{z_1,\ldots ,z_n\}\) such that \(P(B_1,\ldots ,B_n) = 0\). Let \(P = \sum _k c_k P_k \), where the \(P_k\) are distinct monomials and \(c_k\in {\mathbb Q}\backslash \{0\}\). Then, the leading monomial of \(P_k(B_1,\ldots ,B_n)\) is a product of \(LM_{it}\, (i=1,\ldots ,n; t\ge 0)\). Denote this product by \(LMP_k\); then \(LMP_k \ne LMP_j\) for \(k\ne j\) since these \(LM_{it}\) are algebraically independent. But among the \(LMP_k\) there is exactly one that is largest under the ordering, and it cannot be canceled by the others; hence \(P(B_1,\ldots ,B_n)\ne 0\), a contradiction. \(\square \)

In general, we cannot reduce a symbolic support matrix to a reduced one. We will show that any symbolic support matrix can be reduced to a more general standard form called T-shape to be defined below.

A generalized Laurent differential monomial is a differential monomial with rational numbers as exponents, that is, a monomial of the form \(\prod _{j=1}^n\prod _{k=0}^s (y_{j}^{(k)})^{t_{jk}}\) for \(t_{jk}\in {\mathbb Q}\). Let \(B_1,\ldots ,B_m\) be generalized Laurent differential monomials. Then their symbolic support matrix is \(\mathrm{{D}}=(d_{ij})_{m\times n}\) where \(d_{ij}\in {\mathbb Q}[x_j]\).

Definition 4.5

A set of generalized Laurent differential monomials \(B_1,\ldots ,B_m\) or their symbolic support matrix \(\mathrm{{D}}\) is said to be in T-shape with index \((i,j)\), if there exist \(1\le i\le \min (m,n), 0\le j \le \min (m,n)-i\) such that all elements except those in the first \(i\) rows and the \(i+1,\ldots ,(i+j)\)th columns of \(\mathrm{{D}}\) are zeros and the sub-matrix consisting of the first \(i+j\) columns of \(\mathrm{{D}}\) is reduced.

In Fig. 1, an illustration of a matrix in T-shape is given, where the sub-matrices \(\mathrm{{D}}_1\) and \(\mathrm{{D}}_2\) of the matrix are reduced. It is easy to see that \(\mathrm{{D}}_1\) must be an \(i\times i\) square matrix. Since the first \(i+j\) columns of a T-shape matrix \(\mathrm{{D}}\) are a reduced sub-matrix, we have

Lemma 4.6

The rank of a T-shape matrix with index \((i,j)\) equals \(i+j\). Furthermore, a T-shape matrix is reduced if and only if it is of full rank, that is, \(i+j=\min (m,n)\).

Fig. 1  A T-shape matrix

The sub-matrices \(\mathrm{{Z}}_1\) and \(\mathrm{{Z}}_2\) in Fig. 1 are zero matrices and \((\mathrm{{Z}}_1,\mathrm{{Z}}_2)\) is called the zero sub-matrix of \(\mathrm{{D}}\). For a \(k\times l\) zero matrix \(A\), we define its \(0\)-rank to be \(k+l\).

Lemma 4.7

A T-shape matrix \(\mathrm{{D}}\) of index \((i,j)\) is not of full rank if and only if the \(0\)-rank \(r=m+n-i-j\) of its zero sub-matrix satisfies \(r \ge \mathrm{max}(m,n)+1\).

Proof

Note that the zero sub-matrix of \(\mathrm{{D}}\) is an \((m-i)\times (n-j)\) matrix with \(0\)-rank \(r=m+n-i-j\). By Lemma 4.6, \(\mathrm{{D}}\) is not of full rank if and only if \(i+j < \min (m,n)\), which is equivalent to \(r=m+n-i-j>m+n-\min (m,n)=\mathrm{max}(m,n)\), that is, \(r\ge \mathrm{max}(m,n)+1\). \(\square \)
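
The equivalence used in this proof is elementary integer arithmetic; a brute-force check over small index ranges (purely illustrative) reads as follows.

```python
# Exhaustive check of Lemma 4.7 for small m, n: a T-shape matrix with index (i, j)
# is not of full rank iff the 0-rank r = m + n - i - j satisfies r >= max(m, n) + 1.
for m in range(1, 8):
    for n in range(1, 8):
        for i in range(1, min(m, n) + 1):
            for j in range(min(m, n) - i + 1):
                not_full_rank = (i + j < min(m, n))      # Lemma 4.6
                r = m + n - i - j
                assert not_full_rank == (r >= max(m, n) + 1)
```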

The differential transcendence degree of a set of Laurent differential monomials in T-shape can be easily determined, as shown by the following result.

Theorem 4.8

Let \(B_1,\ldots ,B_m\) be generalized Laurent differential monomials and \(\mathrm{{D}}\) their symbolic support matrix which is in T-shape with index \((i,j)\). Then \(\mathrm{d.tr.deg}\,\) \( {\mathbb Q}\langle B_1, \ldots , B_m\rangle /{\mathbb Q}=\mathrm{rank}(\mathrm{{D}})=i+j\).

Proof

Without loss of generality, each \(B_i\) is assumed to be a Laurent differential monomial. For otherwise, by Lemma 4.3, we may consider \(B_i^{k_i}\) for certain \(k_i\in {\mathbb N}\), which is a Laurent differential monomial.

Since \(\mathrm{{D}}\) is a T-shape matrix with index \((i,j)\), by Lemma 4.6, the rank of \(\mathrm{{D}}\) is \(i+j\). Deleting the zero columns of the symbolic support matrix of \(B_{i+1},\ldots ,B_{m}\), we can get a reduced matrix. By Theorem 4.4, we have \(\hbox {{d.tr.deg}}\,{\mathbb Q}\langle B_{i+1},\ldots ,B_{m}\rangle /{\mathbb Q}= j\). Since the symbolic support matrix of \(B_1,\ldots ,B_{i}\) is also a reduced one, by Theorem 4.4, we have \(\hbox {{d.tr.deg}}\,{\mathbb Q}\langle B_1,\ldots ,\) \(B_{i}\rangle \) \(/{\mathbb Q}\) \( = i\). Hence,
$$\begin{aligned} \begin{array}{lll} \hbox {{d.tr.deg}}\,\mathbb {Q}\langle B_1,\ldots ,B_m\rangle /{\mathbb Q}&{} = &{} \hbox {{d.tr.deg}}\,{\mathbb Q}\langle B_1,\ldots ,B_m\rangle /{\mathbb Q}\langle B_{i+1},\ldots ,B_{m}\rangle \\ {} &{} &{} \quad + \hbox {{d.tr.deg}}\,{\mathbb Q}\langle B_{i+1},\ldots ,B_{m}\rangle /{\mathbb Q}\\ &{} \le &{} \hbox {{d.tr.deg}}\,{\mathbb Q}\langle B_1,\ldots ,B_{i}\rangle /{\mathbb Q}+ j \\ &{} = &{} i+j. \end{array} \end{aligned}$$
On the other hand, if we treat \(y_{i+1},\ldots , y_{i+j}\) and their derivatives as parameters, the symbolic support matrix of \(B_1,\ldots ,B_{i}\) is also a reduced one and the rank of this matrix is \(i\). By Theorem 4.4, we have \(\hbox {{d.tr.deg}}\,{\mathbb Q}\langle y_{i+1},\ldots ,y_{i+j}\rangle \langle B_1,\ldots ,B_{i}\rangle /{\mathbb Q}\langle y_{i+1},\ldots ,y_{i+j}\rangle = i\). Since \(B_{i+1},\ldots ,B_m\) are monomials in \(y_{i+1},\ldots ,y_{i+j}\) (see Fig. 1), \({\mathbb Q}\langle B_{i+1},\ldots ,B_{m}\rangle \subset {\mathbb Q}\langle y_{i+1},\ldots ,\) \(y_{i+j}\rangle \). Hence,
$$\begin{aligned} \begin{array}{lll} &{}&{}\hbox {{d.tr.deg}}\,\mathbb {Q}\langle B_1,\ldots ,B_m\rangle /{\mathbb Q}= \hbox {{d.tr.deg}}\,{\mathbb Q}\langle B_1,\ldots ,B_m\rangle /{\mathbb Q}\langle B_{i+1},\ldots ,B_{m}\rangle \\ &{}&{} \qquad +\, \hbox {{d.tr.deg}}\,{\mathbb Q}\langle B_{i+1},\ldots ,B_{m}\rangle /{\mathbb Q}\\ &{}&{}\quad \ge \hbox {{d.tr.deg}}\,{\mathbb Q}\langle y_{i+1},\ldots ,y_{i+j}\rangle \langle B_1,\ldots ,B_{i}\rangle /{\mathbb Q}\langle y_{i+1},\ldots ,y_{i+j}\rangle + j\\ &{}&{}\quad = i+j. \end{array} \end{aligned}$$
Thus, \(\hbox {{d.tr.deg}}\,\mathbb {Q}\langle B_1,\ldots ,B_m\rangle /{\mathbb Q}=\hbox {{rk}}(\mathrm{{D}})=i+j\). \(\square \)

To express the differential transcendence degree of a set \(S\) of Laurent differential monomials in terms of the rank of its symbolic support matrix, it remains to show that \(S\) can be reduced to a set of Laurent differential monomials in T-shape, which has the same differential transcendence degree with \(S\).

We first define the transformations that will be used to reduce each symbolic support matrix to one in T-shape. A \({\mathbb Q}\) -elementary transformation for a matrix \(\mathrm{{D}}\) consists of two types of matrix row operations and one type of matrix column operations. To be more precise, Type 1 operations consist of interchanging two rows of \(\mathrm{{D}}\), Type 2 operations consist of adding a rational number multiple of one row to another, and Type 3 operations consist of interchanging two columns.

Let \(B_1,\ldots ,B_m\) be Laurent differential monomials and \(\mathrm{{D}}\) their symbolic support matrix. Then \({\mathbb Q}\)-elementary transformations of \(\mathrm{{D}}\) correspond to certain transformations of the monomials. Indeed, interchanging the \(i\)th and the \(j\)th rows of \(\mathrm{{D}}\) means interchanging \(B_i\) and \(B_j\), and interchanging the \(i\)th and the \(j\)th columns of \(\mathrm{{D}}\) means interchanging \(y_i\) and \(y_j\) in \(B_1,\ldots ,B_m\) (or in the variable order). Multiplying the \(i\)th row of \(\mathrm{{D}}\) by a rational number \(r\) and adding the result to the \(j\)th row means changing \(B_j\) to \(B_i^rB_j\). It is clear that by applying \({\mathbb Q}\)-elementary transformations to \(B_1,\ldots ,B_m\), we obtain a set of generalized Laurent differential monomials. As a direct consequence of Lemma 4.3, we have the following result.

Lemma 4.9

Let \(B_1,\ldots ,B_m\) be Laurent differential monomials and \(C_1,\ldots ,C_m\) generalized Laurent differential monomials obtained from \(B_1,\ldots ,B_m\) by a series of \({\mathbb Q}\)-elementary transformations. Then \(\mathrm{d.tr.deg}\,{\mathbb Q}\langle B_1, \ldots , B_m\rangle /{\mathbb Q}=\mathrm{d.tr.deg}\, {\mathbb Q}\langle C_1,\) \(\ldots , C_m\rangle /{\mathbb Q}\).

In “Appendix”, we will prove the following theorem.

Theorem 4.10

The symbolic support matrix of any Laurent differential monomials \(B_1,\) \( \ldots , B_m\) can be reduced to a T-shape matrix by a finite number of \({\mathbb Q}\)-elementary transformations.

We now have the main result of this section.

Theorem 4.11

Let \(B_1,\ldots , B_m\) be Laurent differential monomials in \({\mathbb {Y}}\) and \(\mathrm{{D}}\) their symbolic support matrix. Then \(\mathrm{d.tr.deg}\,{\mathbb Q}\langle B_1, \ldots , B_m\rangle /{\mathbb Q}=\mathrm{rank}(\mathrm{{D}})\).

Proof

By Lemma 4.9, \({\mathbb Q}\)-elementary transformations keep the differential transcendence degree unchanged. The result follows from Theorems 4.8 and 4.10. \(\square \)

Theorem 4.11 can be used to check whether the Laurent polynomial system (3) is differentially essential as shown by the following result.

Corollary 4.12

The Laurent differential system (3) is Laurent differentially essential if and only if there exist \(M_{ij_i}\,(i=0,\ldots ,n)\) with \(1\le j_i\le l_{i}\) such that the symbolic support matrix of the Laurent differential monomials \(M_{0j_0}/M_{00},\ldots ,M_{nj_n}/M_{n0}\) is of rank \(n\).
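
For instance, in Example 3.19 one may take \(M_{01}/M_{00}=y_1y_2\), \(M_{11}/M_{10}=y_1y_2'\), and \(M_{21}/M_{20}=y_1'y_2'\); a short sympy check (illustration only) confirms that the corresponding symbolic support matrix has rank \(n=2\).

```python
# Corollary 4.12 applied to Example 3.19: the chosen quotient monomials are
# y1*y2, y1*y2', y1'*y2', with symbolic support vectors as the rows of D.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
D = sp.Matrix([
    [1,  1],    # y1*y2
    [1,  x2],   # y1*y2'
    [x1, x2],   # y1'*y2'
])
print(D.rank())   # 2 = n, so the system is Laurent differentially essential
```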

By Corollary 3.4 of [16], the complexity to compute the determinant of a sub-matrix \(\mathrm{{D}}_s\) of \(\mathrm{{D}}\) with size \(k\times k\) is bounded by \(O(k^{k+2}L\gamma ^{\frac{2}{k+3}}\Delta )\), where \(L = \log ||\mathrm{{D}}_s||\), \(\gamma \) denotes the number of arithmetic operations required for multiplying a scalar vector by the matrix \(\mathrm{{D}}_s\), and \(\Delta \) is the degree bound of \(\mathrm{{D}}_s\). So, the complexity to compute the rank of \(\mathrm{{D}}\) is at most single exponential.

Remark 4.13

A practical way to check whether the Laurent differential system (3) is Laurent differentially essential is given below.
  • Choose \(n+1\) monomials \(M_{ij_i}\,(i=0,\ldots ,n)\) with \(1\le j_i\le l_{i}\).

  • Use Algorithm TSHAPE in “Appendix” to reduce the symbolic support matrix of \(M_{0j_0}/M_{00},\) \(\ldots ,M_{nj_n}/M_{n0}\) to a T-shape matrix \(\mathrm{{D}}\).

  • Use Theorem 4.8 to check whether the rank of \(\mathrm{{D}}\) is \(n\).

  • If the rank of \(\mathrm{{D}}\) is \(n\), then the system is essential. Otherwise, we need to choose another set of \(n+1\) monomials and repeat the procedure.

The number of possible choices for the \(n+1\) monomials is \(\prod _{i=0}^n l_i\), which is very large. But, the procedure is more efficient than computing the rank of the symbolic support matrix for two reasons. Firstly, in Algorithm TSHAPE, since the maximal degree of polynomials in each column of the matrix is not increased, there is no size swell in the elimination procedure. Secondly, the probability for \(n+1\) Laurent differential monomials to have differential transcendence degree \(n\) is very high. As a consequence, we do not need to repeat the procedure for many choices of \(n+1\) monomials.

By Corollary 4.12, property 3) of Theorem 1.1 is proved.

4.2 Rank Essential Laurent Differential Polynomial Systems

In this section, the result in the preceding section is used to determine a rank essential sub-system of \({\mathbb {P}}=\{{\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\}\), which is the minimal subset of \({\mathbb {P}}\) whose coefficients occur in \({\mathbf {R}}\).

Consider \(m\) generic Laurent differential polynomials
$$\begin{aligned} {\mathbb {P}}_i= u_{i0}M_{i0} + \sum \limits _{k=1}^{l_i}u_{ik} M_{ik}\,(i=1,\ldots ,m), \end{aligned}$$
(12)
where \(m\le n\) and all the \(u_{ik}\) are differentially independent over \({\mathbb Q}\). Let \({\mathbf {u}}_i\) \(=(u_{i0},\ldots ,u_{il_i})\) and let \(\beta _{ik}\) be the symbolic support vector of \(M_{ik}/M_{i0}\). Then the vector \(w_i = \sum _{k=1}^{l_i} u_{ik}\beta _{ik}\) is called the symbolic support vector of \({\mathbb {P}}_i\), and the matrix \(\mathrm{{D}}_{\mathbb {P}}\) with \(w_1,\ldots ,w_m\) as its rows is called the symbolic support matrix of \({\mathbb {P}}_1,\ldots ,{\mathbb {P}}_m\). Then, we have the following results.

Lemma 4.14

Let \(\mathrm{{D}}_{k_1,\ldots ,k_m}\) be the symbolic support matrix of the Laurent differential monomials \((M_{1k_1}/M_{10},\ldots ,M_{mk_m}/M_{m0})\). Then \(\mathrm{rank}(\mathrm{{D}}_{\mathbb {P}}) = \mathrm{max}_{ 1\le k_i \le l_i} \mathrm{rank}(\mathrm{{D}}_{k_1,\ldots ,k_m})\).

Proof

Let the rank of \(\mathrm{{D}}_{\mathbb {P}}\) be \(r\). Without loss of generality, we assume that the \(r\times r\) leading principal sub-matrix of \(\mathrm{{D}}_{\mathbb {P}}\), say \(\mathrm{{D}}_{{\mathbb {P}},r}\), is of full rank. By the properties of determinants, \(\det (\mathrm{{D}}_{{\mathbb {P}},r}) = \sum \limits _{k_1=1}^{ l_1}\cdots \sum \limits _{k_r=1}^{ l_r} \prod _{i=1}^r u_{ik_i}\det (k_1,\ldots ,k_r)\) where \(\det (k_1,\ldots ,k_r)\) is the determinant of the \(r\times r\) leading principal sub-matrix of \(\mathrm{{D}}_{k_1,\ldots ,k_m}\). So \(\det (\mathrm{{D}}_{{\mathbb {P}},r})\ne 0\) if and only if there exist \(k_1,\ldots ,k_r\) such that \(\det (k_1,\ldots ,k_r)\ne 0\). Hence, the rank of \(\mathrm{{D}}_{k_1,\ldots ,k_m}\) is no less than the rank of \(\mathrm{{D}}_{\mathbb {P}}\). On the other hand, let \(s= \mathrm{max}_{ 1\le k_i \le l_i} \mathrm{rank}(\mathrm{{D}}_{k_1,\ldots ,k_m})\). Without loss of generality, we assume \(\det (k_1,\ldots ,k_s)\ne 0\), then, \(\det (\mathrm{{D}}_{{\mathbb {P}},s})\ne 0\). Hence, \(s\) is no greater than the rank of \(\mathrm{{D}}_{\mathbb {P}}\). \(\square \)

The following result is interesting in that it reduces the computation of differential transcendence degree for a set of generic Laurent differential polynomials to the computation of the rank of a matrix, which is analogous to the similar result for linear equations.

Theorem 4.15

For \({\mathbb {P}}_i\) given in (12), \(\mathrm{d.tr.deg}\,{\mathbb Q}\langle \cup _{i=1}^m{\mathbf {u}}_i\rangle \langle {\mathbb {P}}_1/M_{10},\ldots , {\mathbb {P}}_m/M_{m0}\rangle /\) \({\mathbb Q}\langle \cup _{i=1}^m{\mathbf {u}}_i\rangle = \mathrm{rank}(\mathrm{{D}}_{\mathbb {P}})\).

Proof

By Lemma 2.1, \(\hbox {{d.tr.deg}}\,{\mathbb Q}\langle \cup _{i=1}^m{\mathbf {u}}_i\rangle \langle {\mathbb {P}}_1/M_{10},\ldots , {\mathbb {P}}_m/M_{m0}\rangle /{\mathbb Q}\langle \cup _{i=1}^m{\mathbf {u}}_i\rangle \) is no less than the maximal differential transcendence degree of \(M_{1k_1}/M_{10},\ldots ,M_{mk_m}/M_{m0}\) over \({\mathbb Q}\).

On the other hand, the differential transcendence degree will not increase by linear combinations, since for any differential polynomial \(a_i\) and \( \bar{a}_1\), \(\hbox {{d.tr.deg}}\,{\mathbb Q}\langle \lambda \rangle \langle a_1+\lambda \bar{a}_1,a_2,\ldots ,a_k\rangle /{\mathbb Q}\langle \lambda \rangle \le \mathrm{max}\{\hbox {{d.tr.deg}}\,{\mathbb Q}\langle a_1,a_2\ldots ,a_k\rangle /{\mathbb Q},\) \( \hbox {{d.tr.deg}}\,{\mathbb Q}\langle \bar{a}_1,a_2,\ldots ,a_k\rangle /{\mathbb Q}\}\). So, the differential transcendence degree of \({\mathbb {P}}_1/M_{10},\ldots , {\mathbb {P}}_m/M_{m0}\) over \({\mathbb Q}\langle \cup _{i=1}^m{\mathbf {u}}_i \rangle \) is no greater than the maximal differential transcendence degree of \(M_{1k_1}/M_{10},\ldots ,M_{mk_m}/M_{m0}\).

Thus, \(\hbox {{d.tr.deg}}\,{\mathbb Q}\langle \cup _{i=1}^m{\mathbf {u}}_i\rangle \langle {\mathbb {P}}_1/M_{10},\ldots , {\mathbb {P}}_m/M_{m0}\rangle /{\mathbb Q}\langle \cup _{i=1}^m{\mathbf {u}}_i\rangle =\mathrm{max}_{ k_i}\hbox {{d.tr.deg}}\,\mathbb {Q}\langle M_{1k_1}/M_{10},\) \(\ldots ,M_{mk_m}/M_{m0}\rangle /\mathbb {Q}\). By Theorem 4.11 and Lemma 4.14, the differential transcendence degree of \({\mathbb {P}}_1/M_{10},\ldots , {\mathbb {P}}_m/M_{m0}\) equals to the rank of \(\mathrm{{D}}_{\mathbb {P}}\).\(\square \)

By Lemma 4.14 and Theorem 4.15, we have the following criterion for system (3) to be differentially essential.

Corollary 4.16

The Laurent differential system \({\mathbb {P}}=\{{\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\}\) defined in (3) is Laurent differentially essential if and only if \(\mathrm{rank}(\mathrm{{D}}_{\mathbb {P}})=n\).

The difference between Corollary 4.12 and Corollary 4.16 is that in the latter case we only need to compute the rank of a single matrix whose elements are multivariate polynomials in \(\sum _{i=0}^n (l_i+1)+n\) variables, while in the former case we need to compute the ranks of up to \(\prod _{i=0}^n l_i\) matrices whose elements are univariate polynomials in \(n\) separate variables.

In the rest of this section, properties for the elimination ideal
$$\begin{aligned} \mathcal {I}_u=([{\mathbb {P}}_1^{\text {N}},\ldots ,{\mathbb {P}}_m^{\text {N}}]\hbox {{:}}\mathbb {m})_{{\mathbb Q}\{{\mathbb {Y}},{\mathbf {u}}_1,\ldots ,{\mathbf {u}}_m\}}\cap {\mathbb Q}\{{\mathbf {u}}_1,\ldots ,{\mathbf {u}}_m\} \end{aligned}$$
(13)
will be studied, where \({\mathbb {P}}_i\) are defined in (12) and \(\mathbb {m}\) is the set of all differential monomials in \({\mathbb {Y}}\). These results will lead to a deeper understanding of the sparse differential resultant.

Theorem 4.17

The above \(\mathcal {I}_u\) is a differential prime ideal with codimension \(m-\mathrm{rank}(\mathrm{{D}}_{{\mathbb {P}}})\).

Proof

Let \(\eta =(\eta _1,\ldots ,\eta _n)\) be a generic point of \([0]_{{\mathbb Q}\langle \hat{{\mathbf {u}}}\rangle \{{\mathbb {Y}}\}}\), where \(\hat{{\mathbf {u}}}=\cup _{i=1}^m{\mathbf {u}}_i\backslash \{u_{i0}\}\) and
$$\begin{aligned} \zeta _i=-\sum _{k=1}^{l_i}u_{ik}\frac{M_{ik}(\eta )}{M_{i0}(\eta )}\,\,(i=1,\ldots ,m). \end{aligned}$$
(14)
Similar to the proof of Theorem 3.9, we can show that \(\theta =(\eta _1,\ldots ,\eta _n;\zeta _1,u_{11},\ldots ,u_{1l_1};\ldots ;\zeta _m,\) \(u_{m1},\ldots ,u_{ml_m})\) is a generic point of \(([{\mathbb {P}}_1^{\text {N}},\ldots ,{\mathbb {P}}_m^{\text {N}}]\hbox {{:}}\mathbb {m})_{{\mathbb Q}\{{\mathbb {Y}},{\mathbf {u}}_1, \ldots ,{\mathbf {u}}_m\}}\), which implies that it is a prime differential ideal in \({\mathbb Q}\{{\mathbb {Y}},{\mathbf {u}}_1,\) \(\ldots ,{\mathbf {u}}_m\}\). As a consequence, \(\mathcal {I}_u\) is a prime differential ideal. Since \(\zeta _1,\ldots ,\zeta _m\) are free of \(u_{i0}\, (i=1,\ldots ,m)\), by Theorem 4.15,
$$\begin{aligned}&\hbox {{d.tr.deg}}\,{\mathbb Q}\langle {\mathbf {u}}\rangle \langle \zeta _1, \ldots ,\zeta _m\rangle /{\mathbb Q}\langle {\mathbf {u}}\rangle \\&\quad = \hbox {{d.tr.deg}}\,{\mathbb Q}\langle {\mathbf {u}}_1,\ldots ,{\mathbf {u}}_{m}\rangle \langle \zeta _1,\ldots ,\zeta _m\rangle /{\mathbb Q}\langle {\mathbf {u}}_1,\ldots ,{\mathbf {u}}_{m}\rangle \\&\quad = \hbox {{d.tr.deg}}\,{\mathbb Q}\langle {\mathbf {u}}_1,\ldots ,{\mathbf {u}}_{m}\rangle \langle \frac{{\mathbb {P}}_1(\eta )}{M_{10}(\eta )},\ldots , \frac{{\mathbb {P}}_m(\eta )}{M_{m0}(\eta )}\rangle /{\mathbb Q}\langle {\mathbf {u}}_1,\ldots , {\mathbf {u}}_{m}\rangle \\&\quad = \mathrm{rank}(\mathrm{{D}}_{\mathbb {P}}). \end{aligned}$$
Hence, the codimension of \(\mathcal {I}_u\) is \(m-\mathrm{rank}(\mathrm{{D}}_{{\mathbb {P}}})\). \(\square \)

In the following, two applications of Theorem 4.17 will be given. The first application is to identify certain \({\mathbb {P}}_i\) such that their coefficients will not occur in the sparse differential resultant. This will lead to simplifications in the computation of the resultant.

Let \({\mathbb {P}}=\{{\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\}\) be given in (3) and \(I\subset \{0,1,\ldots ,n\}\). Denote \({\mathbf {u}}_I=\cup _{i\in I}{\mathbf {u}}_i\). Also denote by \({\mathbb {P}}_I\) the Laurent differential polynomial set consisting of \({\mathbb {P}}_i\,( i\in I)\) and \(\mathrm{{D}}_{{\mathbb {P}}_I}\) its symbolic support matrix. Let \({\mathbb {P}}_I^{\text {N}}=\{{\mathbb {P}}_i^{\text {N}}| i\in I\}\). For a subset \(I\subset \{0,1,\ldots ,n\}\), the cardinal number of \(I\) is denoted by \(|I|\). If \(|I|= \mathrm{rank}(\mathrm{{D}}_{{\mathbb {P}}_I})\), then \({\mathbb {P}}_I\), or \(\{\mathcal {A}_i\,|\,i\in I\}\), is called a differentially independent set.

Lemma 4.18

Let \({\mathbb {P}}=\{{\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\}\) given in (3) be a Laurent differentially essential system and \(I\subset \{0, 1,\ldots , n \}\). If \(|I|- \mathrm{rank}(\mathrm{{D}}_{{\mathbb {P}}_I}) = 1\), then \(([{\mathbb {P}}_I^{\text {N}}]\hbox {:}\mathbb {m})_{{\mathbb Q}\{{\mathbb {Y}},{\mathbf {u}}_I\}}\cap {\mathbb Q}\{{\mathbf {u}}_I\}=\mathrm{sat}({\mathbf {R}})\).

Proof

By Theorem 4.17, \(\mathcal {I}_1=([{\mathbb {P}}_I^{\text {N}}]\hbox {:}\mathbb {m})_{{\mathbb Q}\{{\mathbb {Y}},{\mathbf {u}}_I\}}\cap {\mathbb Q}\{{\mathbf {u}}_I\}\) is of codimension \(|I|-\mathrm{rank}(\mathrm{{D}}_{{\mathbb {P}}_I})=1\). Then \(\mathcal {I}_1=\mathrm{sat}({\mathbf {R}}_1)\subset \mathrm{sat}({\mathbf {R}})\) for an irreducible differential polynomial \({\mathbf {R}}_1\in {\mathbb Q}\{{\mathbf {u}}_I\}\). By Lemma 2.3, \({\mathbf {R}}\) can reduce \({\mathbf {R}}_1\) to zero under any ranking. If \(I = \{0,1,\ldots ,n\}\), then the lemma is proved. Otherwise, for any \(k\in \{0,1,\ldots ,n\}\setminus I\), we claim that \({\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_k)=-\infty \). Suppose the contrary, then under an arbitrary elimination ranking satisfying \({\mathbf {u}}_k > {\mathbf {u}}_i\) for \(i\ne k\), \({\mathbf {R}}_1\) cannot be reduced to zero w.r.t \({\mathbf {R}}\), a contradiction to \({\mathbf {R}}_1\in \mathrm{sat}({\mathbf {R}})\). So \({\mathbf {R}}\in {\mathbb Q}\{{\mathbf {u}}_I\}\) and it is easy to check that \({\mathbf {R}}\in ([{\mathbb {P}}_I^{\text {N}}]\hbox {:}\mathbb {m})_{{\mathbb Q}\{{\mathbb {Y}},{\mathbf {u}}_I\}}\cap {\mathbb Q}\{{\mathbf {u}}_I\}=\mathrm{sat}({\mathbf {R}}_1)\). Then \(\mathrm{sat}({\mathbf {R}})=\mathrm{sat}({\mathbf {R}}_1)\) and the lemma is proved. \(\square \)

Definition 4.19

Let \(I\subset \{0,1,\ldots ,n\}\). Then we say \(I\) or \({\mathbb {P}}_I\) is rank essential if the following conditions hold: (1) \(|I| - \mathrm{rank}(\mathrm{{D}}_{{\mathbb {P}}_I}) = 1\) and (2) \(|J| = \mathrm{rank}(\mathrm{{D}}_{{\mathbb {P}}_J})\) for each proper subset \(J\) of \(I\).

Note that a rank essential system is the differential analog of the essential system introduced in [46]. Using this definition, we have the following property, which is similar to Corollary 1.1 in [46].

Theorem 4.20

Let \({\mathbb {P}}=\{{\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\}\) given in (3) be a Laurent differentially essential system. Then for any \(I\subset \{0, 1,\ldots , n \}\), \(|I|- \mathrm{rank}(\mathrm{{D}}_{{\mathbb {P}}_I}) \le 1\) and there exists a unique \(I\) which is rank essential. If \(I\) is rank essential, then \({\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_i)\ge 0\) if and only if \(i\in I\).

Proof

Since \(n = \mathrm{rank}(\mathrm{{D}}_{{\mathbb {P}}}) \le \mathrm{rank}(\mathrm{{D}}_{{\mathbb {P}}_I}) + |{\mathbb {P}}| - |{\mathbb {P}}_I| = n+1 + \mathrm{rank}(\mathrm{{D}}_{{\mathbb {P}}_I}) - |I|\), we have \(|I| - \mathrm{rank}(\mathrm{{D}}_{{\mathbb {P}}_I}) \le 1\). Since \(|I| - \mathrm{rank}(\mathrm{{D}}_{{\mathbb {P}}_I})\ge 0\), for each \(I\), either \(|I| - \mathrm{rank}(\mathrm{{D}}_{{\mathbb {P}}_I})=0\) or \(|I| - \mathrm{rank}(\mathrm{{D}}_{{\mathbb {P}}_I})=1\). Using the fact that \(|\{0,1,\ldots ,n\}| - \mathrm{rank}(\mathrm{{D}}_{{\mathbb {P}}})=n\), it is easy to check the existence of a rank essential set \(I\). For the uniqueness, we assume that there exist two subsets \(I_1,I_2\subset \{0,1,\ldots ,n\}\) which are rank essential. Then, we have
$$\begin{aligned}\begin{array}{lll} \mathrm{rank}(\mathrm{{D}}_{{\mathbb {P}}_{I_1\cup I_2}}) &{}\le &{} \mathrm{rank}(\mathrm{{D}}_{{\mathbb {P}}_{I_1}}) + \mathrm{rank}(\mathrm{{D}}_{{\mathbb {P}}_{I_2}}) - \mathrm{rank}(\mathrm{{D}}_{{\mathbb {P}}_{I_1\cap I_2}})\\ &{}= &{} |I_1| - 1 + |I_2| -1 -|I_1\cap I_2| = |I_1\cup I_2| - 2, \end{array} \end{aligned}$$
which means that \(\mathrm{{D}}_{{\mathbb {P}}}\) is not of full rank, a contradiction.

Let \(I\) be a rank essential set. By Lemma 4.18, the sparse differential resultant \({\mathbf {R}}\) of \({\mathbb {P}}\) involves only the coefficients of \({\mathbb {P}}_i\,({i\in I})\). For any \(i\in I\), let \(I_{\hat{i}}=I\setminus \{i\}\). Since \(I\) is rank essential, we have \(([{\mathbb {P}}_{I_{\hat{i}}}]\hbox {:}\mathbb {m})_{{\mathbb Q}\{{\mathbb {Y}},{\mathbf {u}}_{I_{\hat{i}}}\}}\cap {\mathbb Q}\{{\mathbf {u}}_{I_{\hat{i}}}\}=[0]\) and hence \({\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_i)\ge 0\) for any \(i\in I\). \(\square \)

Remark 4.21

Let \({\mathbb {P}}=\{{\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\}\) given in (3) be a Laurent differentially essential system. We can obtain a rank essential set \(I\subset \{0, 1,\ldots , n \}\) as follows. Let \(J=\{0, 1,\ldots , n \}\). If for all \(j\in J\), \(|J\setminus \{j\}| = \mathrm{rank}(\mathrm{{D}}_{{\mathbb {P}}_{J\setminus \{j\}}})\), then \(J\) is rank essential. Otherwise, by Theorem 4.20, there exists a \(j_0\in J\) such that \(|J\setminus \{j_0\}| = \mathrm{rank}(\mathrm{{D}}_{{\mathbb {P}}_{J\setminus \{j_0\}}})+1\). Repeating the procedure for \(J:=J\setminus \{j_0\}\), we will eventually obtain a rank essential system.
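
A minimal sketch of this search, assuming the rows \(w_i\) of \(\mathrm{{D}}_{\mathbb {P}}\) are supplied as sympy row vectors and that sympy's symbolic rank computation is adequate for the instance at hand, is given below; the function name is of course ad hoc.

```python
# Greedy extraction of the rank essential index set, following Remark 4.21.
import sympy as sp

def rank_essential(rows):
    """rows: the rows w_0,...,w_n of D_P for a Laurent differentially essential
    system.  Returns the index set of the unique rank essential sub-system."""
    J = list(range(len(rows)))

    def rk(idx):
        return sp.Matrix([rows[i] for i in idx]).rank()

    while True:
        for j0 in J:
            rest = [i for i in J if i != j0]
            # drop j0 whenever the remaining rows still have rank deficiency 1
            if rest and len(rest) == rk(rest) + 1:
                J = rest
                break
        else:
            return J   # every proper deletion is differentially independent
```

On Example 4.23, with the rows \(w_i\) constructed as in the sketch following that example, this search should return \(\{0,1,2\}\), matching \(\widetilde{{\mathbb {P}}}\).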

Example 4.22

In Example 3.18, \(\{{\mathbb {P}}_0,{\mathbb {P}}_1\}\) is a rank essential set since they involve \(y_1\) only.

A more interesting example is given below.

Example 4.23

Let \({\mathbb {P}}\) be a Laurent differential polynomial system where
$$\begin{aligned} \begin{array}{llll} {\mathbb {P}}_0 &{}=&{} u_{00}y_1y_2+ u_{01}y_3 \\ {\mathbb {P}}_1 &{}=&{} u_{10}y_1y_2+ u_{11}y_3y_3' \\ {\mathbb {P}}_2 &{}=&{} u_{20}y_1y_2+ u_{21}y_3' \\ {\mathbb {P}}_3 &{}=&{} u_{30}y_1^{(o)}+ u_{31}y_2^{(o)} + u_{32}y_3^{(o)} \\ \end{array} \end{aligned}$$
where \(o\) is a very large positive integer. It is easy to show that \({\mathbb {P}}\) is Laurent differentially essential and \(\widetilde{{\mathbb {P}}}=\{ {\mathbb {P}}_0, {\mathbb {P}}_1, {\mathbb {P}}_2 \}\) is the rank essential sub-system. Note that all of \(y_1,y_2,y_3\) appear in \(\widetilde{{\mathbb {P}}}\); nevertheless \(\widetilde{{\mathbb {P}}}\) is rank essential because \(y_1y_2\) can be treated as a single variable.
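
These rank claims can be checked directly from the symbolic support vectors; the sympy sketch below (with \(o\) fixed to \(100\) purely for concreteness) exhibits a vanishing \(3\times 3\) determinant for \(\widetilde{{\mathbb {P}}}\) together with nonzero minors certifying \(\mathrm{rank}(\mathrm{{D}}_{\widetilde{{\mathbb {P}}}})=2\) and \(\mathrm{rank}(\mathrm{{D}}_{{\mathbb {P}}})=3\).

```python
# Symbolic support vectors of Example 4.23 (rows w_i = sum_k u_ik * beta_ik).
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
u01, u11, u21, u31, u32 = sp.symbols('u01 u11 u21 u31 u32')
o = 100   # stand-in for the large positive integer o of the example

w0 = [-u01, -u01, u01]                              # from y3/(y1*y2)
w1 = [-u11, -u11, u11*(1 + x3)]                     # from y3*y3'/(y1*y2)
w2 = [-u21, -u21, u21*x3]                           # from y3'/(y1*y2)
w3 = [-(u31 + u32)*x1**o, u31*x2**o, u32*x3**o]     # from y2^(o)/y1^(o), y3^(o)/y1^(o)

print(sp.expand(sp.Matrix([w0, w1, w2]).det()))        # 0: rank at most 2 for P~
print(sp.factor(sp.Matrix([w0[1:], w2[1:]]).det()))    # nonzero 2x2 minor: rank(D_P~) = 2
print(sp.factor(sp.Matrix([w0, w2, w3]).det()))        # nonzero 3x3 minor: rank(D_P) = 3
```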

The second application of Theorem 4.17 is to prove the dimension conjecture for a class of generic differential polynomials. The differential dimension conjecture proposed by Ritt [42, p. 178] claims that the dimension of each component of the differential ideal generated by \(m\) differential polynomials in \(m\le n\) variables is no less than \(n-m\). In [17], the dimension conjecture is proved for quasi-generic differential polynomials. The following theorem proves the conjecture for a larger class of differential polynomials.

Theorem 4.24

Let \({\mathbb {P}}_i= u_{i0} + \sum \limits _{k=1}^{l_i}u_{ik} M_{ik}\,(i=1,\ldots ,m;\,m\le n)\) be generic differential polynomials in \(n\) differential indeterminates \({\mathbb {Y}}\) and \({\mathbf {u}}_i=(u_{i0},\ldots ,u_{il_i})\). Then \([{\mathbb {P}}_1,\ldots ,{\mathbb {P}}_m]_{{\mathbb Q}\langle {\mathbf {u}}_1,\ldots ,{\mathbf {u}}_m\rangle \{{\mathbb {Y}}\}}\) is either the unit ideal or a prime differential ideal of dimension \(n-m\).

Proof

Use the notation introduced in the proof of Theorem 4.17 with \(M_{i0}=1\). Let \(\mathcal {I}_0 = [{\mathbb {P}}_1,\ldots ,{\mathbb {P}}_m]_{{\mathbb Q}\{{\mathbf {u}}_1,\ldots ,{\mathbf {u}}_m, {\mathbb {Y}}\}}\) and \(\mathcal {I}_1 = [{\mathbb {P}}_1,\ldots ,{\mathbb {P}}_m]_{{\mathbb Q}\langle {\mathbf {u}}_1,\ldots ,{\mathbf {u}}_m\rangle \{{\mathbb {Y}}\}}\). Since \({\mathbb {P}}_i\) contains a nonvanishing degree zero term \(u_{i0}\), it is clear that \(\mathcal {I}_0=\mathcal {I}_0\hbox {{:}}\mathbb {m}=\mathcal {I}_1\cap {\mathbb Q}\{{\mathbf {u}}_1,\ldots ,{\mathbf {u}}_m, {\mathbb {Y}}\}\).

From the proof of Theorem 4.17, \(\mathcal {I}_0\) is a prime differential ideal with \(\theta =(\eta _1,\ldots ,\eta _n;\zeta _1,\) \( u_{11},\ldots ,u_{1l_1};\ldots ;\zeta _m,u_{m1},\ldots ,u_{ml_m})\) as a generic point. Note that \(\mathrm{rank}(\mathrm{{D}}_{\mathbb {P}}) \le m\) and two cases will be considered. If \(\mathrm{rank}(\mathrm{{D}}_{\mathbb {P}}) < m\), by Theorem 4.17, \(\mathcal {I}_u=[{\mathbb {P}}_1,\ldots ,{\mathbb {P}}_m]\cap {\mathbb Q}\{{\mathbf {u}}_1,\ldots ,{\mathbf {u}}_m\}\) is of codimension \(m-\mathrm{rank}(\mathrm{{D}}_{\mathbb {P}})>0\), which means that \(\mathcal {I}_1\) is the unit ideal in \({\mathbb Q}\langle {\mathbf {u}}_1,\ldots ,{\mathbf {u}}_m\rangle \{{\mathbb {Y}}\}\). If \(\mathrm{rank}(\mathrm{{D}}_{\mathbb {P}}) = m\), by the proof of Theorem 4.17, \(\hbox {{d.tr.deg}}\,{\mathbb Q}\langle {\mathbf {u}}\rangle \langle \zeta _1,\ldots ,\zeta _m\rangle /{\mathbb Q}\langle {\mathbf {u}}\rangle = m\) and \(\mathcal {I}_u = [0]\) follows. Since \(\mathcal {I}_0 = \mathcal {I}_1 \cap {\mathbb Q}\{{\mathbf {u}}_1,\ldots ,{\mathbf {u}}_m, {\mathbb {Y}}\}\) and \(\mathcal {I}_0\) is prime, it is easy to see that \(\mathcal {I}_1\) is also a differential prime ideal in \({\mathbb Q}\langle {\mathbf {u}}_1,\ldots ,{\mathbf {u}}_m\rangle \{{\mathbb {Y}}\}\). Moreover, we have
$$\begin{aligned} \begin{array}{lll} n&{}=&{} \hbox {{d.tr.deg}}\,{\mathbb Q}\langle {\mathbf {u}}\rangle \langle \eta _1, \ldots ,\eta _n,\zeta _1,\ldots ,\zeta _m\rangle /{\mathbb Q}\langle {\mathbf {u}}\rangle \\ &{} =&{} \hbox {{d.tr.deg}}\,{\mathbb Q}\langle {\mathbf {u}}\rangle \langle \eta _1, \ldots ,\eta _n,\zeta _1,\ldots ,\zeta _m\rangle /{\mathbb Q}\langle {\mathbf {u}},\zeta _1, \ldots ,\zeta _m\rangle \\ &{}\quad &{} +\, \hbox {{d.tr.deg}}\,{\mathbb Q}\langle {\mathbf {u}},\zeta _1, \ldots ,\zeta _m\rangle /{\mathbb Q}\langle {\mathbf {u}}\rangle \\ &{} =&{} \hbox {{d.tr.deg}}\,{\mathbb Q}\langle {\mathbf {u}}\rangle \langle \eta _1, \ldots ,\eta _n,\zeta _1,\ldots ,\zeta _m\rangle /{\mathbb Q}\langle {\mathbf {u}},\zeta _1, \ldots ,\zeta _m\rangle + m. \end{array} \end{aligned}$$
Hence, \(\hbox {{d.tr.deg}}\,{\mathbb Q}\langle {\mathbf {u}},\zeta _1, \ldots ,\zeta _m\rangle \langle \eta _1, \ldots ,\eta _n\rangle /{\mathbb Q}\langle {\mathbf {u}},\zeta _1, \ldots ,\zeta _m\rangle =n-m\). Without loss of generality, suppose \(\eta _1,\ldots ,\eta _{n-m}\) are differentially independent over \({\mathbb Q}\langle {\mathbf {u}},\zeta _1, \ldots ,\zeta _m\rangle \). Since \(\mathcal {I}_0 = \mathcal {I}_1 \cap {\mathbb Q}\{{\mathbf {u}}_1,\ldots ,{\mathbf {u}}_m, {\mathbb {Y}}\}\), \(\{y_1,\ldots ,y_{n-m}\}\) is a parametric set of \(\mathcal {I}_1\). Thus, \([{\mathbb {P}}_1,\ldots ,{\mathbb {P}}_m]_{{\mathbb Q}\langle {\mathbf {u}}_1,\ldots ,{\mathbf {u}}_m\rangle \{{\mathbb {Y}}\}}\) is of dimension \(n-m\). \(\square \)

By Theorem 4.15, Theorem 4.17, and Corollary 4.16, properties 1) and 2) of Theorem 1.1 are proved.

5 Basic Properties of the Sparse Differential Resultant

In this section, we will prove basic properties for the sparse differential resultant \({\mathbf {R}}({\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n)\) of the generic Laurent differential polynomials given in (3).

5.1 Necessary and Sufficient Conditions for the Existence of Non-polynomial Solutions

In the algebraic case, the vanishing of the sparse resultant gives a necessary and sufficient condition for a system of polynomials to have common nonzero solutions in a certain sense. We will show that this is also true for sparse differential resultants.

To be more precise, we first introduce some notation. Let \({\mathcal A}_i=\{M_{i0},\ldots ,M_{il_i}\}\) be the Laurent monomial sets of \({\mathbb {P}}_i\,(i=0,\ldots ,n)\) given in (3). Each element \((F_0,\ldots ,F_n)\in \mathcal {L}(\mathcal {A}_0)\times \cdots \times \mathcal {L}(\mathcal {A}_n)\) can be represented by one and only one point \(({\mathbf {v}}_0,\ldots ,{\mathbf {v}}_n)\in \mathcal {E}^{l_0+1}\times \cdots \times \mathcal {E}^{l_n+1}\) where \({\mathbf {v}}_i=(v_{i0},v_{i1},\ldots ,v_{il_i})\) is the coefficient vector of \(F_i\). Let \(\mathcal {Z}(\mathcal {A}_0,\ldots ,\mathcal {A}_n)\) be the subset of \(\mathcal {E}^{l_0+1}\times \cdots \times \mathcal {E}^{l_n+1}\) consisting of points \(({\mathbf {v}}_0,\ldots ,{\mathbf {v}}_n)\) such that the corresponding \(F_i=0\,(i=0,\ldots ,n)\) have non-polynomial common solutions. That is,
$$\begin{aligned}&\mathcal {Z}(\mathcal {A}_0,\ldots ,\mathcal {A}_n)\!=\!\{({\mathbf {v}}_0,\ldots ,{\mathbf {v}}_n)\in \mathcal {E}^{l_0+1}\times \cdots \times \mathcal {E}^{l_n+1}\,|\, F_0\!=\!\cdots \!=\!F_n=0 \,\,\text {have} \nonumber \\&\qquad \qquad \qquad \text { a common non-polynomial solution in}\, (\mathcal {E}^{\wedge })^n\}. \end{aligned}$$
(15)
The following result shows that the vanishing of the sparse differential resultant gives a necessary condition for the existence of non-polynomial solutions.

Lemma 5.1

Suppose the Laurent differential monomial sets \(\mathcal {A}_i\,(i=0,\ldots ,n)\) form a Laurent differentially essential system. Then \(\mathcal {Z}(\mathcal {A}_0,\ldots ,\mathcal {A}_n)\subseteq {\mathbb {V}}\big (\mathrm{sat}(\mathrm{Res}_{\mathcal {A}_0,\ldots ,\mathcal {A}_n})\big )\).

Proof

Let \({\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\) be a generic Laurent differentially essential system corresponding to \(\mathcal {A}_0,\ldots ,\mathcal {A}_n\) with coefficient vectors \({\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\). By (10),
$$\begin{aligned}{}[{\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n]\cap {\mathbb Q}\{{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}= \mathrm{sat}(\hbox {Res}_{\mathcal {A}_0,\ldots ,\mathcal {A}_n}). \end{aligned}$$
For any point \(({\mathbf {v}}_{0},\ldots ,{\mathbf {v}}_{n})\in \mathcal {Z}(\mathcal {A}_0,\ldots ,\mathcal {A}_n)\), let \((F_{0},\ldots ,F_{n})\in \mathcal {L}(\mathcal {A}_0)\times \cdots \times \mathcal {L}(\mathcal {A}_n)\) be the differential polynomial system represented by \(({\mathbf {v}}_{0},\ldots ,{\mathbf {v}}_{n})\). Let \(G\) be any differential polynomial in \(\mathrm{sat}(\hbox {Res}_{\mathcal {A}_0,\ldots ,\mathcal {A}_n})\). Then \(G({\mathbf {v}}_0,\ldots ,{\mathbf {v}}_n)\in [F_0,\ldots ,F_n]\subset \mathcal {E}\{{\mathbb {Y}}^{\pm }\}\). Since \(F_{0},\ldots ,F_{n}\) have a non-polynomial common zero, \(G({\mathbf {v}}_0,\ldots ,{\mathbf {v}}_n)\) should be zero. Thus, \(\mathrm{sat}(\hbox {Res}_{\mathcal {A}_0,\ldots ,\mathcal {A}_n})\) vanishes at \(({\mathbf {v}}_0,\ldots ,{\mathbf {v}}_n)\). \(\square \)

Example 5.2

Consider Example 3.17. Suppose \({\mathcal {F}}={\mathbb Q}(x)\) and \(\delta = \frac{d}{d x}\). In this example, we have \(\hbox {Res}_{{\mathbb {P}}_0,{\mathbb {P}}_1,{\mathbb {P}}_2}\ne 0\). But \(y_1=c_{11}x+c_{10}\), \(y_2=c_{22}x^2+c_{21}x+c_{20}\) constitute a nonzero solution of \({\mathbb {P}}_0={\mathbb {P}}_1={\mathbb {P}}_2=0\), where the \(c_{ij}\) are distinct arbitrary constants. This shows that Lemma 5.1 would not hold if we did not restrict to non-polynomial solutions. This example also shows why we need to consider non-polynomial differential solutions, or equivalently why we consider Laurent differential polynomials instead of the usual differential polynomials.

Let \(\overline{\mathcal {Z}(\mathcal {A}_0,\ldots ,\mathcal {A}_n)}\) be the Kolchin differential closure of \(\mathcal {Z}(\mathcal {A}_0,\ldots ,\mathcal {A}_n)\) in \(\mathcal {E}^{l_0+1}\times \cdots \times \mathcal {E}^{l_n+1}\). Then we have the following theorem which gives another characterization of the sparse differential resultant.

Theorem 5.3

Suppose the Laurent differential monomial sets \(\mathcal {A}_i\,(i=0,\ldots ,n)\) form a Laurent differentially essential system. Then \(\overline{\mathcal {Z}(\mathcal {A}_0,\ldots ,\mathcal {A}_n)}\!=\!{\mathbb {V}}\big (\mathrm{sat}(\mathrm{Res}_{\mathcal {A}_0,\ldots ,\mathcal {A}_n})\big )\).

Proof

Firstly, by Lemma 5.1, \(\mathcal {Z}(\mathcal {A}_0,\ldots ,\mathcal {A}_n) \subseteq {\mathbb {V}}\big (\mathrm{sat}(\hbox {Res}_{\mathcal {A}_0,\ldots ,\mathcal {A}_n})\big )\). So \(\overline{\mathcal {Z}(\mathcal {A}_0,\ldots ,\mathcal {A}_n)} \subseteq {\mathbb {V}}\big (\mathrm{sat}(\hbox {Res}_{\mathcal {A}_0,\ldots ,\mathcal {A}_n})\big )\).

For the other direction, let \(\eta ,\zeta \) be as defined in (9). By Theorem 3.9, \(\mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\) is a prime differential ideal with a generic point \((\eta ;\zeta )\). Let \((F_0,\ldots ,F_n)\in \mathcal {L}(\mathcal {A}_0)\times \cdots \times \mathcal {L}(\mathcal {A}_n)\) be a set of Laurent differential polynomials represented by \(\zeta \). Clearly, \(\eta \) is a non-polynomial solution of \(F_i=0\). Thus, \(\zeta \in \mathcal {Z}(\mathcal {A}_0,\ldots ,\mathcal {A}_n) \subset \overline{\mathcal {Z}(\mathcal {A}_0,\ldots ,\mathcal {A}_n)}\). By Corollary 3.11, \(\zeta \) is a generic point of \(\mathrm{sat}(\hbox {Res}_{\mathcal {A}_0,\ldots ,\mathcal {A}_n})\). It follows that \({\mathbb {V}}\big (\mathrm{sat}(\hbox {Res}_{\mathcal {A}_0,\ldots , \mathcal {A}_n})\big )\subseteq \overline{\mathcal {Z}(\mathcal {A}_0, \ldots ,\mathcal {A}_n)}\). As a consequence, \({\mathbb {V}}\big (\mathrm{sat}(\hbox {Res}_{\mathcal {A}_0,\ldots ,\mathcal {A}_n})\big )= \overline{\mathcal {Z}(\mathcal {A}_0,\ldots ,\mathcal {A}_n)}\). \(\square \)

The above theorem shows that the vanishing of the sparse differential resultant gives a necessary and sufficient condition for a differentially essential system to have non-polynomial solutions over an open set of \(\prod _{i=0}^n \mathcal {L}(\mathcal {A}_i)\) in the sense of the Kolchin topology.

In the rest of this section, we will analyze the structure of non-polynomial solutions of the system (3). By Theorem 4.20 and Corollary 4.21, \({\mathbb {P}}_i\,(i=0,\ldots ,n)\) in (3) can be divided into two disjoint sets \(\{{\mathbb {P}}_i\,|\,i\in I\}\) and \(\{{\mathbb {P}}_i\,|\,i\in \{0,1,\ldots ,n\}\backslash I\}\), where \(I\subseteq \{0,1,\ldots ,n\}\) is rank essential. In the following, we assume that \(\{0,1,\ldots ,n\}\) is rank essential, that is, any \(n\) of the \({\mathbb {P}}_i\,(i=0,\ldots ,n)\) form a differentially independent set, which is equivalent to the fact that each \({\mathbf {u}}_i\) occurs in \({\mathbf {R}}\) effectively.

We first give the following theorem, which shows the relation between the original differential system and its sparse differential resultant.

Theorem 5.4

Let \({\mathbb {P}}=\{{\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\}\) given in (3) be rank essential and \({\mathbb {P}}_i^\text {N}=\sum _{k=0}^{l_i}u_{ik}N_{ik}\). Denote \(h_i={\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_i)\), \(Q_{ik}=\frac{\partial {\mathbf {R}}}{\partial u_{i0}^{(h_i)}}N_{ik}-\frac{\partial {\mathbf {R}}}{\partial u_{ik}^{(h_i)}}N_{i0}\,(0\le i\le n,1\le k\le l_i)\), and \(S=\left\{ \frac{\partial {\mathbf {R}}}{\partial u_{i0}^{(h_i)}},y_j^{(k)}\big |0\le i\le n;1\le j\le n; k\ge 0\right\} \). Then
$$\begin{aligned} \mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}=\big ([{\mathbb {P}}_0^\text {N},\ldots ,{\mathbb {P}}_n^\text {N}]\hbox {{:}}\mathbb {m}\big )_{\mathbb {Q}\{{\mathbb {Y}},{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}} =\big ([{\mathbf {R}},(Q_{ik})_{0\le i\le n;1\le k\le l_i}]\hbox {{:}}S^\infty \big )_{\mathbb {Q}\{{\mathbb {Y}},{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}}. \end{aligned}$$

Proof

Let \(\mathcal {J}=\big ([{\mathbf {R}},(Q_{ik})_{0\le i\le n;1\le k\le l_i}]\hbox {{:}}S^\infty \big )_{\mathbb {Q}\{{\mathbb {Y}},{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}}\). By Theorem 3.9, \(\mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\) is a prime differential ideal with a generic point \(\theta =(\eta ;\zeta )\) given in (9). By Corollary 3.12, \(\zeta =({\mathbf {u}},\zeta _0,\ldots ,\zeta _n)\) is a generic zero point of \(\mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\cap \mathbb {Q}\{{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}=\mathrm{sat}({\mathbf {R}})\). Since \(\frac{M_{ik}(\eta )}{M_{i0}(\eta )}=\frac{N_{ik}(\eta )}{N_{i0}(\eta )}\), by (11), \(Q_{ik}({\mathbf {u}},\zeta _0,\ldots ,\zeta _n)=0\). So \(Q_{ik}\in \mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\).

Since \({\mathbb {P}}\) is rank essential, \({\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_i)\ge 0\). Substituting \(\big (Q_{ik}+N_{i0}\frac{\partial {\mathbf {R}}}{\partial u_{ik}^{(h_i)}}\big )/\frac{\partial {\mathbf {R}}}{\partial u_{i0}^{(h_i)}}\) for \(N_{ik}\) in each \({\mathbb {P}}_i^\text {N}\), we obtain \({\mathbb {P}}_i^\text {N}=u_{i0}N_{i0}+\sum _{k=1}^{l_i}u_{ik}\big (Q_{ik}+N_{i0}\frac{\partial {\mathbf {R}}}{\partial u_{ik}^{(h_i)}}\big )/\frac{\partial {\mathbf {R}}}{\partial u_{i0}^{(h_i)}}\). So \(\frac{\partial {\mathbf {R}}}{\partial u_{i0}^{(h_i)}}{\mathbb {P}}_i^\text {N}=\sum _{k=1}^{l_i} u_{ik}Q_{ik}+(\sum _{k=0}^{l_i}u_{ik}\frac{\partial {\mathbf {R}}}{\partial u_{ik}^{(h_i)}})N_{i0}\). Since \(Q_{ik}\in \mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\), \(R_i=\sum _{k=0}^{l_i}u_{ik}\frac{\partial {\mathbf {R}}}{\partial u_{ik}^{(h_i)}}\in \mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\). Since \(R_i\) and \({\mathbf {R}}\) have the same degree and \({\mathbf {R}}\) is irreducible, there exists some \(a\in \mathbb {Q}\) such that \(R_i=a{\mathbf {R}}\). It follows that \({\mathbb {P}}_i^\text {N}\in \mathcal {J}\). For any differential polynomial \(f\in \mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\), there exists a differential monomial \(M\in \mathbb {m}\) such that \(Mf\in [{\mathbb {P}}_0^\text {N},\ldots ,{\mathbb {P}}_n^\text {N}]\subset \mathcal {J}\). Thus, \(f\in \mathcal {J}\) and \(\mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\subseteq \mathcal {J}\) follows. Conversely, for any differential polynomial \(g\in \mathcal {J}\), there exist some differential monomial \(M\) and some \(b\in \mathbb {N}\) such that \(M(\prod _{i}\frac{\partial {\mathbf {R}}}{\partial u_{i0}^{(h_i)}})^bg\in [{\mathbf {R}},Q_{ik}]\subset \mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\). Since \(\mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\) is a prime differential ideal, \(g\in \mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\). Hence, \(\mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}=\mathcal {J}.\) \(\square \)

We conclude this section by giving a sufficient condition for a differentially essential system to have a unique non-polynomial solution. Following the notation of Sect. 3.2, \(\mathcal {A}_i=\{M_{i0},M_{i1},\ldots ,M_{il_i}\}\) are finite sets of Laurent differential monomials, where \(M_{ik}= ({\mathbb {Y}}^{[s_i]})^{\alpha _{ik}}\) and \(\alpha _{ik}\in \mathbb {Z}^{n(s_i+1)}\) is an exponent vector written in terms of the degrees of \(y_1,\ldots ,y_n,y'_1,\ldots ,y'_n,\ldots ,y_1^{(s_i)},\ldots ,y_{n}^{(s_i)}\). Let \(o=\mathrm{max}_i\{s_i\}\). Then, every vector \(\alpha _{ik}\) in \(\mathbb {Z}^{n(s_i+1)}\) can be embedded in \(\mathbb {Z}^{n(o+1)}\). For \(L\subset \mathbb {Z}^{n(o+1)}\), let \(\hbox {Span}_{\mathbb {Z}}(L)\) be the \(\mathbb {Z}\)-module generated by \(L\). Let \(\mathbf e _i\) be the exponent vector for \(y_i\) in \(\mathbb {Z}^{n(o+1)}\) whose \(i\)th coordinate is 1 and whose other coordinates are zero. Then we have the following definition.

Definition 5.5

\({\mathbb {P}}=\{{\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\}\) given in (3) is called normal rank essential if \({\mathbb {P}}\) is rank essential and for each \(j=1,\ldots ,n\), \(\mathbf e _j\in \hbox {Span}_{\mathbb {Z}}(\{\alpha _{ik}-\alpha _{i0}\,|\,i=0,\ldots ,n;k=1,\ldots ,l_i\})\).
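
Checking the extra condition in Definition 5.5 amounts to deciding, for each \(j\), whether the unit vector \(\mathbf e _j\) lies in the \(\mathbb {Z}\)-module generated by the difference vectors \(\alpha _{ik}-\alpha _{i0}\). The following Python sketch is our illustration, not part of the original text; the helper name in_z_span and the toy vectors are hypothetical. It certifies membership by a finite search over small integer coefficients, so failing to find a witness within the search box does not disprove membership.

```python
from itertools import product

def in_z_span(target, vectors, bound=3):
    """Search for integer coefficients t with sum_k t_k*vectors[k] == target,
    restricted to |t_k| <= bound; return the coefficients or None."""
    dim = len(target)
    for coeffs in product(range(-bound, bound + 1), repeat=len(vectors)):
        combo = [sum(c * v[i] for c, v in zip(coeffs, vectors)) for i in range(dim)]
        if combo == list(target):
            return coeffs
    return None

# Hypothetical exponent-difference vectors alpha_ik - alpha_i0 for a toy
# instance with n = 2 and o = 0, so the vectors live in Z^2.
diffs = [(1, 1), (0, 1)]
e1, e2 = (1, 0), (0, 1)
print(in_z_span(e1, diffs))   # (1, -1): e1 = (1,1) - (0,1)
print(in_z_span(e2, diffs))   # (0, 1)
```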

Lemma 5.6

Let \({\mathbb {P}}=\{{\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\}\) given in (3) be normal rank essential. Then
$$\begin{aligned} \mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\!=\!\big ([{\mathbb {P}}_0^\text {N},\ldots ,{\mathbb {P}}_n^\text {N}]\hbox {{:}}\mathbb {m}\big )_{\mathbb {Q}\{{\mathbb {Y}},{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}}\!=\! \mathrm{sat}({\mathbf {R}},S_1y_1\!-\!T_1,\ldots ,S_ny_n\!-\!T_n)_{\mathbb {Q}\{{\mathbb {Y}},{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}} \end{aligned}$$
where \(S_l\) and \(T_l\) are certain nonnegative power products of \(\frac{\partial {\mathbf {R}}}{\partial u_{ik}^{(h_i)}}\).

Proof

Let \({\mathcal {J}}=\mathrm{sat}({\mathbf {R}},S_1y_1-T_1,\ldots ,S_ny_n-T_n)_{\mathbb {Q}\{{\mathbb {Y}},{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}}\). It is easy to verify that \({\mathcal {J}}\) is a prime differential ideal. Since \({\mathbb {P}}\) is rank essential, \(h_i={\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_i)\ge 0\) for each \(i\). By equation (11), we have \(\frac{N_{ik}(\eta )}{N_{i0}(\eta )}= \frac{M_{ik}(\eta )}{M_{i0}(\eta )}=\overline{\frac{\partial {\mathbf {R}}}{\partial u_{ik}^{(h_i)}}}\big /\overline{\frac{\partial {\mathbf {R}}}{\partial u_{i0}^{(h_i)}}}\). Since \(\mathbf e _j\in \hbox {Span}_{\mathbb {Z}}(\{\alpha _{ik}-\alpha _{i0}\,|\,k=1,\ldots ,l_i;i=0,\ldots ,n\})\), for \(j=1,\ldots ,n\), there exist \(t_{jik}\in \mathbb {Z}\) such that \(\sum _{i,k}t_{jik}(\alpha _{ik}-\alpha _{i0})=\mathbf e _j\). So, \(\prod _{i,k}\Big (\frac{N_{ik}}{N_{i0}}\Big )^{t_{jik}}=y_j\). Thus, \(\prod _{i,k}\Big (\frac{N_{ik}(\eta )}{N_{i0}(\eta )}\Big )^{t_{jik}}=\eta _j=\prod _{i,k}\big (\overline{\frac{\partial {\mathbf {R}}}{\partial u_{ik}^{(h_i)}}}\big /\overline{\frac{\partial {\mathbf {R}}}{\partial u_{i0}^{(h_i)}}}\big )^{t_{jik}}\). By Theorem 3.9, there exist \(S_j\) and \(T_j\) which are nonnegative power products of \(\frac{\partial {\mathbf {R}}}{\partial u_{ik}^{(h_i)}}\) such that \(S_jy_j-T_j\in \mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\). Since \(\frac{\partial {\mathbf {R}}}{\partial u_{ik}^{(h_i)}}\not \in \mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\) and \(\mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\) is prime, \(S_j\not \in \mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\) follows. Thus, \({\mathcal {J}}\subset \mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\). To prove \(\mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\subset {\mathcal {J}}\), for each \(k=0,\ldots ,n\), let \(R_k\) be the differential remainder of \({\mathbb {P}}_k^\text {N}\) w.r.t. \({\mathbf {R}}, S_1y_1-T_1,\ldots ,S_ny_n-T_n\) under the given ranking. Then \(R_k\in {\mathbb Q}\{{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}\). And by (2), \(R_k\in [{\mathbf {R}}, S_1y_1-T_1,\ldots ,S_ny_n-T_n,{\mathbb {P}}_k^\text {N}]\subset \mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\). So \(R_k\in {\mathbb Q}\{{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}\cap \mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}=\mathrm{sat}({\mathbf {R}})\). Since \(R_k\) is reduced w.r.t. \({\mathbf {R}}\), \(R_k=0\) and \({\mathbb {P}}_k^\text {N}\in {\mathcal {J}}\) follows. By Corollary 3.14, \(y_i^{(j)}\not \in {\mathcal {J}}\subset \mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\) for each \(i\) and \(j\). Thus, \(\mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\subset {\mathcal {J}}\). \(\square \)

Theorem 5.7

Let \({\mathbb {P}}=\{{\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\}\) given in (3) be normal rank essential. Let \(\overline{{\mathbb {P}}}_{i}\) be a specialization of \({\mathbb {P}}_i\) with coefficient vector \({\mathbf {v}}_i\,(i=0,\ldots ,n)\). Then there exists a differential polynomial set \(\mathcal {S}\subset {\mathbb Q}\{{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}\) such that \({\mathbb {V}}({\mathbf {R}})\backslash \bigcup \limits _{S\in \mathcal {S}}{\mathbb {V}}(S)\ne \emptyset \) and whenever \(({\mathbf {v}}_0,\ldots ,{\mathbf {v}}_n)\in {\mathbb {V}}({\mathbf {R}}) \backslash \bigcup \limits _{S\in \mathcal {S}}{\mathbb {V}}(S)\), \(\overline{{\mathbb {P}}}_{i}=0\,(i=0,\ldots ,n)\) have a unique common non-polynomial solution.

Proof

By Lemma 5.6, \(\mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}} = \mathrm{sat}({\mathbf {R}},A_1,\ldots ,A_n)\), where \(A_l=S_ly_l-T_l\,(l=1,\ldots ,n)\). Let \(\mathcal {S}=\{\frac{\partial {\mathbf {R}}}{\partial u_{ik}^{(h_i)}},(S_j)^{m+1}\big (\frac{T_j}{S_j}\big )^{(m)}\,\big |\,\) \(0\le i\le n;0\le k\le l_i;1\le j\le n;m\in \mathbb {N}\}\). Firstly, we show that \({\mathbb {V}}({\mathbf {R}})\backslash \bigcup \limits _{S\in \mathcal {S}}{\mathbb {V}}(S)\ne \emptyset \). Suppose the contrary, viz. \({\mathbb {V}}({\mathbf {R}})\subset \bigcup \limits _{S\in \mathcal {S}}{\mathbb {V}}(S)\). In particular, there exists one \(S\in \mathcal {S}\) such that \(S\) vanishes at the generic point \(\zeta \) of \(\mathrm{sat}({\mathbf {R}})\). It is obvious that \(\frac{\partial {\mathbf {R}}}{\partial u_{ik}^{(h_i)}}\) does not vanish at \(\zeta \). If \((S_j)^{m+1}\big (\frac{T_j}{S_j}\big )^{(m)}\) vanishes at \(\zeta \) for some \(m\), then \((S_j)^{m+1}\big (\frac{T_j}{S_j}\big )^{(m)}\in \mathrm{sat}({\mathbf {R}})\). Replacing \(\frac{T_j}{S_j}\) by \(y_j -\frac{A_j}{S_j}\), we have \(S_j^{m+1}y_j^{(m)}\in \mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\). Since \(S_j\) is a power product of certain \(\frac{\partial {\mathbf {R}}}{\partial u_{ik}^{(h_i)}}\), \(S_j^{m+1}\not \in \mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\). Then, \(y_j^{(m)}\in \mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\), contradicting Corollary 3.14.

Suppose \(({\mathbf {v}}_0,\ldots ,{\mathbf {v}}_n)\in {\mathbb {V}}({\mathbf {R}})\backslash \bigcup \limits _{S\in \mathcal {S}}{\mathbb {V}}(S)\). Let \(\bar{T}_j=T_j({\mathbf {v}}_0,\ldots ,{\mathbf {v}}_n)\) and \(\bar{S}_j=S_j({\mathbf {v}}_0,\ldots ,{\mathbf {v}}_n)\). Since \(\frac{\partial {\mathbf {R}}}{\partial u_{ik}^{(h_i)}}({\mathbf {v}}_0,\ldots ,{\mathbf {v}}_n)\ne 0\) for each \(i\) and \(k\), \(\bar{T}_j\bar{S}_j\ne 0\). Let \(\bar{y}_j=\frac{\bar{T}_j}{\bar{S}_j}\) and denote \(\bar{y}=(\bar{y}_1,\ldots ,\bar{y}_n)\). For each \(m\in \mathbb {N},\) \(\bar{y}_j^{(m)}=(\frac{\bar{T}_j}{\bar{S}_j})^{(m)}\ne 0\). Thus, \(\bar{y}\in (\mathcal {E}^\wedge )^n.\) Since \(\mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}=\mathrm{sat}({\mathbf {R}},A_1,\ldots ,A_n)\), \(H\cdot {\mathbb {P}}_i^\text {N}\in [{\mathbf {R}},A_1,\ldots ,A_n]\) where \(H\) is a product of powers of \(\frac{\partial {\mathbf {R}}}{\partial u_{ik}^{(h_i)}}\). Since \({\mathbf {R}}\) and the \(A_j\) vanish at \(({\mathbf {v}}_0,\ldots ,{\mathbf {v}}_n;\bar{y})\) while \(H\) does not, we get \(\overline{{\mathbb {P}}}_{i}^\text {N}(\bar{y})= M_i(\bar{y})\cdot \overline{{\mathbb {P}}}_{i}(\bar{y})=0\), which implies that \(\overline{{\mathbb {P}}}_{i}(\bar{y})=0\). Thus, \(\bar{y}\) is a non-polynomial common solution of the \(\overline{{\mathbb {P}}}_{i}\). On the other hand, if \(\xi \) is a non-polynomial common solution of the \(\overline{{\mathbb {P}}}_{i}\), then \(\bar{S}_jy_j-\bar{T}_j\) vanishes at \(\xi \) for each \(j\). Hence, \(\xi =\bar{y}\). As a consequence, \(\overline{{\mathbb {P}}}_{i}=0\,(i=0,\ldots ,n)\) have a unique common non-polynomial solution. \(\square \)

Theorem 5.7 can be rephrased as the following geometric form.

Corollary 5.8

Let \(\mathcal {Z}_1(\mathcal {A}_0,\ldots ,\mathcal {A}_n) \subset \mathcal {E}^{l_0+1}\times \cdots \times \mathcal {E}^{l_n+1}\) be the set consisting of \(({\mathbf {v}}_0,\ldots ,{\mathbf {v}}_n)\) for which the corresponding Laurent differential polynomials \(F_i=0\,(i=0,\ldots ,n)\) have a unique non-polynomial common solution and \(\overline{\mathcal {Z}_1(\mathcal {A}_0,\ldots ,\mathcal {A}_n)}\) the Kolchin closure of \(\mathcal {Z}_1(\mathcal {A}_0,\ldots ,\mathcal {A}_n)\). Then if \(\mathcal {A}_0,\ldots ,\mathcal {A}_n\) are normal rank essential, \(\overline{\mathcal {Z}_1(\mathcal {A}_0,\ldots ,\mathcal {A}_n)}=\) \({\mathbb {V}}\big (\mathrm{sat}(\) \(\mathrm{Res}_{\mathcal {A}_0,\ldots ,\mathcal {A}_n})\big )\).

Example 5.9

In Example 3.18, the sparse differential resultant \({\mathbf {R}}\) of \({\mathbb {P}}_0,{\mathbb {P}}_1,{\mathbb {P}}_2\) is free from the coefficients of \({\mathbb {P}}_2.\) The system can be solved as follows: \(y_1\) can be solved from \({\mathbb {P}}_0={\mathbb {P}}_1=0\), and \({\mathbb {P}}_2=u_{10}+u_{11}y'_2\) is of order one in \(y_2\), which leads to infinitely many solutions. Thus, the system cannot have a unique solution. This shows the importance of the rank essential condition.

Example 5.10

In Example 3.19, the characteristic set of \([{\mathbb {P}}_0,{\mathbb {P}}_1,{\mathbb {P}}_2]\) w.r.t. the elimination ranking \(u_{ik}\prec y_2\prec y_1\) is \({\mathbf {R}},u_{11}u_{00}y'_2-u_{01}u_{10}y_2,u_{01}y_2y_1+u_{00}\). Here \(\mathcal {A}_0,\mathcal {A}_1,\mathcal {A}_2\) are rank essential but not normal rank essential, and the system \(\{{\mathbb {P}}_0,{\mathbb {P}}_1,{\mathbb {P}}_2\}\) does not have a unique solution under the condition \({\mathbf {R}}=0\).

With Theorem 5.3, property 1) of Theorem 1.2 is proved.

5.2 Differential Homogeneity of the Sparse Differential Resultant

Following Kolchin [30], we now introduce the concept of differentially homogenous polynomials.

Definition 5.11

A differential polynomial \(f \in \mathcal {F}\{z_{0},\ldots ,z_{n}\}\) is called differentially homogenous of degree \(m\) if for a new differential indeterminate \(\lambda \), we have \(f(\lambda z_{0},\lambda z_{1},\ldots ,\lambda z_{n})=\lambda ^{m}f(z_{0},z_{1},\ldots ,z_{n})\).
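
For example, the Wronskian \(f=z_0z_1'-z_0'z_1\) is differentially homogenous of degree 2, since \(f(\lambda z_0,\lambda z_1)=\lambda ^2f(z_0,z_1)\). This can be checked symbolically; the following SymPy sketch is our illustration and is not part of the original text.

```python
import sympy as sp

x = sp.symbols('x')
lam = sp.Function('lam')(x)
z0 = sp.Function('z0')(x)
z1 = sp.Function('z1')(x)

# The Wronskian f = z0*z1' - z0'*z1.
f = z0 * z1.diff(x) - z0.diff(x) * z1

# Substitute z_i -> lam*z_i; derivatives are taken after the substitution.
f_scaled = (lam * z0) * (lam * z1).diff(x) - (lam * z0).diff(x) * (lam * z1)

# The difference expands to 0, so f is differentially homogenous of degree 2.
print(sp.expand(f_scaled - lam**2 * f))   # 0
```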

The differential analog of Euler’s theorem for homogenous polynomials holds.

Theorem 5.12

[30]   \(f \in \mathcal {F}\{z_{0},\ldots ,z_{n}\}\) is differentially homogenous of degree \(m\) if and only if
$$\begin{aligned} \sum _{j=0}^{n} \sum _{k \in \mathbb {N}} {k+r \atopwithdelims ()r} z_{j}^{(k)} \frac{\partial f(z_{0},\ldots ,z_{n})}{\partial z_{j}^{(k+r)} } = \left\{ \begin{array}{ccc} mf &{} &{} r = 0 \\ 0 &{} &{} r \ne 0 \\ \end{array} \right. \end{aligned}$$
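
Continuing with the Wronskian \(f=z_0z_1'-z_0'z_1\) (degree \(m=2\), order 1), the two cases of the identity can be verified directly by treating \(z_j^{(k)}\) as independent algebraic indeterminates. The following SymPy sketch is our illustration, not part of the original text.

```python
import sympy as sp

# Treat z0, z0', z1, z1' as independent indeterminates.
z0, z0p, z1, z1p = sp.symbols('z0 z0p z1 z1p')
f = z0*z1p - z0p*z1      # differentially homogenous of degree m = 2, order 1

# r = 0: sum_{j,k} C(k,0) * z_j^(k) * df/dz_j^(k) should equal m*f = 2*f.
lhs_r0 = z0*f.diff(z0) + z0p*f.diff(z0p) + z1*f.diff(z1) + z1p*f.diff(z1p)
assert sp.expand(lhs_r0 - 2*f) == 0

# r = 1: sum_{j,k} C(k+1,1) * z_j^(k) * df/dz_j^(k+1) should equal 0
# (only k = 0 contributes, since f has order 1).
lhs_r1 = z0*f.diff(z0p) + z1*f.diff(z1p)
assert sp.expand(lhs_r1) == 0
print("Euler-type identities hold for f")
```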

Sparse differential resultants have the following property.

Theorem 5.13

The sparse differential resultant is differentially homogenous in each \({\mathbf {u}}_i\) which is the coefficient vector of \({\mathbb {P}}_i\).

Proof

Suppose \({\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_i)=h_i\ge 0\). We follow the notation of Sect. 3.2. By Corollary 3.12, \({\mathbf {R}}({\mathbf {u}};\zeta _0,\ldots ,\zeta _n)=0\). Differentiating this identity w.r.t. \(u_{ij}^{(k)}\,(j=1,\ldots ,l_i;\,k=0,\ldots ,h_i)\), we obtain
$$\begin{aligned} (k*):\qquad \overline{\frac{\partial {\mathbf {R}}}{\partial u_{i j}^{(k)}}}-\sum _{\sigma =k}^{h_i}{\sigma \atopwithdelims ()k}\, \overline{\frac{\partial {\mathbf {R}}}{\partial u_{i 0}^{(\sigma )}}}\left( \frac{M_{ij}(\eta )}{M_{i0}(\eta )}\right) ^{(\sigma -k)}=0,\qquad k=0,1,\ldots ,h_i. \end{aligned}$$
In the above equations, \(\overline{\frac{\partial {\mathbf {R}}}{\partial u_{i j}^{(k)}}}\) \((k=0,\ldots ,h_i; j=0,\ldots ,l_i)\) are obtained by replacing \(u_{i0}\) by \(\zeta _{i}\,(i=0, 1, \ldots , n)\) in each \(\frac{\partial {\mathbf {R}}}{\partial u_{i j}^{(k)}}\), respectively.
Now, let us consider \(\sum _{j=0}^{l_i} \sum _{k\ge 0} {k+r \atopwithdelims ()k}u_{i j}^{(k)}\frac{\partial {\mathbf {R}}}{\partial u_{i j}^{(k+r)}}\). Of course, it suffices to consider \(r\le h_i\). For each \(r\le h_i\) and each \(j\in \{1,\ldots ,l_i\}\),
$$\begin{aligned} 0&= (r*)\times {r \atopwithdelims ()r} u_{i j} +(r+1*)\times {r+1 \atopwithdelims ()r} u_{i j}'+\cdots +(h_i*)\times {h_i \atopwithdelims ()r} u_{i j}^{(h_i-r)}\\&= {r \atopwithdelims ()r}u_{i j}\overline{\frac{\partial {\mathbf {R}}}{\partial u_{i j}^{(r)}}}+{r+1 \atopwithdelims ()r}u_{i j}'\overline{\frac{\partial {\mathbf {R}}}{\partial u_{i j}^{(r+1)}}}+\cdots +{h_i \atopwithdelims ()r}u_{i j}^{(h_i-r)}\overline{\frac{\partial {\mathbf {R}}}{\partial u_{i j}^{(h_i)}}}+\overline{\frac{\partial {\mathbf {R}}}{\partial u_{i0}^{(r)}}}\Big (-u_{i j} \frac{M_{ij}(\eta )}{M_{i0}(\eta )}\Big ) \\&\quad +\,\overline{\frac{\partial {\mathbf {R}}}{\partial u_{i0}^{(r+1)}}}\Big (-{r+1 \atopwithdelims ()r}u_{i j} \left[ \frac{M_{ij}(\eta )}{M_{i0}(\eta )}\right] '-{r+1 \atopwithdelims ()r}u_{i j}'\frac{M_{ij}(\eta )}{M_{i0}(\eta )}\Big ) + \cdots \\&\quad +\overline{\frac{\partial {\mathbf {R}}}{\partial u_{i0}^{(h_i)}}}\Big (-{h_i \atopwithdelims ()r}u_{i j} \left[ \frac{M_{ij}(\eta )}{M_{i0}(\eta )}\right] ^{(h_i-r)} -{r+1 \atopwithdelims ()r}{h_i \atopwithdelims ()r+1}u_{i j}' \left[ \frac{M_{ij}(\eta )}{M_{i0}(\eta )}\right] ^{(h_i-r-1)}-\cdots \\&\quad -{h_i \atopwithdelims ()r}{h_i \atopwithdelims ()h_i}u_{i j}^{(h_i-r)} \frac{M_{ij}(\eta )}{M_{i0}(\eta )}\Big ) \\&= {r\atopwithdelims ()r}u_{i j}\overline{\frac{\partial {\mathbf {R}}}{\partial u_{i j}^{(r)}}}+{r+1 \atopwithdelims ()r}u_{i j}'\overline{\frac{\partial {\mathbf {R}}}{ \partial u_{i j}^{(r+1)}}}+\cdots +{h_i \atopwithdelims ()r}u_{i j}^{(h_i-r)}\overline{\frac{\partial {\mathbf {R}}}{\partial u_{i j}^{(h_i)}}} +{r \atopwithdelims ()r}\overline{\frac{\partial {\mathbf {R}}}{\partial u_{i0}^{(r)}}}\big (-u_{i j} \frac{M_{ij}(\eta )}{M_{i0}(\eta )}\big ) \\&\quad +\,{r+1 \atopwithdelims ()r}\overline{\frac{\partial {\mathbf {R}}}{\partial u_{i0}^{(r+1)}}}\Big (-u_{i j} \frac{M_{ij}(\eta )}{M_{i0}(\eta )}\Big )'+\cdots +{h_i \atopwithdelims ()r}\overline{\frac{\partial {\mathbf {R}}}{\partial u_{ i0}^{(h_i)}}}\Big (-u_{i j}\frac{M_{ij}(\eta )}{M_{i0}(\eta )}\Big )^{(h_i-r)}. \end{aligned}$$
It follows that \(\sum _{j=1}^{l_i} {r \atopwithdelims ()r}u_{i j}\overline{\frac{\partial {\mathbf {R}}}{\partial u_{i j}^{(r)}}}+\sum _{j=1}^{l_i} {r+1 \atopwithdelims ()r}u_{i j}'\overline{\frac{\partial {\mathbf {R}}}{\partial u_{i j}^{(r+1)}}}+\cdots +\sum _{j=1}^{l_i} {h_i \atopwithdelims ()r}u_{i j}^{(h_i-r)}\overline{\frac{\partial {\mathbf {R}}}{\partial u_{i j}^{(h_i)}}}+{r \atopwithdelims ()r}\zeta _{i}\overline{\frac{\partial {\mathbf {R}}}{\partial u_{i0}^{(r)}}}\) \(+{r+1 \atopwithdelims ()r}\) \(\zeta _{i}'\overline{\frac{\partial {\mathbf {R}}}{\partial u_{i0}^{(r+1)}}}+\cdots +{h_i \atopwithdelims ()r}\zeta _{i}^{(h_i-r)}\overline{\frac{\partial {\mathbf {R}}}{\partial u_{i0}^{(h_i)}}}=0\).
By Corollary 3.12, \(G=\sum _{k\ge 0}\sum _{j=0}^{l_i} {r+k \atopwithdelims ()r}u_{i j}^{(k)}\frac{\partial {\mathbf {R}}}{\partial u_{i j}^{(r+k)}}\in \mathrm{sat}({\mathbf {R}})\). Since \(G\in \mathrm{sat}({\mathbf {R}})\) and \({\mathrm{ord}}(G)\le {\mathrm{ord}}({\mathbf {R}})\), \(G\) must be divisible by \({\mathbf {R}}\). In the case \(r=0\), \(\sum \limits _{j=0}^{l_i} \sum \limits _{k=0}^{h_i} u_{i j}^{(k)}\frac{\partial {\mathbf {R}}}{\partial u_{i j}^{(k)}}=m\cdot {\mathbf {R}}\) for some \(m\in \mathbb {Z}\), while in the case \(r>0\), a nonzero \(G\) cannot be divisible by \({\mathbf {R}}\), so \(G\) must be identically zero. From the above, we conclude that
$$\begin{aligned} \sum _{j=0}^{l_i} \sum _{k\ge 0} {k+r \atopwithdelims ()r}u_{i j}^{(k)}\frac{\partial {\mathbf {R}}}{\partial u_{i j}^{(k+r)}}=\left\{ \begin{array}{ccc} 0&{}\quad &{} r\ne 0 \\ m{\mathbf {R}}&{}\quad &{} r=0 \end{array} \right. \end{aligned}$$
By Theorem 5.12, \({\mathbf {R}}({\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n)\) is differentially homogenous in each \({\mathbf {u}}_i\). \(\square \)

With Theorem 5.13, property 2) of Theorem 1.2 is proved.

5.3 Poisson Product Formulas

In this section, we prove formulas for sparse differential resultants, which are similar to the Poisson product formulas for multivariate resultants [37].

Denote \({\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_i)\) by \(h_i\) (\(i=0,\ldots ,n\)), and suppose \(h_0\ge 0\). Let \(\tilde{{\mathbf {u}}}=\cup _{i=0}^n {\mathbf {u}}_i\setminus \{u_{00}\}\) and \(\mathbb {Q}_{0}=\mathbb {Q}\langle \tilde{{\mathbf {u}}}\rangle (u_{00}^{(0)},\ldots ,u_{00}^{(h_0-1)})\). Consider \({\mathbf {R}}\) as an irreducible algebraic polynomial \(R(u_{00}^{(h_0)})\) in \(\mathbb {Q}_{0}[u_{00}^{(h_0)}]\). In a suitable algebraic extension field of \(\mathbb {Q}_{0}\), \(R(u_{00}^{(h_0)})=0\) has \(t_0=\mathrm{deg}(R,u_{00}^{(h_0)})=\mathrm{deg}({\mathbf {R}},u_{00}^{(h_0)})\) roots \(\gamma _{1},\ldots ,\gamma _{t_0}\). Thus
$$\begin{aligned} {\mathbf {R}}({\mathbf {u}}_{0},{\mathbf {u}}_{1},\ldots ,{\mathbf {u}}_{n}) =A\prod ^{t_0}_{\tau =1}(u_{00}^{(h_0)}-\gamma _{\tau }) \end{aligned}$$
(16)
where \(A\in \mathbb {Q}\langle {\mathbf {u}}_1,\ldots ,{\mathbf {u}}_n\rangle [{\mathbf {u}}_0^{[h_0]}\backslash u_{00}^{(h_0)}]\). For each \(\tau \) such that \(1\le \tau \le t_0\), let
$$\begin{aligned} \mathbb {Q}_\tau =\mathbb {Q}_0(\gamma _{\tau })=\mathbb {Q}\langle \tilde{{\mathbf {u}}} \rangle (u_{00}^{(0)},\ldots ,u_{00}^{(h_0-1)},\gamma _{\tau }) \end{aligned}$$
(17)
be an algebraic extension field of \(\mathbb {Q}_0\) defined by \(R(u_{00}^{(h_0)})=0\). We will define a derivation operator \(\delta _{\tau }\) on \(\mathbb {Q}_\tau \) so that \(\mathbb {Q}_\tau \) becomes a \(\delta _\tau \)-field. This can be done in a very natural way. For \(e\in {\mathbb Q}\langle \tilde{{\mathbf {u}}}\rangle \), define \(\delta _{\tau } e = \delta e=e'\). Define \(\delta _{\tau }^{i} u_{00} = u_{00}^{(i)}\) for \(i=0,\ldots ,h_0-1\) and
$$\begin{aligned} \delta _{\tau }^{h_0} u_{00} = \gamma _{\tau }. \end{aligned}$$
Since \({\mathbf {R}}\), regarded as an algebraic polynomial \(R\) in \(u_{00}^{(h_0)}\), is a minimal polynomial of \(\gamma _{\tau }\), \(\hbox {{S}}_{{\mathbf {R}}}=\frac{\partial {\mathbf {R}}}{\partial u_{00}^{(h_0)}}\) does not vanish at \(u_{00}^{(h_0)}=\gamma _{\tau }\). Now, we define \(\delta _{\tau }^{i} u_{00}\) for \(i> h_0\) by induction. Firstly, since \(R(\gamma _{\tau })=0\), \(\delta _\tau (R(\gamma _{\tau }))=\hbox {{S}}_{{\mathbf {R}}}\big |_{u_{00}^{(h_0)}=\gamma _\tau }\delta _\tau (\gamma _{\tau })+T\big |_{u_{00}^{(h_0)}=\gamma _\tau }=0\), where \(T={\mathbf {R}}'-\hbox {{S}}_{\mathbf {R}}u_{00}^{(h_0+1)}\). We define \(\delta _{\tau }^{h_0+1} u_{00}\) to be \(\delta _\tau (\gamma _{\tau })=-\frac{T}{\hbox {{S}}_{{\mathbf {R}}}}\Big |_{u_{00}^{(h_0)}=\gamma _{\tau }}\). Supposing that \(\delta _{\tau }^{h_0+j} u_{00}\) has been defined for all \(j <i\), we now define \(\delta _{\tau }^{h_0+i} u_{00}\). Since \({\mathbf {R}}^{(i)}=\hbox {{S}}_{{\mathbf {R}}}u_{00}^{(h_0+i)}+T_{i}\) is linear in \(u_{00}^{(h_0+i)}\), we define \(\delta _{\tau }^{h_0+i} u_{00}\) to be \(-\frac{T_{i}}{\hbox {{S}}_{{\mathbf {R}}}}\Big |_{u_{00}^{(h_0+j)}=\delta _\tau ^{h_0+j}u_{00}, j <i}\).

In this way, \((\mathbb {Q}_\tau ,\delta _\tau )\) is a differential field which can be considered as a finitely generated differential extension field of \(\mathbb {Q}\langle \tilde{{\mathbf {u}}}\rangle \). Recall that \(\mathbb {Q}\langle \tilde{{\mathbf {u}}}\rangle \) is a finitely generated differential extension field of \(\mathbb {Q}\) contained in \(\mathcal {E}\). By the definition of universal differential extension field, there exists a differential extension field \(\mathcal {G}\subset \mathcal {E}\) of \(\mathbb {Q}\langle \tilde{{\mathbf {u}}}\rangle \) and a differential isomorphism \(\varphi _\tau \) over \(\mathbb {Q}\langle \tilde{{\mathbf {u}}}\rangle \) from \((\mathbb {Q}_\tau ,\delta _\tau )\) to \((\mathcal {G},\delta )\). Summing up the above results, we have

Lemma 5.14

\((\mathbb {Q}_\tau , \delta _\tau )\) defined above is a finitely generated differential extension field of \(\mathbb {Q}\langle \tilde{{\mathbf {u}}}\rangle \), which is differentially \(\mathbb {Q}\langle \tilde{{\mathbf {u}}}\rangle \)-isomorphic to a subfield of \(\mathcal {E}\).

Let \(G\) be a differential polynomial in \(\mathbb {Q}\{{\mathbf {u}}_0,{\mathbf {u}}_1,\ldots ,{\mathbf {u}}_n\} = \mathbb {Q}\{\tilde{{\mathbf {u}}},u_{00}\}\). For convenience, by the symbol \(G\Big |_{u_{00}^{(h_0)}=\gamma _{\tau }}\), we mean substituting \(u_{00}^{(h_0+i)}\) by \(\delta _\tau ^{i}\gamma _{\tau }\,(i\ge 0)\) in \(G\). Similarly, by saying \(G\) vanishes at \(u_{00}^{(h_0)}=\gamma _{\tau }\), we mean \(G\Big |_{u_{00}^{(h_0)}=\gamma _{\tau }}=0\). It is easy to prove the following lemma.

Lemma 5.15

Let \(G\) be a differential polynomial in \(\mathbb {Q}\{\tilde{{\mathbf {u}}},u_{00}\}\). Then \(G\in \mathrm{sat}({\mathbf {R}})\) if and only if \(G\) vanishes at \(u_{00}^{(h_0)}=\gamma _{\tau }\).

When a differential polynomial \(G\in \mathbb {Q}\langle \tilde{{\mathbf {u}}}\rangle \{{\mathbb {Y}}\}\) vanishes at a point \(\eta \in \mathbb {Q}_\tau ^n\), it is easy to see that \(G\) vanishes at \(\varphi _\tau (\eta )\in \mathcal {E}^n\). For convenience, by saying \(\eta \) is a point in a differential variety \(V\) over \(\mathbb {Q}\langle \tilde{{\mathbf {u}}}\rangle \), we mean \(\varphi _\tau (\eta )\in V\).

With these preparations, we now give the following theorem.

Theorem 5.16

Let \({\mathbf {R}}({\mathbf {u}}_{0},{\mathbf {u}}_{1},\ldots ,{\mathbf {u}}_{n})\) be the sparse differential resultant of \({\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\) given in (3) with \({\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_{0})=h_0\ge 0\). Let \(\mathrm{deg}({\mathbf {R}},u_{00}^{(h_0)})=t_0\). Then there exist extension fields \((\mathbb {Q}_\tau ,\delta _\tau )\) of \((\mathbb {Q}\langle \tilde{{\mathbf {u}}}\rangle ,\delta )\) and \(\xi _{\tau k}\in \mathbb {Q}_\tau \) for \(\tau =1,\ldots ,t_0\) and \(k=1,\ldots ,l_0\) such that
$$\begin{aligned} {\mathbf {R}}=A\prod _{\tau =1}^{t_0} \left( u_{00}+\sum \limits _{k=1}^{l_0} u_{0 k}\xi _{\tau k}\right) ^{(h_0)}, \end{aligned}$$
(18)
where \(A\) is a polynomial in \(\mathbb {Q}\langle {\mathbf {u}}_1,\ldots ,{\mathbf {u}}_n\rangle [{\mathbf {u}}_0^{[h_0]}\backslash u_{00}^{(h_0)}]\). Note that equation (18) is formal and should be understood in the following precise meaning: \((u_{00}+\sum _{k=1}^{l_0} u_{0k}\xi _{\tau k})^{(h_0)} \mathop {=}\limits ^{\triangle } \delta ^{h_0}u_{00}+\delta _\tau ^{h_0}(\sum _{k=1}^{l_0} u_{0k}\xi _{\tau k})\).

Proof

Since \({\mathbf {R}}\) is irreducible, \({\mathbf {R}}_{\tau 0}=\frac{\partial {\mathbf {R}}}{\partial u_{00}^{(h_0)}}\Big | _{u_{00}^{(h_0)}=\gamma _{\tau }} \ne 0\). Let \(\xi _{\tau \rho }={\mathbf {R}}_{\tau \rho }\big /{\mathbf {R}}_{\tau 0} (\rho =1,\ldots ,l_0)\), where \({\mathbf {R}}_{\tau \rho }=\frac{\partial {\mathbf {R}}}{\partial u_{0 \rho }^{(h_0)}}\Big | _{u_{00}^{(h_0)}=\gamma _{\tau }}\). Note that \({\mathbf {R}}_{\tau \rho }\) and \(\xi _{\tau \rho }\) are in \(\mathbb {Q}_\tau \). We will prove
$$\begin{aligned} \gamma _{\tau }=-\delta _\tau ^{h_0}(u_{01}\xi _{\tau 1}+u_{02}\xi _{\tau 2}+\cdots +u_{0l_0}\xi _{\tau l_0}). \end{aligned}$$
Multiplying (11) by \(u_{0\rho }\) for \(\rho =1,\ldots ,l_0\), adding the resulting equations, and using (9), we have
$$\begin{aligned} \sum _{\rho =1}^{l_0} u_{0\rho }\overline{\frac{\partial {\mathbf {R}}}{\partial u_{0\rho }^{(h_0)}}}+ \overline{\frac{\partial {\mathbf {R}}}{\partial u_{00}^{(h_0)}}} \bigg (-\sum _{\rho =1}^{l_0} u_{0\rho }\frac{N_{0\rho }(\eta )}{N_{00}(\eta )}\bigg ) =\sum _{\rho =1}^{l_0} u_{0\rho }\overline{\frac{\partial {\mathbf {R}}}{\partial u_{0\rho }^{(h_0)}}} +\zeta _0 \overline{\frac{\partial {\mathbf {R}}}{\partial u_{00}^{(h_0)}}}=0. \end{aligned}$$
By Corollary 3.12, \(f=\sum _{\rho =1}^{l_0} u_{0\rho }\frac{\partial {\mathbf {R}}}{\partial u_{0\rho }^{(h_0)}}+u_{00}\frac{\partial {\mathbf {R}}}{\partial u_{00}^{(h_0)}} \in \,\mathrm{sat}({\mathbf {R}})\). Since \(f\) is of order not greater than \({\mathbf {R}}\), it must be divisible by \({\mathbf {R}}\). Since \(f\) and \({\mathbf {R}}\) have the same degree, there exists an \(a\in \mathbb {Q}\) such that
$$\begin{aligned} f=\sum _{\rho =1}^{l_0} u_{0\rho }\frac{\partial {\mathbf {R}}}{\partial u_{0\rho }^{(h_0)}}+u_{00}\frac{\partial {\mathbf {R}}}{\partial u_{00}^{(h_0)}} =a{\mathbf {R}}. \end{aligned}$$
(19)
Setting \(u_{00}^{(h_0)}=\gamma _{\tau }\) in both sides of \(f=a{\mathbf {R}}\), we have \(\sum _{\rho =1}^{l_0} u_{0\rho }{\mathbf {R}}_{\tau \rho }+u_{00}{\mathbf {R}}_{\tau 0}=0\). Hence, as an algebraic equation, we have
$$\begin{aligned} u_{00}+\sum _{\rho =1}^{l_0} u_{0\rho }\xi _{\tau \rho }=0 \end{aligned}$$
under the constraint \(u_{00}^{(h_0)}=\gamma _\tau \). Equivalently, the above equation is valid in \((\mathbb {Q}_\tau ,\delta _\tau )\). As a consequence, \(\gamma _{\tau }=-\delta _\tau ^{h_0}(\sum _{\rho =1}^{l_0} u_{0\rho }\xi _{\tau \rho })\). Substituting these expressions into (16) proves the theorem. \(\square \)

Note that the quantities \(\xi _{\tau \rho }\) are not expressions in terms of the \(y_i\). In the following theorem, we will show that if the \(\mathcal {A}_i\,(i=0,\ldots ,n)\) satisfy certain conditions, Theorem 5.16 can be strengthened so that the \(\xi _{\tau \rho }\) become products of certain values of the \(y_i\) and their derivatives.

Theorem 5.17

If \({\mathbb {P}}=\{{\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\}\) is normal rank essential, then there exist \(\eta _{\tau j}\in {\mathbb Q}_{\tau }\)  \((\tau =1,\ldots ,t_0;\) \(j=1,\ldots ,n)\) such that
$$\begin{aligned} {\mathbf {R}}&= A\prod _{\tau =1}^{t_0} \bigg (u_{00}+\sum _{k=1}^{l_0}u_{0k}\frac{M_{0k} (\eta _\tau )}{M_{00}(\eta _\tau )}\bigg )^{(h_0)}\\&= A\prod _{\tau =1}^{t_0}\bigg [\frac{{\mathbb {P}}_0(\eta _\tau )}{M_{00}(\eta _\tau )}\bigg ]^{(h_0)}, \quad \text { where } \eta _\tau =(\eta _{\tau 1},\ldots ,\eta _{\tau n}). \nonumber \end{aligned}$$
(20)
Moreover, each \(\eta _\tau \,(\tau =1,\ldots ,t_0)\) is a common non-polynomial differential zero of \({\mathbb {P}}_1,\ldots ,{\mathbb {P}}_n\).

Proof

Since \({\mathbb {P}}\) is rank essential, each \({\mathbf {u}}_i\) effectively occurs in \({\mathbf {R}}\), so each \(h_i\ge 0.\) By Theorem 3.9, \(\theta =(\eta ;\zeta _0,u_{01},\ldots ,u_{0l_0};\ldots ; \zeta _n,u_{n1},\ldots ,u_{nl_n})\) is a generic point of \(\mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\). By Lemma 5.6, there exist \(S_j\) and \(T_j\) which are nonnegative power products of \(\frac{\partial {\mathbf {R}}}{\partial u_{ik}^{(h_i)}}\) such that \(S_jy_j-T_j\in \mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\). That is, \(\eta _j=\overline{T_j}/\overline{S_j}\) for \(j=1,\ldots ,n\), where \(\overline{T_j} \) and \( \overline{S_j}\) are obtained by substituting \((u_{00},\ldots ,u_{n0})=(\zeta _0,\ldots ,\zeta _n)\) in \(T_j\) and \(S_j\), respectively. Since \({\mathbf {R}}\) is an irreducible polynomial, none of the \(\frac{\partial {\mathbf {R}}}{\partial u_{ik}^{(h_i)}}\) vanishes at \(u_{00}^{(h_0)}=\gamma _\tau \). Let \(\eta _{\tau j}=\frac{T_j}{S_j}\big |_{u_{00}^{(h_0)}=\gamma _\tau }\) and \(\eta _\tau =(\eta _{\tau 1},\ldots ,\eta _{\tau n})\). By (11), \(\frac{N_{0k}(\eta )}{N_{00}(\eta )}= \prod \limits _{j=1}^n\prod \limits _{k=0}^{s_0}(\eta _j^{(k)})^{(\alpha _{0k}-\alpha _{00})_{jk}} =\overline{\frac{\partial {\mathbf {R}}}{\partial u_{0k}^{(h_0)}}}\Big /\overline{\frac{\partial {\mathbf {R}}}{\partial u_{00}^{(h_0)}}}\). So \(\prod \limits _{j=1}^n\prod \limits _{k=0}^{s_0}\Big [\Big (\frac{\overline{T_j}}{\overline{S_j}}\Big )^{(k)}\Big ]^{(\alpha _{0k}-\alpha _{00})_{jk}} =\overline{\frac{\partial {\mathbf {R}}}{\partial u_{0k}^{(h_0)}}}\Big /\overline{\frac{\partial {\mathbf {R}}}{\partial u_{00}^{(h_0)}}}\). Let \(\mathcal {S}\) be the differential polynomial set consisting of \(\frac{\partial {\mathbf {R}}}{\partial u_{ik}^{(h_i)}}\) and \((S_j)^{m+1}\big (\frac{T_j}{S_j}\big )^{(m)}\) for all \(i=0,\ldots ,n;k=0,\ldots ,l_i;j=1,\ldots ,n\) and \(m\in \mathbb {N}\). By Corollary 3.12, there exist a finite subset \(\mathcal {S}_1\) of \(\mathcal {S}\) and \(a\in \mathbb {N}\) such that \(H=\big (\prod \limits _{S\in \mathcal {S}_1}S\big )^a \Big (\prod \limits _{j=1}^n\prod \limits _{k=0}^{s_0} \big [\big (T_j/S_j\big )^{(k)}\big ]^{(\alpha _{0k}-\alpha _{00})_{jk}} -\frac{\partial {\mathbf {R}}}{\partial u_{0k}^{(h_0)}}\Big /\frac{\partial {\mathbf {R}}}{\partial u_{00}^{(h_0)}}\Big )\in \mathrm{sat}({\mathbf {R}})\). By Lemma 5.15, \(H\) vanishes at \(u_{00}^{(h_0)}=\gamma _\tau .\) And by the proof of Theorem 5.7, \(\mathcal {S}\cap \mathrm{sat}({\mathbf {R}})=\emptyset \). So \(\xi _{\tau k}=\frac{N_{0k}(\eta _\tau )}{N_{00}(\eta _\tau )}\). By Theorem 5.16, \({\mathbf {R}}=A\prod _{\tau =1}^{t_0} (u_{00}+\sum \limits _{k=1}^{l_0} u_{0 k}\xi _{\tau k})^{(h_0)}\). Thus, (20) follows.

To prove the second part of this theorem, we need first to show that \(\delta _\tau ^{k}\eta _{\tau j}\ne 0\) for each \(k\ge 0.\) Suppose the contrary, that is, there exists some \(k\) such that \(\delta _\tau ^{k}\eta _{\tau j}=0\). From \(\eta _{\tau j}=\frac{T_j}{S_j}\big |_{u_{00}^{(h_0)}=\gamma _\tau }\), \(\delta _\tau ^{k}\eta _{\tau j}=\big (\frac{T_j}{S_j}\big )^{(k)}\big |_{u_{00}^{(h_0)}=\gamma _\tau }=0\). Thus, \(S_j^{k+1}\big (\frac{T_j}{S_j}\big )^{(k)}\in \mathrm{sat}({\mathbf {R}})\). It follows that \(\eta _j^{(k)}=\big (\frac{\overline{T_j}}{\overline{S_j}}\big )^{(k)}=0\), a contradiction to the fact that \(\eta _j\) is a differential indeterminate.

Following the above procedure, we can show that \(\frac{N_{ik}(\eta _\tau )}{N_{i0}(\eta _\tau )}=\widehat{\frac{\partial {\mathbf {R}}}{\partial u_{ik}^{(h_i)}}}\big /\widehat{\frac{\partial {\mathbf {R}}}{\partial u_{i0}^{(h_i)}}}\) where \(\widehat{\frac{\partial {\mathbf {R}}}{\partial u_{ik}^{(h_i)}}}=\frac{\partial {\mathbf {R}}}{\partial u_{ik}^{(h_i)}}\Big |_{u_{00}^{(h_0)}=\gamma _\tau }\). Similar to (19), we have \(\sum _{k=0}^{l_i}u_{ik}\frac{\partial {\mathbf {R}}}{\partial u_{ik}^{(h_i)}}=b{\mathbf {R}}\) for some \(b\) in \({\mathbb Q}\). So, for each \(i\ne 0\), \(\sum _{k=0}^{l_i}u_{ik}\widehat{\frac{\partial {\mathbf {R}}}{\partial u_{ik}^{(h_i)}}}=0.\) It follows that for each \(i\ne 0\), \({\mathbb {P}}_i(\eta _\tau )=\sum _{k=0}^{l_i}u_{ik}N_{ik}(\eta _\tau ) =\frac{N_{i0}(\eta _\tau )}{\widehat{\frac{\partial {\mathbf {R}}}{\partial u_{i0}^{(h_i)}}}}\bigg (\sum _{k=0}^{l_i}u_{ik}\widehat{\frac{\partial {\mathbf {R}}}{\partial u_{ik}^{(h_i)}}}\bigg )=0\). So each \(\eta _\tau \) is a common non-polynomial differential zero of \({\mathbb {P}}_1,\ldots ,{\mathbb {P}}_n\). \(\square \)

Under the conditions of Theorem 5.17, we further have the following result.

Theorem 5.18

The elements \(\eta _\tau \,(\tau =1,\ldots ,t_0)\) defined in Theorem 5.17 are generic points of the prime ideal \(([{\mathbb {P}}_1^{\text {N}},\ldots ,{\mathbb {P}}_n^{\text {N}}]\hbox {{:}}\mathbb {m})_{ \mathbb {Q}\langle \hat{{\mathbf {u}}}\rangle \{{\mathbb {Y}}\}}\), where \(\hat{{\mathbf {u}}}=\cup _{i=1}^n{\mathbf {u}}_i\).

Proof

Let \({\mathcal {J}}=([{\mathbb {P}}_1^{\text {N}},\ldots ,{\mathbb {P}}_n^{\text {N}}]\hbox {{:}}\mathbb {m})_{\mathbb {Q}\langle \hat{{\mathbf {u}}}\rangle \{{\mathbb {Y}}\}}\) and \({\mathcal {J}}_0=([{\mathbb {P}}_1^{\text {N}},\ldots ,{\mathbb {P}}_n^{\text {N}}]\hbox {{:}}\mathbb {m})_{\mathbb {Q}\{{\mathbb {Y}},\hat{{\mathbf {u}}}\}}\). Similar to the proof of Theorem 3.9, it is easy to show that \({\mathcal {J}}_0\) is a prime differential ideal. Since \({\mathbb {P}}\) is rank essential, \({\mathcal {J}}_0\cap \mathbb {Q}\{\hat{{\mathbf {u}}}\}=[0]\). Thus, \({\mathcal {J}}=([{\mathcal {J}}_0])_{\mathbb {Q}\langle \hat{{\mathbf {u}}}\rangle \{{\mathbb {Y}}\}}\) is a prime differential ideal and \({\mathcal {J}}\cap \mathbb {Q}\{{\mathbb {Y}},\hat{{\mathbf {u}}}\}={\mathcal {J}}_0\). Let \(\xi =(\xi _1,\ldots ,\xi _n)\) be a generic point of \({\mathcal {J}}\). Then \((\xi ;\hat{{\mathbf {u}}})\) is a generic point of \({\mathcal {J}}_0\). Let \(\beta =-\sum _{k=1}^{l_0}u_{0k}N_{0k}(\xi )/N_{00}(\xi )\). Then \((\xi ;\beta ,u_{01},\ldots ,u_{0l_0};\hat{{\mathbf {u}}})\) is a generic point of \(\mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}=([{\mathbb {P}}_0^{\text {N}},{\mathbb {P}}_1^\text {N},\ldots ,{\mathbb {P}}_n^{\text {N}}]\hbox {{:}}\mathbb {m})_{\mathbb {Q}\{{\mathbb {Y}};{\mathbf {u}}_0,\hat{{\mathbf {u}}}\}}\). Since \(\mathrm{sat}({\mathbf {R}})=\mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\cap \mathbb {Q}\{{\mathbf {u}}_0,\hat{{\mathbf {u}}}\}\), \(\gamma =(\beta ,u_{01},\ldots ,u_{0l_0};\hat{{\mathbf {u}}})\) is a generic point of \(\mathrm{sat}({\mathbf {R}})\). By Lemma 5.6, for \(j=1,\ldots ,n\), \(S_jy_j-T_j\in \mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\). Then, \(\xi _j=\frac{T_j}{S_j}(\gamma )\).

By Theorem 5.17, \(\eta _\tau \) is a common non-polynomial solution of \({\mathbb {P}}_i^{\text {N}}=0\,(i=1,\ldots ,n)\) and thus also a differential zero of \({\mathcal {J}}\). Recall \(\eta _{\tau j}=\frac{T_j}{S_j}\big |_{u_{00}^{(h_0)}=\gamma _\tau }\). If \(f\) is any differential polynomial in \(\mathbb {Q}\langle \hat{{\mathbf {u}}}\rangle \{{\mathbb {Y}}\}\) such that \(f(\eta _\tau )=0\), then \(f(\frac{T_1}{S_1},\ldots ,\frac{T_n}{S_n})\big |_{u_{00}^{(h_0)}=\gamma _\tau }=0\). There exist \(a_j\in \mathbb {N}\) such that \(g=\prod _jS_j^{a_j}f(\frac{T_1}{S_1},\ldots ,\frac{T_n}{S_n})\in \mathbb {Q}\{{\mathbf {u}}_0,\hat{{\mathbf {u}}}\}.\) Then \(g|_{u_{00}^{(h_0)}=\gamma _\tau }=0\). By Lemma 5.15, \(g\in \mathrm{sat}({\mathbf {R}})\) while \(S_j\not \in \mathrm{sat}({\mathbf {R}})\). As a consequence, \(g(\gamma )=0\) and \(S_j(\gamma )\ne 0\). It follows that \(f(\xi )=f(\xi _1,\ldots ,\xi _n)=f(\frac{T_1}{S_1}(\gamma ),\ldots ,\frac{T_n}{S_n}(\gamma ))=0\) and hence \(f\in {\mathcal {J}}\), since \(\xi \) is a generic point of \({\mathcal {J}}\). Thus, \(\eta _\tau \) is a generic point of \({\mathcal {J}}\). \(\square \)

With Theorems 5.16, 5.17, and 5.18, property 3) of Theorem 1.2 is proved.

5.4 Differential Toric Variety and Sparse Differential Resultant

In this section, we will introduce the concept of differential toric variety and establish its relation with the sparse differential resultant.

We will deal with the special case when all the \(\mathcal {A}_i\) coincide with each other, i.e., \(\mathcal {A}_0=\cdots =\mathcal {A}_n=\mathcal {A}\). In this case, \(\mathcal {A}\) is said to be Laurent differentially essential when \(\mathcal {A}_0,\ldots ,\mathcal {A}_n\) form a Laurent differentially essential system. Let \(\mathcal {A}=\{M_0=({\mathbb {Y}}^{[o]})^{\alpha _{0}}, M_1=({\mathbb {Y}}^{[o]})^{\alpha _{1}},\ldots ,M_l=({\mathbb {Y}}^{[o]})^{\alpha _{l}}\}\) be Laurent differentially essential where \(\alpha _k\in \mathbb {Z}^{n(o+1)}\). Then by Definition 3.6, \(l\ge n\) and there exist indices \(k_1,\ldots ,k_n\in \{1,\ldots ,l\}\) such that \(\frac{({\mathbb {Y}}^{[o]})^{\alpha _{k_1}}}{({\mathbb {Y}}^{[o]})^{\alpha _{0}}},\ldots , \frac{({\mathbb {Y}}^{[o]})^{\alpha _{k_n}}}{({\mathbb {Y}}^{[o]})^{\alpha _{0}}}\) are differentially independent over \(\mathbb {Q}\). Let
$$\begin{aligned} {\mathbb {P}}_i=u_{i0}M_0+u_{i1}M_1+\cdots +u_{il}M_l\quad (i=0,\ldots ,n) \end{aligned}$$
(21)
be \(n+1\) generic Laurent differential polynomials w.r.t. \({\mathcal A}\).
Consider the following map
$$\begin{aligned} \phi _{\mathcal {A}}:\,(\mathcal {E}^\wedge )^n\,\longrightarrow \,\mathbf P (l) \end{aligned}$$
defined by
$$\begin{aligned} \phi _{\mathcal {A}}(\xi _1,\ldots ,\xi _n)=((\xi ^{[o]})^{\alpha _{0}}, (\xi ^{[o]})^{\alpha _{1}},\ldots ,(\xi ^{[o]})^{\alpha _{l}}) \end{aligned}$$
(22)
where \(\mathbf P (l)\) is the \(l\)-dimensional differential projective space over \(\mathcal {E}\) and \(\xi =(\xi _1,\ldots ,\xi _n)\) \(\in (\mathcal {E}^\wedge )^n\). Note that \(((\xi ^{[o]})^{\alpha _{0}},(\xi ^{[o]})^{\alpha _{1}},\ldots ,(\xi ^{[o]})^{\alpha _{l}})\) is never the zero vector since \(\xi _i\in \mathcal {E}^\wedge \) for all \(i\). Thus, \(\phi _{\mathcal {A}}\) is well defined on \( (\mathcal {E}^\wedge )^n\), though the image of \(\phi _{\mathcal {A}}\) is not necessarily a differential projective variety of \(\mathbf P (l)\). Now we give the definition of differential toric variety.

Definition 5.19

The Kolchin projective differential closure of the image of \(\phi _{\mathcal {A}}\) is defined to be the differential toric variety w.r.t. \(\mathcal {A}\), denoted by \(X_\mathcal {A}\). That is, \(X_\mathcal {A}=\overline{\phi _{\mathcal {A}}\big ((\mathcal {E}^\wedge )^n\big )}\).

Then we have the following theorem.

Theorem 5.20

\(X_\mathcal {A}\) is an irreducible projective differential variety over \(\mathbb {Q}\) of dimension \(n\).

Proof

Denote \({\mathbb {P}}_i^{\text {N}}=\sum _{k=0}^lu_{ik}N_{k}\,(i=0,\ldots ,n)\) and let
$$\begin{aligned} {\mathcal {J}}=([N_0z_1-N_1z_0,\ldots ,N_0z_l-N_lz_0]\hbox {{:}}\mathbb {m})_{\mathbb {Q}\{{\mathbb {Y}};z_0,z_1,\ldots ,z_l\}} \end{aligned}$$
where \(\mathbb {m}\) is the set of all monomials in \({\mathbb {Y}}\). Let \(\eta \) be a generic point of \([0]_{\mathbb {Q}\{{\mathbb {Y}}\}}\) and \(v\) a differential indeterminate over \(\mathbb {Q}\langle \eta \rangle \). Let \(\theta =(v,\frac{N_1(\eta )}{N_0(\eta )}v,\ldots ,\frac{N_l(\eta )}{N_0(\eta )}v)\). We claim that \((\eta ;\theta )\) is a generic point of \({\mathcal {J}}\), from which it follows that \({\mathcal {J}}\) is a prime differential ideal. Indeed, on the one hand, since each \(N_0z_i-N_iz_0\,(i=1,\ldots ,l)\) vanishes at \((\eta ;\theta )\) and \(\eta \) annuls none of the elements of \(\mathbb {m}\), \((\eta ;\theta )\) is a common zero of \({\mathcal {J}}\). On the other hand, for any \(f\in \mathbb {Q}\{{\mathbb {Y}};z_0,z_1,\ldots ,z_l\}\) which vanishes at \((\eta ;\theta )\), let \(f_1\) be the differential remainder of \(f\) w.r.t. \(N_0z_i-N_iz_0\,(i=1,\ldots ,l)\) under the elimination ranking \(z_1\succ \ldots \succ z_l\succ z_0\succ {\mathbb {Y}}\). Then \(f_1\in \mathbb {Q}\{{\mathbb {Y}};z_0\}\) satisfies \(N_0^af\equiv f_1\,\mathrm{{mod}}\,[N_0z_1-N_1z_0,N_0z_2-N_2z_0,\ldots ,N_0z_l-N_lz_0]\) for some \(a\in \mathbb {N}\). Since \(f(\eta ;\theta )=0\), \(f_1(\eta _1,\ldots ,\eta _n,v)=0\), and \(f_1=0\) follows. Thus, \(f\in \,{\mathcal {J}}\) and the claim is proved.

Let \({\mathcal {J}}_1={\mathcal {J}}\cap \mathbb {Q}\{z_0,z_1,\ldots ,z_l\}\). Then \({\mathcal {J}}_1\) is a prime differential ideal with a generic point \(\theta \). Denote \({\mathbf {z}}=(z_0,z_1,\ldots ,z_l)\). For any \(f\in {\mathcal {J}}_1\hbox {{:}}{\mathbf {z}}\), since \(z_0f\in {\mathcal {J}}_1\), \(z_0f\) vanishes at \(\theta \) and \(f(\theta )=0\) follows. So \(f\in {\mathcal {J}}_1\), and it follows that \({\mathcal {J}}_1\hbox {{:}}{\mathbf {z}}={\mathcal {J}}_1\). And for any \(f\in {\mathcal {J}}_1\subset {\mathcal {J}}\) and any differential indeterminate \(\lambda \) over \({\mathbb Q}\langle \eta ,v\rangle \), let \(f(\lambda {\mathbf {z}})=\sum \phi (\lambda )f_\phi ({\mathbf {z}})\) where \(\phi (\lambda )\) are distinct differential monomials in \(\lambda \) and \(f_\phi ({\mathbf {z}})\in {\mathbb Q}\{{\mathbf {z}}\}\). Then \(f(\lambda \theta )=0=\sum \phi (\lambda )f_\phi (\theta )\). So each \(f_\phi (\theta )=0\) and \(f_\phi \in {\mathcal {J}}_1\) follows. Thus, \(f(\lambda {\mathbf {z}})\in {\mathbb Q}\{\lambda \}{\mathcal {J}}_1\). By Definition 2.2, \({\mathcal {J}}_1\) is a differentially homogenous differential ideal. Then \(V={\mathbb {V}}({\mathcal {J}}_1)\) is an irreducible projective differential variety in \(\mathbf P (l)\). Since \(\theta \) is a generic point of \(V\) and \({\mathcal A}\) is differentially essential, \(\hbox {{dim}}(V)=\hbox {{d.tr.deg}}\,\mathbb {Q}\langle \frac{N_1(\eta )}{N_0(\eta )},\ldots ,\frac{N_l(\eta )}{N_0(\eta )}\rangle /\mathbb {Q}=n\). If we can show \(X_\mathcal {A}=V\), then it follows that \(X_\mathcal {A}\) is an irreducible projective differential variety of dimension \(n\).

For any point \(\xi \in (\mathcal {E}^\wedge )^n\), it is clear that \((\xi ;N_0(\xi ),N_1(\xi ),\ldots ,N_l(\xi ))\) is a differential zero of \({\mathcal {J}}\) and consequently \((N_0(\xi ),N_1(\xi ),\ldots ,N_l(\xi ))\in {\mathbb {V}}({\mathcal {J}}_1)=V\). So \(\phi _{\mathcal {A}}(\xi )=(N_0(\xi ),N_1(\xi ),\ldots ,\) \(N_l(\xi ))\in V\). Thus, \(\phi _{\mathcal {A}}\big ((\mathcal {E}^\wedge )^n\big )\subseteq V\) and \(X_{\mathcal {A}}=\overline{\phi _{\mathcal {A}}\big ((\mathcal {E}^\wedge )^n\big )} \subseteq V\) follows. Conversely, since \(\phi _{\mathcal {A}}(\eta )=(1,\frac{N_1(\eta )}{N_0(\eta )},\ldots ,\frac{N_l(\eta )}{N_0(\eta )})\in X_\mathcal {A}\) is a generic point of \(V\), \(V\subseteq X_\mathcal {A}\). Thus, \(V= X_\mathcal {A}\). \(\square \)

Now, suppose \(z_0,\ldots ,z_l\) are the homogenous coordinates of \(\mathbf P (l)\). Let
$$\begin{aligned} {\mathbb {L}}_i=u_{i0}z_0+u_{i1}z_1+\cdots +u_{il}z_l\quad \,(i=0,\ldots ,n) \end{aligned}$$
(23)
be generic differential hyperplanes in \(\mathbf P (l)\). Then, clearly, \({\mathbb {P}}_i={\mathbb {L}}_i\circ \phi _{\mathcal {A}}\). In the following, we will explore the close relation between \(\hbox {Res}_{\mathcal {A}}\) and \(X_\mathcal {A}\), or more specifically, the differential Chow form of \(X_\mathcal {A}\). Before doing so, we first recall the concept of projective differential Chow form [33].

Let \(V\) be an irreducible projective differential variety of dimension \(d\) over \(\mathbb {Q}\) with a generic point \(\xi =(\xi _0,\xi _1,\ldots ,\xi _l)\). Suppose \(\xi _0\ne 0\). Let \({\mathbb {L}}_i=\sum _{k=0}^lu_{ik}z_k\,(i=0,\ldots ,d)\) be \(d+1\) generic projective differential hyperplanes. Denote \(\zeta _i=-\sum _{k=1}^lu_{ik}\xi _0^{-1}\xi _k\,(i=0,\ldots ,d)\) and \({\mathbf {u}}_i=(u_{i0},\ldots ,u_{il})\). Then it is proved in [33] that the prime ideal \(\mathbb {I}\big ((\zeta _0,\ldots ,\zeta _d)\big )\) over \(\mathbb {Q}\langle \cup _i{\mathbf {u}}_i\backslash \{u_{i0}\}\rangle \) is of codimension one. That is, there exists an irreducible differential polynomial \(F\in \mathbb {Q}\{{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_d\}\) such that \(\mathbb {I}\big ((\zeta _0,u_{01},\ldots ,u_{0l};\ldots ;\zeta _d,u_{d1},\ldots ,u_{dl})\big )=\mathrm{sat}(F)\). This \(F\) is defined to be the differential Chow form of \(V\). We list one of its properties, which will be used in this section.

Theorem 5.21

[33, Theorem 4.7] Let \(F({\mathbf {u}}_{0},{\mathbf {u}}_{1},\ldots ,{\mathbf {u}}_{d})\) be the differential Chow form of \(V\) with \({\mathrm{ord}}(F)=h\) and \(S_{F}=\frac{\partial F}{\partial u_{00}^{(h)}}\). Suppose that the \({\mathbf {u}}_i\) are differentially specialized over \(\mathbb {Q}\) to sets \({\mathbf {v}}_i\subset \mathcal {E}\) and \(\overline{{\mathbb {L}}}_{i}\) are obtained by substituting \({\mathbf {u}}_i\) by \({\mathbf {v}}_i\) in \({\mathbb {L}}_i\,(i=0,\ldots ,d)\). If \(\overline{{\mathbb {L}}}_i=0\,(i=0,\ldots ,d)\) meet \(V\), then \(\mathrm{sat}(F)\) vanishes at \(({\mathbf {v}}_{0},\ldots ,{\mathbf {v}}_{d})\). Furthermore, if \(F({\mathbf {v}}_{0},\ldots ,{\mathbf {v}}_{d})=0\) and \(S_{F}({\mathbf {v}}_{0},\ldots ,{\mathbf {v}}_{d})\ne 0\), then the \(d+1\) differential hyperplanes \(\overline{{\mathbb {L}}}_{i}=0\,(i=0,\ldots ,d)\) meet \(V\).

The following theorem shows that the sparse differential resultant is closely related to the differential Chow form of \(X_\mathcal {A}\).

Theorem 5.22

Let \(\mathrm{Res}_{\mathcal {A}}\) be the sparse differential resultant of \({\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\) given in (21). Then \(\mathrm{Res}_{\mathcal {A}}\) is the differential Chow form of \(X_\mathcal {A}\) with respect to the generic hyperplanes \({\mathbb {L}}_0,\ldots ,{\mathbb {L}}_n\) given in (23).

Proof

By the proof of Theorem 5.20, \(X_\mathcal {A}\) is an irreducible projective differential variety of dimension \(n\) with a generic point \((1,\frac{N_1(\eta )}{N_0(\eta )},\ldots ,\frac{N_l(\eta )}{N_0(\eta )})\). Let \(\zeta _i=-\sum _{k=1}^lu_{ik}\frac{N_k(\eta )}{N_0(\eta )}\,(i=0,\ldots ,n)\) and \(\zeta =(\zeta _0,u_{01},\ldots ,u_{0l};\ldots ;\zeta _n,u_{n1},\ldots ,u_{nl})\). Then \(\mathrm{sat}(\hbox {{Chow}}(X_\mathcal {A}))=\mathbb {I}(\zeta )\), which is the vanishing differential ideal of \(\zeta \) in \({\mathbb Q}\{{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}\). By the definition of the sparse differential resultant, \(\mathrm{sat}(\hbox {Res}_{\mathcal {A}})=\mathbb {I}(\zeta )\). By Lemma 2.3, \(\hbox {{Chow}}(X_\mathcal {A})\) and \(\hbox {Res}_{\mathcal {A}}\) differ at most by a nonzero factor in \(\mathbb {Q}\). Thus, \(\hbox {Res}_{\mathcal {A}}\) is just the differential Chow form of \(X_\mathcal {A}\). \(\square \)

We give another characterization of the vanishing of sparse differential resultants below, where the zeros are taken from \(\mathcal {E}\) instead of \(\mathcal {E}^{\wedge }\).

Corollary 5.23

Let \(\overline{{\mathbb {L}}}_i=v_{i0}z_0+v_{i1}z_1+\cdots +v_{il}z_l=0\,(i=0,\ldots ,n)\) be projective differential hyperplanes with \({\mathbf {v}}_i=(v_{i0},\ldots ,v_{il})\in \mathcal {E}^{l+1}\). Denote \({\mathrm{ord}}(\mathrm{Res}_{\mathcal {A}})=h\) and \(S_{\mathbf {R}}=\frac{\partial \mathrm{Res}_{\mathcal {A}}}{\partial u_{00}^{(h)}}\). If \(X_\mathcal {A}\) meets \(\overline{{\mathbb {L}}}_i=0\,(i=0,\ldots ,n)\), then \(\mathrm{Res}_{\mathcal {A}}({\mathbf {v}}_0,\ldots ,{\mathbf {v}}_n)=0\). And if \(\mathrm{Res}_{\mathcal {A}}({\mathbf {v}}_0,\ldots ,{\mathbf {v}}_n)=0\) and \(S_{\mathbf {R}}({\mathbf {v}}_0,\ldots ,{\mathbf {v}}_n)\ne 0\), then \(X_\mathcal {A}\) meets \(\overline{{\mathbb {L}}}_i=0\,(i=0,\ldots ,n)\).

Proof

It follows directly from Theorems 5.22 and 5.21. \(\square \)

Example 5.24

Let \({\mathcal A}={\mathcal A}_0\), where \({\mathcal A}_0\) is given in Example 3.21. Following the proof of Theorem 5.20, let \({\mathcal {J}}=[y_1z_1-y_1'z_0,y_1z_2-y_1^2z_0]\hbox {{:}}\mathbb {m}\). It is easy to show that \(X_\mathcal {A}\) is the general component of \(z_1z_2-(z_0z_2'-z_0'z_2)\), that is, \(X_\mathcal {A}={\mathbb {V}}(\mathrm{sat}(z_1z_2-(z_0z_2'-z_0'z_2)))\). And \(\mathrm{Res}_\mathcal {A}\) is equal to the differential Chow form of \(X_\mathcal {A}\).

By Theorems 5.20 and 5.22, property 4) of Theorem 1.2 is proved.

6 A Single Exponential Algorithm to Compute the Sparse Differential Resultant

In this section, we give an algorithm to compute the sparse differential resultant for a Laurent differentially essential system with single exponential complexity. The idea is first to estimate the order and degree bounds for the resultant and then to use linear algebra to find the coefficients of the resultant.

6.1 Order Bounds of Sparse Differential Resultants in Terms of Jacobi Numbers

In this section, we will give an order bound for the sparse differential resultant in terms of the Jacobi number of the given system.

Let \(A=(a_{ij})\) be an \(n\times n\) matrix where \(a_{ij}\) is an integer or \(-\infty \). A diagonal sum of \(A\) is any sum \(\sum _{i=1}^n a_{i\sigma (i)}\) where \(\sigma \) is a permutation of \(1,\ldots ,n\). If \(B\) is an \(m\times n\) matrix with \(w=\min \{m,n\}\), then a diagonal sum of \(B\) is a diagonal sum of any \(w\times w\) sub-matrix of \(B\). The Jacobi number of \(B\) is defined as the maximal diagonal sum of \(B\), denoted by \(\mathrm{Jac}(B)\).

Let \({\mathbb {P}}=\{{\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\}\) and \(\widehat{{\mathbb {P}}}=\{{\mathbb {P}}_0^{\text {N}},\ldots ,{\mathbb {P}}_n^{\text {N}}\}\) be given in (3) and (5), respectively. Let \({\mathrm{ord}}({\mathbb {P}}_i^{\text {N}},y_j)=e_{ij}\,(i=0,\ldots ,n;j=1,\ldots ,n)\) and \({\mathrm{ord}}({\mathbb {P}}_i^{\text {N}},{\mathbb {Y}})=e_i\). We call the \((n+1)\times n\) matrix \(E=(e_{ij})\) the order matrix of \({\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\). By \(E_{\hat{i}}\), we mean the sub-matrix of \(E\) obtained by deleting the \((i+1)\)th row from \(E\). Let \(\widehat{{\mathbb {P}}}_{\hat{i}}=\widehat{{\mathbb {P}}}\backslash \{{\mathbb {P}}_i^{\text {N}}\}\). We call \(\text { J}_i=\mathrm{Jac}(E_{\hat{i}})\) the Jacobi number of the system \(\widehat{{\mathbb {P}}}_{\hat{i}}\), also denoted by \(\mathrm{Jac}(\widehat{{\mathbb {P}}}_{\hat{i}})\). Before giving an order bound for the sparse differential resultant in terms of Jacobi numbers, we first give several lemmas.
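Computing a Jacobi number is a maximal-weight assignment problem and, for the small matrices arising in the examples of this section, can be done by brute force. The following is a minimal Python sketch (the function names are ours and purely illustrative; they are not part of the algorithms of this paper):

```python
from itertools import combinations, permutations

NEG_INF = float("-inf")

def jacobi_number(B):
    """Jac(B): maximal diagonal sum of an m x n matrix whose entries are
    integers or -infinity; returns -inf if every diagonal hits -infinity."""
    m, n = len(B), len(B[0])
    w = min(m, n)
    best = NEG_INF
    # a diagonal sum of B is a diagonal sum of some w x w sub-matrix of B
    for rows in combinations(range(m), w):
        for cols in combinations(range(n), w):
            for perm in permutations(cols):
                best = max(best, sum(B[i][j] for i, j in zip(rows, perm)))
    return best

def jacobi_numbers(E):
    """J_i = Jac(E with row i deleted), for an (n+1) x n order matrix E."""
    return [jacobi_number(E[:i] + E[i + 1:]) for i in range(len(E))]
```

The examples later in this section reuse these two helper functions.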

Given a vector \({\mathbf {k}}=(k_0,k_1,\ldots ,k_n)\in \mathbb {Z}_{\ge 0}^{n+1}\), we can obtain a prolongation of \(\widehat{{\mathbb {P}}}\):
$$\begin{aligned} \widehat{{\mathbb {P}}}^{[{\mathbf {k}}]} = \bigcup _{i=0}^n({\mathbb {P}}_i^{\text {N}})^{[k_i]}. \end{aligned}$$
(24)
Let \(t_j=\mathrm{max}\{e_{0j}+k_0, e_{1j}+k_1, \ldots , e_{nj}+k_n\}\). Then \(\widehat{{\mathbb {P}}}^{[{\mathbf {k}}]}\) is contained in the polynomial ring \({\mathbb Q}[{\mathbf {u}}^{[{\mathbf {k}}]},{\mathbb {Y}}^{[\widetilde{\mathbf {k}}]}]\), where \({\mathbf {u}}^{[{\mathbf {k}}]}=\cup _{i=0}^n {\mathbf {u}}_i^{[k_i]}\) and \({\mathbb {Y}}^{[\widetilde{\mathbf {k}}]}= \cup _{j=1}^n y_j^{[t_j]}\).
Let \(\nu (\widehat{{\mathbb {P}}}^{[{\mathbf {k}}]})\) denote the number of the \(y_j\) and their derivatives appearing effectively in \(\widehat{{\mathbb {P}}}^{[{\mathbf {k}}]}\). In order to derive a differential relation among the \({\mathbf {u}}_{i}\,(i=0,\ldots ,n)\) from \(\widehat{{\mathbb {P}}}^{[{\mathbf {k}}]}\), a sufficient condition is
$$\begin{aligned} |\widehat{{\mathbb {P}}}^{[{\mathbf {k}}]}|\ge \nu (\widehat{{\mathbb {P}}}^{[{\mathbf {k}}]})+1. \end{aligned}$$
(25)
Note that \( \nu (\widehat{{\mathbb {P}}}^{[{\mathbf {k}}]})\le |{\mathbb {Y}}^{[\widetilde{\mathbf {k}}]}|= \sum _{j=1}^{n} (t_j+1)\). Thus, if \(|\widehat{{\mathbb {P}}}^{[{\mathbf {k}}]}|\ge |{\mathbb {Y}}^{[\widetilde{\mathbf {k}}]}|+1\), or equivalently,
$$\begin{aligned} k_0 + k_1 + \cdots + k_n \ge \sum \limits _{j=1}^{n} \mathrm{max}(e_{0j}+k_0, e_{1j}+k_1, \ldots , e_{nj}+k_n) \end{aligned}$$
(26)
is satisfied, then so is the inequality (25).
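For a concrete order matrix \(E=(e_{ij})\) and a candidate vector \({\mathbf {k}}\), inequality (26) can be checked mechanically; a minimal sketch (ours, assuming each \(y_j\) occurs in some \({\mathbb {P}}_i^{\text {N}}\), so that every column of \(E\) has a finite entry):

```python
NEG_INF = float("-inf")

def satisfies_26(E, k):
    """Inequality (26): k_0 + ... + k_n >= sum_j max_i (e_{ij} + k_i).
    Entries e_{ij} = -infinity (y_j absent from P_i^N) drop out of the
    maximum automatically as long as some entry in column j is finite."""
    rhs = sum(max(E[i][j] + k[i] for i in range(len(E)))
              for j in range(len(E[0])))
    return sum(k) >= rhs
```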

Lemma 6.1

Let \({\mathbb {P}}\) be a Laurent differentially essential system and \({\mathbf {k}}=(k_0,k_1,\ldots ,k_n)\in \mathbb {Z}_{\ge 0}^{n+1}\) be a vector satisfying (26). Then \(\mathrm{ord}({\mathbf {R}},{\mathbf {u}}_i)\le k_i\) for each \(i=0,\ldots ,n\).

Proof

Let \(\mathbb {m}^{[{\mathbf {k}}]}\) denote the set of all monomials in the variables \({\mathbb {Y}}^{[\widetilde{\mathbf {k}}]}\). Suppose \({\mathcal {I}}=(\widehat{{\mathbb {P}}}^{[{\mathbf {k}}]})\hbox {{:}}\mathbb {m}^{[{\mathbf {k}}]}=\{f\in \mathbb {Q}[{\mathbb {Y}}^{[\widetilde{\mathbf {k}}]},{\mathbf {u}}^{[{\mathbf {k}}]}]\big | \exists M\in \mathbb {m}^{[{\mathbf {k}}]}, Mf\in (\widehat{{\mathbb {P}}}^{[{\mathbf {k}}]}) \}\). Denote \(U={\mathbf {u}}^{[{\mathbf {k}}]}\backslash \cup _{i=0}^nu_{i0}^{[k_i]}\). Assume \({\mathbb {P}}_i^{\text {N}}=\sum _{k=0}^{l_i}u_{ik} N_{ik}\,(i=0,\ldots ,n)\). Let \(\zeta _{il}=-(\sum _{k=1}^{l_i}u_{ik}N_{ik}/N_{i0})^{(l)}\) for \(i=0,1,\ldots ,n;l=0,1,\ldots ,k_i\). Denote \(\bar{\zeta } =(U,\zeta _{0k_0},\ldots ,\zeta _{00},\ldots ,\zeta _{nk_n},\ldots ,\zeta _{n0})\). It is easy to show that \(({\mathbb {Y}}^{[\widetilde{\mathbf {k}}]},\bar{\zeta } )\) is a generic point of \({\mathcal {I}}\). Indeed, it is clear that each polynomial in \({\mathcal {I}}\) vanishes at \(({\mathbb {Y}}^{[\widetilde{\mathbf {k}}]},\bar{\zeta } )\). Conversely, if \(f\) is an arbitrary polynomial in \(\mathbb {Q}[{\mathbb {Y}}^{[\widetilde{\mathbf {k}}]},{\mathbf {u}}^{[{\mathbf {k}}]}]\) such that \(f({\mathbb {Y}}^{[\widetilde{\mathbf {k}}]},\bar{\zeta } )=0\), then substituting \(u_{i0}^{(l)}=\big (({\mathbb {P}}_i^{\text {N}}-\sum _{k=1}^{l_i}u_{ik}N_{ik})/N_{i0}\big )^{(l)}\) into \(f\), we have \(\prod _{i=0}^nN_{i0}^{a_i}f\equiv f_1 \mod (\widehat{{\mathbb {P}}}^{[{\mathbf {k}}]})\), where \(f_1\in \mathbb {Q}[{\mathbb {Y}}^{[\widetilde{\mathbf {k}}]},U]\). Clearly, \(f_1=0\) and \(f\in {\mathcal {I}}\) follows.

Let \({\mathcal {I}}_1={\mathcal {I}}\cap \mathbb {Q}[{\mathbf {u}}^{[{\mathbf {k}}]}]\). Then \({\mathcal {I}}_1\) is a prime ideal with \(\bar{\zeta } \) as its generic point. Since \(\mathbb {Q}(\bar{\zeta } )\subset \mathbb {Q}({\mathbb {Y}}^{[\widetilde{\mathbf {k}}]},U)\), \(\mathrm{Codim}({\mathcal {I}}_1)=|U|+\sum _{i=0}^n(k_i+1)-\hbox {{tr.deg}}\,\mathbb {Q}(\bar{\zeta } )/\mathbb {Q}\ge |U|+|\widehat{{\mathbb {P}}}^{[{\mathbf {k}}]}|-\hbox {{tr.deg}}\,\mathbb {Q}({\mathbb {Y}}^{[\widetilde{\mathbf {k}}]},U)/\mathbb {Q}=|\widehat{{\mathbb {P}}}^{[{\mathbf {k}}]}|-|{\mathbb {Y}}^{[\widetilde{\mathbf {k}}]}|\ge 1\). Thus, \({\mathcal {I}}_1\ne (0)\). Suppose \(f\) is any nonzero polynomial in \({\mathcal {I}}_1\). Clearly, \({\mathrm{ord}}(f,{\mathbf {u}}_{i})\le k_i\). Since \({\mathcal {I}}_1\subset \mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\cap \mathbb {Q}\{{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}=\mathrm{sat}({\mathbf {R}})\), \(f\in \mathrm{sat}({\mathbf {R}})\). Note that \({\mathbf {R}}\) is a characteristic set of \(\mathrm{sat}({\mathbf {R}})\) w.r.t. any ranking by Lemma 2.3. Thus, \({\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_{i})\le {\mathrm{ord}}(f,{\mathbf {u}}_i)\le k_i\). \(\square \)

Lemma 6.2

Let \({\mathbb {P}}\) be a Laurent differentially essential system and \(\text { J}_i\ge 0\) for each \(i=0,\ldots ,n\). Then \(\sum _{j=1}^n\mathrm{max}(e_{0j}+\text { J}_0,\cdots ,e_{nj}+\text { J}_{n})=\sum _{i=0}^n\text { J}_{i}\).

Proof

Let \(E = (e_{ij})\) be the \((n+1)\times n\) order matrix of \(\widehat{{\mathbb {P}}}\), where \(e_{ij} = {\mathrm{ord}}({\mathbb {P}}_i^{\text {N}},y_j)\). Without loss of generality, suppose \(\text { J}_{0} = e_{11} + e_{22} + \cdots + e_{nn}\).

First, we will show that for each \(k\ne 1\), \(e_{11}+\text { J}_1 \ge e_{k1}+\text { J}_k\). Since \(\text { J}_k\) is the Jacobi number of \(\widehat{{\mathbb {P}}}_{\hat{k}}\) and \(k\ne 1\), \(\text { J}_k\) has a summand of the form \(e_{1p_1}\). Let \(m\) be the largest \(s\) such that \(e_{1p_1}+e_{p_1p_2}+ \cdots + e_{p_{s-1}p_s}\) is a partial sum of successive summands in \(\text { J}_k\) and denote \(T_0=e_{1p_1}+e_{p_1p_2}+ \cdots + e_{p_{m-1}p_m}\). Suppose \(\text { J}_k = T_0 + T_1\). Since \(\text { J}_k\) is a diagonal sum, \(p_i\ne p_j\) for \(1\le i < j\). Otherwise, \(\text { J}_k\) contains \(e_{p_{i-1}p_i}\) and \(e_{p_{j-1}p_i}\) as summands \((p_0=1)\), a contradiction. Also note that \(p_i\ne 0\) for \(1\le i \le m\). Now we claim that \(p_m\) is either equal to \(1\) or equal to \(k\). Indeed, if \(p_m=1\) or \(p_m=k\), then \(T_0\) cannot be extended, and both cases can occur. But if \(p_m\ne 1\) and \(p_m\ne k\), then we can add another summand \(e_{p_{m}p_{m+1}}\) to \(T_0\), which contradicts the maximality of \(T_0\). So \(p_m=1\) or \(p_m=k\). We now consider three cases.

Case 1) If \(p_1=1\), \(\text { J}_k = e_{11}+T_1\) and \(e_{k1}+\text { J}_k = e_{11} + e_{k1} + T_1\). Since \(e_{k1} + T_1\) is a diagonal sum of \(\widehat{{\mathbb {P}}}_{\hat{1}}\), \(e_{k1} + T_1 \le \text { J}_1\). Thus, \(e_{11}+\text { J}_1 \ge e_{k1}+\text { J}_k\).

Case 2) If \(p_m=1\) for \(m>1\), \(T_0 = e_{1p_1}+e_{p_1p_2}+\cdots +e_{p_{m-1}1}\). Since \(\text { J}_{0} = e_{11}+\cdots + e_{nn}\), \(T_0 \le e_{11} + e_{p_1p_1} + \cdots + e_{p_{m-1}p_{m-1}}\). For otherwise, since \(p_i\ne 0\), \(T_0+\sum _{k\in \{2,\ldots ,n\}\backslash \{p_1,\ldots ,p_{m-1}\}}e_{kk}\) is a diagonal sum of \(\widehat{{\mathbb {P}}}_{\hat{0}}\) which is greater than \(\text { J}_0\), a contradiction. Then \(e_{k1}+\text { J}_k = e_{k1}+T_0 + T_1 \le e_{k1}+e_{11} + e_{p_1p_1} + \cdots + e_{p_{m-1}p_{m-1}} + T_1\le e_{11}+\text { J}_1\), where the last inequality follows from the fact that \(e_{k1}+e_{p_1p_1} + \cdots + e_{p_{m-1}p_{m-1}} + T_1\) is a diagonal sum of \(\widehat{{\mathbb {P}}}_{\hat{1}}\).

Case 3) If \(p_m=k\), \(T_0 = e_{1p_1}+e_{p_1p_2}+\cdots + e_{p_{m-1}k}\). Then, similar to case 2), we can show that \(e_{k1}+e_{1p_1}+e_{p_1p_2}+\cdots + e_{p_{m-1}k}\le e_{11}+e_{kk}+e_{p_1p_1}+\cdots +e_{p_{m-1}p_{m-1}}\). Thus,
$$\begin{aligned} \begin{array}{lll} e_{k1}+ \text { J}_k &{}=&{} e_{k1}+e_{1p_1}+e_{p_1p_2}+\cdots + e_{p_{m-1}k} + T_1 \\ &{}\le &{} e_{kk} + e_{11} + e_{p_1p_1} + \cdots + e_{p_{m-1}p_{m-1}} + T_1 \\ &{} \le &{} e_{11} + \text { J}_1. \end{array} \end{aligned}$$
Similarly, we can prove that for each \(j\), \(e_{jj}+\text {J}_j\ge e_{kj}+\text { J}_k\) with \(0\le k\le n\). Thus, we have
$$\begin{aligned} \begin{array}{lll} \sum \limits _{j=1}^{n} \mathrm{max}(e_{0j}+\text { J}_0,\cdots ,e_{nj}+\text { J}_{n}) &{}=&{} e_{11}+\text { J}_1 + e_{22}+\text { J}_2 + \cdots + e_{nn}+\text { J}_{n} \\ &{} = &{} \text { J}_0 + \text { J}_1 +\cdots + \text { J}_{n}. \end{array} \end{aligned}$$
\(\square \)

Corollary 6.3

Let \({\mathbb {P}}\) be a Laurent differentially essential system and \(\text { J}_i\ge 0\) for each \(i=0,\ldots ,n\). Then \({\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_i)\le \text { J}_i\,(i=0,\ldots ,n)\).

Proof

It is a direct consequence of Lemma 6.1 and Lemma 6.2. \(\square \)

The above corollary shows that when all the Jacobi numbers are nonnegative, the Jacobi numbers give order bounds for the sparse differential resultant. In the following, we deal with the remaining case when some \(\text { J}_i=-\infty \). To this end, two more lemmas are needed.

Lemma 6.4

[9, 32] Let \(E\) be an \(m\times n\) matrix whose entries are \(0\)’s and \(1\)’s. Let \(\mathrm{Jac}(E)=\text { J}<\min \{m,n\}\). Then \(E\) contains an \(a\times b\) zero sub-matrix with \(a+b=m+n-\text { J}\).

Lemma 6.5

Let \({\mathbb {P}}\) be a Laurent differentially essential system with the following \((n+1)\times n\) order matrix
$$\begin{aligned} E=\left( \begin{array}{cc} E_{11} &{} \, (-\infty )_{r\times t} \\ E_{21} &{} \,E_{22} \end{array}\right) , \end{aligned}$$
where \(r+t\ge n+1\). Then \(r+t=n+1\) and \(\mathrm{Jac}(E_{22})\ge 0\). Moreover, when regarded as differential polynomials in \(y_1,\ldots ,y_{r-1}\), \(\{{\mathbb {P}}_{0},\ldots ,{\mathbb {P}}_{r-1}\}\) is a Laurent differentially essential system.

Proof

The structure of \(E\) implies that the symbolic support matrix of \({\mathbb {P}}\) has the following form:
$$\begin{aligned} \mathrm{{D}}_{\mathbb {P}}=\left( \begin{array}{cc} B_{11} &{} \, 0_{r\times t} \\ B_{21} &{} \,B_{22} \end{array}\right) . \end{aligned}$$
Since \({\mathbb {P}}\) is Laurent differentially essential, by Corollary 4.16, \(\hbox {{rk}}(\mathrm{{D}}_{\mathbb {P}})=n\). Since \(\mathrm{rank}(\mathrm{{D}}_{\mathbb {P}})\le \hbox {{rk}}( B_{11})+\hbox {{rk}}\big ((B_{21}\,\,\,B_{22})\big )\), we have \(n \le (n-t)+(n+1-r) = 2n+1-(r+t)\). Thus, \(r+t\le n+1\), and \(r+t = n+1\) follows. Since the above inequality becomes equality, \(B_{11}\) has full column rank. As a consequence, \(\mathrm{rank}(\mathrm{{D}}_{\mathbb {P}}) = \mathrm{rank}(B_{11})+\mathrm{rank}(B_{22})\). Hence, \(B_{22}\) is a \(t\times t\) non-singular matrix. Regarding \({\mathbb {P}}_{0},\ldots ,{\mathbb {P}}_{r-1}\) as differential polynomials in \(y_1,\ldots ,y_{r-1}\), \(B_{11}\) is the symbolic support matrix of \(\{{\mathbb {P}}_{0},\ldots ,{\mathbb {P}}_{r-1}\}\) which is of full rank. Thus, \(\{{\mathbb {P}}_{0},\ldots ,{\mathbb {P}}_{r-1}\}\) is a Laurent differentially essential system.
It remains to show that \(\mathrm{Jac}(E_{22})\ge 0\). Suppose the contrary, i.e., \(\mathrm{Jac}(E_{22})=-\infty \). Let \(\bar{E}_{22}\) be a \(t\times t\) matrix obtained from \(E_{22}\) by replacing \(-\infty \) by \(0\) and replacing all other elements in \(E_{22}\) by \(1\)’s. Then \(\mathrm{Jac}(\bar{E}_{22})<t\), and by Lemma 6.4, \(\bar{E}_{22}\) contains an \(a\times b\) zero sub-matrix with \(a+b=2t-\mathrm{Jac}(\bar{E}_{22})\ge t+1\). By interchanging rows and columns when necessary, suppose such a zero sub-matrix is in the upper right corner of \(\bar{E}_{22}\). Then
$$\begin{aligned} E_{22}=\left( \begin{array}{cc } C_{11} &{} \, (-\infty )_{a\times b}\\ C_{21} &{} \,C_{22} \end{array}\right) , \end{aligned}$$
where \(a+b\ge t+1\). Thus,
$$\begin{aligned} B_{22}=\left( \begin{array}{cc } D_{11} &{} \, 0_{a\times b} \\ D_{21} &{} \,D_{22} \end{array}\right) , \end{aligned}$$
which is singular for \(a+b\ge t+1\), a contradiction. Thus, \(\mathrm{Jac}(E_{22})\ge 0.\) \(\square \)

Now, we are ready to prove the main result of this section.

Theorem 6.6

Let \({\mathbb {P}}=\{{\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\}\) given in (3) be a Laurent differentially essential system and \({\mathbf {R}}\) the sparse differential resultant of \({\mathbb {P}}\). Then
$$\begin{aligned} {\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_i)=\left\{ \begin{array}{lll} -\infty &{}&{} \text {if}\quad \,\text { J}_i = -\infty ,\\ h_i\le \text { J}_i&{}&{} \text {if}\quad \,\text { J}_i \ge 0.\end{array}\right. \end{aligned}$$

Proof

Corollary 6.3 proves the case when \(\text { J}_i\ge 0\) for each \(i\). Now suppose there exists at least one \(i\) such that \(\text { J}_i = -\infty \). Without loss of generality, we assume \(\text { J}_n = -\infty \) and let \(E_n = (e_{ij})_{0\le i\le n-1; 1\le j\le n}\) be the order matrix of \(\widehat{{\mathbb {P}}}_{\hat{n}}\). By Lemma 6.4 and arguments similar to those in the proof of Lemma 6.5, we may assume that \(E_n\) has the following form
$$\begin{aligned} E_n=\left( \begin{array}{cc} E_{11} &{} \,(-\infty )_{r\times t} \\ \bar{E}_{21} &{} \,\bar{E}_{22} \end{array}\right) , \end{aligned}$$
where \(r+t\ge n+1\). Then the order matrix of \({\mathbb {P}}\) is equal to
$$\begin{aligned} E=\left( \begin{array}{cc} E_{11} &{} \,(-\infty )_{r\times t} \\ E_{21} &{} \,E_{22} \end{array}\right) . \end{aligned}$$
Since \({\mathbb {P}}\) is Laurent differentially essential, by Lemma 6.5, \(r+t = n+1\) and \(\mathrm{Jac}(E_{22})\ge 0\). Moreover, regarded as differential polynomials in \(y_1,\ldots ,y_{r-1}\), \(\widetilde{{\mathbb P}} = \{{\mathbb {P}}_{0},\ldots ,{\mathbb {P}}_{r-1}\}\) is Laurent differentially essential and \(E_{11}\) is its order matrix. Let \(\widetilde{\text { J}}_i=\mathrm{Jac}((E_{11})_{\hat{i}})\). By applying the above procedure when necessary, we can suppose that \(\widetilde{\text { J}}_i\ge 0\) for each \(i=0,\ldots ,r-1\). Since \([{\mathbb {P}}]\cap \mathbb {Q}\{{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}=[\widetilde{{\mathbb {P}}}]\cap \mathbb {Q}\{{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_{r-1}\}=\mathrm{sat}({\mathbf {R}})\), \({\mathbf {R}}\) is also the sparse differential resultant of the system \(\widetilde{{\mathbb {P}}}\) and \({\mathbf {u}}_r,\ldots ,{\mathbf {u}}_{n}\) will not occur in \({\mathbf {R}}\). By Corollary 6.3, \({\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_{i})\le \widetilde{\text { J}_i}\). Since \(\text { J}_{i} = \mathrm{Jac}(E_{22})+ \widetilde{\text { J}_i}\ge \widetilde{\text { J}_i}\) for \(0\le i\le r-1\), \({\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_i)\le \text { J}_{i}\) for \(0\le i\le r-1\) and \({\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_i)=-\infty \) for \(i=r,\ldots ,n.\) \(\square \)

Corollary 6.7

Let \({\mathbb {P}}\) be rank essential. Then \(\text { J}_i\ge 0\) for \(i=0,\ldots ,n\) and \({\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_i) \le \text { J}_i\).

Proof

From the proof of Theorem 6.6, if \(\text { J}_i=-\infty \) for some \(i\), then \({\mathbb {P}}\) contains a proper differentially essential sub-system, which contradicts Theorem 4.20. Therefore, \(\text { J}_i\ge 0\) for \(i=0,\ldots ,n\), and \({\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_i)\le \text { J}_i\) follows from Theorem 6.6. \(\square \)

By Theorem 6.6, \(\text { J}_i\ge 0\) is a necessary condition for \({\mathbf {u}}_i\) to appear in \({\mathbf {R}}\). The following example shows that this condition is not sufficient.

Example 6.8

Let \({\mathbb {P}}= \{ {\mathbb {P}}_0, {\mathbb {P}}_1, {\mathbb {P}}_2,{\mathbb {P}}_3 \}\) be a Laurent differential polynomial system where
$$\begin{aligned} \begin{array}{l@{\quad }l@{\quad }l@{\quad }l} {\mathbb {P}}_0 &{}=&{} u_{00}+ u_{01}y_1y_1'y_2y_2'' \\ {\mathbb {P}}_1 &{}=&{} u_{10}+ u_{11}y_1y_1'y_2y_2'' \\ {\mathbb {P}}_2 &{}=&{} u_{20}+ u_{21}y_1 +u_{22}y_2 \\ {\mathbb {P}}_3 &{}=&{} u_{30}+ u_{31}y_1' + u_{32}y_3. \\ \end{array} \end{aligned}$$
Then, the corresponding order matrix is
$$\begin{aligned}E=\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 1 &{} \, 2 &{} \, -\infty \\ 1 &{} \, 2 &{} \, -\infty \\ 0 &{} \, 0 &{} \, -\infty \\ 1 &{} \, -\infty &{} \, 0 \end{array}\right) \!. \end{aligned}$$
It is easy to show that \({\mathbb {P}}\) is Laurent differentially essential and \(\{ {\mathbb {P}}_0, {\mathbb {P}}_1 \}\) is the rank essential sub-system. Here \({\mathbf {R}}=u_{00}u_{11}-u_{01}u_{10}\). Clearly, \({\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_0) = {\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_1)= 0\) and \({\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_2) = {\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_3)= -\infty \), but \(\text { J}_0 = 2, \text { J}_1 = 2, \text { J}_2 = 3, \text { J}_3 = -\infty \).
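Reusing the jacobi_numbers sketch given after the definition of the order matrix, these values can be checked mechanically:

```python
NEG_INF = float("-inf")

# order matrix of Example 6.8
E = [[1, 2, NEG_INF],
     [1, 2, NEG_INF],
     [0, 0, NEG_INF],
     [1, NEG_INF, 0]]

print(jacobi_numbers(E))   # [2, 2, 3, -inf]
```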

We conclude this section by giving two improved order bounds based on the Jacobi bound given in Theorem 6.6.

For each \(j\in \{1,\ldots ,n\}\), let \(\underline{o}_j=\min \{k\in \mathbb {N}| \,\exists \, i\, \text {s.t.}\, \mathrm{deg}({\mathbb {P}}_i^{\text {N}},y_j^{(k)})>0\}\). In other words, \(\underline{o}_j\) is the smallest number such that \(y_j^{(\underline{o}_j)}\) occurs in \(\{{\mathbb {P}}_0^{\text {N}},\ldots ,{\mathbb {P}}_n^{\text {N}}\}\). Let \(B=(e_{ij}-\underline{o}_j)\) be an \((n+1)\times n\) matrix. We call \(\bar{J}_i=\mathrm{Jac}(B_{\hat{i}})\) the modified Jacobi number of the system \({\mathbb {P}}_{\hat{i}}\). Denote \(\underline{\gamma }=\sum _{j=1}^n\underline{o}_j\). Clearly, \(\bar{J}_i=\text { J}_i-\underline{\gamma }.\) Then we have the following result.

Theorem 6.9

Let \({\mathbb {P}}\) be a Laurent differentially essential system and \({\mathbf {R}}\) the sparse differential resultant of \({\mathbb {P}}\). Then
$$\begin{aligned}{\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_i)=\left\{ \begin{array}{lll} -\infty &{}&{} \text {if}\quad \,\text { J}_i = -\infty ,\\ h_i\le \text { J}_i-\underline{\gamma }&{}&{} \text {if}\quad \,\text { J}_i \ge 0.\end{array}\right. \end{aligned}$$

Proof

Let \(\tilde{{\mathbb {P}}}_i\) be obtained from \({\mathbb {P}}_i\) by replacing \(y_j^{(k)}\) by \(y_j^{(k-\underline{o}_j)}\,\) \((j=1,\ldots ,n;k\ge \underline{o}_j)\) in \({\mathbb {P}}_i\) for \(i=0,\ldots ,n\) and denote \(\tilde{{\mathbb {P}}}=\{\tilde{{\mathbb {P}}}_0,\ldots ,\tilde{{\mathbb {P}}}_n\}\). Since
https://static-content.springer.com/image/art%3A10.1007%2Fs10208-015-9249-9/MediaObjects/10208_2015_9249_Equ99_HTML.gif
we obtain \(\hbox {{rk}}(\mathrm{{D}}_{\tilde{{\mathbb {P}}}})=\hbox {{rk}}(\mathrm{{D}}_{{\mathbb {P}}})=n.\) Thus, \({\mathcal {I}}=[\tilde{{\mathbb {P}}}]\cap \mathbb {Q}\{{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}\) is a prime differential ideal of codimension \(1\). We claim that \({\mathcal {I}}=\mathrm{sat}({\mathbf {R}})\). Suppose \({\mathbb {P}}_i=u_{i0}M_{i0}+T_i\) and \(\tilde{{\mathbb {P}}}_i=u_{i0}\tilde{M}_{i0}+\tilde{T}_i\). Let \(\zeta _i=-T_i/M_{i0}\) and \(\theta _i=-\tilde{T}_i/\tilde{M}_{i0}\). Denote \({\mathbf {u}}=\cup _{i=0}^n{\mathbf {u}}_i\backslash \{u_{i0}\}\). Then \(\zeta =({\mathbf {u}},\zeta _0,\ldots ,\zeta _n)\) is a generic point of \(\mathrm{sat}({\mathbf {R}})\) and \(\theta =({\mathbf {u}}, \theta _0,\ldots ,\theta _n)\) is a generic point of \({\mathcal {I}}\). For any differential polynomial \(G\in \mathrm{sat}({\mathbf {R}})\), \(G(\zeta )=0=(\sum _\phi \phi ({\mathbb {Y}})F_\phi ({\mathbf {u}}))/N({\mathbb {Y}})\) where \(\phi ({\mathbb {Y}})\) are distinct differential monomials in \({\mathbb {Y}}\) and so is \(N({\mathbb {Y}})\). Then \(F_\phi ({\mathbf {u}})\equiv 0\) for each \(\phi \). Thus, \(G(\theta )=(\sum _\phi \tilde{\phi }({\mathbb {Y}})F_\phi ({\mathbf {u}}))/\tilde{N}({\mathbb {Y}})=0\) and \(G\in {\mathcal {I}}\) follows. So \(\mathrm{sat}({\mathbf {R}})\subseteq {\mathcal {I}}\). Similarly, we can show that \({\mathcal {I}}\subseteq \mathrm{sat}({\mathbf {R}})\). Hence, \({\mathbf {R}}\) is the sparse differential resultant of \(\tilde{{\mathbb {P}}}\). Since \(\mathrm{Jac}(\tilde{{\mathbb {P}}}_{\hat{i}})=\mathrm{Jac}({\mathbb {P}}_{\hat{i}})-\underline{\gamma }\), by Theorem 6.6, the theorem is proved. \(\square \)

Remark 6.10

Let \({\mathbf {k}}=(e-e_0,e-e_1,\ldots ,e-e_n)\) where \(e=\sum _{i=0}^n e_i\). Clearly, \(|\widehat{{\mathbb {P}}}^{[{\mathbf {k}}]}|=ne+n+1=|{\mathbb {Y}}^{[e]}|+1 \ge |{\mathbb {Y}}^{[\widetilde{\mathbf {k}}]}|+1\). Then by Lemma 6.1, \({\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_i) \le e-e_i\le s-s_i\). Here \(s_i\) is the order of \({\mathbb {P}}_i\,(i=0,\ldots ,n)\) and \(s=\sum _{i=0}^ns_i\). Let \(L_i = e-e_i-\gamma ({\mathbb {P}})\), where \(\gamma ({\mathbb {P}})=\sum _{j=1}^n (\underline{o}_j + \overline{e}_j)\) and \(\overline{e}_j = \min _i \{e_i-{\mathrm{ord}}({\mathbb {P}}^{\text {N}}_i,y_j) | {\mathrm{ord}}({\mathbb {P}}^{\text {N}}_i,y_j)\ne -\infty \}\). By [43], \((L_0,\ldots ,L_n)\) is also a solution of (26). Then \({\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_i)\le L_i\). One can easily check that \(\bar{\text { J}}_i\le L_i \le e-e_i\) for each \(i\), so the modified Jacobi bound is never worse than the other two bounds and, as shown by the following example, can be strictly better.

Example 6.11

Let \(E=(e_{ij})_{0\le i\le n, 1\le j\le n}\) be the order matrix of a system \({\mathbb {P}}\):
$$\begin{aligned}E=\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 5 &{} \, -\infty &{} \, 0 \\ 5 &{} \, 0 &{} \, -\infty \\ 0 &{} \, 3 &{} \, 5 \\ 5 &{} \, 2 &{} \, -\infty \end{array}\right) \!. \end{aligned}$$
Then \(\{ \text { J}_0, \text { J}_1, \text { J}_2, \text { J}_3\} = \{12,12,7,10\}\), \(\{ L_0, L_1, L_2, L_3\} = \{13,13,13,13\}\), \(\{ e-e_0, e-e_1, e-e_2,e-e_3\} = \{15,15,15,15\}\). This shows that the modified Jacobi bound could be strictly less than the other two bounds.
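The three bounds of this example can be recomputed with the jacobi_numbers sketch from above; since only the order matrix is given, the sketch takes \(\underline{o}_j=0\), so that \(\gamma ({\mathbb {P}})=\sum _j\overline{e}_j\) (an assumption of this illustration only):

```python
NEG_INF = float("-inf")

# order matrix of Example 6.11
E = [[5, NEG_INF, 0],
     [5, 0, NEG_INF],
     [0, 3, 5],
     [5, 2, NEG_INF]]

J = jacobi_numbers(E)                       # Jacobi bounds: [12, 12, 7, 10]
e = [max(row) for row in E]                 # e_i = ord(P_i^N, Y), here 5 for each i
ebar = [min(e[i] - E[i][j] for i in range(len(E)) if E[i][j] != NEG_INF)
        for j in range(len(E[0]))]          # ebar_j as in Remark 6.10
L = [sum(e) - ei - sum(ebar) for ei in e]   # L_i bounds: [13, 13, 13, 13]
print(J, L, [sum(e) - ei for ei in e])      # e - e_i bounds: [15, 15, 15, 15]
```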

Now, we assume that \({\mathbb {P}}\) is a Laurent differentially essential system which is not rank essential. Let \({\mathbf {R}}\) be the sparse differential resultant of \({\mathbb {P}}\). We will give a better order bound for \({\mathbf {R}}\). By Theorem 4.20, \({\mathbb {P}}\) contains a unique rank essential sub-system \({\mathbb {P}}_{I}\). Without loss of generality, suppose \(I = \{ 0,\ldots , r\}\) with \(r< n\). Let \(E_I\) be the order matrix of \({\mathbb {P}}_I\) and for \(i=0,\ldots ,r\), let \((E_{I})_{\hat{i}}\) be the matrix obtained from \(E_I\) by deleting the \((i+1)\)th row. Note that \((E_{I})_{\hat{i}}\) is an \(r\times n\) matrix. Then we have the following result.

Theorem 6.12

With the above assumptions, we have
$$\begin{aligned} {\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_i)=\left\{ \begin{array}{lll} h_i\le \mathrm{Jac}((E_{I})_{\hat{i}})&{}&{}i=0,\ldots ,r,\\ -\infty &{}&{}i=r+1,\ldots ,n.\\ \end{array}\right. \end{aligned}$$

Proof

It suffices to show that \({\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_i) \le \mathrm{Jac}((E_{I})_{\hat{i}})\) for \(i=0,\ldots ,r\). Let \({\mathbb {L}}_i = u_{i0} + \sum _{j=1}^n u_{ij}y_j\) for \(i=r+1,\ldots ,n\). Since \({\mathbb {P}}_{I}\) is rank essential, there exist \(\frac{N_{ik_i}}{N_{i0}}\,( i= 1,\ldots ,r)\) such that their symbolic support matrix \(B\) is of full rank. Without loss of generality, we assume that the \(r\)th principal sub-matrix of \(B\) is of full rank. Consider a new Laurent differential polynomial system \(\widetilde{\mathbb {P}} = {\mathbb {P}}_{I} \cup \{{\mathbb {L}}_{r+1},\ldots ,{\mathbb {L}}_{n}\}\). This system is also Laurent differentially essential since the symbolic support matrix of \(\frac{N_{1k_1}}{N_{10}},\ldots , \frac{N_{rk_r}}{N_{r0}},y_{r+1},\ldots ,y_n\) is of full rank. And \({\mathbf {R}}\) is also the sparse differential resultant of \(\widetilde{\mathbb {P}}\), for \({\mathbb {P}}_I\) is the rank essential sub-system of \(\widetilde{\mathbb {P}}\). The order vector of \({\mathbb {L}}_i\) is \((0,\ldots ,0)\) for \(i=r+1,\ldots ,n\). So \(\mathrm{Jac}(\widetilde{\mathbb {P}}_{\hat{i}})=\mathrm{Jac}((E_{I})_{\hat{i}})\) for \(i=0,\ldots ,r\). By Theorem 6.6, \({\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_i) \le \mathrm{Jac}((E_{I})_{\hat{i}})\) for \(i=0,\ldots ,r\).\(\square \)

Example 6.13

The order matrix of \({\mathbb {P}}_0,{\mathbb {P}}_1,{\mathbb {P}}_2,{\mathbb {P}}_3\) given in Example 4.23 is
$$\begin{aligned}E=\left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 0 &{} \, 0 &{} \, 0 \\ 0 &{} \, 0 &{} \, 1 \\ 0 &{} \, 0 &{} \, 1 \\ o &{} \, o&{}\, o \end{array}\right) \!. \end{aligned}$$
Here \({\mathbf {R}}=u_{01}u_{10}((u_{21}u_{10})'u_{20}u_{11}-u_{21}u_{10}(u_{20}u_{11})') - u_{01}u_{10}u_{20}^2u_{11}^2 \). Clearly, \({\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_0) =0, {\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_1)= {\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_2)=1\), and \({\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_3) = -\infty \). But \(\text { J}_0 = \text { J}_1 = \text { J}_2 = o+1, \text { J}_3 = 1\), and \({\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_i)\ll \text { J}_i\) for \(i=0,1,2\). If using Theorem 6.12, then \(E_I\) consists of the first three rows of \(E\) and Jacobi numbers for \(E_I\) are \(1,1,1\), respectively, which give much better bounds for the sparse differential resultant.

With Theorem 6.6, property 5) of Theorem 1.2 is proved.

6.2 Degree Bounds of Sparse Differential Resultants

In this section, we give an upper bound for the degree of the sparse differential resultant, which is crucial for our algorithm computing the sparse differential resultant. We first recall several properties of degrees of ideals in the algebraic case.

Let \(\mathcal {K}\) be a field and \(\overline{\mathcal {K}}\) its algebraic closure. Let \(\mathcal {I}\) be a prime ideal in \(\mathcal {K}[{\mathbb {X}}]=\mathcal {K}[x_1,\ldots ,x_n]\) with \(\hbox {{dim}}(\mathcal {I})=d\) and \(V\subset \overline{\mathcal {K}}^n\) be the irreducible variety defined by \(\mathcal {I}\). The degree of \(\mathcal {I}\) or \(V\), denoted by \(\mathrm{deg}(\mathcal {I})\) or \(\mathrm{deg}(V)\), is defined as the number of solutions of the zero-dimensional prime ideal \((\mathcal {I},{\mathbb {L}}_1,\ldots ,{\mathbb {L}}_d)_{\mathcal {K}_1[{\mathbb {X}}]}\) in the algebraic closure of \(\mathcal {K}_1\), where \({\mathbb {L}}_i = u_{i0}+\sum _{j=1}^n u_{ij} x_j \,(i=1,\ldots ,d)\) are \(d\) generic hyperplanes and \(\mathcal {K}_1=\mathcal {K}((u_{ij})_{1\le i\le d;\, 0\le j \le n})\) [23].

The following result gives a relation between the degree of an ideal and that of its elimination ideal, which has been proved in [34, Theorem 2.1] and is also a consequence of [21, Lemma 2].

Lemma 6.14

Let \(\mathcal {I}\) be a prime ideal in \(\mathcal {K}[{\mathbb {X}}]\) and \(\mathcal {I}_r=\mathcal {I}\cap \mathcal {K}[x_1,\ldots ,x_r]\) for any \(1\le r\le n\). Then \(\mathrm{deg}(\mathcal {I}_r)\le \mathrm{deg}(\mathcal {I})\).

The notion of degree can be defined for subsets of \(\overline{\mathcal {K}}^n\) more general than varieties. A constructible set in \(\overline{\mathcal {K}}^n\) is a Boolean combination of varieties in \(\overline{\mathcal {K}}^n\), that is, a finite union of quasi-varieties in \(\overline{\mathcal {K}}^n\). Let \(X\subset \overline{\mathcal {K}}^n\) be constructible and \(V_1,\ldots ,V_l\) be the irreducible components of the Zariski closure of \(X\). The degree of \(X\) is defined to be the sum of the degrees of the \(V_i\), that is, \(\mathrm{deg}(X)=\sum _{i=1}^l \mathrm{deg}(V_i)\). The following lemma shows how the degree behaves under intersections.

Lemma 6.15

[21, Theorem 1] Let \(V_1,\ldots ,V_r\,(r\ge 2)\) be a finite number of constructible sets in \(\overline{\mathcal {K}}^n\). Then \(\mathrm{deg}(V_1\cap \cdots \cap V_r)\le \prod _{i=1}^r\mathrm{deg}(V_i).\)

We now give a degree bound for the sparse differential resultant. The idea is to express \(({\mathbf {R}})\) as the elimination ideal of certain algebraic ideals generated by \({\mathbb {P}}_i^{(j)}\) and use Lemmas 6.14 and 6.15 to estimate the degree of \({\mathbf {R}}\).

Theorem 6.16

Let \({\mathbb {P}}=\{{\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\}\) be a Laurent differentially essential system given in (3) with \({\mathrm{ord}}({\mathbb {P}}_i^\text {N},y_j)=e_{ij}\) and \(\mathrm{deg}({\mathbb {P}}_i^{\text {N}},{\mathbb {Y}})=m_i\). Suppose \({\mathbb {P}}_{i}^{\text {N}}=\sum _{k=0}^{l_i}u_{ik}N_{ik}\) and \(\text { J}_i\) is the modified Jacobi number of \(\{{\mathbb {P}}_0^{\text {N}},\ldots ,{\mathbb {P}}^{\text {N}}_{n}\}\backslash \{{\mathbb {P}}^{\text {N}}_{i}\}\). Let \({\mathbf {R}}({\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n)\) be the sparse differential resultant of \({\mathbb {P}}\). Suppose \({\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_i)=h_i\) for each \(i\). Then the following assertions hold:
1)

\(\mathrm{deg}({\mathbf {R}})\le \prod _{i=0}^n (m_i+1)^{h_i+1}\le (m+1)^{\sum _{i=0}^n(\text { J}_i+1)}\), where \(m=\mathrm{max}_i\{m_i\}\).

2)
\({\mathbf {R}}\) has a representation
$$\begin{aligned} \prod _{i=0}^n N_{i0}^{(h_i+1)\mathrm{deg}({\mathbf {R}})}\cdot {\mathbf {R}}=\sum _{i=0}^n\sum _{j=0}^{h_i}G_{ij}\big ({\mathbb {P}}_{i}^{\text {N}}\big )^{(j)} \end{aligned}$$
(27)
where \(G_{ij}\in {\mathbb Q}[{\mathbf {u}}_0^{[h_0]},\ldots ,{\mathbf {u}}_n^{[h_n]},y_1^{[t_1]},\ldots ,y_n^{[t_n]}]\) with \(t_j=\mathrm{max}_{i=0}^n\{h_i+e_{ij}\}\) such that \(\mathrm{deg}(G_{ij}({\mathbb {P}}_{i}^{\text {N}})^{(j)})\le [m+1+\sum _{i=0}^n(h_i+1)\mathrm{deg}(N_{i0})]\mathrm{deg}({\mathbf {R}})\).

Proof

1) Let \(\mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}=\big ([{\mathbb {P}}_0^{\text {N}},\ldots ,{\mathbb {P}}_n^{\text {N}}]\hbox {{:}}\mathbb {m}\big )_{\mathbb {Q}\{{\mathbb {Y}},{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}}\). By (10), \(\mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\cap \,\mathbb {Q}\{{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}=\mathrm{sat}({\mathbf {R}})\). By Theorem 3.9, \(\theta =(\eta ;\zeta _0,u_{01},\ldots ,u_{0l_0}; \ldots ;\zeta _n,u_{n1},\ldots ,u_{nl_n})\) is a generic point of \(\mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\). Clearly, \(\widehat{{\mathbb {P}}}=\{{\mathbb {P}}_0^{\text {N}},\ldots ,{\mathbb {P}}_n^{\text {N}}\}\) is a characteristic set of \(\mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\) w.r.t. the elimination ranking \(u_{n0}\succ \cdots \succ u_{10}\succ u_{00}\succ {\mathbf {u}}\succ {\mathbb {Y}}\). Taking the differential remainder of \({\mathbf {R}}\) w.r.t. \(\widehat{{\mathbb {P}}}\), by (2)
$$\begin{aligned} \prod N_{i0}^{a_i}{\mathbf {R}}=\sum _{i=0}^n\sum _{k=0}^{h_i}G_{ik}\big ({\mathbb {P}}_{i}^{\text {N}}\big )^{(k)} \end{aligned}$$
for some \(a_i\in \mathbb {Z}_{\ge 0}\). Let \(t_j=\mathrm{max}_{i=0}^n\{h_i+e_{ij}\}\) and \({\mathbb {Y}}^{[\mathbf t ]}=\{y_1^{[t_1]},\ldots ,y_n^{[t_n]}\}.\) Denote \(\mathbb {m}^{[\mathbf t ]}\) to be the set of all monomials in \({\mathbb {Y}}^{[\mathbf t ]}\), which is a multiplicative set. Let
$$\begin{aligned} \mathcal {J}=\big (({\mathbb {P}}_{0}^{\text {N}})^{[h_0]},\ldots ,({\mathbb {P}}_{n}^{\text {N}})^{[h_n]}\big )\hbox {{:}}\mathbb {m}^{[\mathbf t ]} \end{aligned}$$
be an algebraic ideal in \(\mathcal {R}=\mathbb {Q}[{\mathbb {Y}}^{[\mathbf t ]},{\mathbf {u}}_0^{[h_0]},\ldots ,{\mathbf {u}}_n^{[h_n]}]\), where \(({\mathbb {P}}_i^\text {N})^{(j)}\) are treated as polynomials in \(\mathcal {R}\). Then \({\mathbf {R}}\in \mathcal {J}\). Let \(\eta ^{[\mathbf t ]}=(\eta _1^{[t_1]},\ldots ,\eta _n^{[t_n]})\) and \(\widetilde{{\mathbf {u}}}=\bigcup _i({\mathbf {u}}_i\backslash \{u_{i0}\})^{[h_i]}\). Then, it is easy to show that \(\mathcal {J}\) is a prime ideal in \(\mathcal {R}\) with a generic point \((\eta ^{[\mathbf t ]};\widetilde{{\mathbf {u}}},\zeta _0^{[h_0]},\ldots ,\zeta _n^{[h_n]})\) and
$$\begin{aligned} \mathcal {J}\cap {\mathbb Q}[{\mathbf {u}}_0^{[h_0]},\ldots ,{\mathbf {u}}_n^{[h_n]}]=({\mathbf {R}}). \end{aligned}$$
Since \({\mathbb {V}}\big ((({\mathbb {P}}_{i}^{\text {N}})^{(k)})_{0\le i\le n;0\le k\le h_i}\big )={\mathbb {V}}(\mathcal {J})\bigcup \bigcup \limits _{j,l}{\mathbb {V}}(({\mathbb {P}}_{i}^{\text {N}})^{(k)},y_{j}^{(l)})\), \({\mathbb {V}}(\mathcal {J})\) is an irreducible component of \({\mathbb {V}}\big ((({\mathbb {P}}_{i}^{\text {N}})^{(k)})_{0\le i\le n;0\le k\le h_i}\big )\). By Lemma 6.15, \(\mathrm{deg}(\mathcal {J})\le \prod _{i=0}^n\prod _{k=0}^{h_i}(m_i+1)\). Since \(\mathcal {J}\cap {\mathbb Q}[{\mathbf {u}}_0^{[h_0]},\ldots ,{\mathbf {u}}_n^{[h_n]}]=({\mathbf {R}})\), by Lemma 6.14, \(\mathrm{deg}({\mathbf {R}})\le \mathrm{deg}(\mathcal {J})\le \prod _{i=0}^n(m_i+1)^{h_i+1}\le (m+1)^{\sum _{i=0}^n(\text { J}_i+1)}\) follows. The last inequality holds because \(h_i\le \text { J}_i\) by Theorem 6.9.

2) In \({\mathbf {R}}\), let \(u_{i0}\,(i=0,\ldots ,n)\) be replaced, respectively, by \(\big ({\mathbb {P}}_{i}^{\text {N}}-\sum _{k=1}^{l_i}u_{ik}N_{ik}\big )/N_{i0}\) and let \({\mathbf {R}}\) be expanded as polynomials in \({\mathbb {P}}_{i}^{\text {N}}\) and their derivatives with coefficients in \(\mathbb {Q}\{{\mathbb {Y}}^{\pm };{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}\). We obtain \({\mathbf {R}}=\sum \limits _{M}q_M\cdot M({\mathbf {u}};u_{00},\ldots ,u_{n0})=\sum \limits _{M}q_M\cdot M\big ({\mathbf {u}};\frac{ {\mathbb {P}}_{0}^{\text {N}}-\sum _{k=1}^{l_0}u_{0k}N_{0k}}{N_{00}},\ldots ,\frac{{\mathbb {P}}_{n}^{\text {N}}-\sum _{k=1}^{l_n}u_{nk}N_{nk}}{N_{n0}}\big ) =\big (\sum \limits _{i=0}^n\sum \limits _{k=0}^{h_i}G_{ik}({\mathbb {P}}_{i}^{\text {N}})^{(k)}+T\big )\big /\prod \limits _{i=0}^nN_{i0}^{a_i}\), where \(q_M\in {\mathbb Q}\), \(a_i\in \mathbb {N}\), \(G_{ik}\in {\mathbb Q}[{\mathbf {u}}_0^{[h_0]},\ldots ,{\mathbf {u}}_n^{[h_n]},{\mathbb {Y}}^{[\mathbf t ]}]\) and \(T\in {\mathbb Q}\{{\mathbf {u}},{\mathbb {Y}}\}\) is free from \(u_{i0}\). So \(\prod \limits _{i=0}^nN_{i0}^{a_i}{\mathbf {R}}=\sum \limits _{i=0}^n\sum \limits _{k=0}^{h_i}G_{ik}({\mathbb {P}}_{i}^{\text {N}})^{(k)}+T\), and \(T\in \mathcal {I}_{{\mathbb {Y}},{\mathbf {u}}}\cap {\mathbb Q}\{{\mathbf {u}},{\mathbb {Y}}\}=\{0\}\) follows. Thus, \(T=0\) and we obtain a representation for \({\mathbf {R}}\) of the form (27).

To obtain degree bounds for this representation of \({\mathbf {R}}\), we estimate, for each monomial \(M\) of \({\mathbf {R}}\), the degrees of the terms obtained after performing the above substitution for \(u_{i0}\). Let \(M=M({\mathbf {u}};u_{00},\ldots ,u_{n0})={\mathbf {u}}^{\gamma }\prod _{i=0}^n\prod _{k=0}^{h_i}(u_{i0}^{(k)})^{d_{ik}}\) with \(|\gamma |+\sum _{i=0}^n\sum _{k=0}^{h_i}d_{ik}\le \mathrm{deg}({\mathbf {R}})\), where \({\mathbf {u}}^{\gamma }\) represents a monomial in \({\mathbf {u}}\) and their derivatives with exponent vector \(\gamma \). In \(M\), substitute \(u_{i0}\) by \(\big ({\mathbb {P}}_{i}^{\text {N}}-\sum _{k=1}^{l_i}u_{ik}N_{ik}\big )/N_{i0}\), that is,
$$\begin{aligned} M({\mathbf {u}};u_{00},\ldots ,u_{n0})= {\mathbf {u}}^{\gamma }\prod _{i=0}^n\prod _{k=0}^{h_i} \bigg (\Big (\big ({\mathbb {P}}_{i}^{\text {N}}-\sum _{k=1}^{l_i}u_{ik}N_{ik}\big )/N_{i0}\Big )^{(k)}\bigg )^{d_{ik}}. \end{aligned}$$
When expanded, the denominator is of the form \(\prod _{i=0}^n N_{i0}^{\sum _{k}(k+1)d_{ik}}\) and every term of the numerator has total degree \(|\gamma |+\sum _{i=0}^n\sum _{k=0}^{h_i}[\mathrm{deg}({\mathbb {P}}_{i}^{\text {N}},{\mathbf {u}}_i\cup {\mathbb {Y}})+k\cdot \mathrm{deg}(N_{i0})]d_{ik}\) as a polynomial in \({\mathbf {u}}_0^{[h_0]},\ldots ,{\mathbf {u}}_n^{[h_n]}\) and \({\mathbb {Y}}^{[\mathbf t ]}\). So by multiplying \({\mathbf {R}}\) by certain power products of \(N_{i0}\), we can clear the denominators in this representation of \({\mathbf {R}}\). Since for each \(M\) in \({\mathbf {R}}\) we have \(\sum _{k=0}^{h_i}(k+1)d_{ik}\le (h_i+1)\mathrm{deg}({\mathbf {R}})\), we can take \(a_i=(h_i+1)\mathrm{deg}({\mathbf {R}})\); multiplying \({\mathbf {R}}\) by \(\prod _{i=0}^nN_{i0}^{a_i}\) and following the same procedure as in the above paragraph, we obtain \(\prod _{i=0}^nN_{i0}^{a_i}\cdot {\mathbf {R}}=\sum _{i=0}^n\sum _{j=0}^{h_i}G_{ij}\big ({\mathbb {P}}_{i}^{\text {N}}\big )^{(j)}\) where \(G_{ij}\in {\mathbb Q}[{\mathbf {u}}_0^{[h_0]},\ldots ,{\mathbf {u}}_n^{[h_n]},{\mathbb {Y}}^{[\mathbf t ]}]\). Since for each \(M\), every term of \(\prod _{i=0}^nN_{i0}^{a_i}\cdot M\) after performing the substitution for \(u_{i0}\) has degree bounded by \(|\gamma |+\sum _{i=0}^n\sum _{k=0}^{h_i}\big [\mathrm{deg}({\mathbb {P}}_{i}^{\text {N}},{\mathbf {u}}_i\cup {\mathbb {Y}})+k\cdot \mathrm{deg}(N_{i0})\big ] d_{ik}+\sum _{i=0}^na_i\mathrm{deg}(N_{i0})-\sum _{i=0}^n \sum _{k=0}^{h_i}(k+1)d_{ik}\cdot \mathrm{deg}(N_{i0})\), we have
$$\begin{aligned}&\mathrm{deg}(G_{ij}\big ({\mathbb {P}}_{i}^{\text {N}}\big )^{(j)})\\&\quad \le \mathrm{max}_{M}\Big \{|\gamma |+\sum _{i=0}^n\sum _{k=0}^{h_i} \big [\mathrm{deg}({\mathbb {P}}_{i}^{\text {N}},{\mathbf {u}}_i\cup {\mathbb {Y}})+k\cdot \mathrm{deg}(N_{i0})\big ]d_{ik}\\&\quad \quad +\sum _{i=0}^na_i\mathrm{deg}(N_{i0})-\sum _{i=0}^n \sum _{k=0}^{h_i}(k+1)d_{ik}\cdot \mathrm{deg}(N_{i0})\Big \}\\&\quad = \mathrm{max}_{M}\Big \{|\gamma |+\sum _{i=0}^n\sum _{k=0}^{h_i}(m_i+1)d_{ik}+\sum _{i=0}^na_i\mathrm{deg}(N_{i0})-\sum _{i=0}^n \sum _{k=0}^{h_i}d_{ik}\cdot \mathrm{deg}(N_{i0})\Big \}\\&\quad \le \mathrm{max}_{M}\Big \{\big (m+1\big )\big (|\gamma |+\sum _{i=0}^n \sum _{k=0}^{h_i}d_{ik}\big )+\sum _{i =0}^na_i\mathrm{deg}(N_{i0})\Big \}\\&\quad =\Big [m+1+\sum _{i=0}^n(h_i+1)\mathrm{deg}(N_{i0})\Big ]\mathrm{deg}({\mathbf {R}}). \end{aligned}$$
\(\square \)

Example 6.17

In Example 3.19, \(\text { J}_0=2\), \(\text { J}_1=\text { J}_2=1\) and \(m_0=m_1=m_2=2\). The expression of \({\mathbf {R}}\) shows that \(h_0={\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_0)=1< \text { J}_0\), \(h_i={\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_i)=0<\text { J}_i\,(i=1,2)\) and \(\mathrm{deg}({\mathbf {R}})=5\ll 3^4=\prod _{i=0}^2(m_i+1)^{h_i+1}.\)

With Theorem 6.16, properties 6) and 7) of Theorem 1.2 are proved.

6.3 A Single Exponential Algorithm to Compute Sparse Differential Resultants

If a polynomial \(R\) is a linear combination of some known polynomials \(F_i\,(i=1,\ldots ,s)\), that is, \(R=\sum _{i=1}^s H_i F_i\), and upper bounds for the degrees of \(R\) and of the \(H_iF_i\) are known, then a general way to compute \(R\), and to estimate the complexity of doing so, is to use linear algebra to find the coefficients of \(R\).

For the sparse differential resultant, its degree bound and the degrees of the expressions in the linear combination are given in Theorem 6.16.

Now, we give the algorithm SDResultant to compute sparse differential resultants based on linear algebra techniques. The algorithm works adaptively by searching for \({\mathbf {R}}\) with an order vector \((h_0,\ldots ,h_n)\in \mathbb {N}^{n+1}\) satisfying \(h_i \le \text { J}_i\), as justified by Theorem 6.16. Denote \(o=\sum _{i=0}^n h_i\). We start with \(o=0\), and for this \(o\) we choose one vector \((h_0,\ldots ,h_n)\) at a time. For this \((h_0,\ldots ,h_n)\), we search for \({\mathbf {R}}\) starting from degree \(d=1\). If no \({\mathbf {R}}\) of this degree is found, we repeat the procedure with degree \(d+1\) until \(d>\prod _{i=0}^n(m_i+1)^{h_i+1}\); in that case, we choose another \((h_0,\ldots ,h_n)\) with \(\sum _{i=0}^nh_i=o\). If for all \((h_0,\ldots ,h_n)\) with \(h_i\le \text { J}_i\) and \(\sum _{i=0}^nh_i=o\) no \({\mathbf {R}}\) can be found, we repeat the procedure with \(o+1\). In this way, we will find an \({\mathbf {R}}\) of smallest order satisfying Eq. (27), which is the sparse differential resultant.
https://static-content.springer.com/image/art%3A10.1007%2Fs10208-015-9249-9/MediaObjects/10208_2015_9249_Figa_HTML.gif
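The adaptive search described above can be organized as in the following schematic Python sketch; find_R_of_degree is a hypothetical placeholder for Steps 3.1.2.1–3.1.2.7 (setting up and solving the linear system \(\mathcal {P}=0\)), and the sketch assumes all \(\text { J}_i\ge 0\) (otherwise one first passes to a rank essential sub-system, cf. Remark 6.20):

```python
from itertools import product

def sd_resultant_search(J, m, find_R_of_degree):
    """Schematic outer loop of Algorithm SDResultant (a sketch, not the
    algorithm itself).  J = [J_0,...,J_n] are the Jacobi order bounds,
    m = [m_0,...,m_n] the degrees of the P_i^N in Y, and
    find_R_of_degree(h, d) is a hypothetical oracle for Steps 3.1.2.1-3.1.2.7
    that returns R if the linear system has a nonzero solution, else None."""
    for o in range(sum(J) + 1):                          # total order o = h_0 + ... + h_n
        for h in product(*(range(Ji + 1) for Ji in J)):  # order vectors with h_i <= J_i
            if sum(h) != o:
                continue
            deg_bound = 1
            for mi, hi in zip(m, h):
                deg_bound *= (mi + 1) ** (hi + 1)        # degree bound of Theorem 6.16
            for d in range(1, deg_bound + 1):
                R = find_R_of_degree(h, d)
                if R is not None:
                    return R    # smallest order first, then smallest degree
```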

Theorem 6.18

Let \({\mathbb {P}}=\{{\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\}\) be a Laurent differentially essential system given in (3). Denote \(\widehat{{\mathbb {P}}}=\{{\mathbb {P}}_0^{\text {N}},\ldots ,{\mathbb {P}}_n^{\text {N}}\}\), \(\text { J}_i=\mathrm{Jac}(\widehat{{\mathbb {P}}}_{\hat{i}})\), \({\text { J}}=\sum _{i=0}^n\text { J}_i\) and \(m=\mathrm{max}_{i=0}^n \mathrm{deg}({\mathbb {P}}_i^{\text {N}},{\mathbb {Y}})\). Algorithm SDResultant computes the sparse differential resultant \({\mathbf {R}}\) of \({\mathbb {P}}\) with the following complexities:
1)

In terms of a degree bound \(D\) of \({\mathbf {R}}\), the algorithm needs at most \(O\Big (\frac{\big [\big (m(\text { J}+n+2)+1\big )D\big ]^{O(l\text { J}+l)}}{n^{n}}\Big )\) \({\mathbb Q}\)-arithmetic operations, where \(l=\sum _{i=0}^n(l_i+1)\) is the size of the system.

2)

The algorithm needs at most \(O\big ({(\text { J}+n+2)^{O(l \text { J}+l)}(m+1)^{O((l\text { J}+l)(\text { J}+n+2))}}/{n^{n}}\big )\) \({\mathbb Q}\)-arithmetic operations.

Proof

The algorithm finds a \(P\in {\mathbb Q}\{{\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n\}\) satisfying (27), which has the smallest order and the smallest degree among those with the same order. Existence for such a differential polynomial is guaranteed by Theorem 6.16. Such a \(P\) is in \(\mathrm{sat}({\mathbf {R}})\) by (10). Since each differential polynomial in \(\mathrm{sat}({\mathbf {R}})\) not equal to \({\mathbf {R}}\) either has greater order than \({\mathbf {R}}\) or has the same order but greater degree than \({\mathbf {R}}\), \(P\) must be \({\mathbf {R}}\) (up to a factor in \(\mathbb {Q}\)).

We will estimate the complexity of the algorithm below. Denote \(D\) to be the degree bound of \({\mathbf {R}}.\) By Theorem 6.16, \(D\le (m+1)^{\sum _{i=0}^n(\text { J}_i+1)}=(m+1)^{\text { J}+n+1}\), where \(\text { J}=\sum _{i=0}^n\text { J}_i\). In each loop of Step 3, the complexity of the algorithm is clearly dominated by Step 3.1.2, where we need to solve a system of linear equations \(\mathcal {P}=0\) over \({\mathbb Q}\) in \({\mathbf {c}}_{0}\) and \({\mathbf {c}}_{ij}\). It is easy to show that \(|{\mathbf {c}}_{0}|={d+L_1-1\atopwithdelims ()L_1-1}\) and \(|{\mathbf {c}}_{ij}|={d_1-m_i-1+L_1+L_2\atopwithdelims ()L_1+L_2}\), where \(L_1=|U|=\sum \limits _{i=0}^n (h_i+1)(l_i+1)\), \(L_2=|{\mathbb {Y}}^{[\mathbf t ]}|=\sum \limits _{j=1}^n(\mathrm{max}_{i}\{h_i+e_{ij}\}+1)\) and \(d_1=[m+1+\sum _{i=0}^n(h_i+1)m_{i0}]d\). Then \(\mathcal {P}=0\) is a linear equation system with \(W_1={d+L_1-1\atopwithdelims ()L_1-1}+\sum _{i=0}^n(h_i+1){d_1-m_i-1+L_1+L_2\atopwithdelims ()L_1+L_2}\) variables and \(W_2={d_1+L_1+L_2\atopwithdelims ()L_1+L_2}\) equations. To solve it, we need at most \((\mathrm{max}\{W_1, W_2\})^{\omega }\) arithmetic operations over \({\mathbb Q}\), where \(\omega \) is the matrix multiplication exponent and the currently best known \(\omega \) is 2.376.

The iteration in Step 3.1.2 runs over degrees \(d\) from \(1\) to \(\prod _{i=0}^n(m_i+1)^{h_i+1}\le (m+1)^{\sum _{i=0}^n(\text { J}_i+1)}\), and the iteration in Step 3.1 repeats at most \(\prod _{i=0}^n(\text { J}_i+1)\) times. By Theorem 6.16, Step 3 may loop from \(o=0\) to \(\sum _{i=0}^n\text { J}_i\). In the whole algorithm, \(L_1\le \sum \limits _{i=0}^n (\text { J}_i+1)(l_i+1)\le l\text { J}+l\), \(L_2=|{\mathbb {Y}}^{[\mathbf t ]}|\le \sum \limits _{j=1}^n(\mathrm{max}_{i}\{\text { J}_i+e_{ij}\}+1)=\text { J}+n\) by Lemma 6.2, and \(d_1\le [m+1+\sum _{i=0}^n(\text { J}_i+1)m_{i0}]D\le \big (m(\text { J}+n+2)+1\big )D\). Thus, \(W_1\le {D+l\text { J}+l-1\atopwithdelims ()l\text { J}+l-1}+\sum _{i=0}^n(\text { J}_i+1){ (m(\text { J}+n+2)+1)D-m_i-1+l\text { J}+l+\text { J}+n \atopwithdelims ()l\text { J}+l+\text { J}+n}\) and \(\mathrm{max}\{W_1,W_2\}\le (\text { J}+n+2){[m(\text { J}+n+2)+1]D+l\text { J}+l+\text { J}+n \atopwithdelims ()l\text { J}+l+\text { J}+n}\).

Hence, the whole algorithm needs at most
$$\begin{aligned}&\sum _{o=0}^{\sum _{i=0}^n \text { J}_i}\sum _{{\mathop {h_i\le \text {J}_i}\limits _{\sum _{i}h_i=o}}}\sum _{d=1}^{\prod _{i=0}^n (m_i+1)^{h_i+1}}\big (\mathrm{max}\{W_1,W_2\}\big )^{2.376}\nonumber \\&\quad \le \Big (\prod _{i=0}^n(\text { J}_i+1)\Big )\cdot D \cdot \bigg [(\text { J}+n+2)\left( {\begin{array}{c}[m(\text { J}+n+2)+1]D +l\text { J}+l+\text { J}+n\\ l\text { J}+l +\text { J}+n\end{array}}\right) \bigg ]^{2.376}\nonumber \\&\quad \le (\text { J}+n+2)^{3.376}\big (\frac{\sum _{i=0}^n (\text { J}_i+1)}{n+1}\big )^{n+1}\cdot D\cdot \big [\big (m(\text { J}+n+2)+1\big )D\big ]^{2.376 (l\text { J}+l+\text { J}+n)}\nonumber \\&\quad \le (\text { J}+n+2)^{3.376}\frac{(\text { J}+n+1)^{n+1}}{n^{n}}\cdot D\cdot \big [\big (m(\text { J}+n+2) +1\big )D\big ]^{2.376(l\text { J}+l+\text { J} +n)}\nonumber \end{aligned}$$
\({\mathbb Q}\)-arithmetic operations. In the above inequalities, we assume \( [m(\text { J}+n+2)+1]D\ge l\text { J}+l+\text { J}+n\).

Since \(l\ge 2(n+1)\), the complexity bound is \(O\big ({\big [(m(\text { J}+n+2)+1)D\big ]^{O(l\text { J}+l)}}/{n^{n}}\big )\). Our analysis assumes an \(O(1)\) cost for each field operation over \(\mathbb {Q}\), and the stated complexity follows. This proves 1). To prove 2), we just need to replace \(D\) in the complexity bound of 1) by the degree bound for \({\mathbf {R}}\) given in Theorem 6.16. \(\square \)

Example 6.19

Let \(n=1\), \({\mathbb {P}}_0=u_{00}+u_{01}y' \), and \( {\mathbb {P}}_1=u_{10}+u_{11}y'\). We use this simple example to illustrate Algorithm SDResultant. Here, \(m_{i0}=0, m=1\), \(J_0=J_1=1\). In step 3.1, \(o=0\) and \((h_0,h_1)=(0,0)\). So \(U=\{u_{00},u_{01},u_{10},u_{11}\}\) and \({\mathbb {Y}}^{[\mathbf t ]}=\{y,y'\}\). We first execute steps 3.1.2.1 to 3.1.2.7 for \(d=1\). Set \({\mathbf {R}}_0=c_{01}u_{00}+c_{02}u_{01}+c_{03}u_{10}+c_{04}u_{11}\) and \({\mathbf {c}}_0=(c_{01},c_{02},c_{03},c_{04})\). Set \(G_{i0}=c_{i01}\) and \({\mathbf {c}}_{i0}=(c_{i01})\) for \(i=0, 1\). In step 3.1.2.5, since \({\mathbf {R}}_0-G_{00}{\mathbb {P}}_0-G_{10}{\mathbb {P}}_1=(c_{01}-c_{001})u_{00} +c_{02}u_{01}+(c_{03}-c_{101})u_{10}+c_{04}u_{11}-c_{001}u_{01}y' -c_{101}u_{11}y'\), \(\mathcal {P}=0\) consists of equations \(\{c_{01}-c_{001}=0, c_{02}=0, c_{03}-c_{101}=0, c_{04}=0, c_{001}=0, c_{101}=0\}\). \(\mathcal {P}=0\) has only the zero solution \({\mathbf {c}}_0=(0,0,0,0)\) and \(c_{001}=c_{101}=0\). Then \({\mathbf {R}}_0=0\), so no resultant of degree \(1\) is found.

Next, we execute steps 3.1.2.1 to 3.1.2.7 for \(d=2\). Set \({\mathbf {R}}_0=c_{01}u_{00}u_{10}+c_{02}u_{00}u_{11} +c_{03}u_{01}u_{10}+c_{04}u_{01}u_{11}+ c_{05}u_{00}^2+c_{06}u_{00}u_{01}+c_{07}u_{01}^2 +c_{08}u_{10}^2+c_{09}u_{10}u_{11}+ c_{0,10}u_{11}^2\), and \({\mathbf {c}}_0=(c_{01},\ldots ,c_{0,10})\). Set \(\{M_1,\ldots ,M_{28}\} \) to be the set of monomials in \(U\) and \({\mathbb {Y}}^{[\mathbf t ]}\) of degree not greater than 2. Let \(G_{i0}=\sum _{j=1}^{28}c_{i0j}M_{j}\) and \({\mathbf {c}}_{i0}=(c_{i01},\ldots ,c_{i0,28})\) for \(i=0, 1\). Regarding \(T={\mathbf {R}}_0-G_{00}{\mathbb {P}}_0-G_{10}{\mathbb {P}}_1\) as polynomials in \(U\) and \({\mathbb {Y}}^{[\mathbf t ]}\), let \(\mathcal {P}\) be the set of coefficients of \(T\), which are linear polynomials in \({\mathbb Q}[{\mathbf {c}}_{0},{\mathbf {c}}_{00},{\mathbf {c}}_{10}]\). Then \(\mathcal {P}=0\) consists of at most \({10 \atopwithdelims ()4}\) linear equations in \(66\) variables \({\mathbf {c}}_{0},{\mathbf {c}}_{00}\), and \({\mathbf {c}}_{10}\) with integral coefficients. Solving \(\mathcal {P}=0\), we obtain \({\mathbf {c}}_0=(0,q,-q,0,0,0,0,0,0,0)\), where \(q\in {\mathbb Q}\). Thus, the algorithm returns \({\mathbf {R}}=u_{00}u_{11}-u_{01}u_{10}\).
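The output can be checked against representation (27) directly: here \(N_{00}=N_{10}=1\), \(h_0=h_1=0\), and one may take \(G_{00}=u_{11}\), \(G_{10}=-u_{01}\). A minimal verification in Python using sympy (our choice for the check only; it plays no role in the algorithm):

```python
import sympy as sp

u00, u01, u10, u11, y1 = sp.symbols("u00 u01 u10 u11 y1")  # y1 stands for y'

P0 = u00 + u01 * y1
P1 = u10 + u11 * y1
R = u00 * u11 - u01 * u10

# Representation (27) with N_00 = N_10 = 1 and h_0 = h_1 = 0
G00, G10 = u11, -u01
assert sp.expand(G00 * P0 + G10 * P1 - R) == 0
```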

Remark 6.20

By Remark 4.21, we can compute a rank essential set \(I\) and the algorithm can be improved by only considering the Laurent differential polynomials \({\mathbb {P}}_i\,(i\in I)\) in the linear combination of the sparse differential resultant.

Remark 6.21

If the given system is algebraic, then the complexity bound given in 1) of Theorem 6.18 is essentially the same as that given in [45, p. 288] since \(D\gg m\) and \(D\gg n\).

With Theorem 6.18, Theorem 1.4 is proved.

6.4 Degree Bounds of Differential Resultants in Terms of Mixed Volumes

The degree bound given in Theorem 6.16 is essentially a Bézout type bound. In this section, a BKK style degree bound for the differential resultant will be given, which is the sum of the mixed volumes of certain polytopes generated by the supports of certain differential polynomials and their derivatives.

We first recall results about degrees of algebraic sparse resultants given by Sturmfels [46]. Let \(\mathcal {K}[{\mathbb {X}}]=\mathcal {K}[x_1,\ldots ,x_n]\) be the polynomial ring defined over a field \(\mathcal {K}.\) For any vector \(\alpha =(a_1,\ldots ,a_n)\in \mathbb {Z}^n\), denote the Laurent monomial \(x_1^{a_1}x_2^{a_2}\cdots x_n^{a_n}\) by \({\mathbb {X}}^{\alpha }\). Let \(\mathcal {B}_0,\ldots ,\mathcal {B}_n\subset \mathbb {Z}^n\) be subsets which jointly span the affine lattice \(\mathbb {Z}^n\). Suppose \(\mathbf{0}=(0,\ldots ,0)\in \mathcal {B}_i\) for each \(i\) and \(|\mathcal {B}_i|=l_i+1\ge 2\). Let
$$\begin{aligned} {\mathbb {F}}_i(x_1,\ldots ,x_n)=c_{i0}+\sum _{\alpha \in \mathcal {B}_i\backslash \{\mathbf{0}\}}c_{i,\alpha }{\mathbb {X}}^{\alpha }\,\,\,(i=0,1,\ldots ,n) \end{aligned}$$
(28)
be generic sparse Laurent polynomials defined over \(\mathcal {B}_i\,(i=0,1,\ldots ,n)\). Here, \(\mathcal {B}_i\) or \(\{{\mathbb {X}}^{\alpha }\,|\,\alpha \in \mathcal {B}_i\}\) are called the support of \({\mathbb {F}}_i\). Denote \({\mathbf {c}}_i=(c_{i\alpha })_{\alpha \in \mathcal {B}_i}\) and \({\mathbf {c}}=\cup _i({\mathbf {c}}_i\backslash \{c_{i0}\})\). Let \(\mathcal {Q}_i\) be the convex hull of \(\mathcal {B}_i\) in \(\mathbb {R}^n\), which is the smallest convex set containing \(\mathcal {B}_i\). We call \(\mathcal {Q}_i\) the Newton polytope of \({\mathbb {F}}_i\), denoted by \(\hbox {{NP}}({\mathbb {F}}_i)\). In [46], Sturmfels gave the definition of algebraic essential set and proved that a necessary and sufficient condition for the existence of sparse resultants is that there exists a unique subset \(\{\mathcal {B}_i\}_{i\in \text { I}}\) which is essential. Now, we restate the definition of essential sets in our words for the sake of later use.

Definition 6.22

Suppose \({\mathbb {F}}_0,\ldots ,{\mathbb {F}}_n\) are generic sparse Laurent polynomials of the form (28).
  • A collection of \(\{\mathcal {B}_i\}_{i\in \text { I}}\), or \(\{{\mathbb {F}}_i\}_{i\in \text {I}}\), is said to be algebraically independent if \(\hbox {{tr.deg}}\,\mathbb {Q}({\mathbf {c}})({\mathbb {F}}_i-c_{i0}\,|\,i\in \text { I})/\mathbb {Q}({\mathbf {c}})=|\text {I}|\). Otherwise, they are said to be algebraically dependent.

  • A collection of \(\{\mathcal {B}_i\}_{i\in \text { I}}\) is said to be essential if \(\{\mathcal {B}_i\}_{i\in \text {I}}\) is algebraically dependent and for each proper subset \(\text {J}\) of \(\text {I}\), \(\{\mathcal {B}_i\}_{i\in \text {J}}\) are algebraically independent.

In the case that \(\{\mathcal {B}_0,\ldots ,\mathcal {B}_n\}\) is essential, the degree of the sparse resultant can be described by mixed volumes.

Theorem 6.23

([46]) Suppose that \(\{\mathcal {B}_0,\ldots ,\mathcal {B}_n\}\) is essential. For each \(i\!\in \!\{0,\!1,\!\ldots ,\!n\}\), the degree of the sparse resultant in \({\mathbf {c}}_i\) is a positive integer, equal to the mixed volume
$$\begin{aligned} \mathcal {M}(\mathcal {Q}_0,\ldots ,\mathcal {Q}_{i-1},\mathcal {Q}_{i+1},\ldots ,\mathcal {Q}_n) =\sum _{\text {J}\subset \{0,\ldots ,i-1,i+1,\ldots ,n\}}(-1)^{n-|\text {J}|}\hbox {{vol}}\left( \sum _{j\in \text {J}}\mathcal {Q}_j\right) \end{aligned}$$
where \(\hbox {{vol}}(\mathcal {Q})\) means the \(n\)-dimensional volume of \(\mathcal {Q}\subset \mathbb {R}^n\) and \(\sum _{j\in \text {J}}\mathcal {Q}_j\) is the Minkowski sum of \(\mathcal {Q}_j\, (j\in J)\).

The mixed volume of the Newton polytopes of a polynomial system is important in that it bounds the number of solutions of these polynomial equations contained in \((\mathbb {C}^*)^n\); this is the famous BKK bound [2].

The following lemma shows that the BKK bound is always smaller than the Bézout bound.

Lemma 6.24

Let \(f_1,\ldots ,f_n\) be polynomials in \(\mathbb {C}[x_1,\ldots ,x_n]\) and \(\mathcal {Q}_i\) be the Newton polytope of \(f_i\) in \(\mathbb {R}^n\). Then \(\mathcal {M}(\mathcal {Q}_1,\ldots ,\mathcal {Q}_n)\le \prod _{i=1}^n\mathrm{deg}(f_i)\).

Proof

Let \(\Delta \) be the standard unit simplex of \(\mathbb {R}^n\). Then for each \(i\), \(\mathcal {Q}_i\subset d_i\Delta \), where \(d_i=\mathrm{deg}(f_i)\). By the monotonicity of the mixed volume, \(\mathcal {M}(\mathcal {Q}_1,\ldots ,\mathcal {Q}_n)\le \mathcal {M}(d_1\Delta ,\ldots ,d_n\Delta )=\prod _{i=1}^nd_i\cdot \mathcal {M}(\Delta ,\ldots ,\Delta )=\prod _{i=1}^nd_i\). \(\square \)
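For \(n=2\), the formula of Theorem 6.23 reads \(\mathcal {M}(\mathcal {Q}_1,\mathcal {Q}_2)=\hbox {{vol}}(\mathcal {Q}_1+\mathcal {Q}_2)-\hbox {{vol}}(\mathcal {Q}_1)-\hbox {{vol}}(\mathcal {Q}_2)\), which can be evaluated directly from the supports. The following sketch is ours; the example supports, taken from \(f_1=a+bx+cxy\) and \(f_2=d+ey+fxy\), are chosen only to exhibit a case where the BKK bound \(2\) is strictly smaller than the Bézout bound \(4\):

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_area(points):
    """Area of the convex hull of a planar point set
    (in dimension 2, ConvexHull.volume is the area)."""
    return ConvexHull(np.asarray(points, dtype=float)).volume

def mixed_area(Q1, Q2):
    """Two-dimensional mixed volume via inclusion-exclusion."""
    A, B = np.asarray(Q1), np.asarray(Q2)
    minkowski = [a + b for a in A for b in B]   # vertex candidates of Q1 + Q2
    return hull_area(minkowski) - hull_area(A) - hull_area(B)

Q1 = [(0, 0), (1, 0), (1, 1)]   # support of f1 = a + b*x + c*x*y
Q2 = [(0, 0), (0, 1), (1, 1)]   # support of f2 = d + e*y + f*x*y
print(mixed_area(Q1, Q2))       # 2.0, while deg(f1)*deg(f2) = 4
```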

In the rest of this section, the degree of algebraic sparse resultants will be used to give a degree bound for differential resultants in terms of mixed volumes. A system of \(n+1\) generic differential polynomials with degrees \(m_0,\ldots ,m_n\) and orders \(s_0,\ldots ,s_n\), respectively, of the form
$$\begin{aligned} {\mathbb {P}}_i=u_{i0}+\sum _{ \begin{array}{c} \alpha \in \mathbb {Z}^{n(s_i+1)}_{\ge 0} \\ 1\le |\alpha |\le m_i \end{array}}u_{i \alpha }({\mathbb {Y}}^{[s_i]})^{\alpha }\, (i=0,\ldots ,n), \end{aligned}$$
(29)
clearly forms a differentially essential system, and its sparse differential resultant is exactly equal to the differential resultant defined in [17]. So Theorem 6.16 also gives a degree bound for the differential resultant. But when we use Theorem 6.16 to estimate the degree of \({\mathbf {R}}\), not only is the Bézout bound used, but also the degrees of the \({\mathbb {P}}_i\) in both \({\mathbb {Y}}\) and \({\mathbf {u}}_i\) are taken into account.

The following theorem gives a BKK-style upper bound for the degrees of differential resultants; its proof is not valid for sparse differential resultants.

Theorem 6.25

Let \({\mathbb {P}}_i\,(i=0,\ldots ,n)\) be generic differential polynomials in \({\mathbb {Y}}\) with order \(s_i\), degree \(m_i\), and coefficient vector \({\mathbf {u}}_i\), respectively. Let \({\mathbf {R}}({\mathbf {u}}_0,\ldots ,{\mathbf {u}}_n)\) be the differential resultant of \({\mathbb {P}}_0,\ldots ,{\mathbb {P}}_n\). Denote \(s=\sum _{i=0}^n s_i\). Then for each \(i\in \{0,1,\ldots ,n\}\),
$$\begin{aligned} \mathrm{deg}({\mathbf {R}},{\mathbf {u}}_i)\le \sum _{k=0}^{s-s_i}\mathcal {M}\big ((\mathcal {Q}_{jl})_{j\ne i,0\le l\le s-s_j},\mathcal {Q}_{i0},\ldots ,\mathcal {Q}_{i,k-1},\mathcal {Q}_{i,k+1},\ldots ,\mathcal {Q}_{i,s-s_i}\big ) \end{aligned}$$
(30)
where \(\mathcal {Q}_{jl}\) is the Newton polytope of \({\mathbb {P}}_j^{(l)}\) as a polynomial in \(y^{[s]}_1,\ldots ,y^{[s]}_n\).

Proof

By [17, Theorem 6.8], \({\mathrm{ord}}({\mathbf {R}},{\mathbf {u}}_i)=s-s_i\,(i=0,\ldots ,n)\) and \(({\mathbf {R}})=({\mathbb {P}}_0^{[s-s_0]},\ldots ,{\mathbb {P}}_n^{[s-s_n]})\cap {\mathbb Q}[{\mathbf {u}}_0^{[s-s_0]},\ldots ,{\mathbf {u}}_n^{[s-s_n]}]\). Regard each \({\mathbb {P}}_i^{(k)}\,(i=0,\ldots ,n;\,k=0,\ldots ,s-s_i)\) as a polynomial in the \(n(s+1)\) variables \({\mathbb {Y}}^{[s]}=\{y_1,\ldots ,y_n,y'_1,\ldots ,y'_n,\ldots ,y^{(s)}_1,\ldots ,y^{(s)}_n\}\), and denote its support by \(\mathcal {B}_{ik}\). Let \({\mathbb {F}}_{ik}\) be a generic sparse polynomial with support \(\mathcal {B}_{ik}\). Denote by \({\mathbf {c}}_{ik}\) the set of coefficients of \({\mathbb {F}}_{ik}\), and in particular, let \(c_{ik0}\) be the coefficient of the monomial \(1\) in \({\mathbb {F}}_{ik}\). Now we claim that

C1) \(\overline{\mathcal {B}}=\{\mathcal {B}_{ik}\,|\,0\le i\le n; 0\le k \le s-s_i\}\) is an essential set.

C2) \(\overline{\mathcal {B}}=\{\mathcal {B}_{ik}\,|\,0\le i\le n; 0\le k \le s-s_i\}\) jointly span the affine lattice \(\mathbb {Z}^{n(s+1)}\).

Note that \(|\overline{\mathcal {B}}|=n(s+1)+1\). To prove C1), it suffices to show that for each fixed pair \((i,k)\), \(\overline{\mathcal {B}}\backslash \{\mathcal {B}_{ik}\}\) is algebraically independent over \(\mathbb {Q}({\mathbf {c}})\) where \({\mathbf {c}}=\cup _{i=0}^{n}\cup _{k=0}^{s-s_i}{\mathbf {c}}_{ik}\backslash \{c_{ik0}\}\). Without loss of generality, we prove that for a fixed \(k\in \{0,\ldots ,s-s_0\}\),
$$\begin{aligned} S_k=\{({\mathbb {F}}_{jl})_{1\le j \le n;0\le l\le s-s_j},{\mathbb {F}}_{00},\ldots , {\mathbb {F}}_{0,k-1},{\mathbb {F}}_{0,k+1},\ldots ,{\mathbb {F}}_{0,s-s_0}\} \end{aligned}$$
is an algebraically independent set.
Clearly, \(\{y_1,\ldots ,y_n,y'_1,\ldots ,y'_n,\ldots ,y^{(s_i+l)}_1,\ldots ,y^{(s_i+l)}_n\}\) is a subset of the support of \({\mathbb {F}}_{il}\). Now we choose a monomial from each \({\mathbb {F}}_{il}\) and denote it by \(m({\mathbb {F}}_{il})\). For each \(j\in \{1,\ldots ,n\}\) and \(l\in \{0,\ldots ,s-s_j\}\), let \(m({\mathbb {F}}_{jl})=y_j^{(s_j+l)}\). Then \(\{m({\mathbb {F}}_{jl})_{1\le j \le n;0\le l\le s-s_j}\}={\mathbb {Y}}^{[s]}\backslash \{y_1^{[s_1-1]},\ldots ,y_n^{[s_n-1]}\}\), where by convention \(y_j^{[s_j-1]}=\emptyset \) whenever \(s_j=0\). For the fixed \(k\), there exists \(\tau \in \{0,1,\ldots ,n-1\}\) such that \(\sum _{i=1}^{\tau }s_{i}\le k\le \sum _{i=1}^{\tau +1}s_{i}-1\) if \(\tau \le n-2\), or \(\sum _{i=1}^{\tau }s_{i}\le k\le \sum _{i=1}^{\tau +1}s_{i}\) if \(\tau =n-1\); when \(\tau =0\), this means \(0\le k\le s_1-1\). Then for \(l\ne k\), let
$$\begin{aligned} m({\mathbb {F}}_{0l})=\left\{ \begin{array}{lll} y_1^{(l)}&{}\quad &{}0\le l\le s_1-1 \\ y_2^{(l-s_1)}&{}\quad &{}s_1\le l\le s_1+s_2-1 \\ \quad \vdots &{}\quad &{}\quad \vdots \\ y_{\tau +1}^{(l-\sum _{i=1}^{\tau }s_i)}&{}\quad &{}\sum _{i=1}^{\tau }s_{i}\le l\le k-1\\ y_{\tau +1}^{(l-\sum _{i=1}^{\tau }s_i-1)}&{}\quad &{}k+1\le l \le \sum _{i=1}^{\tau +1}s_{i}\\ y_{\tau +2}^{(l-\sum _{i=1}^{\tau +1}s_i-1)}&{}\quad &{}\sum _{i=1}^{\tau +1}s_{i}+1\le l\le \sum _{i=1}^{\tau +2}s_{i}\\ \quad \vdots &{}\quad &{}\quad \vdots \\ y_{n}^{(l-\sum _{i=1}^{n-1}s_i-1)}&{}\quad &{}\sum _{i=1}^{n-1}s_{i}+1\le l\le \sum _{i=1}^{n}s_{i}=s-s_0 \\ \end{array}\right. \end{aligned}$$
It is easy to see that \(\{m({\mathbb {F}}_{0l})\,|\, l\ne k\}=\{y_1^{[s_1-1]},\ldots ,y_n^{[s_n-1]}\}\). So \(m(S_k)=\{m({\mathbb {F}}_{il})\,|\, {\mathbb {F}}_{il}\in S_k\}\) is equal to \({\mathbb {Y}}^{[s]}\), whose elements are algebraically independent over \(\mathbb {Q}\). Thus, the \(n(s+1)\) members of \(S_k\) are algebraically independent over \({\mathbb Q}({\mathbf {c}})\). Indeed, if not, the \({\mathbb {F}}_{il}-c_{il0}\,({\mathbb {F}}_{il}\in S_k)\) would be algebraically dependent over \({\mathbb Q}({\mathbf {c}})\). Now specialize the coefficient of \(m({\mathbb {F}}_{il})\) in \({\mathbb {F}}_{il}\) to 1 and all the other coefficients of \({\mathbb {F}}_{il}-c_{il0}\) to 0; then by the algebraic version of Lemma 2.1, \(\{m({\mathbb {F}}_{il})\,|\,{\mathbb {F}}_{il}\in S_k\}\) would be algebraically dependent, which is a contradiction. Thus, claim C1) is proved.

Claim C2) follows from the fact that \(\{1,y_j^{[s]}\,|\,1\le j\le n\}\) is contained in the support of \({\mathbb {F}}_{0,s-s_0}\). From C1) and C2), the sparse resultant of \(({\mathbb {F}}_{ik})_{0\le i\le n; 0\le k\le s-s_i}\) exists, and we denote it by \(G\). Then \((G)=\big (({\mathbb {F}}_{ik})_{0\le i\le n; 0\le k\le s-s_i}\big )\bigcap \mathbb {Q}[({\mathbf {c}}_{ik})_{0\le i\le n; 0\le k\le s-s_i}]\), and by Theorem 6.23, \(\mathrm{deg}(G,{\mathbf {c}}_{ik})=\mathcal {M}\big ((\mathcal {Q}_{jl})_{j\ne i,0\le l\le s-s_j},\mathcal {Q}_{i0},\ldots ,\mathcal {Q}_{i,k-1},\mathcal {Q}_{i,k+1},\ldots ,\mathcal {Q}_{i,s-s_i}\big )\).

Now suppose \(\xi \) is a generic point of the zero ideal \((0)_{\mathbb {Q}({\mathbf {c}})[{\mathbb {Y}}^{[s]}]}\). Let \(\zeta _{ik}=-{\mathbb {F}}_{ik}(\xi )+c_{ik0}\) and \(\overline{\zeta }_{ik}=-{\mathbb {P}}_i^{(k)}(\xi )+u_{i0}^{(k)}\) (\(i=0,\ldots ,n;k=0,\ldots ,s-s_i\)). Clearly, \(\zeta _{ik}\) and \(\overline{\zeta }_{ik}\) are free of \(c_{ik0}\) and \(u_{i0}^{(k)}\), respectively. It is easy to see that \((\xi ;{\mathbf {c}},\zeta _{00},\ldots ,\zeta _{0,s-s_0},\ldots ,\zeta _{n0},\ldots ,\zeta _{n,s-s_n})\) is a generic point of the algebraic prime ideal \(\big (({\mathbb {F}}_{ik})_{0\le i\le n; 0\le k\le s-s_i}\big )_{{\mathbb Q}[{\mathbb {Y}}^{[s]},({\mathbf {c}}_{ik})_{0\le i\le n; 0\le k\le s-s_i}]}\), while \((\xi ;\cup _{i=0}^n({\mathbf {u}}_i\backslash \{u_{i0}\})^{[s-s_i]},\) \(\overline{\zeta }_{00},\ldots ,\overline{\zeta }_{0,s-s_0},\) \(\ldots , \overline{\zeta }_{n0},\) \(\ldots ,\overline{\zeta }_{n,s-s_n})\) is a generic point of the algebraic prime ideal \(\big (({\mathbb {P}}_{i}^{(k)})_{0\le i\le n; 0\le k\le s-s_i}\big )_{{\mathbb Q}[{\mathbb {Y}}^{[s]},{\mathbf {u}}_0^{[s-s_0]},\ldots ,{\mathbf {u}}_n^{[s-s_n]}]}\). If we regard \(G\) as a polynomial in \(c_{ik0}\) over \(\mathbb {Q}({\mathbf {c}})\), then \(G\) is the vanishing polynomial of \((\zeta _{00},\ldots ,\zeta _{0,s-s_0},\ldots ,\zeta _{n0},\) \(\ldots ,\zeta _{n,s-s_n})\) over \({\mathbb Q}({\mathbf {c}})\). Now specialize the coefficients \({\mathbf {c}}_{ik}\) of \({\mathbb {F}}_{ik}\) to the corresponding coefficients of \({\mathbb {P}}_i^{(k)}\). Then each \(\zeta _{ik}\) is specialized to \(\overline{\zeta }_{ik}\). In particular, \(c_{ik0}\) are specialized to \(u_{i0}^{(k)}\) which are algebraically independent over the field \({\mathbb Q}(\xi ,\cup _{i=0}^n{\mathbf {u}}_i^{[s-s_i]}\backslash u_{i0}^{[s-s_i]})\). We claim that there exists a nonzero polynomial \(H(\cup _{i=0}^n{\mathbf {u}}_i^{[s-s_i]}\backslash u_{i0}^{[s-s_i]};\) \(u_{00},\ldots ,u_{00}^{(s-s_0)},\ldots ,\) \(u_{n0},\ldots ,u_{n0}^{(s-s_n)})\in {\mathbb Q}[{\mathbf {u}}_0^{[s-s_0]},\ldots ,\) \({\mathbf {u}}_n^{[s-s_n]}]\) such that

C3) \(H(\cup _{i=0}^n{\mathbf {u}}_i^{[s-s_i]}\backslash u_{i0}^{[s-s_i]};\overline{\zeta }_{00},\ldots ,\overline{\zeta }_{0,s-s_0},\ldots ,\overline{\zeta }_{n0},\ldots ,\overline{\zeta }_{n,s-s_n})=0\) and

C4) For each \(i\), \(\mathrm{deg}(H,{\mathbf {u}}_i^{[s-s_i]})\le \mathrm{deg}(G,\cup _{k=0}^{s-s_i}{\mathbf {c}}_{ik})\).

In the following, we construct \(H\) by specializing elements of \({\mathbf {c}}\) one by one in \(G\). For each \(v\in {\mathbf {c}}\), denote \(u\) to be its corresponding coefficient in \({\mathbb {P}}_i^{(k)}\). First specialize \(v \) to \(u \) and suppose \(\zeta _{ik}\) is specialized to \(\tilde{\zeta }_{ik}\) correspondingly. Clearly, \(G({\mathbf {c}}\backslash \{v\},u;\tilde{\zeta }_{00},\ldots ,\tilde{\zeta }_{0,s-s_0},\tilde{\zeta }_{n0},\ldots ,\tilde{\zeta }_{n,s-s_n})=0.\) If \(\bar{G}=G({\mathbf {c}}\backslash \{v\},u;c_{000},c_{010},\ldots ,c_{0,s-s_0,0},\ldots ,c_{n00},c_{n10}\ldots ,c_{n,s-s_n,0})\ne 0\), denote \(\bar{G}\) by \(H_1\). Otherwise, there exists some \(a\in \mathbb {N}\) such that \(G=(v-u)^aG_1\) with \(G_1|_{v=u}\ne 0\). But \(G({\mathbf {c}}\backslash \{v\},u;\tilde{\zeta }_{00},\) \(\ldots ,\tilde{\zeta }_{0,s-s_0},\tilde{\zeta }_{n0},\) \(\ldots ,\tilde{\zeta }_{n,s-s_n})=0= (v-u)^aG_1({\mathbf {c}}\backslash \{v\},u;\tilde{\zeta }_{00},\ldots ,\tilde{\zeta }_{0,s-s_0},\tilde{\zeta }_{n0},\) \(\ldots ,\tilde{\zeta }_{n,s-s_n})\), so \(G_1({\mathbf {c}}\backslash \{v\},\) \(u;\tilde{\zeta }_{00},\ldots ,\tilde{\zeta }_{0,s-s_0},\tilde{\zeta }_{n0},\ldots ,\tilde{\zeta }_{n,s-s_n})=0\). Denote \(G_1|_{v=u}\) by \(H_1\). Clearly, \(\mathrm{deg}(H_1,{\mathbf {u}}_i^{[s-s_i]}\bigcup \cup _{k}{\mathbf {c}}_{ik})\le \mathrm{deg}(G,\cup _{k}{\mathbf {c}}_{ik})\) for each \(i\). Continuing this process for \(|{\mathbf {c}}|\) times until each \(v\in {\mathbf {c}}\) is specialized to its corresponding element \(u\), we will obtain a nonzero polynomial \(H_{|{\mathbf {c}}|}(\cup _{i=0}^n({\mathbf {u}}_i\backslash \{u_{i0}\})^{[s-s_i]};c_{000},c_{010},\) \(\ldots ,c_{0,s-s_0,0},\ldots ,c_{n00},c_{n10},\) \(\ldots ,c_{n,s-s_n,0})\) satisfying \(H_{|{\mathbf {c}}|}(\cup _{i=0}^n({\mathbf {u}}_i\backslash \{u_{i0}\})^{[s-s_i]};\overline{\zeta }_{00},\ldots ,\overline{\zeta }_{0,s-s_0},\) \(\overline{\zeta }_{n0},\) \(\ldots ,\overline{\zeta }_{n,s-s_n})=0\) and moreover, for each \(i\), \(\mathrm{deg}(H_{|{\mathbf {c}}|},\) \({\mathbf {u}}_i^{[s-s_i]}\bigcup \cup _{k}\{c_{ik0}\})\le \mathrm{deg}(G,\cup _{k}{\mathbf {c}}_{ik})\). Since the \(u_{i0}^{(k)}\) are algebraically independent over the field \({\mathbb Q}(\xi ,\cup _{i=0}^n({\mathbf {u}}_i\backslash \{u_{i0}\})^{[s-s_i]})\), \(H=H_{|{\mathbf {c}}|}\big |_{c_{ik0}=u_{i0}^{(k)}}\in {\mathbb Q}[{\mathbf {u}}_0^{[s-s_0]},\ldots ,\) \({\mathbf {u}}_n^{[s-s_n]}]\) is a polynomial satisfying C3) and C4).

From C3), \(H\in ({\mathbb {P}}_0^{[s-s_0]},\ldots ,{\mathbb {P}}_n^{[s-s_n]})\). Since \(({\mathbb {P}}_0^{[s-s_0]},\ldots ,{\mathbb {P}}_n^{[s-s_n]})\cap {\mathbb Q}[{\mathbf {u}}_0^{[s-s_0]},\ldots ,{\mathbf {u}}_n^{[s-s_n]}]=({\mathbf {R}})\) and \({\mathbf {R}}\) is irreducible, \({\mathbf {R}}\) divides \(H\). Then \(\mathrm{deg}({\mathbf {R}},{\mathbf {u}}_i^{[s-s_i]})\le \mathrm{deg}(H,{\mathbf {u}}_i^{[s-s_i]})\le \mathrm{deg}(G,\cup _{k}{\mathbf {c}}_{ik})=\sum \limits _{k=0}^{s-s_i}\mathrm{deg}(G,{\mathbf {c}}_{ik})=\sum \limits _{k=0}^{s-s_i}\mathcal {M}\big ((\mathcal {Q}_{jl})_{j\ne i,0\le l\le s-s_j},\mathcal {Q}_{i0},\ldots ,\mathcal {Q}_{i,k-1},\mathcal {Q}_{i,k+1},\ldots ,\mathcal {Q}_{i,s-s_i}\big )\). \(\square \)

As a corollary, we give another Bézout-type degree bound for the differential resultant, which improves on the bound given in Theorem 6.16 in that only the degrees of \({\mathbb {P}}_i\) in \({\mathbb {Y}}\) are involved.

Corollary 6.26

Let \({\mathbb {P}}_i\,(i=0,\ldots ,n)\) be defined in (29) and \(s=\sum _{i=0}^n s_i\). Then for each \(i\in \{0,1,\ldots ,n\}\), \(\mathrm{deg}({\mathbf {R}},{\mathbf {u}}_i)\le \frac{s-s_i+1}{m_i}\prod _{j=0}^n m_j^{s-s_j+1}\).

Proof

By the proof of Theorem 6.25, \(\{\mathcal {B}_{ik}\,|\,0\le i\le n; 0\le k \le s-s_i\}\) is an essential set. Thus, for each fixed \(k\in \{0,\ldots ,s-s_i\}\), the polynomials in \(S_k\) together generate an ideal of dimension zero in \({\mathbb {Y}}^{[s]}\). By Lemma 6.24, \(\mathcal {M}\big ((\mathcal {Q}_{jl})_{j\ne i,0\le l\le s-s_j},\mathcal {Q}_{i0},\ldots ,\mathcal {Q}_{i,k-1},\mathcal {Q}_{i,k+1},\) \(\ldots ,\mathcal {Q}_{i,s-s_i}\big )\le \frac{1}{m_i}\prod _{j=0}^nm_j^{s-s_j+1}\). Hence, by Theorem 6.25,
$$\begin{aligned} \mathrm{deg}({\mathbf {R}},{\mathbf {u}}_i)&\le \sum _{k=0}^{s-s_i}\mathcal {M}\big ((\mathcal {Q}_{jl})_{j\ne i,0\le l\le s-s_j},\mathcal {Q}_{i0},\ldots ,\mathcal {Q}_{i,k-1},\mathcal {Q}_{i,k+1},\ldots ,\mathcal {Q}_{i,s-s_i}\big )\nonumber \\&\le \sum _{k=0}^{s-s_i}\frac{1}{m_i}\prod _{j=0}^nm_j^{s-s_j+1} =\frac{s-s_i+1}{m_i}\prod _{j=0}^nm_j^{s-s_j+1}. \end{aligned}$$
\(\square \)

Example 6.27

Consider two generic differential polynomials of order one and degree two in one indeterminate \(y\):
$$\begin{aligned} {\mathbb {P}}_0&= u_{00}+u_{01}y+u_{02}y'+u_{03}y^2+u_{04}yy'+u_{05}(y')^2,\\ {\mathbb {P}}_1&= u_{10}+u_{11}y+u_{12}y'+u_{13}y^2+u_{14}yy'+u_{15}(y')^2. \end{aligned}$$
Then the degree bound given by Theorem 6.16 is \(\mathrm{deg}({\mathbf {R}})\le (2+1)^4=81\). The degree bound given by Corollary 6.26 is \(\mathrm{deg}({\mathbf {R}},{\mathbf {u}}_0)\le 2^4=16\) and hence \(\mathrm{deg}({\mathbf {R}})\le 32\). The bound on \(\mathrm{deg}({\mathbf {R}},{\mathbf {u}}_0)\) given by Theorem 6.25 is \(\mathcal {M}(\mathcal {Q}_{10},\mathcal {Q}_{11},\mathcal {Q}_{00}) + \mathcal {M}(\mathcal {Q}_{10},\mathcal {Q}_{11},\mathcal {Q}_{01})=4+6=10\) and consequently \(\mathrm{deg}({\mathbf {R}})\le 20\), where \(\mathcal {Q}_{00}=\mathcal {Q}_{10}=\mathrm{conv}\{(0,0,0),(2,0,0),(0,2,0)\}\), \(\mathcal {Q}_{01}=\mathcal {Q}_{11}=\mathrm{conv}\{(0,0,0),(2,0,0),(0,2,0),(0,0,1),(1,0,1),(0,1,1)\}\), and \(\mathrm{conv}(\cdot )\) denotes the convex hull in \(\mathbb {R}^3\), the coordinates corresponding to \((y,y',y'')\).
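
The mixed volumes quoted in this example can be double-checked numerically with the sketch given after Theorem 6.23 (again using its assumed helper mixed_volume); the snippet below is ours and merely re-evaluates the numbers above.

# Supports in R^3 with coordinates (y, y', y''), as in Example 6.27.
T = [(0, 0, 0), (2, 0, 0), (0, 2, 0)]            # Q_00 = Q_10
F = T + [(0, 0, 1), (1, 0, 1), (0, 1, 1)]        # Q_01 = Q_11
print(mixed_volume([T, F, T]))   # M(Q_10, Q_11, Q_00) -> 4.0
print(mixed_volume([T, F, F]))   # M(Q_10, Q_11, Q_01) -> 6.0
# Theorem 6.25 then gives deg(R, u_0) <= 4 + 6 = 10, sharper than the
# Corollary 6.26 bound (2 - 1 + 1)/2 * 2**2 * 2**2 = 16.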

With Theorem 6.25, Theorem 1.3 is proved.

7 Conclusion

In this paper, we first introduce the concepts of Laurent differential polynomials and Laurent differentially essential systems and give a criterion for a set of Laurent differential polynomials to be differentially essential in terms of their supports. Then the sparse differential resultant for a Laurent differentially essential system is defined and its basic properties are proved, such as the differential homogeneity, necessary and sufficient conditions for the existence of solutions, differential toric variety, and Poisson product formulas. Furthermore, order and degree bounds for the sparse differential resultant are given. Based on these bounds, an algorithm to compute the sparse differential resultant is proposed, which is single exponential in terms of the Jacobi number and the size of the Laurent differentially essential system.

In the rest of this section, we propose several questions for further study.

It is useful to represent the sparse differential resultant as the quotient of two determinants, as done in [11, 15] in the algebraic case. In the differential case, no such formulas are known, even in the simplest cases of the resultant of two generic differential polynomials in one variable [49] or of a system of sparse linear differential polynomials [43]. In [43], for a sparse linear differential system \(S\), Rueda gave an enlarged system \(S_1\) of \(S\) such that \(S_1\) has a matrix representation and the sparse differential resultant of \(S\) can be obtained from the determinant of \(S_1\). The treatment in [6] is far from complete. For instance, let \({\mathbb {P}}_0\) and \({\mathbb {P}}_1\) be the two generic differential polynomials given in Example 6.27. Then the differential resultant for \({\mathbb {P}}_0\) and \({\mathbb {P}}_1\) defined in [6] is zero, because all elements in the first column of the matrix \(M(\delta ,n,m)\) in [6, p. 543] are zero. Although it uses the idea of Dixon resultants, the algorithm in [48] does not give a matrix representation for the differential resultant.

There exist very efficient algorithms to compute algebraic sparse resultants [14–16], which are based on matrix representations for the resultant. How to apply the principles behind these algorithms to compute sparse differential resultants is an important problem. A reasonable goal is to find an algorithm whose complexity depends on \(\mathrm{deg}({\mathbf {R}})\) rather than on its worst-case degree bound.

Let \(A\) be the factor in the Poisson formula (16). In the algebraic case, the corresponding \(A\) is a product of sparse resultants associated to the faces of the system supports [37]. It would be interesting for future work to analyze whether an analogous expression could be given in the differential case. On the other hand, to obtain Poisson product formulas in Theorem 5.18, we assume the Laurent differential polynomial system is normal rank essential. In the algebraic case, a Poisson product formula valid for arbitrary supports has been proved recently in [12]. It is desirable to see whether the assumption on the input supports can be weakened to derive similar Poisson formulas.

The degree of the algebraic sparse resultant is equal to the mixed volume of certain polytopes generated by the supports of the polynomials [37] or [19, p. 255]. A similar degree bound is given in Theorem 1.3 for the differential resultant. We conjecture that the bound given in Theorem 1.3 is also valid for the sparse differential resultant. Precisely, let \(\widetilde{{\mathbb {P}}}=\{\widetilde{{\mathbb {P}}}_0,\ldots ,\widetilde{{\mathbb {P}}}_n\}\) be a Laurent differentially essential system obtained from (29) by setting certain coefficients \(u_{i\alpha }\) to zero. Then, the degree bound given in Theorem 1.3 is also a degree bound for the sparse differential resultant of \(\widetilde{{\mathbb {P}}}\).

In the algebraic case, it is shown that the sparse polynomials \({\mathbb {P}}_i\,(i=0,\ldots ,n)\) can be re-parameterized, with the help of the Newton polytope associated with \({\mathbb {P}}_i\), to a new system \({\mathbb {S}}_i\,(i=0,\ldots ,n)\) such that the vanishing of the sparse resultant gives a necessary and sufficient condition for \({\mathbb {S}}_i\,(i=0,\ldots ,n)\) to have solutions in \({\mathbb C}^N\), where \({\mathbb C}\) is the field of complex numbers [10, page 312]. It is interesting to extend this result to the differential case. To do that, we need a deeper study of the differential toric variety introduced in Sect. 5.4.

In the algebraic case, it is well known that the resultant vanishes if and only if the corresponding system of homogeneous polynomials has common solutions in projective space [22]. To extend this result to the differential case, several issues should be considered. First, the differentially homogeneous polynomials in \(\mathcal {F}\{{\mathbb {Y}}\}\) of degree \(d\) form a vector space \(V(n,d)\) over \(\mathcal {F}\) which, in general, has no basis consisting of differential monomials. For instance, the vector space \(V(1,2)\) is of dimension \(4\) and has a basis \(y_0^2,y_1^2,y_0y_1, y_0y_1'-y_1y_0'\), and it can easily be verified that this vector space has no basis consisting purely of differential monomials. Furthermore, the structure of \(V(n,d)\) is still unknown for \(n>1\) [39]. As a consequence, the sparse differential resultant for a generic differentially homogeneous polynomial system cannot be defined properly. Second, in the differential case, the corresponding result might not be valid because the projective differential space is not differentially complete [31]. In algebraic geometry, the fact that the projective space is complete plays a crucial role in the proof.

Finally, as mentioned in Sect. 1, the algebraic multivariate resultant has many applications. It is interesting to see whether the sparse differential resultant can be used to achieve similar goals in the differential case.

Acknowledgments

Partially supported by a National Key Basic Research Project of China (2011CB302400) and by Grants from NSFC (60821002, 11101411, 11301519).

Open AccessThis article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.

Footnotes
1

Here, we can also consider the differential projective space \(\mathbf P (l_i)\) over \(\mathcal {E}\).

 
2

Here \({\mathbf {R}},S_1y_1-T_1,\ldots ,S_ny_n-T_n\) is a differential chain under an elimination ranking satisfying \(u_{ij} \prec y_1\prec \cdots \prec y_n\), with properties similar to those of auto-reduced sets [25].

 
References
1. L. M. Berkovich and V. G. Tsirulik, Differential resultants and some of their applications, Differentsial’nye Uravneniya 22 (1986), 750–757.
2. D. N. Bernshtein, The number of roots of a system of equations, Functional Anal. Appl. 9 (1975), 183–185.
3. A. Buium and P. J. Cassidy, Differential algebraic geometry and differential algebraic groups, in Selected Works of Ellis Kolchin, with Commentary (H. Bass, A. Buium, and P. Cassidy, eds.), American Mathematical Society, Providence, RI, 1998, pp. 567–636.
4. W. D. Brownawell, Bounds for the degrees in the Nullstellensatz, Annals of Mathematics 126 (1987), 577–591.
5. J. F. Canny, Generalized characteristic polynomials, Journal of Symbolic Computation 9 (1990), 241–250.
6. G. Carrà-Ferro, A resultant theory for the systems of two ordinary algebraic differential equations, Appl. Algebra Engrg. Comm. Comput. 8 (1997), 539–560.
7. M. Chardin, Differential resultants and subresultants, in Fundamentals of Computation Theory, Lecture Notes in Computer Science 529, Springer, Berlin, 1991, pp. 180–189.
8. S. C. Chou and X. S. Gao, Automated reasoning in differential geometry and mechanics: part I. An improved version of Ritt-Wu’s decomposition algorithm, Journal of Automated Reasoning 10 (1993), 161–172.
9. R. M. Cohn, Order and dimension, Proc. Amer. Math. Soc. 87 (1983), 1–6.
10. D. Cox, J. Little, and D. O’Shea, Using Algebraic Geometry, Springer, New York, 1998.
11. C. D’Andrea, Macaulay style formulas for sparse resultants, Trans. of Amer. Math. Soc. 354 (2002), 2595–2629.
12. C. D’Andrea and M. Sombra, A Poisson formula for the sparse resultant, arXiv:1310.6617v2, 2014.
13. D. Eisenbud, F. O. Schreyer, and J. Weyman, Resultants and Chow forms via exterior syzygies, Journal of Amer. Math. Soc. 16 (2004), 537–579.
14. I. Z. Emiris, On the complexity of sparse elimination, J. Complexity 12 (1996), 134–166.
15. I. Z. Emiris and J. F. Canny, Efficient incremental algorithms for the sparse resultant and the mixed volume, Journal of Symbolic Computation 20 (1995), 117–149.
16. I. Z. Emiris and V. Y. Pan, Improved algorithms for computing determinants and resultants, Journal of Complexity 21 (2005), 43–71.
17. X. S. Gao, W. Li, and C. M. Yuan, Intersection theory in differential algebraic geometry: generic intersections and the differential Chow form, Trans. of Amer. Math. Soc. 365 (2013), 4575–4632.
18. I. M. Gelfand, M. M. Kapranov, and A. V. Zelevinsky, Generalized Euler integrals and A-hypergeometric functions, Advances in Mathematics 84 (1990), 255–271.
19. I. M. Gelfand, M. Kapranov, and A. V. Zelevinsky, Discriminants, Resultants and Multidimensional Determinants, Birkhäuser, Boston, 1994.
20. O. Golubitsky, M. Kondratieva, A. Ovchinnikov, and A. Szanto, A bound for orders in differential Nullstellensatz, Journal of Algebra 322 (2009), 3852–3877.
21. J. Heintz, Definability and fast quantifier elimination in algebraically closed fields, Theoretical Computer Science 24 (1983), 239–277.
22. W. V. D. Hodge and D. Pedoe, Methods of Algebraic Geometry, Volume I, Cambridge Univ. Press, Cambridge, 1968.
23. W. V. D. Hodge and D. Pedoe, Methods of Algebraic Geometry, Volume II, Cambridge Univ. Press, Cambridge, 1968.
24. H. Hong, Ore subresultant coefficients in solutions, Appl. Algebra Engrg. Comm. Comput. 12 (2001), 421–428.
25. E. Hubert, Factorization free decomposition algorithms in differential algebra, Journal of Symbolic Computation 29 (2000), 641–662.
26. G. Jeronimo, T. Krick, J. Sabia, and M. Sombra, The computational complexity of the Chow form, Foundations of Computational Mathematics 4 (2004), 41–117.
27. J. P. Jouanolou, Le formalisme du résultant, Advances in Mathematics 90 (1991), 117–263.
28. M. Kapranov, B. Sturmfels, and A. Zelevinsky, Chow polytopes and general resultants, Duke Math. J. 67 (1992), 189–218.
29. E. R. Kolchin, Differential Algebra and Algebraic Groups, Academic Press, New York, 1973.
30. E. R. Kolchin, A problem on differential polynomials, Contemporary Mathematics 131 (1992), 449–462.
31. E. R. Kolchin, Differential equations in a projective space and linear dependence over a projective variety, in Contributions to Analysis: A Collection of Papers Dedicated to Lipman Bers, Academic Press, New York, 1974, pp. 195–214.
32. B. A. Lando, Jacobi’s bound for the order of systems of first order differential equations, Trans. Amer. Math. Soc. 152 (1970), 119–135.
33. W. Li and X. S. Gao, Differential Chow form for projective differential variety, Journal of Algebra 370 (2012), 344–360.
34. W. Li, X. S. Gao, and C. M. Yuan, Sparse differential resultant, in Proc. ISSAC 2011, ACM Press, New York, 2011, pp. 225–232.
35. Z. Li, A subresultant theory for linear differential, linear difference and Ore polynomials, with applications, PhD thesis, Johannes Kepler University, 1996.
36. O. Ore, Formale Theorie der linearen Differentialgleichungen, Journal für die Reine und Angewandte Mathematik 167 (1932), 221–234.
37. P. Pedersen and B. Sturmfels, Product formulas for resultants and Chow forms, Mathematische Zeitschrift 214 (1993), 377–396.
38. J. Renegar, On the computational complexity and geometry of the first-order theory of the reals, part I, Journal of Symbolic Computation 13 (1992), 255–299.
39. G. M. Reinhart, The Schmidt-Kolchin conjecture, Journal of Symbolic Computation 28 (1999), 611–630.
40. J. F. Ritt, Jacobi’s problem on the order of a system of differential equations, Annals of Mathematics 36 (1935), 303–312.
41. J. F. Ritt, Differential Equations from the Algebraic Standpoint, Amer. Math. Soc., New York, 1932.
42. J. F. Ritt, Differential Algebra, Amer. Math. Soc., New York, 1950.
43. S. L. Rueda, Linear sparse differential resultant formulas, Linear Algebra and Its Applications 438 (2013), 4296–4321.
44. S. L. Rueda and J. R. Sendra, Linear complete differential resultants and the implicitization of linear DPPEs, Journal of Symbolic Computation 45 (2010), 324–341.
45. B. Sturmfels, Sparse elimination theory, in Computational Algebraic Geometry and Commutative Algebra (D. Eisenbud and L. Robbiano, eds.), Cambridge University Press, Cambridge, 1993, pp. 264–298.
46. B. Sturmfels, On the Newton polytope of the resultant, Journal of Algebraic Combinatorics 3 (1994), 207–236.
47. W. T. Wu, On the foundation of algebraic differential polynomial geometry, Journal of Systems Science and Mathematics 2 (1989), 289–312.
48. L. Yang, Z. Zeng, and W. Zhang, Differential elimination with Dixon resultants, Applied Mathematics and Computation 218 (2011), 10679–10690.
49. Z. Y. Zhang, C. M. Yuan, and X. S. Gao, Matrix formula of differential resultant for first order generic ordinary differential polynomials, in Computer Mathematics, Springer, Berlin Heidelberg, 2014, pp. 479–503.
50. D. Zwillinger, Handbook of Differential Equations, Academic Press, San Diego, 1998.