
1993 | Book

Effective Polynomial Computation


About this book

Effective Polynomial Computation is an introduction to the algorithms of computer algebra. It discusses the basic algorithms for manipulating polynomials, including polynomial factorization. These algorithms are discussed from both a theoretical and a practical perspective. Those cases where theoretically optimal algorithms are inappropriate are discussed and the practical alternatives are explained.
Effective Polynomial Computation provides much of the mathematical motivation of the algorithms discussed to help the reader appreciate the mathematical mechanisms underlying the algorithms, and so that the algorithms will not appear to be constructed out of whole cloth.
Preparatory to the discussion of algorithms for polynomials, the first third of this book discusses related issues in elementary number theory. These results are either used in later algorithms (e.g. the discussion of lattices and Diophantine approximation), or analogs of the number theoretic algorithms are used for polynomial problems (e.g. Euclidean algorithm and p-adic numbers).
Among the unique features of Effective Polynomial Computation is the detailed material on greatest common divisor and factoring algorithms for sparse multivariate polynomials. In addition, both deterministic and probabilistic algorithms for irreducibility testing of polynomials are discussed.

Table of Contents

Frontmatter
1. Euclid’s Algorithm
Abstract
Among the mathematical problems we will investigate are computing greatest common divisors, factoring into prime factors and finding rational approximations. These computations may be performed on a variety of different mathematical quantities: polynomials, rational integers, power series, differential operators, etc. The most familiar of these algebraic structures are the natural numbers: ℕ={1,2,3,...}. If we include zero and the negative integers we have ℤ, the rational integers, which are commonly called the integers.
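As an illustration (the code is mine, not the book's), the integer form of Euclid's algorithm is only a few lines of Python:

```python
def gcd(a, b):
    """Greatest common divisor of two rational integers, by Euclid's algorithm."""
    while b != 0:
        a, b = b, a % b      # replace (a, b) by (b, a mod b)
    return abs(a)

print(gcd(252, 198))  # → 18
```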
Richard Zippel
2. Continued Fractions
Abstract
By a continued fraction we mean an expression of the form
$${a_0} + \frac{{{b_1}}}{{{a_1} + \frac{{{b_2}}}{{{a_2} + \ddots }}}}.$$
When this expression is finite, it represents a rational number. In its infinite form, the continued fraction is interpreted as the limiting value of the sequence
$${a_0},{a_0} + \frac{{{b_1}}}{{{a_1}}},{a_0} + \frac{{{b_1}}}{{{a_1} + \frac{{{b_2}}}{{{a_2}}}}}, \ldots ,$$
if the limit exists. Assuming this sequence converges, denote its limit by α. The elements of this sequence are called the continued fraction convergents of α. When the \( b_i \) are equal to 1, the elements of the above sequence are quite good approximations to α and, in a certain sense, all of the “best” approximations to α are elements of the sequence.
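A Python sketch of computing partial quotients and convergents with exact rational arithmetic (function names are illustrative, not from the text):

```python
from fractions import Fraction

def cf_expansion(x, n):
    """First n partial quotients a_0, a_1, ... of the simple continued fraction of x."""
    x, a = Fraction(x), []
    for _ in range(n):
        q = x.numerator // x.denominator      # a_i = floor(x)
        a.append(q)
        if x == q:
            break
        x = 1 / (x - q)
    return a

def convergents(a):
    """Convergents p_k/q_k built from the partial quotients by the recurrence
    p_k = a_k p_{k-1} + p_{k-2},  q_k = a_k q_{k-1} + q_{k-2}."""
    p0, q0, p1, q1 = 1, 0, a[0], 1
    out = [Fraction(p1, q1)]
    for ai in a[1:]:
        p0, p1 = p1, ai * p1 + p0
        q0, q1 = q1, ai * q1 + q0
        out.append(Fraction(p1, q1))
    return out

print(convergents(cf_expansion(Fraction(355, 113), 10)))  # ends with 355/113
```

For 355/113 the expansion is [3, 7, 16], and the convergents 3, 22/7, 355/113 are exactly the classical approximations to π it suggests.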
Richard Zippel
3. Diophantine Equations
Abstract
Problems where only integral (or sometimes rational) solutions are of interest are called diophantine problems and the equations they involve are called diophantine equations after Diophantus of Alexandria, an early Greek mathematician who wrote a famous book that posed many such problems. Since one thinks of the real world as being continuous, one might think that diophantine equations are just mathematical curiosities. However, due to the quantization introduced by finite precision arithmetic, quantum mechanics and the discretization introduced by synchronous computers, they arise in a number of practical situations. This chapter discusses several different classes of diophantine equations that frequently arise and techniques for their solution.
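The simplest such class is the linear equation a·x + b·y = c, which has an integer solution exactly when gcd(a, b) divides c. A minimal Python sketch using the extended Euclidean algorithm (names are mine, not the book's):

```python
def ext_gcd(a, b):
    """Extended Euclidean algorithm: returns (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def solve_linear(a, b, c):
    """One integer solution (x, y) of a*x + b*y = c, or None if none exists."""
    g, x, y = ext_gcd(a, b)
    if c % g != 0:
        return None          # solvable precisely when gcd(a, b) divides c
    k = c // g
    return (x * k, y * k)

print(solve_linear(12, 18, 30))  # a solution of 12x + 18y = 30
```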
Richard Zippel
4. Lattice Techniques
Abstract
Thus far we have considered diophantine approximation problems where we are trying to find good approximations to a single irrational number α. This can be expressed as determining integers p and q that minimize \( \left| {q\alpha - p} \right|\). Continued fraction techniques can be used to efficiently determine integers p and q satisfying
$$\left| {q\alpha - p} \right| < \frac{1}{q}.$$
(4.1)
This is a rewritten form of Proposition 5.
Richard Zippel
5. Arithmetic Functions
Abstract
This chapter contains an amalgam of results from number theory, established with analytic techniques, that will be needed in later sections.
Richard Zippel
6. Residue Rings
Abstract
This chapter presents some background material on computation in the ring ℤ/mℤ. The basic properties of the integers modulo an arbitrary integer are discussed in Section 6.1. These objects are rings and not necessarily fields. One of the most important tools in symbolic computation, the Chinese remainder theorem, is discussed in Section 6.2. The set of elements of ℤ/mℤ that have an inverse forms a multiplicative group. The structure of this group is discussed in Section 6.3. Section 6.4 contains one of the most elegant theorems in number theory, Gauss’s law of quadratic reciprocity. Section 6.5 discusses algebraic extensions of finite fields, which are themselves fields. In Section 6.6 we show how the sequence of rings ℤ/pℤ, ℤ/p²ℤ, ... can be used to recover ℤ from its images in these residue rings. In addition to recovering ℤ we also get many other elements. This is the completion of ℤ at (p) and its elements are called p-adic numbers.
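As an illustration of the Chinese remainder theorem in this setting, a minimal Python sketch (not the book's algorithm; `pow(Mi, -1, m)` computes a modular inverse and requires Python 3.8+):

```python
from math import prod

def crt(residues, moduli):
    """Chinese remainder theorem for pairwise coprime moduli: the unique
    x modulo prod(moduli) with x ≡ r_i (mod m_i) for every i."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m) is the inverse of Mi mod m
    return x % M

print(crt([2, 3, 2], [3, 5, 7]))  # → 23
```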
Richard Zippel
7. Polynomial Arithmetic
Abstract
The core of most algebraic manipulation systems is a polynomial arithmetic package. There are a number of good reasons for this. First, a large number of problems in pure and applied mathematics can be expressed as problems solely involving polynomials. Second, polynomials provide a natural foundation on which to build more complex structures like rational functions, algebraic functions, power series and rings of transcendental functions. And third, the algorithms for polynomial arithmetic are well understood, efficient and relatively easy to implement.
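For instance, with polynomials represented as coefficient lists (a dense representation; this is one of several representations such a package might use), addition and multiplication are direct:

```python
def poly_add(f, g):
    """Add two polynomials given as coefficient lists, lowest degree first."""
    n = max(len(f), len(g))
    f = f + [0] * (n - len(f))
    g = g + [0] * (n - len(g))
    return [a + b for a, b in zip(f, g)]

def poly_mul(f, g):
    """Multiply by convolving coefficients: (f*g)_k = sum_i f_i * g_{k-i}."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

print(poly_mul([1, 1], [1, 1]))  # (1 + x)^2 → [1, 2, 1]
```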
Richard Zippel
8. Polynomial GCD’s: Classical Algorithms
Abstract
The computation of polynomial greatest common divisors is required more frequently than one might at first imagine. For instance, nearly all computations with rational functions (quotients of polynomials) require a GCD to reduce the fraction to lowest terms. However, computing polynomial GCD’s is significantly more difficult than the arithmetic calculations discussed in Chapter 7.
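As a point of reference (an illustration of mine, not the book's algorithm), the Euclidean algorithm carries over directly to polynomials with exact rational coefficients:

```python
from fractions import Fraction

def poly_divmod(f, g):
    """Quotient and remainder of f divided by g over the rationals.
    Polynomials are coefficient lists, lowest degree first."""
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    q = [Fraction(0)] * max(1, len(f) - len(g) + 1)
    while len(f) >= len(g):
        shift = len(f) - len(g)
        c = f[-1] / g[-1]              # eliminate the leading term of f
        q[shift] = c
        for i, gc in enumerate(g):
            f[shift + i] -= c * gc
        while f and f[-1] == 0:
            f.pop()
    return q, f

def poly_gcd(f, g):
    """Monic GCD of f and g by the Euclidean algorithm."""
    while any(g):
        _, r = poly_divmod(f, g)
        f, g = g, r
    return [c / f[-1] for c in f]      # normalize the leading coefficient to 1

print(poly_gcd([-1, 0, 1], [1, 2, 1]))  # gcd(x^2 - 1, (x + 1)^2) = x + 1
```

The coefficient growth in the intermediate remainders of this naive method is one source of the difficulty the chapter discusses.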
Richard Zippel
9. Polynomial Elimination
Abstract
The fundamental problem in algebra is to determine the values of \( {X_1}, \ldots ,{X_n} \) such that
$$ \begin{array}{*{20}{c}} {{F_1}\left( {{X_1}, \ldots ,{X_n}} \right) = 0,} \\ \vdots \\ {{F_m}\left( {{X_1}, \ldots ,{X_n}} \right) = 0,} \end{array} $$
(9.1)
where the \( {F_i} \) are polynomials in the \( {X_j} \). If m = n = 1 then we have the familiar problem of finding the zeroes of a single polynomial in one variable. One way to solve (9.1) is to try to use the constraints of one equation to eliminate variables from others. If this is done properly, we can (often) obtain a system of equations of the form
$$ \begin{array}{*{20}{c}} {{G_1}\left( {{X_1}} \right) = 0,} \\ {{G_2}\left( {{X_1},{X_2}} \right) = 0,} \\ \vdots \\ {{G_m}\left( {{X_1}, \ldots ,{X_n}} \right) = 0.} \end{array}$$
(9.2)
By substituting each zero of the first equation into the others, then solving the second equation and repeating, we can obtain all of the solutions of (9.1).
Richard Zippel
10. Formal Power Series
Abstract
This chapter discusses a variety of algorithms for manipulating formal power series. Formal power series are infinite power series where we are not concerned with issues of convergence. Thus, both f(t) and g(t)
$$ \begin{array}{*{20}{c}} {f\left( t \right) = 1 + t + \frac{{{t^2}}}{{2!}} + \frac{{{t^3}}}{{3!}} + \frac{{{t^4}}}{{4!}} + \cdots ,} \\ {g\left( t \right) = 1 - t + 2!{t^2} - 3!{t^3} + 4!{t^4} + \cdots } \end{array} $$
are formal power series, even though g(t) does not converge for any non-zero value of t.
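For example (an illustration of mine, not from the text), series truncated to a fixed order can be multiplied by convolving coefficients; here e^t times its formal reciprocal e^{-t} collapses to 1:

```python
from fractions import Fraction
from math import factorial

def series_mul(f, g, n):
    """Product of two formal power series, truncated to order n
    (coefficient lists for t^0 .. t^{n-1})."""
    h = [Fraction(0)] * n
    for i in range(n):
        for j in range(n - i):
            h[i + j] += f[i] * g[j]
    return h

# f(t) = e^t and g(t) = e^{-t}: every coefficient of t^k, k >= 1, cancels
n = 6
f = [Fraction(1, factorial(k)) for k in range(n)]
g = [Fraction((-1) ** k, factorial(k)) for k in range(n)]
print([int(c) for c in series_mul(f, g, n)])  # → [1, 0, 0, 0, 0, 0]
```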
Richard Zippel
11. Bounds on Polynomials
Abstract
The more sophisticated polynomial algorithms discussed in the following chapters require estimates on the size of the coefficients of factors of a polynomial. The most natural of these results give bounds on the size of linear factors of univariate polynomials, i.e., bounds on the size of zeroes of univariate polynomials. These bounds are very useful when computing the polynomial zeroes numerically. The linear factor estimates can be extended to give bounds on the size of larger degree factors. These estimates are used in calculations of the GCD’s and factorizations of polynomials over the integers.
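One classical zero bound, Cauchy's bound, is trivial to compute (used here purely as an illustration; the chapter develops sharper estimates):

```python
def cauchy_root_bound(coeffs):
    """Cauchy's bound: every complex zero z of a_n x^n + ... + a_0 (a_n != 0)
    satisfies |z| < 1 + max(|a_0|, ..., |a_{n-1}|) / |a_n|.
    Coefficients are listed lowest degree first."""
    an = abs(coeffs[-1])
    return 1 + max(abs(c) for c in coeffs[:-1]) / an

# x^2 - x - 6 = (x - 3)(x + 2): zeroes 3 and -2, both inside the bound
print(cauchy_root_bound([-6, -1, 1]))  # → 7.0
```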
Richard Zippel
12. Zero Equivalence Testing
Abstract
This chapter begins the discussion of “modern” algorithms that take advantage of the sparsity of multivariate polynomials. These algorithms do not perform arithmetic operations with the polynomials directly, but rather compute with the values of the polynomials at selected points. They produce the value of the final answer at these points and then reconstruct the answer from the values. This approach eliminates intermediate expression swell and, when done properly, only requires time polynomial in the size of the answer. This is often called a black box model of computation since the polynomials are treated as opaque boxes—how the values of the polynomial are computed is not available to us.
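The idea can be sketched with the Schwartz-Zippel lemma, which bounds the probability that a nonzero polynomial vanishes at a random point (the code and its parameter choices are mine, not the book's):

```python
import random

def probably_zero(black_box, nvars, degree, trials=20, seed=1):
    """Probabilistic zero test for a polynomial available only as a black box.
    By the Schwartz-Zippel lemma, a nonzero polynomial of total degree d
    vanishes at a point drawn uniformly from S^n with probability <= d/|S|."""
    rng = random.Random(seed)
    S = range(100 * degree + 1)          # |S| much larger than the degree
    for _ in range(trials):
        point = [rng.choice(S) for _ in range(nvars)]
        if black_box(point) != 0:
            return False                 # one nonzero value proves f != 0
    return True                          # zero at every sample: zero w.h.p.

# (x + y)^2 - x^2 - 2xy - y^2 is identically zero
f = lambda p: (p[0] + p[1]) ** 2 - p[0] ** 2 - 2 * p[0] * p[1] - p[1] ** 2
print(probably_zero(f, nvars=2, degree=2))  # → True
```

A `False` answer is always correct; a `True` answer is wrong only with probability at most (d/|S|)^trials.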
Richard Zippel
13. Univariate Interpolation
Abstract
The previous chapter showed how to determine if a polynomial was zero by examining its values at well chosen points. This chapter develops techniques to determine the polynomial itself from a sequence of values. This process is called interpolating a polynomial from its values. This technique is one of the most useful tools for constructing efficient symbolic computing algorithms for sparse problems.
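For instance, Newton's divided-difference form yields an exact interpolation algorithm; a Python sketch with rational arithmetic (illustrative, not the book's presentation):

```python
from fractions import Fraction

def newton_interpolate(points):
    """Exact polynomial through the points (x_i, y_i), by Newton's divided
    differences; returns coefficients lowest degree first, as rationals."""
    xs = [Fraction(x) for x, _ in points]
    dd = [Fraction(y) for _, y in points]   # becomes the divided-difference diagonal
    n = len(points)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            dd[i] = (dd[i] - dd[i - 1]) / (xs[i] - xs[i - j])
    # Horner evaluation of the Newton form, carried out on coefficient lists
    poly = [dd[-1]]
    for i in range(n - 2, -1, -1):
        poly = [Fraction(0)] + poly         # multiply by x ...
        for k in range(len(poly) - 1):
            poly[k] -= xs[i] * poly[k + 1]  # ... minus xs[i] times the old poly
        poly[0] += dd[i]
    return poly

# values of x^2 + 1 at x = 0, 1, 2
print(newton_interpolate([(0, 1), (1, 2), (2, 5)]))  # coefficients of 1 + x^2
```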
Richard Zippel
14. Multivariate Interpolation
Abstract
Multivariate polynomial interpolation is a bit more complex than the univariate interpolation techniques discussed in Chapter 13. Because the number of possible terms in a multivariate polynomial can be exponential in the number of variables, techniques similar to those of Chapter 12 must be used to avoid spending inordinate time computing coefficients that are equal to zero.
Richard Zippel
15. Polynomial GCD’s: Interpolation Algorithms
Abstract
We now use the interpolation algorithms of Chapters 13 and 14 to compute the GCD of two polynomials. This is the first of the modern algorithms that we discuss. Although the principles behind the sparse polynomial GCD algorithm are quite simple, the final algorithm is more complex than any discussed thus far.
Richard Zippel
16. Hensel Algorithms
Abstract
Previous chapters discussed techniques for interpolating a polynomial from a black box ℬP that represents the polynomial. These interpolation techniques were then used to compute the multivariate coefficients of the GCD of two polynomials. The modular interpolation approach requires no additional information about the coefficients other than degree or term bounds and thus can be used for a wide variety of other problems.
Richard Zippel
17. Sparse Hensel Algorithms
Abstract
The last chapter gave the framework for the type of Hensel method we will use for polynomial factorization problems. This chapter discusses a slight modification of the Hensel technique that allows us to take advantage of sparsity in the problem. The key idea used is the same as that of sparse interpolation. Based on some preliminary computation, the skeleton of the answer polynomial is developed. From then on, it is only necessary to reconstruct the coefficients of those monomials that actually appear in the skeleton. In the Hensel approach this means that when proceeding from one stage to another, certain unknowns will be assumed to be zero and can be omitted when constructing the system of equations.
Richard Zippel
18. Factoring over Finite Fields
Abstract
Factoring polynomials is one of the most challenging problems in algebraic computation. The complete algorithm for factoring multivariate polynomials over the rational integers uses nearly all of the techniques developed in this book. This chapter considers the simpler problem of factoring univariate polynomials over finite fields. This problem arises in coding theory and computational number theory, and is a step in the practical algorithms for factoring polynomials over the rational integers.
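To make the problem concrete, here is a deliberately naive root search over GF(p): each root a of f yields a linear factor (x − a) of f. This brute-force scan is exponential in the bit length of p; the algorithms this chapter develops do far better, and the sketch below is only an illustration of the problem, not of those algorithms:

```python
def roots_mod_p(f, p):
    """Roots of the polynomial f (coefficient list, lowest degree first)
    in GF(p), found by direct evaluation at every field element."""
    def eval_mod(f, x):
        r = 0
        for c in reversed(f):    # Horner's rule, reducing mod p as we go
            r = (r * x + c) % p
        return r
    return [a for a in range(p) if eval_mod(f, a) == 0]

# x^2 + 1 splits over GF(5): x^2 + 1 = (x - 2)(x - 3) mod 5
print(roots_mod_p([1, 0, 1], 5))  # → [2, 3]
```

Over GF(7), by contrast, x² + 1 has no roots and is irreducible, illustrating how the factorization pattern depends on the field.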
Richard Zippel
19. Irreducibility of Polynomials
Abstract
Let F be a polynomial over an integral domain R, \( F \in R\left[ {\vec X} \right]\). As with rational integers, we say that F is reducible if there exist polynomials G, \( H \in R\left[ {\vec X} \right]\), neither of which is in R, such that \( F = G \cdot H\). Otherwise, F is said to be irreducible or prime.
Richard Zippel
20. Univariate Factorization
Abstract
This chapter discusses the factorization of univariate polynomials over the rational integers, ℤ. This problem has been attacked by a combination of the Hensel lifting techniques and one of the algorithms for factorization over finite fields. For many problems this approach has been quite effective, but in the worst case can take exponential time. In 1982, A. K. Lenstra, H. W. Lenstra and Lovász [157] developed an algorithm for factoring univariate polynomials over ℤ in polynomial time by using basis reduction techniques for lattices. Suddenly, the complexity of factoring polynomials and a number of other problems was reduced to polynomial time, e.g., [152]. However, experience with actual implementations has shown that the lattice reduction techniques are not faster than the older exponential time techniques for any factorization problem that can be completed in reasonable time today.
Richard Zippel
21. Multivariate Factorization
Abstract
This final chapter is the culmination of our study of polynomial computations. Combining all the techniques discussed thus far we will demonstrate how to factor sparse multivariate polynomials in random polynomial time.
Richard Zippel
Backmatter
Metadata
Title
Effective Polynomial Computation
Author
Richard Zippel
Copyright Year
1993
Publisher
Springer US
Electronic ISBN
978-1-4615-3188-3
Print ISBN
978-1-4613-6398-9
DOI
https://doi.org/10.1007/978-1-4615-3188-3