
1992 | Book

The Design and Analysis of Algorithms

Author: Dexter C. Kozen

Publisher: Springer New York

Book series: Monographs in Computer Science


About this book

These are my lecture notes from CS681: Design and Analysis of Algorithms, a one-semester graduate course I taught at Cornell for three consecutive fall semesters from '88 to '90. The course serves a dual purpose: to cover core material in algorithms for graduate students in computer science preparing for their PhD qualifying exams, and to introduce theory students to some advanced topics in the design and analysis of algorithms. The material is thus a mixture of core and advanced topics. At first I meant these notes to supplement and not supplant a textbook, but over the three years they gradually took on a life of their own. In addition to the notes, I depended heavily on the texts

• A. V. Aho, J. E. Hopcroft, and J. D. Ullman, The Design and Analysis of Computer Algorithms. Addison-Wesley, 1975.
• M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman, 1979.
• R. E. Tarjan, Data Structures and Network Algorithms. SIAM Regional Conference Series in Applied Mathematics 44, 1983.

and still recommend them as excellent references.

Table of Contents

Frontmatter

Lectures

Frontmatter
Lecture 1. Algorithms and Their Complexity

This is a course on the design and analysis of algorithms intended for first-year graduate students in computer science. Its purposes are mixed: on the one hand, we wish to cover some fairly advanced topics in order to provide a glimpse of current research for the benefit of those who might wish to specialize in this area; on the other, we wish to introduce some core results and techniques which will undoubtedly prove useful to those planning to specialize in other areas.

Dexter C. Kozen
Lecture 2. Topological Sort and MST

A recurring theme in asymptotic analysis is that it is often possible to get better asymptotic performance by maintaining extra information about the structure. Updating this extra information may slow down each individual step; this additional cost is sometimes called overhead. However, it is often the case that a small amount of overhead yields dramatic improvements in the asymptotic complexity of the algorithm.

Dexter C. Kozen
Lecture 3. Matroids and Independence

Before we prove the correctness of the blue and red rules for MST, let’s first discuss an abstract combinatorial structure called a matroid. We will show that the MST problem is a special case of the more general problem of finding a minimum-weight maximal independent set in a matroid. We will then generalize the blue and red rules to arbitrary matroids and prove their correctness in this more general setting. We will show that every matroid has a dual matroid, and that the blue and red rules of a matroid are the red and blue rules, respectively, of its dual. Thus, once we establish the correctness of the blue rule, we get the red rule for free.

Dexter C. Kozen
Lecture 4. Depth-First and Breadth-First Search

Depth-first search (DFS) and breadth-first search (BFS) are two of the most useful subroutines in graph algorithms. They allow one to search a graph in linear time and compile information about the graph. They differ in that the former uses a stack (LIFO) discipline and the latter uses a queue (FIFO) discipline to choose the next edge to explore.
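The stack-versus-queue distinction can be sketched with a single routine (an illustrative sketch; note that marking vertices when they are pushed yields a valid search order, though not the classic recursive DFS order):

```python
from collections import deque

def search(adj, start, *, discipline):
    """Generic graph search; adj maps each vertex to its neighbor list.
    discipline="lifo" pops the most recently added vertex (DFS-style);
    discipline="fifo" pops the oldest (BFS)."""
    frontier = deque([start])
    visited = {start}
    order = []
    while frontier:
        u = frontier.pop() if discipline == "lifo" else frontier.popleft()
        order.append(u)
        for v in adj[u]:
            if v not in visited:
                visited.add(v)
                frontier.append(v)
    return order
```

Both disciplines visit every vertex reachable from the start in linear time; only the order of exploration differs.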

Dexter C. Kozen
Lecture 5. Shortest Paths and Transitive Closure

Let G = (V, E) be an undirected graph and let ℓ be a function assigning a nonnegative length to each edge. Extend ℓ to domain V × V by defining ℓ(υ, υ) = 0 and ℓ(u, υ) = ∞ if (u, υ) ∉ E. Define the length of a path p = e₁e₂…eₙ to be ℓ(p) = Σᵢ₌₁ⁿ ℓ(eᵢ). For u, υ ∈ V, define the distance d(u, υ) from u to υ to be the length of a shortest path from u to υ, or ∞ if no such path exists. The single-source shortest path problem is to find, given s ∈ V, the value of d(s, u) for every other vertex u in the graph.
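A minimal sketch of one standard solution, Dijkstra's algorithm with a binary heap (the dictionary-based adjacency representation is an illustrative choice):

```python
import heapq

def dijkstra(adj, s):
    """Single-source shortest paths for nonnegative edge lengths.
    adj maps u -> list of (v, length) pairs; returns d(s, u) for every
    vertex u, with unreachable vertices reported as infinity."""
    dist = {u: float("inf") for u in adj}
    dist[s] = 0
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:          # stale heap entry; skip it
            continue
        for v, l in adj[u]:
            if d + l < dist[v]:
                dist[v] = d + l
                heapq.heappush(heap, (dist[v], v))
    return dist
```

With a binary heap this runs in O(m log n); Fibonacci heaps (Lecture 9) improve this to O(m + n log n).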

Dexter C. Kozen
Lecture 6. Kleene Algebra

Consider a binary relation on an n-element set represented by an n × n Boolean matrix E. Recall from the last lecture that we can compute the reflexive transitive closure of E by divide-and-conquer as follows: partition E into four submatrices A, B, C, D of size roughly n/2 × n/2 such that A and D are square:

  E = [ A  B ]
      [ C  D ].
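For comparison, a sketch of the same closure computed iteratively (Warshall's algorithm) rather than by the divide-and-conquer partition:

```python
def reflexive_transitive_closure(E):
    """Warshall-style reflexive transitive closure of an n x n Boolean
    matrix, given as a list of rows of 0/1 or bool entries."""
    n = len(E)
    # start from E with the diagonal added (reflexivity)
    R = [[bool(E[i][j]) or i == j for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            if R[i][k]:
                for j in range(n):
                    if R[k][j]:
                        R[i][j] = True   # path i -> k -> j
    return R
```

This takes O(n³) Boolean operations; the divide-and-conquer formulation instead reduces closure to matrix multiplication on the four blocks.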

Dexter C. Kozen
Lecture 7. More on Kleene Algebra

In this lecture we will see how Kleene algebra can be used in a variety of situations involving *-like operations. The key result that allows these applications is that the n x n matrices over a Kleene algebra again form a Kleene algebra. Along the way we will prove a central lemma establishing the importance of the regular sets Reg Σ over the finite alphabet Σ in reasoning about Kleene algebras in general.

Dexter C. Kozen
Lecture 8. Binomial Heaps

Binomial heaps were invented in 1978 by J. Vuillemin [106]. They give a data structure for maintaining a collection of elements, each of which has a value drawn from an ordered set, such that new elements can be added and the element of minimum value extracted efficiently. They admit the following operations:

makeheap(i)    return a new heap containing only element i
findmin(h)     return a pointer to the element of h of minimum value
insert(h, i)   add element i to heap h
deletemin(h)   delete the element of minimum value from h
meld(h, h′)    combine heaps h and h′ into one heap
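The same five-operation interface can be illustrated with a much simpler meldable structure, a skew heap (a sketch only; this is not the binomial heap of the lecture, and it lacks binomial heaps' worst-case guarantees):

```python
def meld(h1, h2):
    """Meld two skew heaps; a heap is None or a list [value, left, right]."""
    if h1 is None:
        return h2
    if h2 is None:
        return h1
    if h2[0] < h1[0]:
        h1, h2 = h2, h1          # keep the smaller root on top
    # meld into the right subtree, then swap the children
    h1[1], h1[2] = meld(h1[2], h2), h1[1]
    return h1

def makeheap(i):  return [i, None, None]
def findmin(h):   return h[0]
def insert(h, i): return meld(h, makeheap(i))
def deletemin(h): return meld(h[1], h[2])
```

Note that insert, deletemin, and findmin are all expressed through meld; efficient melding is exactly what distinguishes these structures from ordinary binary heaps.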

Dexter C. Kozen
Lecture 9. Fibonacci Heaps

Fibonacci heaps were developed by Fredman and Tarjan in 1984 [35] as a generalization of binomial heaps. The main intent was to improve Dijkstra’s single-source shortest path algorithm to O(m + n log n), but they have many other applications as well. In addition to the binomial heap operations, Fibonacci heaps admit two additional operations:

decrement(h, i, Δ)   decrease the value of i by Δ
delete(h, i)         remove i from heap h

Dexter C. Kozen
Lecture 10. Union-Find

The union-find data structure is motivated by Kruskal’s minimum spanning tree algorithm (Algorithm 2.6), in which we needed two operations on disjoint sets of vertices: determine whether vertices u and υ are in the same set; form the union of disjoint sets A and B.
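Those two operations are the find/union interface; a standard sketch using a disjoint-set forest with path compression (here in the path-halving form) and union by rank:

```python
class UnionFind:
    """Disjoint-set forest with path halving and union by rank."""
    def __init__(self, elements):
        self.parent = {x: x for x in elements}
        self.rank = {x: 0 for x in elements}

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def same_set(self, u, v):
        return self.find(u) == self.find(v)

    def union(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return
        if self.rank[ru] < self.rank[rv]:
            ru, rv = rv, ru
        self.parent[rv] = ru          # hang the shallower tree below
        if self.rank[ru] == self.rank[rv]:
            self.rank[ru] += 1
```

The analysis in Lecture 11 shows that with these two heuristics a sequence of m operations takes nearly linear time.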

Dexter C. Kozen
Lecture 11. Analysis of Union-Find
Dexter C. Kozen
Lecture 12. Splay Trees

A splay tree is a data structure invented by Sleator and Tarjan [94, 100] for maintaining a set of elements drawn from a totally ordered set and allowing membership testing, insertions, and deletions (among other operations) at an amortized cost of O(log n) per operation. The most interesting aspect of the structure is that, unlike balanced tree schemes such as 2–3 trees or AVL trees, it is not necessary to rebalance the tree explicitly after every operation—it happens automatically.

Dexter C. Kozen
Lecture 13. Random Search Trees

In this lecture we will describe a very simple probabilistic data structure that allows inserts, deletes, and membership tests (among other operations) in expected logarithmic time.

Dexter C. Kozen
Lecture 14. Planar and Plane Graphs

Planar graphs have many important applications in computer science, for example in VLSI layout. Many problems that are hard or even NP-complete for arbitrary graphs are much easier for planar graphs. In the next lecture we will prove a nice result due to Lipton and Tarjan in 1977 [73] which opens up planar graphs to divide-and-conquer.

Dexter C. Kozen
Lecture 15. The Planar Separator Theorem

The Planar Separator Theorem of Lipton and Tarjan [73] says that in any undirected planar graph G there exists a small separator S ⊆ V whose removal leaves two disjoint sets of vertices A, B ⊆ V with no edge between them; moreover, each of A and B is at most a constant fraction of the size of V.

Dexter C. Kozen
Lecture 16. Max Flow

Suppose we are given a tuple G = (V, c, s, t), where V is a set of vertices, s, t ∈ V are distinguished vertices called the source and sink respectively, and c is a function c : V² → ℝ₊ assigning a nonnegative real capacity to each pair of vertices. We make G into a directed graph by defining the set of directed edges E = {(u, υ) | c(u, υ) > 0}.
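A minimal sketch of augmenting-path max flow on this representation, using shortest augmenting paths as in the Edmonds-Karp variant (the dict-of-pairs capacity encoding is an illustrative choice):

```python
from collections import deque

def max_flow(n, c, s, t):
    """Edmonds-Karp sketch: repeatedly augment along a shortest path in
    the residual graph.  c maps pairs (u, v) of vertices in range(n)
    to nonnegative capacities; returns the max flow value from s to t."""
    r = {}                                   # residual capacities
    for (u, v), cap in c.items():
        r[(u, v)] = r.get((u, v), 0) + cap
        r.setdefault((v, u), 0)              # zero-capacity reverse edge
    adj = {u: [] for u in range(n)}
    for (u, v) in r:
        adj[u].append(v)
    total = 0
    while True:
        pred = {s: None}                     # BFS for a shortest path
        q = deque([s])
        while q and t not in pred:
            u = q.popleft()
            for v in adj[u]:
                if v not in pred and r[(u, v)] > 0:
                    pred[v] = u
                    q.append(v)
        if t not in pred:
            return total                     # no augmenting path left
        path, v = [], t
        while pred[v] is not None:
            path.append((pred[v], v))
            v = pred[v]
        delta = min(r[e] for e in path)      # bottleneck capacity
        for (u, v) in path:
            r[(u, v)] -= delta
            r[(v, u)] += delta
        total += delta
```

Each augmentation pushes the bottleneck capacity of a shortest s-t path and updates the residual graph, including the reverse edges that allow flow to be rerouted.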

Dexter C. Kozen
Lecture 17. More on Max Flow

The Max Flow-Min Cut Theorem gives an algorithm for finding a flow with maximum value in a given network as long as the capacities are rational numbers. This algorithm was first published in 1956 by Ford and Fulkerson [34].

Dexter C. Kozen
Lecture 18. Still More on Max Flow

We follow Tarjan’s presentation [100]. In the Edmonds-Karp algorithm, we continue to augment by path flows along paths in the level graph L_G until every path from s to t in L_G contains at least one saturated edge. The flow at that point is called a blocking flow. The following modification, which improves the running time to O(mn²), was given by Dinic in 1970 [29]. Rather than constructing a blocking flow path by path, the algorithm constructs a blocking flow all at once by finding a maximal set of minimum-length augmenting paths. Each such construction is called a phase.

Dexter C. Kozen
Lecture 19. Matching

Matching refers to a class of problems with many important applications. Assigning instructors to courses or students to seminars with limited enrollment are two examples of matching problems.

Dexter C. Kozen
Lecture 20. More on Matching

Let G be an undirected graph with weight function w. Recall from last lecture that the weight of a matching M in G, denoted w(M), is the sum of the weights of the edges in M, and the incremental weight of a set A of edges, denoted Δ(A), is the sum of the weights of the unmatched edges in A less the sum of the weights of the matched edges in A. For an augmenting path p, Δ(p) gives the net change in weight that would be obtained by augmenting by p.

Dexter C. Kozen
Lecture 21. Reductions and NP-Completeness

We have seen several problems such as maximum flow and matching that at first glance appear intractable, but upon closer study admit very efficient algorithms. Unfortunately, this is the exception rather than the rule. For every interesting problem with a polynomial-time algorithm, there are dozens for which all known solutions require exponential time in the worst case.

Dexter C. Kozen
Lecture 22. More on Reductions and NP-Completeness

Before we give a formal definition of reduction, let us clarify the notion of a decision problem. Informally, a decision problem is a yes-or-no question. A decision problem is given by a description of the problem domain, i.e. the set of all possible instances of the problem, along with a description of the set of “yes” instances.

Dexter C. Kozen
Lecture 23. More NP-Complete Problems

Often in problems with a parameter k like k-CNFSat and k-colorability, larger values of k make the problem harder. This is not always the case. Consider the problem of determining whether a planar graph has a k-coloring. The problem is trivial for k = 1, easy for k = 2 (check by DFS or BFS whether the graph is bipartite, i.e. has no odd cycles), and trivial for k = 4 or greater by the Four Color Theorem, which says that every planar graph is 4-colorable. This leaves k = 3. We show below that 3-colorability of planar graphs is no easier than 3-colorability of arbitrary graphs. This result is due to Garey, Johnson, and Stockmeyer [40]; see also Lichtenstein [72] for some other NP-completeness results involving planar graphs.
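The k = 2 check mentioned above (bipartiteness, i.e. the absence of odd cycles) can be sketched via BFS:

```python
from collections import deque

def two_color(adj):
    """Decide 2-colorability by BFS: color each component in alternating
    layers, rejecting on any edge that joins two like-colored vertices
    (equivalently, on any odd cycle).  Returns a coloring dict or None."""
    color = {}
    for s in adj:
        if s in color:
            continue                     # component already colored
        color[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    q.append(v)
                elif color[v] == color[u]:
                    return None          # odd cycle: not 2-colorable
    return color
```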

Dexter C. Kozen
Lecture 24. Still More NP-Complete Problems

In this lecture we use the basic NP-complete problems given in previous lectures, which may have appeared contrived, to show that several very natural and important decision problems are NP-complete.

Dexter C. Kozen
Lecture 25. Cook’s Theorem

In this lecture we will prove the NP-hardness of CNFSat by exhibiting a reduction from an arbitrary problem in NP to CNFSat. We use the standard definition of one-tape deterministic and nondeterministic Turing machines; see for example [3, pp. 25ff.]. This landmark result was proved by S. Cook in 1971 [22]. A similar result was proved independently by L. Levin in the Soviet Union in 1973 [71].

Dexter C. Kozen
Lecture 26. Counting Problems and #P

In this lecture we discuss the complexity of counting problems. Instead of just determining whether a solution to a given problem exists, we will be interested in counting the number of different solutions to a given problem. Counting problems are naturally associated with many of the decision problems we have already discussed. The notion of a witness function formalizes this association.

Dexter C. Kozen
Lecture 27. Counting Bipartite Matchings

In this lecture we prove that computing the number of perfect matchings in a bipartite graph is #P-complete, making it at least as difficult as computing the number of satisfying truth assignments for a Boolean expression.

Dexter C. Kozen
Lecture 28. Parallel Algorithms and NC

Parallel computing is a popular current research topic. The successful design of parallel algorithms requires identifying sources of data independence in a problem that allow it to be decomposed into independent subproblems, which can then be solved in parallel. This process often involves looking deeply into the mathematical structure of the problem.

Dexter C. Kozen
Lecture 29. Hypercubes and the Gray Representation

In this lecture we will take an algebraic approach to routing on the hypercube. We will develop some algebraic tools, which we will then use to analyze the hypercube implementation of parallel prefix described in the last lecture.

Dexter C. Kozen
Lecture 30. Integer Arithmetic in NC

Addition of two n-bit binary numbers can be performed in log n depth with n processors. We will use parallel prefix to calculate the carry string. Once the carry is computed, the sum is easily computed in constant time with n processors by taking the exclusive-or of the two summands and the carry string.
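The carry computation described above can be sketched with the standard generate/propagate recurrence c_{i+1} = g_i ∨ (p_i ∧ c_i); since the recurrence is a prefix computation under an associative operator, it is exactly what yields the O(log n) parallel depth. Here it is scanned sequentially for clarity (an illustrative sketch, not a parallel implementation):

```python
def add_via_carries(a, b, n):
    """Add two n-bit numbers by computing the carry string first.
    Position i generates a carry when g_i = a_i & b_i and propagates
    one when p_i = a_i ^ b_i; then c_{i+1} = g_i | (p_i & c_i)."""
    bits = lambda x: [(x >> i) & 1 for i in range(n)]
    A, B = bits(a), bits(b)
    c = [0] * (n + 1)                    # c[i] = carry into position i
    for i in range(n):
        g, p = A[i] & B[i], A[i] ^ B[i]
        c[i + 1] = g | (p & c[i])
    # sum bit i is the exclusive-or of the two summands and the carry
    s = [A[i] ^ B[i] ^ c[i] for i in range(n)]
    return sum(bit << i for i, bit in enumerate(s)) + (c[n] << n)
```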

Dexter C. Kozen
Lecture 31. Csanky’s Algorithm

In 1976, Csanky gave a parallel algorithm to invert matrices [26]. This was one of the very first NC algorithms. It set the stage for a large body of research in parallel linear algebra that culminated with Mulmuley’s 1986 result that the rank of a matrix over an arbitrary field can be computed in NC [82].

Dexter C. Kozen
Lecture 32. Chistov’s Algorithm

Many important computational problems in algebra (such as the solution of polynomial equations) depend strongly on basic algorithms in linear algebra. In turn, many problems in linear algebra reduce to the computation of the rank of a matrix. This problem thus occupies a central position in computational algebra. NC algorithms for matrix rank were given by Ibarra, Moran, and Rosier in 1980 for matrices over the complex numbers [53] and over general fields in 1986 by Mulmuley [82]. We will devote a future lecture to this topic, but for now we lay the groundwork by showing how to calculate the characteristic polynomial of a matrix over an arbitrary field in NC.

Dexter C. Kozen
Lecture 33. Matrix Rank

Recall that the rank of an m x n matrix A over a field k is the maximum number of linearly independent rows (or columns) of A. It is the dimension of the image of the linear map kn → km defined by A; equivalently, it is n minus the dimension of the kernel (the set of vectors in kn annihilated by the map).
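For small matrices the definition can be checked directly by Gaussian elimination over the rationals (a sequential sketch for concreteness; the NC algorithms of the lecture are of course quite different):

```python
from fractions import Fraction

def rank(A):
    """Rank of a matrix over the rationals via exact Gaussian elimination.
    A is a list of rows; the answer is the number of pivot rows found."""
    M = [[Fraction(x) for x in row] for row in A]
    rows, cols = len(M), len(M[0])
    r = 0                                    # next pivot row
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue                         # column has no new pivot
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            for j in range(c, cols):
                M[i][j] -= f * M[r][j]       # eliminate below the pivot
        r += 1
    return r
```

Exact rational arithmetic avoids the misleading answers that floating-point cancellation can produce on nearly singular matrices.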

Dexter C. Kozen
Lecture 34. Linear Equations and Polynomial GCDs

It is still open whether one can find the greatest common divisor (gcd) of two integers in NC. In this lecture we will show how to compute the gcd of two polynomials in NC. We essentially reduce the problem to linear algebra. First we show how to solve systems of linear equations in NC; then we reduce the polynomial gcd problem to such a linear system.

Dexter C. Kozen
Lecture 35. The Fast Fourier Transform (FFT)

Consider two polynomials

  f(x) = a₀ + a₁x + … + aₙxⁿ
  g(x) = b₀ + b₁x + … + bₘxᵐ.
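The product fg can be computed in O(n log n) time by evaluating both polynomials at roots of unity, multiplying pointwise, and interpolating; a minimal recursive sketch:

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return list(a)
    sign = -1 if invert else 1
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def poly_mult(f, g):
    """Multiply coefficient vectors by pointwise products of their FFTs."""
    n = 1
    while n < len(f) + len(g) - 1:
        n *= 2                               # pad to a power of two
    F = fft(f + [0] * (n - len(f)))
    G = fft(g + [0] * (n - len(g)))
    H = fft([x * y for x, y in zip(F, G)], invert=True)
    # divide by n to complete the inverse transform; round off float noise
    return [round((h / n).real) for h in H[:len(f) + len(g) - 1]]
```

The rounding step assumes integer coefficients; for general real coefficients one would keep the floating-point values.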

Dexter C. Kozen
Lecture 36. Luby’s Algorithm

In this lecture and the next we develop a probabilistic NC algorithm of Luby for finding a maximal independent set in an undirected graph. Recall that a set of vertices of a graph is independent if the induced subgraph on those vertices has no edges. A maximal independent set is one contained in no larger independent set. A maximal independent set need not be of maximum cardinality among all independent sets in the graph.
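A sequential greedy pass (not Luby's parallel algorithm) makes the definitions concrete, including a maximal independent set that is not of maximum cardinality:

```python
def greedy_mis(adj):
    """Greedy maximal independent set: scan vertices in sorted order and
    keep any vertex none of whose neighbors has already been kept.
    The result is maximal by construction, but need not be maximum."""
    mis = set()
    for v in sorted(adj):
        if not any(u in mis for u in adj[v]):
            mis.add(v)
    return mis
```

On a star graph the greedy pass keeps only the center, a maximal independent set of size 1, while the leaves form a strictly larger independent set.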

Dexter C. Kozen
Lecture 37. Analysis of Luby’s Algorithm

In the previous lecture we proved that for each good vertex υ, the probability that υ is deleted in the current stage is at least 1/36. Recall that a vertex υ is good if

  Σ_{u ∈ N(υ)} 1/(2d(u)) ≥ 1/6

(intuitively, if it has lots of neighbors of low degree), and that an edge is good if it is incident to at least one good vertex. Since the probability that a good edge is deleted is at least as great as the probability that its good endpoint is deleted (if both its endpoints are good, so much the better), a good edge is deleted with probability at least 1/36.

Dexter C. Kozen
Lecture 38. Miller’s Primality Test

Factoring integers into their prime factors is one of the oldest and most important computational problems. It is a problem that goes back to Euclid and is still thriving today. It is generally assumed that factoring is computationally intractable, and many modern cryptographic protocols are based on this assumption. If someone discovered an efficient factoring algorithm tomorrow, there would be a lot of unhappy CIA and KGB agents around.

Dexter C. Kozen
Lecture 39. Analysis of Miller’s Primality Test
Dexter C. Kozen
Lecture 40. Probabilistic Tests with Polynomials

In this lecture, we give a useful probabilistic technique for testing properties that are equivalent to the vanishing of some polynomial of low degree. This technique has many interesting applications, not only in algebraic algorithms, but also in graph theory and combinatorics. Several examples of its use will be given later.
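A sketch of the technique in the Schwartz-Zippel style (the function name and parameters are illustrative): evaluate both black-box polynomials at random points and declare them identical only if they always agree.

```python
import random

def probably_identical(p, q, num_vars, trials=20, field_size=10**9 + 7):
    """Probabilistic identity test for two black-box polynomials over the
    integers mod field_size.  By the Schwartz-Zippel bound, two distinct
    polynomials of total degree d agree at a uniformly random point with
    probability at most d / field_size, so repeated agreement is strong
    evidence of identity."""
    for _ in range(trials):
        point = [random.randrange(field_size) for _ in range(num_vars)]
        if p(*point) % field_size != q(*point) % field_size:
            return False     # a witness point: definitely different
    return True              # identical with high probability
```

A "False" answer is always correct; a "True" answer errs only with probability at most (d/field_size) per trial.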

Dexter C. Kozen
Backmatter
Metadata
Title
The Design and Analysis of Algorithms
Author
Dexter C. Kozen
Copyright year
1992
Publisher
Springer New York
Electronic ISBN
978-1-4612-4400-4
Print ISBN
978-1-4612-8757-5
DOI
https://doi.org/10.1007/978-1-4612-4400-4