
2004 | Book

LATIN 2004: Theoretical Informatics

6th Latin American Symposium, Buenos Aires, Argentina, April 5-8, 2004. Proceedings

Edited by: Martín Farach-Colton

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This volume contains the proceedings of the Latin American Theoretical Informatics (LATIN) conference that was held in Buenos Aires, Argentina, April 5–8, 2004. The LATIN series of symposia was launched in 1992 to foster interactions between the Latin American community and computer scientists around the world. This was the sixth event in the series, following São Paulo, Brazil (1992), Valparaíso, Chile (1995), Campinas, Brazil (1998), Punta del Este, Uruguay (2000), and Cancún, Mexico (2002). The proceedings of these conferences were also published by Springer-Verlag in the Lecture Notes in Computer Science series: Volumes 583, 911, 1380, 1776, and 2286, respectively. Also, as before, we published a selection of the papers in a special issue of a prestigious journal. We received 178 submissions, 80% more than the previous record. Each paper was assigned to four program committee members, and 59 papers were selected. We feel lucky to have been able to build on the solid foundation provided by the increasingly successful previous LATINs, and we are very grateful for the tireless work of Pablo Martínez López, the Local Arrangements Chair. Finally, we thank Springer-Verlag for publishing these proceedings in its LNCS series.

Table of contents

Frontmatter

Invited Speakers

Analysis of Scheduling Algorithms for Proportionate Fairness

We consider a multiprocessor operating system in which each current job is guaranteed a given proportion over time of the total processor capacity. A scheduling algorithm allocates units of processor time to appropriate jobs at each time step. We measure the goodness of such a scheduler by the maximum amount by which the cumulative processor time for any job ever falls below the “fair” proportion guaranteed in the long term. In particular, we focus our attention on very simple schedulers that impose minimal computational overheads on the operating system. For several such schedulers we obtain upper and lower bounds on their deviations from fairness. The scheduling quality achieved depends considerably on the relative processor proportions required by each job. We will outline the proofs of some of the upper and lower bounds, both for the unrestricted problem and for restricted versions where constraints are imposed on the processor proportions. Many problems remain to be investigated, and we will give the results of some exploratory simulations. This is joint research with Micah Adler, Petra Berenbrink, Tom Friedetzky, Leslie Ann Goldberg and Paul Goldberg.
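A toy illustration of the setup (a sketch, not one of the schedulers analyzed in the talk): a greedy “largest lag first” rule on a single unit-capacity resource, with the deviation from fairness measured exactly as defined in the abstract.

```python
def schedule_lag(weights, steps):
    """Greedy largest-lag-first scheduler (illustrative sketch only):
    at each step, the single unit of processor time goes to the job
    whose cumulative allocation lags furthest behind its fair share.
    Returns the maximum deviation from fairness ever observed."""
    total = sum(weights)
    alloc = [0] * len(weights)
    worst = 0.0
    for t in range(1, steps + 1):
        lags = [w / total * t - a for w, a in zip(weights, alloc)]
        alloc[lags.index(max(lags))] += 1  # run the most-starved job
        worst = max(worst, max(w / total * t - a
                               for w, a in zip(weights, alloc)))
    return worst
```

For two equal-weight jobs this rule settles at deviation 1/2; the abstract's point is that how small the deviation can be kept depends strongly on the ratios of the required proportions.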

Mike Paterson
Advances in the Regularity Method

A beautiful result of Szemerédi on the asymptotic structure of graphs is his regularity lemma. Roughly speaking, his result tells us that any large graph may be written as a union of a bounded number of induced, random-looking bipartite graphs (the so-called ε-regular pairs).

Yoshiharu Kohayakawa
Fighting Spam: The Science

Consider the following simple approach to fighting spam [5]: If I don’t know you, and you want your e-mail to appear in my inbox, then you must attach to your message an easily verified “proof of computational effort”, just for me and just for this message.
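The “proof of computational effort” idea can be sketched with a hashcash-style stamp (an illustrative scheme, not necessarily the one analyzed in the lecture): the sender searches for a nonce whose hash together with the message has a prescribed number of leading zero bits, and the recipient checks it with a single hash.

```python
import hashlib

def mint(message, bits=12):
    # Minting is expensive for the sender: about 2**bits hash evaluations.
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") >> (256 - bits) == 0:
            return nonce
        nonce += 1

def verify(message, nonce, bits=12):
    # Verification costs a single hash evaluation.
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - bits) == 0
```

The asymmetry is the point: minting is moderately costly per message, so bulk mailing becomes expensive, while verification is essentially free for the recipient.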

Cynthia Dwork
The Consequences of Imre Simon’s Work in the Theory of Automata, Languages, and Semigroups

In this lecture, I will show how influential Imre Simon’s work has been in the theory of automata, languages and semigroups. I will mainly focus on two celebrated problems, the restricted star-height problem (solved) and the decidability of the dot-depth hierarchy (still open). These two problems have led to surprising developments and are currently the topic of very active research. I will present Imre’s prominent results on both topics, and demonstrate how these results have been the driving force of research in this area for the last thirty years.

Jean-Eric Pin

Contributions

Querying Priced Information in Databases: The Conjunctive Case
Extended Abstract

Query optimization involving expensive predicates has received considerable attention in the database community. Typically, the output of a database query is a set of tuples that satisfy certain conditions, and, with expensive predicates, these conditions may be computationally costly to verify. In the simplest case, when the query looks for the set of tuples that simultaneously satisfy k expensive predicates, the problem reduces to ordering the evaluation of the predicates so as to minimize the time to output the set of tuples comprising the answer to the query. Here, we give a simple and fast deterministic k-approximation algorithm for this problem, and prove that k is the best possible approximation ratio for a deterministic algorithm, even if exponential-time algorithms are allowed. We also propose a randomized, polynomial-time algorithm with expected approximation ratio $1+\sqrt{2}/2\approx 1.707$ for k=2, and prove that 3/2 is the best possible expected approximation ratio for randomized algorithms.

Sany Laber, Renato Carmo, Yoshiharu Kohayakawa
Sublinear Methods for Detecting Periodic Trends in Data Streams

We present sublinear algorithms (algorithms that use significantly fewer resources than needed to store or process the entire input stream) for discovering representative trends in data streams in the form of periodicities. Our algorithms sample Õ$(\sqrt{n})$ positions, and thus they scan not the entire data stream but merely a sublinear sample thereof. Alternately, our algorithms may be thought of as working on streaming inputs where each data item is seen once, but we store only a sublinear sample of Õ$(\sqrt{n})$ size from which we can identify periodicities. In this work we present a variety of definitions of periodicity for a given stream, present sublinear sampling algorithms for discovering them, and prove that the algorithms meet our specifications and guarantees. No previously known results provide such guarantees for finding any such periodic trends. We also investigate the relationships between these different definitions of periodicity.

Funda Ergun, S. Muthukrishnan, S. Cenk Sahinalp
An Improved Data Stream Summary: The Count-Min Sketch and Its Applications

We introduce a new sublinear space data structure, the Count-Min Sketch, for summarizing data streams. Our sketch allows fundamental queries in data stream summarization such as point, range, and inner product queries to be approximately answered very quickly; in addition, it can be applied to solve several important problems in data streams such as finding quantiles, frequent items, etc. The time and space bounds we show for using the CM sketch to solve these problems significantly improve those previously known, typically from $1/\epsilon^{2}$ to $1/\epsilon$ in factor.
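The point-query behavior of the sketch can be illustrated with a small Python sketch (the hashing here is a simplified stand-in; the actual construction uses pairwise independent hash families):

```python
import random

class CountMinSketch:
    """Minimal Count-Min sketch: `depth` hashed counter rows of the
    given `width`. Point queries never underestimate, and overestimate
    by at most eps * (total count) with probability 1 - delta when
    width ~ e/eps and depth ~ ln(1/delta)."""

    def __init__(self, width, depth, seed=0):
        rng = random.Random(seed)
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]
        # Simplified per-row salted hashing (illustrative choice).
        self.salts = [rng.getrandbits(64) for _ in range(depth)]

    def update(self, item, count=1):
        for row, salt in enumerate(self.salts):
            self.table[row][hash((salt, item)) % self.width] += count

    def query(self, item):
        # Minimum over rows: hash collisions can only inflate counters.
        return min(self.table[row][hash((salt, item)) % self.width]
                   for row, salt in enumerate(self.salts))
```

After `update("a", 100)` and `update("b", 10)`, `query("a")` returns at least 100, with the excess bounded by collisions in all rows simultaneously.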

Graham Cormode, S. Muthukrishnan
Rotation and Lighting Invariant Template Matching

We address the problem of searching for a two-dimensional pattern in a two-dimensional text (or image), such that the pattern can be found even if it appears rotated and brighter or darker than its occurrence. Furthermore, we consider approximate matching under several tolerance models. We obtain algorithms that are almost worst-case optimal. The complexities we obtain are very close to the best current results for the case where only rotations, but not lighting invariance, are supported. These are the first results for this problem under a combinatorial approach.

Kimmo Fredriksson, Veli Mäkinen, Gonzalo Navarro
Computation of the Bisection Width for Random d-Regular Graphs

In this paper we provide an explicit way to compute asymptotically almost sure upper bounds on the bisection width of random d-regular graphs, for any value of d. We provide the bounds for 5 ≤ d ≤ 12. The upper bounds are obtained from the analysis of the performance of a randomized greedy algorithm to find bisections of d-regular graphs. We also give empirical values of the size of bisection found by the algorithm for some small values of d and compare it with numerical approximations of our theoretical bounds. Our analysis also gives asymptotic lower bounds for the size of the maximum bisection.

Josep Díaz, Maria J. Serna, Nicholas C. Wormald
Constrained Integer Partitions

We consider the problem of partitioning n integers into two subsets of given cardinalities such that the discrepancy, the absolute value of the difference of their sums, is minimized. The integers are i.i.d. random variables chosen uniformly from the set {1,...,M}. We study how the typical behavior of the optimal partition depends on n, M and the bias s, the difference between the cardinalities of the two subsets in the partition. In particular, we rigorously establish this typical behavior as a function of the two parameters $\kappa := n^{-1}\log_{2}M$ and b:=|s|/n by proving the existence of three distinct “phases” in the κb-plane, characterized by the value of the discrepancy and the number of optimal solutions: a “perfect phase” with exponentially many optimal solutions with discrepancy 0 or 1; a “hard phase” with minimal discrepancy of order $M{\rm e}^{-\Theta(n)}$; and a “sorted phase” with a unique optimal partition of order Mn, obtained by putting the (s+n)/2 smallest integers in one subset.
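For tiny instances the optimal constrained partition can be found by exhaustive search, which makes the definitions of discrepancy and bias concrete (a sketch for illustration only; the paper's interest is in the asymptotic phase behavior):

```python
from itertools import combinations

def min_discrepancy(nums, size):
    """Exhaustively try all subsets of the given cardinality and return
    the minimum discrepancy |sum(subset) - sum(complement)|.
    The bias s of such a partition is |len(nums) - 2 * size|."""
    total = sum(nums)
    return min(abs(total - 2 * sum(nums[i] for i in subset))
               for subset in combinations(range(len(nums)), size))
```

For example, [1, 2, 3, 4] splits perfectly into two pairs of sum 5, while adding an outlier such as 10 forces a nonzero discrepancy at the same bias.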

Christian Borgs, Jennifer T. Chayes, Stephan Mertens, Boris Pittel
Embracing the Giant Component

Consider a game in which edges of a graph are provided a pair at a time, and the player selects one edge from each pair, attempting to construct a graph with a component as large as possible. This game is in the spirit of recent papers on avoiding a giant component, but here we embrace it. We analyze this game in the offline and online settings, for arbitrary and random instances, which provides for interesting comparisons. For arbitrary instances, we find a large lower bound on the competitive ratio. For some random instances a similar lower bound holds with high probability (whp). If the instance has $\frac{1}{4}(1+\epsilon)n$ random edge pairs, when 0<ε≤ 0.003 then any online algorithm generates a component of size $O((\log n)^{3/2})$ whp, while the optimal offline solution contains a component of size Ω(n) whp. For other random instances the average-case competitive ratio is much better than the worst-case bound. If the instance has $\frac{1}{2}(1-\epsilon)n$ random edge pairs, with 0<ε≤ 0.015, we give an online algorithm which finds a component of size Ω(n) whp.

Abraham Flaxman, David Gamarnik, Gregory B. Sorkin
Sampling Grid Colorings with Fewer Colors

We provide an optimally mixing Markov chain for 6-colorings of the square grid. Furthermore, this implies that the uniform distribution on the set of such colorings has strong spatial mixing. Four and five are now the only remaining values of k for which it is not known whether there exists a rapidly mixing Markov chain for k-colorings of the square grid.
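The standard single-site (Glauber, or heat-bath) dynamics for grid colorings is easy to state in code; the sketch below is illustrative and is not claimed to be the specific optimally mixing chain of the paper.

```python
import random

def glauber_step(grid, k, rng):
    """One heat-bath step on proper k-colorings of an n-by-n grid:
    pick a uniform vertex and recolor it uniformly at random among the
    colors not used by its grid neighbors (nonempty whenever k >= 5,
    since grid vertices have degree at most 4)."""
    n = len(grid)
    i, j = rng.randrange(n), rng.randrange(n)
    nbr_colors = {grid[x][y]
                  for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                  if 0 <= x < n and 0 <= y < n}
    grid[i][j] = rng.choice([c for c in range(k) if c not in nbr_colors])
```

Each step preserves properness by construction, and the chain's stationary distribution is uniform over proper colorings; the open question cited above is whether such chains mix rapidly for k = 4 and k = 5.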

Dimitris Achlioptas, Mike Molloy, Cristopher Moore, Frank Van Bussel
The Complexity of Finding Top-Toda-Equivalence-Class Members

We identify two properties that for P-selective sets are effectively computable. Namely we show that, for any P-selective set, finding a string that is in a given length’s top Toda equivalence class (very informally put, a string from Σn that the set’s P-selector function declares to be most likely to belong to the set) is FP$^{\Sigma^{p}_{2}}$ computable, and we show that each P-selective set contains a weakly-P$^{\Sigma^{p}_{2}}$-rankable subset.

Lane A. Hemaspaandra, Mitsunori Ogihara, Mohammed J. Zaki, Marius Zimand
List Partitions of Chordal Graphs

In an earlier paper we gave efficient algorithms for partitioning chordal graphs into k independent sets and ℓ cliques. This is a natural generalization of the problem of recognizing split graphs, and is NP-complete for graphs in general, unless k ≤ 2 and ℓ ≤ 2. (Split graphs have k = ℓ = 1.) In this paper we expand our focus and consider general M-partitions, also known as trigraph homomorphisms, for the class of chordal graphs. For each symmetric matrix M over {0, 1, *}, the M-partition problem seeks a partition of the input graph into independent sets, cliques, or arbitrary sets, with some pairs of sets required to have no edges, or to have all edges joining them, as encoded in the matrix M. Such partitions generalize graph colorings and homomorphisms, and arise frequently in the theory of graph perfection. We show that many M-partition problems that are NP-complete in general become solvable in polynomial time for chordal graphs, even in the presence of lists. On the other hand, we show that there are M-partition problems that remain NP-complete even for chordal graphs. We also discuss forbidden subgraph characterizations for the existence of M-partitions.

Tomás Feder, Pavol Hell, Sulamita Klein, Loana Tito Nogueira, Fábio Protti
Bidimensional Parameters and Local Treewidth

For several graph-theoretic parameters such as vertex cover and dominating set, it is known that if their values are bounded by k then the treewidth of the graph is bounded by some function of k. This fact is used as the main tool in the design of several fixed-parameter algorithms on minor-closed graph classes such as planar graphs, single-crossing-minor-free graphs, and graphs of bounded genus. In this paper we examine the question of whether similar bounds can be obtained for larger minor-closed graph classes, and for general families of parameters, including all the parameters for which such behavior has been reported so far. Given a graph parameter P, we say that a graph family $\mathcal{F}$ has the parameter-treewidth property for P if there is a function f(p) such that every graph $G \in \mathcal{F}$ with parameter at most p has treewidth at most f(p). We prove as our main result that, for a large family of parameters called contraction-bidimensional parameters, a minor-closed graph family $\mathcal{F}$ has the parameter-treewidth property if $\mathcal{F}$ has bounded local treewidth. We also show an “if and only if” version for some parameters, so this result is in some sense tight. In addition we show that, for a slightly smaller family of parameters called minor-bidimensional parameters, all minor-closed graph families $\mathcal{F}$ excluding some fixed graphs have the parameter-treewidth property. The bidimensional parameters include many domination and covering parameters such as vertex cover, feedback vertex set, dominating set, edge-dominating set, and q-dominating set (for fixed q). We use these theorems to develop new fixed-parameter algorithms in these contexts.

Erik D. Demaine, Fedor V. Fomin, Mohammad Taghi Hajiaghayi, Dimitrios M. Thilikos
Vertex Disjoint Paths on Clique-Width Bounded Graphs
Extended Abstract

We show that finding l vertex-disjoint paths between l given pairs of vertices can be done in linear time for co-graphs, but is NP-complete for graphs of NLC-width at most 4 and clique-width at most 7. This is the first natural graph problem known to be NP-complete on graphs of bounded clique-width but solvable in linear time on co-graphs and graphs of bounded tree-width.

Frank Gurski, Egon Wanke
On Partitioning Interval and Circular-Arc Graphs into Proper Interval Subgraphs with Applications

In this note, we establish that any interval or circular-arc graph with n vertices admits a partition into O(log n) proper interval subgraphs. This bound is shown to be asymptotically sharp for an infinite family of interval graphs. Moreover, the constructive proof yields a linear-time and space algorithm to compute such a partition. The second part of the paper is devoted to an application of this result, which has actually inspired this research: the design of an efficient approximation algorithm for a $\mathcal{NP}$-hard problem of planning working schedules.

Frédéric Gardi
Collective Tree Exploration

An n-node tree has to be explored by k mobile agents (robots), starting in its root. Every edge of the tree must be traversed by at least one robot, and exploration must be completed as fast as possible. Even when the tree is known in advance, scheduling optimal collective exploration turns out to be NP-hard. We investigate the problem of distributed collective exploration of unknown trees. Not surprisingly, communication between robots influences the time of exploration. Our main communication scenario is the following: robots can communicate by writing at the currently visited node previously acquired information, and reading information available at this node. We construct an exploration algorithm whose running time for any tree is only O(k/log k) larger than optimal exploration time with full knowledge of the tree. (We say that the algorithm has overhead O(k/log k).) On the other hand we show that, in order to get overhead sublinear in the number of robots, some communication is necessary. Indeed, we prove that if robots cannot communicate at all, then every distributed exploration algorithm works in time Ω(k) larger than optimal exploration time with full knowledge, for some trees.

Pierre Fraigniaud, Leszek Gasieniec, Dariusz R. Kowalski, Andrzej Pelc
Off-Centers: A New Type of Steiner Points for Computing Size-Optimal Quality-Guaranteed Delaunay Triangulations

We introduce a new type of Steiner points, called off-centers, as an alternative to circumcenters, to improve the quality of Delaunay triangulations. We propose a new Delaunay refinement algorithm based on iterative insertion of off-centers. We show that this new algorithm has the same quality and size optimality guarantees of the best known refinement algorithms. In practice, however, the new algorithm inserts about 40% fewer Steiner points (hence runs faster) and generates triangulations that have about 30% fewer elements compared with the best previous algorithms.

Alper Üngör
Space-Efficient Algorithms for Computing the Convex Hull of a Simple Polygonal Line in Linear Time

We present space-efficient algorithms for computing the convex hull of a simple polygonal line in-place, in linear time. It turns out that the problem is as hard as stable partition, i.e., if there were a truly simple solution then stable partition would also have a truly simple solution, and vice versa. Nevertheless, we present a simple self-contained solution that uses O(log n) space, and indicate how to improve it to O(1) space with the same techniques used for stable partition. If the points inside the convex hull can be discarded, then there is a truly simple solution that uses a single call to stable partition, and even that call can be spared if only extreme points are desired (and not their order). If the polygonal line is closed, then the problem admits a very simple solution which does not call for stable partitioning at all.
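For comparison, the classical linear-time algorithm for this problem is Melkman's deque algorithm; the sketch below is that standard algorithm, not the paper's space-efficient variant, since it spends O(n) extra space on the deque rather than O(1) or O(log n).

```python
from collections import deque

def cross(o, a, b):
    # Positive iff o -> a -> b is a left (counterclockwise) turn.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def melkman(points):
    """Convex hull of a *simple* polygonal line in O(n) time (Melkman).
    Returns hull vertices in counterclockwise order, with the last
    vertex repeated at the front of the list."""
    if len(points) < 3:
        return list(points)
    a, b, c = points[0], points[1], points[2]
    d = deque([a, b] if cross(a, b, c) > 0 else [b, a])
    d.appendleft(c)
    d.append(c)
    for p in points[3:]:
        # A point inside the current hull cannot change it.
        if cross(d[0], d[1], p) > 0 and cross(d[-2], d[-1], p) > 0:
            continue
        while cross(d[-2], d[-1], p) <= 0:   # restore convexity at the top
            d.pop()
        d.append(p)
        while cross(p, d[0], d[1]) <= 0:     # ... and at the bottom
            d.popleft()
        d.appendleft(p)
    return list(d)
```

Simplicity of the polyline is what makes a single pass suffice: a new point can only see the hull near the most recently added vertex, so both deque ends are checked and nothing else.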

Hervé Brönnimann, Timothy M. Chan
A Geometric Approach to the Bisection Method

The bisection method is the consecutive bisection of a triangle by the median of the longest side. This paper introduces a taxonomy of triangles that precisely captures the behavior of the bisection method. Our main result is an asymptotic upper bound for the number of similarity classes of triangles generated on a mesh obtained by iterative bisection, which previously was known only to be finite. We also prove that the number of directions on the plane given by the sides of the triangles generated is finite. Additionally, we give purely geometric and intuitive proofs of classical results for the bisection method.
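One step of the method is elementary: cut the triangle along the median from the midpoint of its longest side to the opposite vertex. A minimal sketch:

```python
def bisect_longest_edge(tri):
    """Split a triangle (three 2-D vertices) into two by the median of
    its longest side: join that side's midpoint to the opposite vertex.
    Iterating this on the pieces is the bisection method."""
    a, b, c = tri
    def d2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    # Relabel so (p, q) is the longest side and r the opposite vertex.
    _, (p, q, r) = max((d2(a, b), (a, b, c)),
                       (d2(b, c), (b, c, a)),
                       (d2(c, a), (c, a, b)))
    m = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    return (p, m, r), (m, q, r)
```

For a right isosceles triangle both halves are again right isosceles and similar to the original, a tiny instance of the paper's theme that iterated bisection produces only finitely many similarity classes.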

Claudio Gutierrez, Flavio Gutierrez, Maria-Cecilia Rivara
Improved Linear Expected-Time Algorithms for Computing Maxima

The problem of finding the maxima of a point set plays a fundamental role in computational geometry. Based on the idea of certificates of exclusion, two algorithms are presented to solve the maxima problem under the assumption that N points are chosen from a d-dimensional hypercube uniformly and each component of a point is independent of all other components. The first algorithm runs in O(N) expected time and finds the maxima using $dN + d\ln N + d^{2}N^{1-1/d}(\ln N)^{1/d} + O(dN^{1-1/d})$ expected scalar comparisons. The experiments show the second algorithm has a better expected running time than the first algorithm, although a tight upper bound on its expected running time has not been obtained. A third maxima-finding algorithm is presented for N points with a d-dimensional component independence distribution, which runs in O(N) expected time and uses $2dN + O(\ln N(\ln\ln N)) + d^{2}N^{1-1/d}(\ln N)^{1/d} + O(dN^{1-1/d})$ expected scalar comparisons. The substantial reduction of the expected running time of all three algorithms, compared with some known linear expected-time algorithms, is attributed to the fact that a better certificate of exclusion has been chosen and more non-maximal points are identified and discarded.
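In two dimensions the maxima (points not dominated componentwise by any other point) can be found with a simple sweep; a sketch, O(N log N) because of the sort rather than the expected-linear-time algorithms above:

```python
def maxima_2d(points):
    """Return the maximal points of a 2-D point set: those not
    dominated in both coordinates by another point. Sweep in
    decreasing x, keeping a point iff its y beats every y seen so far."""
    result = []
    best_y = float("-inf")
    for x, y in sorted(points, reverse=True):
        if y > best_y:
            result.append((x, y))
            best_y = y
    return result
```
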

H. K. Dai, X. W. Zhang
A Constant Approximation Algorithm for Sorting Buffers

We consider an algorithmic problem that arises in manufacturing applications. The input is a sequence of objects of various types. The scheduler is fed the objects in the sequence one by one, and is equipped with a finite buffer. The goal of the scheduler/sorter is to maximally reduce the number of type transitions. We give the first polynomial-time constant approximation algorithm for this problem. We prove several lemmas about the combinatorial structure of optimal solutions that may be useful in future research, and we show that the unified algorithm based on the local ratio lemma performs well for a slightly larger class of problems than was apparently previously known.

Jens S. Kohrt, Kirk Pruhs
Approximation Schemes for a Class of Subset Selection Problems

In this paper we develop an easily applicable algorithmic technique/tool for developing approximation schemes for certain types of combinatorial optimization problems. Special cases covered by our result show up in many places in the literature. For every such special case, a particular rounding trick has been implemented in a slightly different way, with slightly different arguments, and with slightly different worst-case estimations. Usually, the rounding procedure depended on certain upper or lower bounds on the optimal objective value that had to be justified in a separate argument. Our easily applied result unifies many of these results, and sometimes it even leads to a simpler proof. We demonstrate how our result can be easily applied to a broad family of combinatorial optimization problems. As a special case, we derive the existence of an FPTAS for the scheduling problem of minimizing the weighted number of late jobs under release dates and preemption on a single machine. The approximability status of this problem had been open for some time.

Kirk Pruhs, Gerhard J. Woeginger
Finding k-Connected Subgraphs with Minimum Average Weight

We consider the problems of finding k-connected spanning subgraphs with minimum average weight. We show that the problems are NP-hard for k>1. Approximation algorithms are given for four versions of the minimum average edge weight problem:
1. a 3-approximation for k-edge-connectivity,
2. an O(log k)-approximation for k-node-connectivity,
3. a (2+ε)-approximation for k-node-connectivity in Euclidean graphs, for any constant ε>0,
4. a 5.8-approximation for k-node-connectivity in graphs satisfying the triangle inequality.

Prabhakar Gubbala, Balaji Raghavachari
On the (Im)possibility of Non-interactive Correlation Distillation

We study the problem of non-interactive correlation distillation (NICD). Suppose Alice and Bob each hold a string, denoted by $A = a_{0}a_{1}\ldots a_{n-1}$ and $B = b_{0}b_{1}\ldots b_{n-1}$, respectively. Furthermore, for every k=0,1,...,n-1, $(a_{k},b_{k})$ is independently drawn from a distribution $\mathcal{N}$, known as the “noise model”. Alice and Bob wish to “distill” the correlation non-interactively, i.e., they each apply a function to their string and output one bit, denoted by X and Y, such that Prob[X = Y] can be made as close to 1 as possible. The problem is: for which noise models can they succeed? This problem is related to various topics in computer science, including information reconciliation and random beacons. In fact, if NICD were possible for some general class of noise models, then some of these topics would, in some sense, become straightforward corollaries. We prove two negative results on NICD for various noise models. We prove that for these models, it is impossible to distill the correlation to be arbitrarily close to 1. We also give an example where Alice and Bob can increase their correlation with one bit of communication. This example, which may be of independent interest, demonstrates that even the smallest amount of communication is provably more powerful than none.

Ke Yang
Pure Future Local Temporal Logics Are Expressively Complete for Mazurkiewicz Traces

The paper settles a long-standing problem for Mazurkiewicz traces: the pure future local temporal logic defined with the basic modalities exists-next and until is expressively complete. The analogous result with a global interpretation was obtained some years ago by Thiagarajan and Walukiewicz (1997) and, in its final form without any reference to past tense constants, by Diekert and Gastin (2000). Both the (previously known) global result and the (new) local result generalize Kamp’s Theorem for words, because for sequences the local and global viewpoints coincide. But traces are labelled partial orders, and there the difference between an interpretation globally over cuts (configurations) or locally at points (events) is significant. For global temporal logics the satisfiability problem is non-elementary (Walukiewicz 1998), whereas for local temporal logics both the satisfiability problem and the model checking problem are solvable in Pspace (Gastin and Kuske 2003), as in the case of words. This makes local temporal logics much more attractive.

Volker Diekert, Paul Gastin
How Expressions Can Code for Automata

In this paper we investigate how it is possible to recover an automaton from a rational expression that has been computed from that automaton. The notion of derived term of an expression, introduced by Antimirov, appears to be instrumental in this problem. The second important ingredient is the co-minimization of an automaton, a dual and generalized Moore algorithm on non-deterministic automata. If an automaton is then sufficiently “decorated”, the combination of these two algorithms gives the desired result. Reducing the amount of “decoration” is still the object of ongoing investigation.

Sylvain Lombardy, Jacques Sakarovitch
Automata for Arithmetic Meyer Sets

The set $\mathbb{Z}_{\beta}$ of β-integers is a Meyer set when β is a Pisot number, and thus there exists a finite set F such that $\mathbb{Z}_{\beta}-\mathbb{Z}_{\beta}\subset\mathbb{Z}_{\beta}+F$. We give finite automata describing the expansions of the elements of $\mathbb{Z}_{\beta}$ and of $\mathbb{Z}_{\beta}-\mathbb{Z}_{\beta}$. We present a construction of such a finite set F, and a method to minimize the size of F. We obtain in this way a finite transducer that performs the decomposition of the elements of $\mathbb{Z}_{\beta}-\mathbb{Z}_{\beta}$ as a sum belonging to $\mathbb{Z}_{\beta}+F$.

Shigeki Akiyama, Frédérique Bassino, Christiane Frougny
Efficiently Computing the Density of Regular Languages

A regular language L is called dense if the fraction $f_{m}$ of words of length m over some fixed signature that are contained in L tends to one as m tends to infinity. We present an algorithm that computes the number of accumulation points of $(f_{m})$ in polynomial time, if the regular language L is given by a finite deterministic automaton, and can then also efficiently check whether L is dense. Deciding whether the least accumulation point of $(f_{m})$ is greater than a given rational number, however, is coNP-complete. If the regular language is given by a non-deterministic automaton, checking whether L is dense becomes PSPACE-hard. We formulate these problems as convergence problems of partially observable Markov chains, and reduce them to combinatorial problems for periodic sequences of rational numbers.

Manuel Bodirsky, Tobias Gärtner, Timo von Oertzen, Jan Schwinghammer
Longest Repeats with a Block of Don’t Cares

We introduce an algorithm for extracting all longest repeats with k don’t cares from a given sequence. Such repeats are composed of two parts separated by a block of k don’t-care symbols. The algorithm uses suffix trees to fulfill this task and relies on the ability to answer lowest common ancestor queries in constant time. It requires O(n log n) time in the worst case.

Maxime Crochemore, Costas S. Iliopoulos, Manal Mohamed, Marie-France Sagot
Join Irreducible Pseudovarieties, Group Mapping, and Kovács-Newman Semigroups

We call a pseudovariety V finite join irreducible if $V\leq V_{1}\vee V_{2}\Longrightarrow V \leq V_{1}$ or $V \leq V_{2}$. We present a large class of group mapping semigroups generating finite join irreducible pseudovarieties. We show that many naturally occurring pseudovarieties are finite join irreducible, including: S, DS, CR, CS and $\overline{H}$, where H is a group pseudovariety containing a non-nilpotent group.

John Rhodes, Benjamin Steinberg
Complementation of Rational Sets on Scattered Linear Orderings of Finite Rank

In a preceding paper (Bruyère and Carton, Automata on linear orderings, MFCS ’01), automata were introduced for words indexed by linear orderings. These automata are a generalization of the automata for finite, infinite, bi-infinite and even transfinite words studied by Büchi. Kleene’s theorem has been generalized to these words. We show that deterministic automata do not have the same expressive power. Despite this negative result, we prove that rational sets of words of finite rank are closed under complementation.

Olivier Carton, Chloé Rispal
Expected Length of the Longest Common Subsequence for Large Alphabets

We consider the length L of the longest common subsequence of two randomly, uniformly and independently chosen n-character words over a k-ary alphabet. Subadditivity arguments yield that E[L]/n converges to a constant $\gamma_{k}$. We prove a conjecture of Sankoff and Mainville from the early 1980s claiming that $\gamma_{k}\sqrt{k}\longrightarrow 2$ as $k \longrightarrow \infty$.
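The constant $\gamma_{k}$ is easy to estimate empirically with the textbook LCS dynamic program (a Monte Carlo sketch; function names and defaults are illustrative):

```python
import random

def lcs_length(a, b):
    # Textbook O(mn) dynamic program for the longest common subsequence.
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
            else:
                cur[j] = max(prev[j], cur[j - 1])
        prev = cur
    return prev[-1]

def estimate_gamma(k, n=300, trials=3, seed=0):
    # Average L/n over random pairs of n-character words on a k-ary alphabet.
    rng = random.Random(seed)
    total = sum(lcs_length([rng.randrange(k) for _ in range(n)],
                           [rng.randrange(k) for _ in range(n)])
                for _ in range(trials))
    return total / (trials * n)
```

For k=2 the estimate comes out near 0.8 (the binary Chvátal–Sankoff constant), and the estimates shrink as k grows, roughly like $2/\sqrt{k}$, which is exactly the content of the conjecture.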

Marcos Kiwi, Martin Loebl, Jiří Matoušek
Universal Types and Simulation of Individual Sequences

We define the universal type class of an individual sequence $x_{1}^{n}$, in analogy to the classical notion used in the method of types of information theory. Two sequences of the same length are said to be of the same universal (LZ) type if and only if they yield the same set of phrases in the incremental parsing of Ziv and Lempel (1978). We show that the empirical probability distributions of any finite order k of two sequences of the same universal type converge, in the variational sense, as the sequence length increases. Consequently, the logarithms of the probabilities assigned by any k-th order probability assignment to two sequences of the same universal type converge, for any k. We estimate the size of a universal type class, and show that its behavior parallels that of the conventional counterpart, with the LZ78 code length playing the role of the empirical entropy. We present efficient procedures for enumerating the sequences in a universal type class, and for drawing a sequence from the class with uniform probability. As an application, we consider the problem of universal simulation of individual sequences. A sequence drawn with uniform probability from the universal type class of $x_{1}^{n}$ is a good simulation of $x_{1}^{n}$ in a well defined mathematical sense.
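The incremental parsing that defines the universal type is simple to state in code (a sketch; a trailing, possibly incomplete phrase is ignored here):

```python
def lz78_phrases(s):
    """Incremental (LZ78) parsing: scan left to right; each phrase is
    the shortest prefix of the remaining input not yet in the phrase
    set. Two sequences have the same universal type iff they yield the
    same phrase set. (A trailing incomplete phrase is dropped here.)"""
    phrases = set()
    cur = ""
    for ch in s:
        cur += ch
        if cur not in phrases:
            phrases.add(cur)
            cur = ""
    return phrases
```

For instance, "aababc" parses as a | ab | abc, so its phrase set is {a, ab, abc}.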

Gadiel Seroussi
Separating Codes: Constructions and Bounds

Separating codes, initially introduced to test automata, have lately revived in the study of fingerprinting codes, which are used for copyright protection. Separating codes play their role in making the fingerprinting scheme secure against coalitions of pirates. We provide here better bounds, constructions, and generalizations for these codes.

Gérard Cohen, Hans Georg Schaathun
Encoding Homotopy of Paths in the Plane

We study the problem of encoding the homotopy of simple paths in the plane. We show that the homotopy of a simple path with k edges in the presence of n obstacles can be encoded using O(n log(n + k)) bits. The bound is tight if $k=\Omega(n^{1+\varepsilon})$. We present an efficient algorithm for encoding the homotopy of a path. The algorithm can be applied to find homotopic paths among a set of simple paths. We show that the homotopy of a general (not necessarily simple) path can be encoded using O(k log n) bits. This bound is also tight. The code is based on a homotopic minimum-link path, and we present output-sensitive algorithms for computing both the path and the code.

Sergei Bespamyatnikh
A Unified Approach to Coding Labeled Trees

We consider the problem of coding labeled trees by means of strings of node labels, and we present a unified approach based on a reduction of both coding and decoding to integer (radix) sorting. Applying this approach to four well-known codes introduced by Prüfer [18], Neville [17], and Deo and Micikevicius [5], we close some open problems. With respect to coding, our general sequential algorithm requires optimal linear time, thus solving the problem of optimally computing the second code presented by Neville. The algorithm can be parallelized on the EREW PRAM model, so as to work in O(log n) time using O(n) or $O(n\sqrt{\log n})$ operations, depending on the code. With respect to decoding, the problem of finding an optimal sequential algorithm for the second Neville code was also open, and our general scheme solves it. Furthermore, in a parallel setting our scheme yields the first efficient decoding algorithms for the codes in [5] and [17].

Saverio Caminiti, Irene Finocchi, Rossella Petreschi
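For concreteness, here is a naive quadratic sketch of the first of these codes, the classical Prüfer code (our illustration only; the paper's contribution is a unified optimal linear-time scheme, which this is not):

```python
from collections import defaultdict

def pruefer_encode(n, edges):
    """Prüfer sequence of a labeled tree on nodes 1..n (n >= 2):
    repeatedly strip the smallest leaf, recording its neighbour."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    code = []
    for _ in range(n - 2):
        leaf = min(v for v in adj if len(adj[v]) == 1)
        nb = next(iter(adj[leaf]))
        code.append(nb)
        adj[nb].remove(leaf)
        del adj[leaf]
    return code

def pruefer_decode(code):
    """Rebuild the tree edges from a Prüfer sequence."""
    n = len(code) + 2
    degree = [1] * (n + 1)
    for v in code:
        degree[v] += 1
    edges = []
    for v in code:
        leaf = min(u for u in range(1, n + 1) if degree[u] == 1)
        edges.append((leaf, v))
        degree[leaf] -= 1
        degree[v] -= 1
    u, v = [x for x in range(1, n + 1) if degree[x] == 1]
    edges.append((u, v))
    return edges
```

Each `min` scan makes this O(n^2); replacing it with radix sorting of labels is the kind of reduction the paper exploits to reach linear time.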
Cost-Optimal Trees for Ray Shooting

Predicting and optimizing the performance of ray shooting is a very important problem in computer graphics due to the severe computational demands of ray tracing and other applications, e.g., radio propagation simulation. Aronov and Fortune were the first to guarantee an overall performance within a constant factor of optimal in the following model of computation: build a triangulation compatible with the scene, and shoot rays by locating the origin and traversing until a hit is found. Triangulations are not a very popular model in computer graphics, but space decompositions like kd-trees and octrees are used routinely. Aronov et al. [1] developed a cost measure for such decompositions, and proved that it reliably predicts the average cost of ray shooting. In this paper, we address the corresponding optimization problem, and more generally d-dimensional trees with the cost measure of [1] as the optimization criterion. We give a construction of quadtrees and octrees which yields cost O(M), where M is the infimum of the cost measure over all trees, for points or for (d-1)-simplices. Sometimes a balance condition is important. (Informally, balance ensures that adjacent leaves have similar size.) We also show that rebalancing does not affect the cost by more than a constant multiplicative factor, for both points and (d-1)-simplices. To our knowledge, these are the only results that provide performance guarantees within a constant factor of optimal for 3-dimensional ray shooting in the octree model of computation.

Hervé Brönnimann, Marc Glisse
Packing Problems with Orthogonal Rotations

In this extended abstract, we present approximation algorithms for the following packing problems: the strip packing problem, the two-dimensional bin packing problem, the three-dimensional strip packing problem, and the three-dimensional bin packing problem. For all these problems, we consider orthogonal packings where 90° rotations are allowed. The algorithms we show for these problems have asymptotic performance bounds 1.613, 2.64, 2.76 and 4.89, respectively. We also present an algorithm for the z-oriented three-dimensional packing problem with asymptotic performance bound 2.64. To our knowledge the bounds presented here are the best known for each problem.

Flavio Keidi Miyazawa, Yoshiko Wakabayashi
Combinatorial Problems on Strings with Applications to Protein Folding

We consider the problem of protein folding in the HP model on the 3D square lattice. This problem is combinatorially equivalent to folding a string of 0’s and 1’s so that the string forms a self-avoiding walk on the lattice and the number of adjacent pairs of 1’s is maximized. The previously best-known approximation algorithm for this problem has a guarantee of $\frac{3}{8}=.375$ [HI95]. In this paper, we first present a new $\frac{3}{8}$-approximation algorithm for the 3D folding problem that improves on the absolute approximation guarantee of the previous algorithm. We then show a connection between the 3D folding problem and a basic combinatorial problem on binary strings, which may be of independent interest. Given a binary string in { a,b }*, we want to find a long subsequence of the string in which every sequence of consecutive a’s is followed by at least as many consecutive b’s. We show a non-trivial lower-bound on the existence of such subsequences. Using this result, we obtain an algorithm with a slightly improved approximation ratio of at least .37501 for the 3D folding problem. All of our algorithms run in linear time.

Alantha Newman, Matthias Ruhl
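The combinatorial objective above is easy to state in code: given a binary string and a candidate self-avoiding walk on the cubic lattice, count the non-consecutive pairs of 1's that land on adjacent lattice points. A small scoring sketch (our illustration; names are our own):

```python
def hp_score(s, walk):
    """Score of an HP folding on the 3D cubic lattice: the number of
    pairs of 1's, non-consecutive along the chain, that sit on adjacent
    lattice points. `walk` lists one integer (x, y, z) per character."""
    assert len(s) == len(walk) and len(set(walk)) == len(walk)  # self-avoiding
    ones = [i for i, c in enumerate(s) if c == "1"]
    score = 0
    for a in range(len(ones)):
        for b in range(a + 1, len(ones)):
            i, j = ones[a], ones[b]
            if j - i == 1:
                continue  # consecutive in the chain; does not count
            dist = sum(abs(p - q) for p, q in zip(walk[i], walk[j]))
            if dist == 1:  # L1-distance 1 means lattice neighbours
                score += 1
    return score
```

Folding the string "1001" into a unit square (embedded in the z = 0 plane) brings the two 1's adjacent, scoring 1.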
Measurement Errors Make the Partial Digest Problem NP-Hard

The Partial Digest problem asks for the coordinates of m points on a line such that the pairwise distances of the points form a given multiset of $\binom{m}{2}$ distances. Partial Digest is a well-studied problem with important applications in the physical mapping of DNA molecules. Its computational complexity status is open. Input data for Partial Digest from real-life experiments are always prone to error, which suggests studying variations of Partial Digest that take this fact into account. In this paper, we study the computational complexity of the variation of Partial Digest in which each distance is known only up to some error, due to experimental inaccuracies. The error can be specified either by an additive offset or by a multiplicative factor. We show that both types of error make the Partial Digest problem strongly NP-complete, by giving reductions from 3-Partition. In the case of relative errors, we show that the problem is hard to solve even for constant relative error.

Mark Cieliebak, Stephan Eidenbenz
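The error-free version of the problem can be illustrated with the classical backtracking search (usually credited to Skiena) that repeatedly places the largest unexplained distance; this is our sketch of the basic problem, not of the paper, which concerns the noisy variants it proves NP-hard:

```python
from collections import Counter

def pairwise_distances(points):
    """Multiset of pairwise distances of a point set on the line."""
    pts = sorted(points)
    return Counter(pts[j] - pts[i]
                   for i in range(len(pts)) for j in range(i + 1, len(pts)))

def partial_digest(distances):
    """Backtracking search for error-free Partial Digest: place the
    largest remaining distance at one of its two possible positions.
    Returns a sorted point set realizing the multiset, or None."""
    dists = Counter(distances)
    width = max(dists)
    dists[width] -= 1
    points = {0, width}

    def place():
        if sum(dists.values()) == 0:
            return sorted(points)
        y = max(d for d, c in dists.items() if c > 0)
        for cand in sorted({y, width - y}):
            if cand in points:
                continue
            used = Counter(abs(cand - p) for p in points)
            if all(dists[d] >= c for d, c in used.items()):
                dists.subtract(used)   # tentatively explain these distances
                points.add(cand)
                result = place()
                if result is not None:
                    return result
                points.remove(cand)    # backtrack
                dists.update(used)
        return None

    return place()
```

Note the answer is only unique up to reflection, so correctness is checked by comparing distance multisets rather than point sets.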
Designing Small Keyboards Is Hard

We study the problem of placing the symbols of an alphabet onto the minimum number of keys of a small keyboard so that any word of a given dictionary can be recognized unambiguously just by looking at the corresponding sequence of pressed keys. This problem is motivated by the design of small keyboards for mobile devices. We show that the problem is hard in general, and NP-complete even if we only wish to decide whether two keys suffice. We also consider two variants of the problem. In the first, symbols on the same key must be contiguous in an ordered alphabet. The second variant is a fixed-parameter version of the first that minimizes a well-chosen measure of ambiguity in the recognition of words for a given number of keys. Hardness and approximability results are given.

Jean Cardinal, Stefan Langerman
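The ambiguity condition is straightforward to check for a fixed assignment of symbols to keys, much as with predictive text on phone keypads. A small sketch (our illustration; names are hypothetical):

```python
def key_sequences(dictionary, layout):
    """Group dictionary words by the key sequence they produce under
    `layout`, a mapping from symbol to key number."""
    groups = {}
    for word in dictionary:
        seq = tuple(layout[c] for c in word)
        groups.setdefault(seq, []).append(word)
    return groups

def is_unambiguous(dictionary, layout):
    """A layout works iff no two distinct dictionary words press the
    same sequence of keys."""
    return all(len(words) == 1
               for words in key_sequences(dictionary, layout).values())
```

The paper's hardness result says that, over all assignments, minimizing the number of keys subject to `is_unambiguous` is NP-hard, even for two keys.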
Metric Structures in L1: Dimension, Snowflakes, and Average Distortion

We study the metric properties of finite subsets of L1. The analysis of such metrics is central to a number of important algorithmic problems involving the cut structure of weighted graphs, including the Sparsest Cut problem, one of the most compelling open problems in the field of approximation. Additionally, many open questions in geometric non-linear functional analysis involve the properties of finite subsets of L1. We present some new observations concerning the relation of L1 to dimension, topology, and Euclidean distortion. We show that every n-point subset of L1 embeds into L2 with average distortion $O(\sqrt{\log n})$, yielding the first evidence that the conjectured worst-case bound of $O(\sqrt{\log n})$ is valid. We also address the issue of dimension reduction in Lp for p ∈ (1,2). We resolve a question left open in [1] about the impossibility of linear dimension reduction in the above cases, and we show that the example of [2,3] cannot be used to prove a lower bound for the non-linear case. This is accomplished by exhibiting constant-distortion embeddings of snowflaked planar metrics into Euclidean space.

James R. Lee, Manor Mendel, Assaf Naor
Nash Equilibria via Polynomial Equations

We consider the problem of computing a Nash equilibrium in multiple-player games. It is known that there exist games in which all the equilibria have irrational entries in their probability distributions [19]. This suggests that either we should look for symbolic representations of equilibria or we should focus on computing approximate equilibria. We show that every finite game has an equilibrium in which all the entries in the probability distributions are algebraic numbers and hence can be finitely represented. We also propose an algorithm which computes an approximate equilibrium in the following sense: the strategies output by the algorithm are close, with respect to the ℓ∞-norm, to those of an exact Nash equilibrium, and the players have only a negligible incentive to deviate to another strategy. The running time of the algorithm is exponential in the number of strategies and polynomial in the digits of accuracy. We obtain similar results for approximating market equilibria in the neoclassical exchange model under certain assumptions.

Richard J. Lipton, Evangelos Markakis
Minimum Latency Tours and the k-Traveling Repairmen Problem

Given an undirected graph G=(V,E) and a source vertex s ∈ V, the k-traveling repairman (KTR) problem, also known as the minimum latency problem, asks for k tours, each starting at s and together covering all the vertices (customers), such that the sum of the latencies experienced by the customers is minimum. The latency of a customer p is defined as the distance (time) traveled before visiting p for the first time. Previous literature on the KTR problem has considered the version in which the repair time of a customer is assumed to be zero for latency calculations. We consider a generalization of the problem in which each customer has an associated repair time. In this paper, we present constant-factor approximation algorithms for this problem and its variants.

Raja Jothi, Balaji Raghavachari
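The objective can be made concrete in a few lines: given k tours from a common depot, sum each customer's arrival time, optionally accounting for the repair times of earlier customers on the same tour (our illustration of the objective, not the paper's algorithm; names are ours):

```python
def total_latency(tours, dist, repair=None):
    """Sum of customer latencies for k tours sharing one depot.

    `tours` is a list of vertex lists (depot first); `dist` maps an
    edge (u, v) to its travel time; `repair` optionally maps a customer
    to its repair time, which delays all later customers on the tour."""
    repair = repair or {}
    total = 0
    for tour in tours:
        t = 0
        for u, v in zip(tour, tour[1:]):
            t += repair.get(u, 0)  # servicing the previous stop takes time
            t += dist[(u, v)]      # then travel to the next customer
            total += t             # v is first reached at time t
    return total
```

With all repair times zero this reduces to the classical minimum latency objective that the prior literature studied.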
Server Scheduling in the Weighted ℓp Norm

We explain how the apparent goals of the Unix CPU scheduling policy can be formalized using the weighted ℓp norm of flows. We then show that the online algorithm Highest Density First (HDF) and the nonclairvoyant algorithm Weighted Shortest Elapsed Time First (WSETF) are almost fully scalable; that is, they are (1 + ε)-speed O(1)-competitive. Even for unit weights, it was known that there is no O(1)-competitive algorithm. We also give a generic way to transform an algorithm A into an algorithm B such that if A is O(1)-speed O(1)-competitive with respect to some ℓp norm of flow, then B is O(1)-competitive with respect to the ℓp norm of completion times. Further, if A is online (nonclairvoyant) then B is online (nonclairvoyant). Combining these results gives an O(1)-competitive nonclairvoyant algorithm for ℓp norms of completion times.

Nikhil Bansal, Kirk Pruhs
An Improved Communication-Randomness Tradeoff

Two processors receive inputs X and Y respectively. The communication complexity of a function f is the number of bits (as a function of the input size) that the processors have to exchange to compute f(X,Y) for worst-case inputs X and Y. The List-Non-Disjointness problem ($X=(x^{1},\ldots,x^{n})$, $Y=(y^{1},\ldots,y^{n})$, $x^{j},y^{j}\in {\rm Z}^{n}_{2}$; decide whether $\exists_{j}\,x^{j}=y^{j}$) exhibits a maximal discrepancy between deterministic ($\Theta(n^2)$) and Las Vegas ($\Theta(n)$) communication complexity. Fleischer, Jung, and Mehlhorn (1995) have shown that if a Las Vegas algorithm expects to communicate Ω(n log n) bits, then this can be done with a small number of coin tosses. We extend this result, with improved randomness efficiency, to the (much more interesting) case of efficient algorithms (i.e., with linear communication complexity). For any R ∈ ℕ, R coin tosses are sufficient for $O(n+n^{2}/2^{R})$ transmitted bits.

Martin Fürer
Distributed Games and Distributed Control for Asynchronous Systems

We introduce distributed games over asynchronous transition systems to model a distributed controller synthesis problem. A game involves two teams and is not turn-based: several players of both teams may be enabled simultaneously. We define distributed strategies based on the causal view that players have of the system. We reduce the problem of finding a winning distributed strategy with a given memory to finding a memoryless winning distributed strategy in a larger distributed game. We reduce the latter problem to finding a strategy in a classical two-player game. This allows us to transfer results from the sequential case to this distributed setting.

Paul Gastin, Benjamin Lerman, Marc Zeitoun
A Simplified and Dynamic Unified Structure

The unified property specifies that a comparison-based search structure can quickly find an element near a recently accessed element. Iacono [Iac01] introduced this property and developed a static search structure that achieves the bound. We present a dynamic search structure that achieves the unified property and that is simpler than Iacono's structure. Among all comparison-based dynamic search structures, ours has the best proven bound on running time.

Mihai Bădoiu, Erik D. Demaine
Another View of the Gaussian Algorithm

We introduce here a rewrite system in the group of unimodular matrices, i.e., matrices with integer entries and determinant equal to ±1. We use this rewrite system to precisely characterize the mechanism of the Gaussian algorithm, which finds shortest vectors in a two-dimensional lattice given by any basis. Putting together the algorithmics of lattice reduction and rewrite system theory, we propose a new worst-case analysis of the Gaussian algorithm. There is already an optimal worst-case bound for a variant of the Gaussian algorithm, due to Vallée [16], who used essentially geometric considerations. Our analysis generalizes her result to the case of the usual Gaussian algorithm. An interesting point of our work is its possible (though not easy) generalization to the same problem in higher dimensions, in order to exhibit a tight upper bound on the number of iterations of LLL-like reduction algorithms in the worst case. Moreover, our method seems applicable to the analysis of other families of algorithms. As an illustration, the analysis of sorting algorithms is briefly developed in the last section of the paper.

Ali Akhavi, Céline Moreira Dos Santos
Generating Maximal Independent Sets for Hypergraphs with Bounded Edge-Intersections

Given a finite set V and integers k ≥ 1 and r ≥ 0, denote by $\mathbb{A}(k,r)$ the class of hypergraphs $\mathcal{A}\subseteq 2^{V}$ with (k,r)-bounded intersections, i.e., in which the intersection of any k distinct hyperedges has size at most r. We consider the problem MIS($\mathcal{A},\mathcal{I}$): given a hypergraph $\mathcal{A}$ and a subfamily $\mathcal{I}\subseteq\mathcal{I}(\mathcal{A})$ of its maximal independent sets (MIS) $\mathcal{I}(\mathcal{A})$, either extend this subfamily by constructing a new MIS $I \in \mathcal{I}(\mathcal{A})\setminus \mathcal{I}$, or prove that there are no more MIS, that is, $\mathcal{I} = \mathcal{I}(\mathcal{A})$. We show that for hypergraphs $\mathcal{A}\in \mathbb{A}(k,r)$ with k + r ≤ const, problem MIS($\mathcal{A},\mathcal{I}$) is NC-reducible to the problem MIS($\mathcal{A}',\emptyset$) of generating a single MIS for a partial subhypergraph $\mathcal{A}'$ of $\mathcal{A}$. In particular, for this class of hypergraphs, we get an incremental polynomial algorithm for generating all MIS. Furthermore, combining this result with the currently known algorithms for finding a single maximal independent set of a hypergraph, we obtain efficient parallel algorithms for incrementally generating all MIS for hypergraphs in the classes $\mathbb{A}(1,c)$, $\mathbb{A}(c,0)$, and $\mathbb{A}(2,1)$, where c is a constant. We also show that, for $\mathcal{A}\in \mathbb{A}(k,r)$ with k + r ≤ const, the problem of generating all MIS of $\mathcal{A}$ can be solved in incremental polynomial time with space polynomial only in the size of $\mathcal{A}$.

Endre Boros, Khaled Elbassioni, Vladimir Gurvich, Leonid Khachiyan
Rooted Maximum Agreement Supertrees

Given a set $\mathcal{T}$ of rooted, unordered trees, where each $T_{i} \in \mathcal{T}$ is distinctly leaf-labeled by a set $\Lambda(T_i)$ and where the sets $\Lambda(T_i)$ may overlap, the maximum agreement supertree problem (MASP) is to construct a distinctly leaf-labeled tree Q with leaf set $\Lambda(Q)\subseteq \bigcup_{T_i\in\mathcal{T}}\Lambda(T_i)$ such that $|\Lambda(Q)|$ is maximized and, for each $T_{i}\in \mathcal{T}$, the topological restriction of $T_i$ to $\Lambda(Q)$ is isomorphic to the topological restriction of Q to $\Lambda(T_i)$. Let $n = |\bigcup_{T_i\in\mathcal{T}}\Lambda(T_i)|$, $k=|\mathcal{T}|$, and $D=\max_{T_i\in \mathcal{T}}\{\deg(T_i)\}$. We first show that MASP with k = 2 can be solved in $O(\sqrt{D}\,n\log(2n/D))$ time, which is $O(n\log n)$ when D = O(1) and $O(n^{1.5})$ when D is unrestricted. We then present an algorithm for MASP with D = 2 whose running time is polynomial if k = O(1). On the other hand, we prove that MASP is NP-hard for any fixed k ≥ 3 when D is unrestricted, and also NP-hard for any fixed D ≥ 2 when k is unrestricted, even if each input tree is required to contain at most three leaves. Finally, we describe a polynomial-time (n/log n)-approximation algorithm for MASP.

Jesper Jansson, Joseph H. -K. Ng, Kunihiko Sadakane, Wing-Kin Sung
Complexity of Cycle Length Modularity Problems in Graphs

The even cycle problem for both undirected [Tho88] and directed [RST99] graphs has been the topic of intense research in the last decade. In this paper, we study the computational complexity of cycle length modularity problems. Roughly speaking, in a cycle length modularity problem, given an input (undirected or directed) graph, one has to determine whether the graph has a cycle C of a specific length (or one of several different lengths), modulo a fixed integer. We denote the two families (one for undirected graphs and one for directed graphs) of problems by (S,m)-UC and (S,m)-DC, where m ∈ ℕ and S ⊆ {0,1, ..., m − 1}. (S,m)-UC (respectively, (S,m)-DC) is defined as follows: Given an undirected (respectively, directed) graph G, is there a cycle in G whose length, modulo m, is a member of S? In this paper, we fully classify (i.e., as either polynomial-time solvable or as NP-complete) each problem (S,m)-UC such that 0 ∈ S and each problem (S,m)-DC such that 0 ∉ S. We also give a sufficient condition on S and m for the following problem to be polynomial-time computable: (S,m)-UC such that 0 ∉ S.

Edith Hemaspaandra, Holger Spakowski, Mayur Thakur
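For tiny graphs, the question of whether some cycle length modulo m lies in S can be settled by brute-force enumeration of simple cycles (our exponential-time sketch, useful only as an illustration; the paper's classification concerns polynomial-time solvability):

```python
def has_cycle_mod(adj, m, S):
    """Check whether an undirected graph (dict: vertex -> set of
    neighbours) contains a simple cycle whose length mod m lies in S.
    Exponential-time brute force; only for tiny instances."""
    def extend(path):
        last = path[-1]
        for nxt in adj[last]:
            if nxt == path[0] and len(path) >= 3:
                if len(path) % m in S:   # closed a simple cycle
                    return True
            elif nxt not in path and nxt > path[0]:
                # canonical form: each cycle is started at its smallest vertex
                if extend(path + [nxt]):
                    return True
        return False

    return any(extend([v]) for v in sorted(adj))
```

For instance, a triangle answers yes to ({1}, 2) (an odd cycle) and no to ({0}, 2), matching the classical even-cycle question as the (S,m) = ({0},2) case.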
Procedural Semantics for Fuzzy Disjunctive Programs on Residuated Lattices

In this paper, we present a procedural semantics for fuzzy disjunctive programs – sets of graded implications of the form $(h_1 \vee \cdots \vee h_n \longleftarrow b_1 \,\&\, \cdots \,\&\, b_m,\ c)$ with $n > 0$, $m \geq 0$, where the $h_i$ and $b_j$ are atoms and c is a truth degree from a complete residuated lattice $L = (L, \leq, \vee, \wedge, *, \Longrightarrow, 0, 1)$. A graded implication can be understood as a means of representing incomplete and uncertain information; the incompleteness is formalised by the consequent disjunction of the implication, and the uncertainty by its truth degree. We generalise the results for Boolean lattices in [3] to the case of residuated ones. We take into consideration the non-idempotent triangular norm *, instead of the idempotent ∧, as the truth function for the strong conjunction &. Finally, we show that the proposed procedural semantics coincides with the generalised declarative, fixpoint semantics from [4].

Dušan Guller
A Proof System and a Decision Procedure for Equality Logic

Equality logic with uninterpreted functions is used for proving the equivalence of, or refinement between, systems (hardware verification, compiler translation, etc.). Current approaches for deciding this type of formula transform an equality formula into a propositional one of larger size, to which any standard SAT checker can then be applied. We give an approach for deciding the satisfiability of equality logic formulas (E-SAT) in conjunctive normal form. Central to our approach is a single proof rule called ER. For this single rule we prove soundness and completeness. Based on this rule, we propose a complete procedure for E-SAT and prove its correctness. Applying our procedure to a variation of the pigeonhole formula yields polynomial complexity, contrary to earlier approaches to E-SAT.

Olga Tveretina, Hans Zantema
Approximating the Expressive Power of Logics in Finite Models

We present a probability logic (essentially a first-order language extended with quantifiers that count the fraction of elements in a model that satisfy a first-order formula) which, on the one hand, captures uniform circuit classes such as AC0 and TC0 over arithmetic models, namely finite structures with linear order and arithmetic relations, and whose semantics, on the other hand, can with respect to our arithmetic models be closely approximated by interpreting its formulas on finite structures where all relations (including the order) are restricted to be "modular" (i.e., to act subject to an integer modulus). In order to give a precise measure of the proximity between satisfaction of a formula in an arithmetic model and satisfaction of the same formula in the "approximate" model, we define approximate formulas and work with a notion of approximate truth. We also indicate how to enhance the expressive power of our probability logic in order to capture polynomial-time decidable queries. There are various motivations for this work. As of today, no logical description is known for any computational complexity class below NP that does not require a built-in linear order. Also, it is widely recognized that many model-theoretic techniques for showing definability in logics on finite structures become almost useless when order is present. Hence, if we want to obtain significant lower-bound results in computational complexity via logical descriptions, we ought to find ways of bypassing the ordering restriction. With this work we take steps towards understanding how well we can approximate, without a true order, the expressive power of logics that capture complexity classes on ordered structures.

Argimiro Arratia, Carlos E. Ortiz
Arithmetic Circuits for Discrete Logarithms

We introduce a new model of “generic discrete log algorithms” based on arithmetic circuits. It is conceptually simpler than previous ones, is actually applicable to the natural representations of the popular groups, and we can derive upper and lower bounds that differ only by a constant factor, namely 10.

Joachim von zur Gathen
On the Competitiveness of AIMD-TCP within a General Network

This paper presents a new mathematical model of AIMD (Additive Increase Multiplicative Decrease) TCP for general networks that we believe is better than those previously used when it is driven by bottleneck capacities. Extending the paper by Edmonds, Datta, and Dymond that solves the single bottleneck case, we view AIMD as a distributed scheduling algorithm and prove that with extra resources, it is competitive against the optimal global algorithm in minimizing the average flow time of the jobs.

Jeff Edmonds
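The AIMD dynamics for a single flow through one bottleneck can be sketched in a few lines (our toy simulation with hypothetical parameter names; the paper analyzes the far harder general-network, multi-flow case):

```python
def aimd_trace(capacity, rounds, alpha=1.0, beta=0.5, w0=1.0):
    """Additive-Increase Multiplicative-Decrease window of one flow at
    a single bottleneck: grow by alpha per round while under capacity,
    cut by the factor beta once the window exceeds the bottleneck."""
    w, trace = w0, []
    for _ in range(rounds):
        if w > capacity:
            w *= beta   # multiplicative decrease on congestion signal
        else:
            w += alpha  # additive increase otherwise
        trace.append(w)
    return trace
```

The resulting sawtooth oscillates around the bottleneck capacity; viewing many such flows as a distributed scheduler, and comparing against a global optimum with extra resources, is the competitive-analysis framing of the paper.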
Gathering Non-oblivious Mobile Robots

We study the Gathering Problem, where we want to gather a set of n autonomous mobile robots at a point in the plane. This point is not fixed in advance. The robots are very weak, in the sense that they have no common coordinate system, no identities, no central coordination, no means of direct communication, and no synchronization. Each robot can only sense the positions of the other robots, perform a deterministic algorithm, and then move towards a destination point. It is known that these simple robots cannot gather if they have no additional capabilities. In this paper, we show that the Gathering Problem can be solved if the robots are non-oblivious, i.e., if they are equipped with memory.

Mark Cieliebak
Bisecting and Gossiping in Circulant Graphs

Circulant graphs are popular network topologies that arise in distributed computing. In this paper, we show that, for circulant graphs, a simple condition for isomorphism, combined with lattice reduction algorithms, can be used to develop efficient distributed algorithms. We improve the known upper bounds on the vertex-bisection (respectively, the edge-bisection) width of circulant graphs. Our method is novel and provides a polynomial-time algorithm to partition the set of vertices (respectively, the set of edges) to obtain these bounds and the respective sets. By exploiting the knowledge of the bisection width of this topology, we introduce generic distributed algorithms to solve the gossip problem in these networks. We present lower and upper bounds on the number of rounds in the vertex-disjoint and the edge-disjoint paths communication models when the number of nodes is prime.

Bernard Mans, Igor Shparlinski
Multiple Mobile Agent Rendezvous in a Ring

We study the rendezvous search problem for k ≥ 2 mobile agents in an n-node ring. Rather than using randomized algorithms or different deterministic algorithms to break the symmetry that often arises in this problem, we investigate how the mobile agents can use identical stationary tokens to break symmetry and solve the rendezvous problem. After deriving the conditions under which identical stationary tokens can be used to break symmetry, we present several solutions to the rendezvous search problem. We derive lower bounds on the memory required for mobile agent rendezvous and discuss the relationship between rendezvous and leader election for mobile agents.

Paola Flocchini, Evangelos Kranakis, Danny Krizanc, Nicola Santoro, Cindy Sawchuk
Global Synchronization in Sensornets

Time synchronization is necessary in many distributed systems, but achieving synchronization in sensornets, which combine stringent precision requirements with severe resource constraints, is particularly challenging. This challenge has been met by the recent Reference-Broadcast Synchronization (RBS) proposal, which provides on-demand pairwise synchronization with low overhead and high precision. In this paper we introduce a model of the basic RBS synchronization paradigm. Within the context of this model we characterize the optimally precise clock synchronization algorithm and establish its global consistency. In the course of this analysis we point out unexpected connections between optimal clock synchronization, random walks, and resistive networks, and present a polynomial-time approximation scheme for the problem of calculating the effective resistance in a network based on min-cost flow. We also sketch a polynomial-time algorithm for finding a schedule of data acquisition giving the optimal trade-off between energy consumption and precision of clock synchronization. We also discuss synchronization in the presence of clock skews. In ongoing work we are adapting our synchronization algorithm for execution in a network of seismic sensors that requires global clock consistency.

Jeremy Elson, Richard M. Karp, Christos H. Papadimitriou, Scott Shenker
Backmatter
Metadata
Title
LATIN 2004: Theoretical Informatics
edited by
Martín Farach-Colton
Copyright year
2004
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-24698-5
Print ISBN
978-3-540-21258-4
DOI
https://doi.org/10.1007/b95852