
2011 | Book

Fundamentals of Computation Theory

18th International Symposium, FCT 2011, Oslo, Norway, August 22-25, 2011. Proceedings

Edited by: Olaf Owe, Martin Steffen, Jan Arne Telle

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 18th International Symposium Fundamentals of Computation Theory, FCT 2011, held in Oslo, Norway, in August 2011. The 28 revised full papers presented were carefully reviewed and selected from 78 submissions. FCT 2011 focused on algorithms, formal methods, and emerging fields, such as ad hoc, dynamic and evolving systems; algorithmic game theory; computational biology; foundations of cloud computing and ubiquitous systems; and quantum computation.

Table of Contents

Frontmatter
The Rewriting Logic Semantics Project: A Progress Report

Rewriting logic is an executable logical framework well suited for the semantic definition of languages. Any such framework has to be judged by its effectiveness to bridge the existing gap between language definitions on the one hand, and language implementations and language analysis tools on the other. We give a progress report on how researchers in the rewriting logic semantics project are narrowing the gap between theory and practice in areas such as: modular semantic definitions of languages; scalability to real languages; support for real time; semantics of software and hardware modeling languages; and semantics-based analysis tools such as static analyzers, model checkers, and program provers.

José Meseguer, Grigore Roşu
Impugning Randomness, Convincingly

John organized a state lottery, and his wife won the main prize. One may feel that the event of her winning isn’t particularly random, but is it possible to convincingly impugn the alleged randomness, in cases like this, in a fair court of law? We develop an approach to do just that.

We start with a principle that bridges probability theory and the real world. The principle is known under different names in the history of probability theory. It is often formulated thus: events of sufficiently small probability do not happen. This formulation of the principle is too liberal, however; events of small probability happen all the time. The common way to tighten the principle is to restrict attention to predicted events of sufficiently small probability. Further, we had better restrict attention to a single probabilistic experiment: if you repeat the experiment sufficiently many times, then there is a good chance that the predicted event of small probability will happen.

Yuri Gurevich
Kernelization: An Overview

Kernelization is a mathematical framework for the study of polynomial-time pre-processing. Over the last few years, kernelization has received considerable attention. In this talk I will survey recent developments in the field and highlight some interesting research directions.

Daniel Lokshtanov
Almost Transparent Short Proofs for NPℝ

We study probabilistically checkable proofs (PCPs) in the real number model of computation as introduced by Blum, Shub, and Smale. Our main result is NPℝ = PCPℝ(O(log n), polylog(n)), i.e., each decision problem in NPℝ is accepted by a verifier that generates O(log n) many random bits and reads polylog(n) many proof components. This is the first non-trivial characterization of NPℝ by real PCPℝ-classes. As a byproduct this result implies as well a characterization of real nondeterministic exponential time via NEXPℝ = PCPℝ(poly(n), poly(n)).

Klaus Meer
The Effect of Homogeneity on the Complexity of k-Anonymity

The NP-hard k-Anonymity problem asks, given an n×m-matrix M over a fixed alphabet and an integer s > 0, whether M can be made k-anonymous by suppressing (blanking out) at most s entries. A matrix M is said to be k-anonymous if for each row r in M there are at least k − 1 other rows in M which are identical to r. Complementing previous work, we introduce two new "data-driven" parameterizations for k-Anonymity, namely the number $t_{\textrm{in}}$ of different input rows and the number $t_{\textrm{out}}$ of different output rows, both modeling aspects of data homogeneity. We show that k-Anonymity is fixed-parameter tractable for the parameter $t_{\textrm{in}}$, and that it is NP-hard even for $t_{\textrm{out}} = 2$ and alphabet size four. Notably, our fixed-parameter tractability result implies that k-Anonymity can be solved in linear time when $t_{\textrm{in}}$ is a constant. Our results also extend to some interesting generalizations of k-Anonymity.

Robert Bredereck, André Nichterlein, Rolf Niedermeier, Geevarghese Philip
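The k-anonymity condition in the abstract above translates directly into code. The sketch below is illustrative only: the function names and the use of '*' as the suppression symbol are my own choices, not from the paper. A matrix is k-anonymous when every row occurs at least k times, and the cost of a solution is the number of suppressed entries.

```python
from collections import Counter

def is_k_anonymous(rows, k):
    # every row pattern must occur at least k times
    counts = Counter(tuple(r) for r in rows)
    return all(c >= k for c in counts.values())

def suppression_cost(original, anonymized):
    # number of entries blanked out (replaced by '*') to reach the
    # anonymized matrix
    return sum(1 for orow, arow in zip(original, anonymized)
               for o, a in zip(orow, arow) if a == '*' and o != '*')
```

For instance, a matrix with rows ('a','b'), ('a','b') is 2-anonymous, while ('a','b'), ('a','c') is not until one entry per row is suppressed.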
On the Optimal Compression of Sets in PSPACE

We show that if DTIME[2^{O(n)}] is not included in DSPACE[2^{o(n)}], then, for every set B in PSPACE, all strings x in B of length n can be represented by a string compressed(x) of length at most log(|B^{=n}|) + O(log n), such that a polynomial-time algorithm, given compressed(x), can distinguish x from all the other strings in B^{=n}. Modulo the O(log n) additive term, this achieves the information-theoretical optimum for string compression.

Marius Zimand
Computational Randomness from Generalized Hardcore Sets

The seminal hardcore lemma of Impagliazzo states that for any mildly-hard Boolean function f, there is a subset of inputs, called the hardcore set, on which the function is extremely hard, almost as hard as a random Boolean function. This implies that the output distribution of f given a random input looks like a distribution with some statistical randomness. Can we have something similar for hard functions with several output bits? Can we say that the output distribution of such a general function given a random input looks like a distribution containing several bits of randomness? If so, one can simply apply any statistical extractor to extract computational randomness from the output of f. However, the conventional wisdom tells us to apply extractors with some additional reconstruction property, instead of just any extractor. Does this mean that there is no analogous hardcore lemma for general functions?

We show that a general hard function does indeed have some kind of hardcore set, but it comes at the price of a security loss which is proportional to the number of output values. More precisely, consider a hard function f : {0, 1}^n → [V] = {1,…,V} such that any circuit of size s can only compute f correctly on at most a $\frac{1}{L}(1-\gamma)$ fraction of inputs, for some L ∈ [1, V − 1] and γ ∈ (0,1). Then we show that for some I ⊆ [V] with |I| = L + 1, there exists a hardcore set H_I ⊆ f^{−1}(I) with density $\gamma/{V \choose L+1}$ such that any circuit of some size s′ can only compute f correctly on at most a $\frac{1+\epsilon}{L+1}$ fraction of inputs in H_I. Here, s′ is smaller than s by some poly(V, 1/ε, log(1/γ)) factor, which results in a security loss of such a factor. We show that it is basically impossible to guarantee a much larger hardcore set or a much smaller security loss. Finally, we show how our hardcore lemma can be used for extracting computational randomness from general hard functions.

Chia-Jung Lee, Chi-Jen Lu, Shi-Chun Tsai
Data Reduction for Graph Coloring Problems

This paper studies the kernelization complexity of graph coloring problems with respect to certain structural parameterizations of the input instances. We are interested in how well polynomial-time data reduction can provably shrink instances of coloring problems, in terms of the chosen parameter. It is well known that deciding 3-colorability is already NP-complete, hence parameterizing by the requested number of colors is not fruitful. Instead, we pick up on a research thread initiated by Cai (DAM, 2003), who studied coloring problems parameterized by the modification distance of the input graph to a graph class on which coloring is polynomial-time solvable; for example, parameterizing by the number k of vertex deletions needed to make the graph chordal. We obtain various upper and lower bounds for kernels of such parameterizations of q-Coloring, complementing Cai's study of the time complexity with respect to these parameters.

Bart M. P. Jansen, Stefan Kratsch
Hunting Distributed Malware with the κ-Calculus

The defense of computer systems from malicious software attacks, such as viruses and worms, is a key aspect of computer security. The analogy between malicious software and biological infections suggested to us the use of the κ-calculus, a formalism originally developed for the analysis of biological systems, for the formalization and analysis of malicious software. By modeling the different actors involved in a malicious code attack in the κ-calculus and by simulating their behavior, it is possible to extract important information that can guide the choice of the defense technique to apply.

Mila Dalla Preda, Cinzia Di Giusto
Edge-Matching Problems with Rotations

Edge-matching problems, also called puzzles, are abstractions of placement problems with neighborhood conditions. Pieces with colored edges have to be placed on a board such that adjacent edges have the same color. The problem has gained interest recently with the (now terminated) Eternity II puzzle and new complexity results. In this paper we consider a number of settings which differ in the size of the puzzles and the manipulations allowed on the pieces. We investigate the effect of allowing rotations of the pieces on the complexity of the problem, an aspect that has only been marginally treated so far. We show that some problems have polynomial-time algorithms while others are NP-complete. In particular, we show that allowing rotations in one-row puzzles makes the problem NP-hard. We moreover show that many commonly considered puzzles can be emulated by simple puzzles with quadratic pieces, so that one can restrict oneself to investigating those.

Martin Ebbesen, Paul Fischer, Carsten Witt
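As a small illustration of the setting (my own encoding, not the authors' algorithms), a piece can be modeled as a (top, right, bottom, left) tuple of edge colors; the four clockwise rotations cycle the tuple, and a board of oriented pieces is valid when every pair of adjacent edges carries the same color.

```python
def rotations(piece):
    # piece = (top, right, bottom, left); each 90-degree clockwise turn
    # moves the old left edge to the top
    t, r, b, l = piece
    return [(t, r, b, l), (l, t, r, b), (b, l, t, r), (r, b, l, t)]

def valid(board):
    # board: grid of already-oriented pieces (top, right, bottom, left)
    rows, cols = len(board), len(board[0])
    for i in range(rows):
        for j in range(cols):
            t, r, b, l = board[i][j]
            if j + 1 < cols and r != board[i][j + 1][3]:
                return False          # horizontal neighbor mismatch
            if i + 1 < rows and b != board[i + 1][j][0]:
                return False          # vertical neighbor mismatch
    return True
```

A solver for the rotation variant would search over piece placements and the four rotations of each piece, checking validity incrementally.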
On the Link between Strongly Connected Iteration Graphs and Chaotic Boolean Discrete-Time Dynamical Systems

Chaotic functions are characterized by sensitivity to initial conditions, transitivity, and regularity. Providing new functions with such properties is a real challenge. This work shows that one can associate with any Boolean network a continuous function, whose discrete-time iterations are chaotic if and only if the iteration graph of the Boolean network is strongly connected. Then, sufficient conditions for this strong connectivity are expressed on the interaction graph of this network, leading to a constructive method of chaotic function computation. The whole approach is evaluated in the chaos-based pseudo-random number generation context.

Jacques M. Bahi, Jean-Francois Couchot, Christophe Guyeux, Adrien Richard
A New Bound for 3-Satisfiable Maxsat and Its Algorithmic Application

Let F be a CNF formula with n variables and m clauses. F is t-satisfiable if for any t clauses in F, there is a truth assignment which satisfies all of them. Lieberherr and Specker (1982) and, later, Yannakakis (1994) proved that in each 3-satisfiable CNF formula at least $\frac{2}{3}$ of its clauses can be satisfied by a truth assignment. Yannakakis's proof utilizes the fact that $\frac{2}{3}m$ is a lower bound on the expected number of clauses satisfied by a random truth assignment over a certain distribution. A CNF formula F is called expanding if for every subset X of the variables of F, the number of clauses containing variables of X is not smaller than |X|. In this paper we strengthen the $\frac{2}{3}m$ bound for expanding 3-satisfiable CNF formulas by showing that for every such formula F at least $\frac{2}{3}m + \rho n$ clauses of F can be satisfied by a truth assignment, where ρ (> 0.0019) is a constant. Our proof uses a probabilistic method with a sophisticated distribution for truth values. We use the bound $\frac{2}{3}m + \rho n$ and results on matching autarkies to obtain a new lower bound on the maximum number of clauses that can be satisfied by a truth assignment in any 3-satisfiable CNF formula.

We use our results above to show that the following parameterized problem is fixed-parameter tractable and, moreover, has a kernel with a linear number of variables. In 3-S-MaxSat-AE, we are given a 3-satisfiable CNF formula F with m clauses and asked to determine whether there is an assignment which satisfies at least $\frac{2}{3}m + k$ clauses, where k is the parameter. Note that Mahajan and Raman (1999) asked whether 2-S-MaxSat-AE, the corresponding problem for 2-satisfiable formulas, is fixed-parameter tractable. Crowston and the authors of this paper proved in [9] that 2-S-MaxSat-AE is fixed-parameter tractable and, moreover, has a kernel with a linear number of variables. 2-S-MaxSat-AE appears to be easier than 3-S-MaxSat-AE and, unlike this paper, [9] uses only deterministic combinatorial arguments.

Gregory Gutin, Mark Jones, Anders Yeo
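For readers who want to experiment with bounds of the form $\frac{2}{3}m + k$, the following utilities are a minimal sketch (my own names, assuming the common DIMACS-style encoding where a positive integer literal means the variable is true and a negative one means it is negated); the t-satisfiability check and the paper's kernelization are not reproduced.

```python
from itertools import combinations

def satisfied_count(clauses, true_vars):
    # literal v > 0 is satisfied iff variable |v| is in true_vars
    return sum(1 for clause in clauses
               if any((lit > 0) == (abs(lit) in true_vars) for lit in clause))

def max_sat_bruteforce(clauses, n):
    # exhaustive search over all 2^n assignments of variables 1..n
    return max(satisfied_count(clauses, set(subset))
               for r in range(n + 1)
               for subset in combinations(range(1, n + 1), r))
```

On the toy formula {x1}, {x2}, {¬x1 ∨ ¬x2}, every assignment satisfies at most two of the three clauses, which is exactly the ⌈2m/3⌉ mark.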
On Memoryless Quantitative Objectives

In two-player games on graphs, the players construct an infinite path through the game graph and get a reward computed by a payoff function over infinite paths. Over weighted graphs, the typical and most studied payoff functions compute the limit-average or the discounted sum of the rewards along the path. Besides their simple definition, these two payoff functions enjoy the property that memoryless optimal strategies always exist.

In an attempt to construct other simple payoff functions, we define a class of payoff functions which compute an (infinite) weighted average of the rewards. This new class contains both the limit-average and the discounted sum functions, and we show that they are the only members of this class which induce memoryless optimal strategies, so there are essentially no other simple payoff functions.

Krishnendu Chatterjee, Laurent Doyen, Rohit Singh
Principal Types for Nominal Theories

We define rank 1 polymorphic types for nominal rewrite rules and equations. Typing environments type atoms, variables, and function symbols, and since we follow a Curry-style approach there is no need to fully annotate terms with types. Our system has principal types, and we give rule and axiom formats to guarantee preservation of types under both rewriting and equality reasoning. This is non-trivial because substitution does not avoid capture, so a substituted symbol can—if we are not careful—appear in inconsistent typing contexts.

Elliot Fairweather, Maribel Fernández, Murdoch J. Gabbay
Modifying the Upper Bound on the Length of Minimal Synchronizing Word

A word w is called a synchronizing (recurrent, reset, magic, directable) word of a deterministic finite automaton (DFA) if w sends all states of the automaton to a unique state. In 1964 Jan Černý found a sequence of n-state complete DFAs possessing a minimal synchronizing word of length (n − 1)^2. He conjectured that this is an upper bound on the length of such words for complete DFAs. Nevertheless, the best upper bound, (n^3 − n)/6, was found almost 30 years ago.

We reduce the upper bound on the length of the minimal synchronizing word to n(7n^2 + 6n − 16)/48.

An implemented algorithm for finding a synchronizing word with restricted upper bound is described. The work presents the distribution of all synchronizing automata of small size according to the length of an almost minimal synchronizing word.

A. N. Trahtman
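The definitions above can be sketched in a few lines (a minimal illustration of synchronization and of Černý's construction; the paper's bound-computing algorithm is not reproduced here):

```python
def synchronizes(delta, states, word):
    # delta maps (state, letter) -> state; a word synchronizes iff the image
    # of the whole state set under the word is a single state
    current = set(states)
    for letter in word:
        current = {delta[(q, letter)] for q in current}
    return len(current) == 1

def cerny_automaton(n):
    # Cerny's n-state automaton: 'a' is a cyclic shift, 'b' maps 0 to 1
    # and fixes every other state
    delta = {}
    for q in range(n):
        delta[(q, 'a')] = (q + 1) % n
        delta[(q, 'b')] = 1 if q == 0 else q
    return delta
```

For n = 3, the word "baab" of length (n − 1)^2 = 4 synchronizes the Černý automaton, while no shorter word does.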
Online Maximum k-Coverage

We study an online model for the maximum k-vertex-coverage problem, where given a graph G = (V, E) and an integer k, we ask for a subset A ⊆ V such that |A| = k and the number of edges covered by A is maximized. In our model, at each step i, a new vertex v_i is revealed, and we have to decide whether we will keep it or discard it. At any time of the process, only k vertices can be kept in memory; if at some point the current solution already contains k vertices, any inclusion of a new vertex in the solution must entail the definite deletion of another vertex of the current solution (a vertex not kept when revealed is definitely deleted). We propose algorithms for several natural classes of graphs (mainly regular and bipartite), improving on an easy $\frac{1}{2}$-competitive ratio. We next settle a set-version of the problem, called the maximum k-(set)-coverage problem. For this problem we present an algorithm that improves upon former results for the same model for small and moderate values of k.

Giorgio Ausiello, Nicolas Boria, Aristotelis Giannakos, Giorgio Lucarelli, Vangelis Th. Paschos
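The online model can be made concrete with a naive swap-based heuristic. This is my own illustrative strawman, not one of the authors' algorithms: keep at most k vertices, and when a new vertex arrives with the memory full, swap it in only if the swap strictly increases the number of covered edges.

```python
def covered(edges, chosen):
    # number of edges with at least one endpoint in `chosen`
    return sum(1 for u, v in edges if u in chosen or v in chosen)

def online_k_coverage(edges, stream, k):
    # keep at most k vertices; swap only when coverage strictly improves
    sol = []
    for v in stream:
        if len(sol) < k:
            sol.append(v)
            continue
        base, best = covered(edges, set(sol)), None
        for w in sol:
            gain = covered(edges, set(sol) - {w} | {v})
            if gain > base:
                base, best = gain, w
        if best is not None:
            sol.remove(best)
            sol.append(v)
    return sol
```

On a star whose center arrives last, the heuristic still recovers the center for k = 1, since the final swap strictly improves coverage.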
Coloring Graphs without Short Cycles and Long Induced Paths

The girth of a graph G is the length of a shortest cycle in G. For any fixed girth g ≥ 4 we determine a lower bound ℓ(g) such that every graph with girth at least g and with no induced path on ℓ(g) vertices is 3-colorable. In contrast, we show the existence of an integer ℓ such that testing for 4-colorability is NP-complete for graphs with girth 4 and with no induced path on ℓ vertices.

Petr A. Golovach, Daniël Paulusma, Jian Song
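The girth parameter used above can be computed by the textbook method of running a BFS from every vertex and checking non-tree edges, an O(V·E) approach that is standard and not taken from the paper:

```python
from collections import deque

def girth(adj):
    # adj: adjacency lists of a simple undirected graph;
    # returns float('inf') for forests
    best = float('inf')
    for root in range(len(adj)):
        dist = {root: 0}
        parent = {root: None}
        q = deque([root])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    parent[v] = u
                    q.append(v)
                elif parent[u] != v:
                    # non-tree edge closes a cycle through this BFS tree
                    best = min(best, dist[u] + dist[v] + 1)
    return best
```

Taking the minimum over all roots yields the exact girth, since a BFS rooted on a shortest cycle realizes its length.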
Succinct Algebraic Branching Programs Characterizing Non-uniform Complexity Classes

We study characterizations of algebraic complexity classes by branching programs of possibly exponential size, using a succinctness condition to replace the usual one based on uniformity. We obtain characterizations of VPSPACE, the class corresponding to computations in polynomial space, and observe that algebraic polynomial space can be seen as constant algebraic space with auxiliary polynomial space Boolean computations. We also obtain the first examples of natural complete polynomials for VPSPACE, in particular showing that polynomials like the determinant, the permanent or the Hamiltonian become VPSPACE-complete when the matrix is succinctly encoded. Using the same techniques we also characterize VNP. In general, we argue that characterizations by branching programs apply to different classes, both Boolean and algebraic, with or without uniformity, and thus provide a general and uniform technique in these different settings.

Guillaume Malod
LIFO-Search on Digraphs: A Searching Game for Cycle-Rank

We consider the extension of the last-in-first-out graph searching game of Giannopoulou and Thilikos to digraphs. We show that all common variations of the game require the same number of searchers, and the minimal number of searchers required is one more than the cycle-rank of the digraph. We also obtain a tight duality theorem, giving a precise min-max characterization of obstructions for cycle-rank.

Paul Hunter
Polynomial Kernels for Proper Interval Completion and Related Problems

Given a graph G = (V, E) and a positive integer k, the Proper Interval Completion problem asks whether there exists a set F of at most k pairs of (V × V) ∖ E such that the graph H = (V, E ∪ F) is a proper interval graph. The Proper Interval Completion problem finds applications in molecular biology and genomic research [11]. First announced by Kaplan, Tarjan and Shamir in FOCS ’94, this problem is known to be FPT [11], but no polynomial kernel was known to exist. We settle this question by proving that Proper Interval Completion admits a kernel with O(k^5) vertices. Moreover, we prove that a related problem, the so-called Bipartite Chain Deletion problem, admits a kernel with O(k^2) vertices, completing a previous result of Guo [10].

Stéphane Bessy, Anthony Perez
Parameterized Complexity of Vertex Deletion into Perfect Graph Classes

Vertex deletion problems are at the heart of parameterized complexity. For a graph class $\mathcal{F}$, the $\mathcal{F}$-Deletion problem takes as input a graph G and an integer k. The question is whether it is possible to delete at most k vertices from G such that the resulting graph belongs to $\mathcal{F}$. Whether Perfect Deletion is fixed-parameter tractable, and whether Chordal Deletion admits a polynomial kernel, when parameterized by k, have been stated as open questions in previous work. We show that Perfect Deletion(k) and Weakly Chordal Deletion(k) are W[2]-hard. In search of positive results, we study restricted variants such that the deleted vertices must be taken from a specified set X, which we parameterize by |X|. We show that for Perfect Deletion and Weakly Chordal Deletion, although this restriction immediately ensures fixed-parameter tractability, it is not enough to yield polynomial kernels, unless NP ⊆ coNP/poly. On the positive side, for Chordal Deletion, the restriction enables us to obtain a kernel with $\mathcal{O}(|X|^4)$ vertices.

Pinar Heggernes, Pim van’t Hof, Bart M. P. Jansen, Stefan Kratsch, Yngve Villanger
Constructive Dimension and Hausdorff Dimension: The Case of Exact Dimension

The present paper generalises results by Lutz and Ryabko. We prove a martingale characterisation of exact Hausdorff dimension. On this basis we introduce the notion of exact constructive dimension of (sets of) infinite strings.

Furthermore, we generalise Ryabko’s result on the Hausdorff dimension of the set of strings having asymptotic Kolmogorov complexity ≤ α to the case of exact dimension.

Ludwig Staiger
Dag Realizations of Directed Degree Sequences

We consider the following graph realization problem. Given a sequence $S:={a_1 \choose b_1},\dots,{a_n \choose b_n}$ with $a_i,b_i\in \mathbb{Z}_0^+$, does there exist an acyclic digraph (a dag, no parallel arcs allowed) G = (V, A) with labeled vertex set V := {v_1, …, v_n} such that for all v_i ∈ V the indegree and outdegree of v_i match exactly the given numbers a_i and b_i, respectively? The complexity status of this problem is open, while a generalization, the f-factor dag problem, can be shown to be NP-complete. In this paper, we prove that an important class of sequences, the so-called opposed sequences, admit an O(n + m) realization algorithm, where n and $m = \sum_{i=1}^n a_i = \sum_{i=1}^n b_i$ denote the number of vertices and arcs, respectively. For an opposed sequence it is possible to order all non-source and non-sink tuples such that a_i ≤ a_{i+1} and b_i ≥ b_{i+1}. Our second contribution is a realization algorithm for general sequences which significantly improves upon a naive exponential-time algorithm. We also investigate a special and fast realization strategy “lexmax”, which fails in general, but succeeds in more than 97% of all sequences with 9 tuples.

Annabell Berger, Matthias Müller-Hannemann
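While deciding whether a realization exists is the hard part, verifying a proposed one is routine. The checker below is a sketch under my own conventions (sequence[i] is the required (indegree, outdegree) pair of vertex i); it is not the paper's opposed-sequence algorithm.

```python
from collections import deque

def realizes(arcs, sequence):
    # sequence[i] = (a_i, b_i): required (indegree, outdegree) of vertex i
    n = len(sequence)
    if len(set(arcs)) != len(arcs):          # no parallel arcs allowed
        return False
    indeg, outdeg = [0] * n, [0] * n
    succ = [[] for _ in range(n)]
    for u, v in arcs:
        outdeg[u] += 1
        indeg[v] += 1
        succ[u].append(v)
    if any((indeg[i], outdeg[i]) != sequence[i] for i in range(n)):
        return False
    # acyclicity via Kahn's algorithm: all vertices must be removable
    deg = indeg[:]
    q = deque(i for i in range(n) if deg[i] == 0)
    seen = 0
    while q:
        u = q.popleft()
        seen += 1
        for v in succ[u]:
            deg[v] -= 1
            if deg[v] == 0:
                q.append(v)
    return seen == n
```

For example, the arcs {(0,1), (0,2)} realize the sequence (0,2), (1,0), (1,0), while a 2-cycle matches the degree sequence (1,1), (1,1) but fails acyclicity.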
A Coinductive Calculus for Asynchronous Side-Effecting Processes

We present an abstract framework for concurrent processes in which atomic steps have generic side effects, handled according to the principle of monadic encapsulation of effects. Processes in this framework are potentially infinite resumptions, modelled using final coalgebras over the monadic base. As a calculus for such processes, we introduce a concurrent extension of Moggi’s monadic metalanguage of effects. We establish soundness and completeness of a natural equational axiomatisation of this calculus. Moreover, we identify a corecursion scheme that is explicitly definable over the base language and provides flexible expressive means for the definition of new operators on processes, such as parallel composition. As a worked example, we prove the safety of a generic mutual exclusion scheme using a verification logic built on top of the equational calculus.

Sergey Goncharov, Lutz Schröder
Hardness, Approximability, and Exact Algorithms for Vector Domination and Total Vector Domination in Graphs

We consider two graph optimization problems called vector domination and total vector domination. In vector domination one seeks a small subset S of vertices of a graph such that any vertex outside S has a prescribed number of neighbors in S. In total vector domination, the requirement is extended to all vertices of the graph. We prove that these problems cannot be approximated to within a factor of c log n, for suitable constants c, unless every problem in NP is solvable in slightly super-polynomial time. We also show that two natural greedy strategies have approximation factors O(log Δ(G)), where Δ(G) is the maximum degree of the graph G. We also provide exact polynomial-time algorithms for several classes of graphs. Our results extend, improve, and unify several results previously known in the literature.

Ferdinando Cicalese, Martin Milanič, Ugo Vaccaro
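A natural greedy strategy of the kind the abstract alludes to can be sketched as follows. This is my own formulation, given only as an illustration: repeatedly add the vertex whose inclusion most reduces the total unmet demand, where a vertex outside S with demand d is satisfied once it has d neighbors in S.

```python
def greedy_vector_domination(adj, demand):
    # adj[v]: set of neighbors of v; demand[v]: required number of
    # neighbors in S for every vertex v outside S
    n = len(adj)
    S = set()

    def total_deficit():
        return sum(max(0, demand[v] - len(adj[v] & S))
                   for v in range(n) if v not in S)

    while total_deficit() > 0:
        # pick the vertex whose addition reduces the unmet demand the most
        def gain(u):
            before = total_deficit()
            S.add(u)
            after = total_deficit()
            S.remove(u)
            return before - after
        S.add(max((u for u in range(n) if u not in S), key=gain))
    return S
```

On a star with unit demands, the greedy immediately selects the center, which satisfies every leaf in one step.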
Enumeration of Minimal Dominating Sets and Variants

In this paper, we are interested in the enumeration of minimal dominating sets in graphs. A polynomial delay algorithm with polynomial space in split graphs is presented. We then introduce a notion of maximal extension (a set of edges added to the graph) that keeps invariant the set of minimal dominating sets, and show that graphs with extensions as split graphs are exactly the ones having chordal graphs as extensions. We finish by relating the enumeration of some variants of dominating sets to the enumeration of minimal transversals in hypergraphs.

Mamadou Moustapha Kanté, Vincent Limouzy, Arnaud Mary, Lhouari Nourine
Specification Patterns and Proofs for Recursion through the Store

Higher-order store means that code can be stored on the mutable heap that programs manipulate, and is the basis of flexible software that can be changed or re-configured at runtime. Specifying such programs is challenging because of recursion through the store, where new (mutual) recursions between code are set up on the fly. This paper presents a series of formal specification patterns that capture increasingly complex uses of recursion through the store. To express the necessary specifications we extend the separation logic for higher-order store given by Schwinghammer et al. (CSL, 2009), adding parameter passing and certain recursively defined families of assertions. Finally, we apply our specification patterns and rules to an example program that exploits many of the possibilities offered by higher-order store; this is the first larger case study conducted with logical techniques based on work by Schwinghammer et al. (CSL, 2009), and shows that they are practical.

Nathaniel Charlton, Bernhard Reus
Sub-computabilities

Every recursively enumerable set of integers (r.e. set) is enumerable by a primitive recursive function. But if the enumeration is required to be one-one, only a proper subset of all r.e. sets qualify. Starting from a collection of total recursive functions containing the primitive recursive functions, we thus define a

sub-computability

as an enumeration of the r.e. sets that are themselves one-one enumerable by total functions of the given collection. Notions similar to the classical computability ones are introduced and variants of the classical theorems are shown. We also introduce sub-reducibilities and study the related completeness notions. One of the striking results is the existence of natural (recursive) sets which play the role of low (non-recursive) solutions to Post’s problem for these sub-reducibilities. The similarity between sub-computabilities and (complete) computability is surprising, since there are so many missing r.e. sets in sub-computabilities. They can be seen as toy models of computability.

Fabien Givors, Gregory Lafitte
Functions That Preserve p-Randomness

We show that polynomial-time randomness (p-randomness) is preserved under a variety of familiar operations, including addition and multiplication by a nonzero polynomial-time computable real number. These results follow from a general theorem: If I ⊆ ℝ is an open interval, f : I → ℝ is a function, and r ∈ I is p-random, then f(r) is p-random provided

1. f is p-computable on the dyadic rational points in I, and

2. f varies sufficiently at r, i.e., there exists a real constant C > 0 such that either

$$ (\forall x\in I-\{r\})\left[\frac{f(x) - f(r)}{x-r} \ge C\right] $$

or

$$ (\forall x\in I-\{r\})\left[\frac{f(x) - f(r)}{x-r} \le -C\right]\;. $$

Our theorem implies in particular that any analytic function about a p-computable point whose power series has uniformly p-computable coefficients preserves p-randomness in its open interval of absolute convergence. Such functions include all the familiar functions from first-year calculus.

Stephen A. Fenner
Reactive Turing Machines

We propose reactive Turing machines (RTMs), extending classical Turing machines with a process-theoretical notion of interaction. We show that every effective transition system is simulated modulo branching bisimilarity by an RTM, and that every computable transition system with a bounded branching degree is simulated modulo divergence-preserving branching bisimilarity. We conclude from these results that the parallel composition of (communicating) RTMs can be simulated by a single RTM. We prove that there exist universal RTMs modulo branching bisimilarity, but these essentially employ divergence to be able to simulate an RTM of arbitrary branching degree. We also prove that modulo divergence-preserving branching bisimilarity there are RTMs that are universal up to their own branching degree. Finally, we establish a correspondence between RTMs and the process theory $\mathrm{TCP}_{\tau}$.

Jos C. M. Baeten, Bas Luttik, Paul van Tilburg
Virtual Substitution for SMT-Solving

SMT-solving aims at deciding satisfiability for the existential fragment of a first-order theory. A SAT-solver handles the logical part of a given problem and invokes an embedded theory solver to check consistency of theory constraints. For efficiency, the theory solver should be able to work incrementally and generate infeasible subsets. Currently available decision procedures for real algebra (the first-order theory of the reals with addition and multiplication) do not exhibit these features. In this paper we present an adaptation of the virtual substitution method, providing these abilities.

Florian Corzilius, Erika Ábrahám
Backmatter
Metadata
Title
Fundamentals of Computation Theory
Edited by
Olaf Owe
Martin Steffen
Jan Arne Telle
Copyright year
2011
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-22953-4
Print ISBN
978-3-642-22952-7
DOI
https://doi.org/10.1007/978-3-642-22953-4
