
2012 | Book

Theory and Applications of Models of Computation

9th Annual Conference, TAMC 2012, Beijing, China, May 16-21, 2012. Proceedings

Edited by: Manindra Agrawal, S. Barry Cooper, Angsheng Li

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this Book

This book constitutes the refereed proceedings of the 9th International Conference on Theory and Applications of Models of Computation, TAMC 2012, held in Beijing, China, in May 2012. The conference was combined with the Turing Lectures 2012, dedicated to celebrating Alan Turing’s unique impact on mathematics, computing, computer science, informatics, morphogenesis, philosophy, and the wider scientific world. Eight Turing Lectures were given at TAMC 2012. The 40 revised full papers presented together with invited talks were carefully reviewed and selected from 86 submissions. The papers address the four special sessions of TAMC 2012: algorithms and information in networks, complexity and cryptography, models of computing and networking, and programming and verification.

Table of Contents

Frontmatter

Turing Lectures 2012

On the Impact of Turing Machines

Turing contributed a simple model of computation that has become the definition of computable. A function is considered to be computable if and only if it is computable on Turing’s model of computation. Since our notion of computable is informal and Turing’s model gives a precise definition of computable, we cannot prove the two equivalent. However, for every mathematical definition of computable that has been proposed, a proof has been developed that any function computable by the proposed model is also computable by Turing’s model.

Turing’s model is very simple: it consists of an infinite tape made up of cells, each cell capable of holding one of a finite set of symbols, along with a control with a finite number of states and a tape head by which the finite control can scan the tape and read the content of the scanned cell. A move consists of reading the contents of the scanned cell and, depending on the internal state of the finite control, writing a new symbol in the cell, moving the tape head one cell right or one cell left, and changing the internal state of the finite control to a new state.
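
As a rough illustration of this single-move semantics, here is a minimal sketch of a Turing machine step loop in Python; the transition table, the blank symbol and the example machine below are made up for illustration and are not taken from the talk.

```python
# Minimal Turing machine sketch: transitions map (state, symbol) to
# (new_symbol, head_move, new_state); "_" is the blank symbol (an assumption).
from collections import defaultdict

def run(transitions, accept_state, tape_input, max_steps=1000):
    tape = defaultdict(lambda: "_", enumerate(tape_input))
    head, state = 0, "q0"
    for _ in range(max_steps):
        if state == accept_state:
            break
        new_symbol, move, state = transitions[(state, tape[head])]
        tape[head] = new_symbol                  # write a new symbol in the cell
        head += 1 if move == "R" else -1         # move the head one cell right or left
    return state, "".join(tape[i] for i in sorted(tape))

# Hypothetical example machine: flip every bit until the first blank, then halt.
flip = {("q0", "0"): ("1", "R", "q0"),
        ("q0", "1"): ("0", "R", "q0"),
        ("q0", "_"): ("_", "R", "halt")}
print(run(flip, "halt", "10110"))                # -> ('halt', '01001_')
```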

Although logicians had their own models of the computable, Turing’s model made the notion of computability accessible to a larger community. A mathematical model that corresponded to a physical device, and that allowed one to picture and more fully understand the notion of computability, accelerated the science of computability in a way that many do not appreciate; conveying that impact is the purpose of this talk.

One of the major advances came when Hartmanis and Stearns used the Turing model to define complexity classes. This gave a formal definition to the intuitive notion of polynomial time algorithms. It helped lead to asymptotic complexity as a way to compare the performance of algorithms.

Another major advance was that the Turing model led to the notions of an instantaneous description of a computation and of a valid computation. An instantaneous description is a string that completely describes a computation at one instant of time. A valid computation is a sequence of successive instantaneous descriptions.

Once a valid computation was represented by a string of symbols, it was quickly recognized that a valid computation of a Turing machine could be expressed as the intersection of two context-free languages, and hence the question of whether the intersection was empty was undecidable. Many other problems arising in computer science were quickly shown to be undecidable. In the mid-sixties the language ALGOL was invented and was described by a context-free grammar. However, people soon noticed that what a program did sometimes depended on the compiler used. It was quickly discovered that the context-free grammar describing ALGOL was ambiguous. When researchers set out to write an algorithm to determine if a context-free grammar was ambiguous, they quickly discovered that this problem was also undecidable.

One of the major discoveries of the last century came when Stephen Cook proved that every problem in polynomial time could be reduced to the problem of satisfying a formula in conjunctive normal form. This led to the notion of NP-complete problems and to the realization that many problems, such as integer programming, finding the maximum clique, and many others, were really all equivalent.

Although Turing’s model was very simple, it was that simplicity that led to major advances in computer science.

John Hopcroft
From Turing Machine to Morphogenesis: Forming and Informing Computation

Information is complicated. It shares its existence with the material world around us. Language has enabled us to describe relationships governing embodied information, without losing the sense of mystery. Newton replaced the soft descriptive touch with the predictive precision of mathematical computation. And action at a distance converted the informational complexity of mechanical interaction into the mathematical tidiness of computational relations over number systems. The paradigm toughened, and it was nearly 400 years later that Alan Turing stepped into this scientific setting and gave it a logical form, one that became a catch-all for some, and a reductio ad absurdum for those who failed to match it to the wider realities of the natural universe. Alan Turing subscribed to both views, and his involvement changed the way we engage with them for ever. This article is an Alan Turing Centenary tracing of part of the story.

S. Barry Cooper
Theory of Computation as an Enabling Tool for the Sciences

Researchers in the theory of computation are increasingly adopting a computational worldview that is radiating out to a wide circle of scientific and technological fields, recognizing that central phenomena of these fields are often computational in nature. Over the past decade we have applied this viewpoint to physics, molecular biology and economics. Connections are also developing to evolutionary biology, machine learning, social choice, social network analysis, nanotechnology, cognitive science and astronomy. To maximize the effectiveness of this outreach to the sciences, the theory of computation must join forces with the fields of massive data analysis and combinatorial optimization.

Richard M. Karp
Interaction and Collective Intelligence on the Internet

Network interconnection, information interoperability, and crowd interaction on the Internet could inspire better computation models than the Turing machine, since humans play an important role in Internet computing, and human-machine and machine-machine interactions have evolved to become the kernel of Internet computing. The Internet is not simply equivalent to a virtual huge computer, or a set of computers. On the Internet, human behaviors are uncertain, and the interactions and influence among people are also uncertain. These uncertainties cannot be described by the Turing machine or by traditional interaction machines. As a new computation platform, Internet computing requires new theories and methods. By combining topology from mathematics with field theory from physics, we propose the topological potential approach, which sets up a virtual field over the topological space to reflect individual activities, local effects and preferential attachments. This approach can be used to study the emergence of collective intelligence. Here, we introduce three case studies to illustrate the analysis of collective intelligence on the Internet and discuss some potential applications of the topological potential approach.

Deyi Li, Liwei Huang
What Computers Do: Model, Connect, Engage

Every 30 years there is a new wave of things that computers do. Around 1950 they began to model events in the world (simulation), and around 1980 to connect people (communication). Since 2010 they have begun to engage with the physical world in a non-trivial way (embodiment—giving them bodies). Today there are sensor networks like the Inrix traffic information system, robots like the Roomba vacuum cleaner, and cameras that can pick out faces and even smiles. But these are just the beginning. In a few years we will have cars that drive themselves, glasses that overlay the person you are looking at with their name and contact information, telepresence systems that make most business travel unnecessary, and other applications as yet unimagined.

Computer systems are built on the physical foundation of hardware (steadily improving according to Moore’s law) and the intellectual foundations of algorithms, abstraction and probability. Good systems use a few basic methods: approximate, incrementally change, and divide and conquer. Latency, bandwidth, availability and complexity determine performance. In the future systems will deal with uncertainty much better than today, and many of them will be safety critical and hence much more dependable.

Butler Lampson
R-Calculus: A Logical Inference System for Scientific Discovery

A scientific theory must stand up to verification by human observation and experiments. It will be refuted by facts whenever one of its logical consequences contradicts some facts supported by observations and experiments. Thus, the process of scientific discovery is one of revising the theory according to the facts. The revision consists of the following steps:

1. discarding the laws of the theory which lead to the contradictions, in such a way that the remaining part is a maximum subset of the theory;

2. generating, from the facts supported by the experiments, new laws for a new theory;

3. bringing forth the new scientific theory by merging the new laws with the remaining part of the old theory.

It is based on the scientist’s intuition and insight that the laws contradicting the experiments are deleted, while this intuition and insight are supported by implicit logical reasoning and analysis. Russell’s work shows us that, for a study concerning logical analysis and reasoning, a formal inference system of logical connectives and quantifiers can be built up to carry out the study. R-calculus is such a formal inference system, and it is capable of deriving and deleting the laws which are in conflict with the facts. Some examples are demonstrated to show how to use the calculus.
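
A toy sketch of the three revision steps in Python, under the simplifying (and purely illustrative) assumptions that laws are Boolean predicates over observed worlds and that a law contradicts the facts when it fails on some observation; none of the names below come from the paper.

```python
# Toy theory revision in the spirit of the three steps above (illustrative only).
from itertools import combinations

def revise(laws, observations, new_laws):
    """laws, new_laws: dicts name -> predicate over an observation (a dict)."""
    # Step 1: keep a maximum subset of the old laws consistent with every observation.
    consistent = lambda subset: all(laws[n](o) for n in subset for o in observations)
    keep = max((s for r in range(len(laws), -1, -1)
                for s in combinations(laws, r) if consistent(s)), key=len)
    # Step 2: the new laws are assumed to be generated from the observed facts.
    # Step 3: merge the surviving old laws with the new laws.
    return {**{n: laws[n] for n in keep}, **new_laws}

old = {"all_swans_white": lambda o: not o["swan"] or o["white"],
       "swans_are_birds": lambda o: not o["swan"] or o["bird"]}
facts = [{"swan": True, "white": False, "bird": True}]   # a black swan is observed
new = {"some_swans_black": lambda o: True}               # placeholder new law
print(sorted(revise(old, facts, new)))  # ['some_swans_black', 'swans_are_birds']
```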

Wei Li
Quantum Computing: A Great Science in the Making

In recent years, the scientific world has seen much excitement over the development of quantum computing, and the ever increasing possibility of building real quantum computers. What’s the advantage of quantum computing? What are the secrets in the atoms that could potentially unleash such enormous power, to be used for computing and information processing? In this talk, we will take a look at quantum computing, and make the case that we are witnessing a great science in the making.

Andrew Chi-Chih Yao
The Convergence of Social and Technological Networks

The growth of social media and on-line social networks has opened up a set of fascinating new challenges and directions for the field of computing. Some of the basic issues around these developments are the design of information systems in the presence of complex social feedback effects, and the emergence of a growing research interface between computing and the social sciences.

In this talk, we will review two key sets of challenges that arise in designing and analyzing on-line social systems. The first is to understand how information flows through these systems, and how the behavior of individuals is affected by the behavior of their neighbors in the network. The second is to understand the subtle processes by which individuals on these systems evaluate and form opinions about each other, and the ways in which these evaluations create incentives that drive behavior.

Jon Kleinberg

Invited Lectures

Principles of Network Computing

In the new century, the study of networks has been developing rapidly. Traditional algorithms based on classical graph theory have not been able to cope with large-scale networks due to their inefficiency. In this paper, we review research on the question of why a huge network such as the WWW network is efficiently computable, and investigate the principles of network computing.

Networks cannot be fully and exactly computed, due to both their nature and their scale. The best that network computing can hope for may be just locally testable graph properties, in sparse graph models. We review progress in the study of graph property testing, in particular local testing of the conductance of graphs, which is closely related to the basic structural cells of networks – small communities.

In the past decade, an avalanche of research has shown that many real networks, independent of their age, function, and scope, converge to similar architectures, which is probably the most surprising discovery of modern network theory. In many ways, there is a need to understand the dynamics of the processes that take place in networks. We propose a new local mechanism by introducing one more dimension for each node in a network and define a new model of networks, the homophily model, from which we are able to prove the homophily theorem that implies the homophily law of networks. The homophily law ensures that real-world networks satisfy the small community phenomenon, and that nodes within a small community share some remarkable common features.

Yicheng Pan
The Small Community Phenomenon in Networks: Models, Algorithms and Applications

We survey a recent new line of research on the small community phenomenon in networks, which characterizes the intuition and observation that in a broad class of networks, a significant fraction of nodes belong to some small communities. We propose the formal definition of this phenomenon as well as the definition of communities, based on which we are able to both study the community structure of network models, i.e., whether a model exhibits the small community phenomenon or not, and design new models that embrace this phenomenon in a natural way while preserving some other typical network properties such as the small diameter and the power law degree distribution. We also introduce the corresponding community detection algorithms, which not only are used to identify true communities and confirm the existence of the small community phenomenon in real networks but also have found other applications, e.g., the classification of networks and core extraction of networks.

Pan Peng
Vertex-Pursuit in Hierarchical Social Networks

Hierarchical social networks appear in a variety of contexts, such as the on-line social network Twitter, the social organization of companies, and terrorist networks. We examine a dynamic model for the disruption of information flow in hierarchical social networks by considering the vertex-pursuit game Seepage played in directed acyclic graphs (DAGs). In Seepage, agents attempt to block the movement of an intruder who moves downward from the source node to a sink. We propose a generalized stochastic model for DAGs with given expected total degree sequence. Seepage is analyzed rigorously in stochastic DAGs in both the cases of a regular and power law degree sequence.

A. Bonato, D. Mitsche, P. Prałat
A Structural Approach to Prophecy Variables

Verifying the implementation of concurrent objects essentially amounts to proving that the fine-grained implementation of object methods refines the corresponding abstract atomic operations. To simplify the specifications and proofs, we usually need auxiliary history and prophecy variables to record historical events and to predict future events, respectively. Although the meaning of history variables is obvious, the semantics of prophecy variables and the corresponding auxiliary code is tricky and has never been clearly spelled out operationally.

In this paper, we propose a new language construct, future blocks, that allows structural use of prophecy variables to refer to events in the future. The semantics of the construct is simple and easy to understand, without using any form of oracle or backward reasoning. Our language also separates auxiliary states from physical program states. With careful syntactic constraints, it ensures that the use of history and prophecy variables does not affect the behaviors of the original program, which justifies the verification method based on the use of auxiliary variables.

Zipeng Zhang, Xinyu Feng, Ming Fu, Zhong Shao, Yong Li
An Assume/Guarantee Based Compositional Calculus for Hybrid CSP

Hybrid CSP (HCSP) extends CSP to describe interacting continuous and discrete dynamics. The concurrency with synchronous communications, timing constructs, interrupts, differential equations, and so on, makes the behavior of HCSP difficult to specify and verify. In this paper, we propose a Hoare style calculus for reasoning about HCSP. The calculus includes Duration Calculus formulas to record process execution history and reason about real-time properties and continuous evolution, and dedicated predicate symbols to specify communication traces and readiness of process actions, so that the composite constructs of HCSP can be handled compositionally by using assume/guarantee reasoning.

Shuling Wang, Naijun Zhan, Dimitar Guelev
Automatic Verification of Real-Time Systems with Rich Data: An Overview

We present an overview of the results of the project “Beyond Timed Automata” of the Collaborative Research Center AVACS (Automatic Verification and Analysis of Complex Systems) during the period 2008–2011, which advances the automatic verification of high-level specifications of systems exhibiting the three dimensions of process behavior, complex infinite data, and continuous real-time—beyond the capabilities of Timed Automata.

Ernst-Rüdiger Olderog
Program Analysis Using Quantifier-Elimination Heuristics
(Extended Abstract)

Software is being employed for life-critical, safety-critical, infrastructure-critical and economically critical applications. Our daily lives rely heavily on the proper functioning of software in gadgets we directly or indirectly use: airplanes, flight control, high-speed trains, cars, cell phones, medical devices and instruments, banks, and what not. Malfunctioning of a program can have very severe consequences, costing lives (e.g. Therac-25 [13], Patriot missile) and money (e.g. Ariane 5, malfunctioning of economic transactions, problems in stock exchanges) [14]. Validation and verification of software have become ever more important. Given that full verification of software has been found increasingly difficult to achieve, because of the lack of rigorous and complete specifications on the one hand and the difficulty verification systems and theorem provers have in addressing the increasing complexity of software on the other, despite considerable advances in automated reasoning techniques, ensuring the absence of various types of bugs becomes a critical first step in ensuring reliability.

Deepak Kapur
Electron Tomography and Multiscale Biology

Electron tomography (ET) is an emerging technology for the three dimensional imaging of cellular ultrastructure. In combination with other techniques, it can provide three dimensional reconstructions of protein assemblies, correlate 3D structures with functional investigations at the light microscope level and provide structural information which extends the findings of genomics and molecular biology.

Realistic physical details are essential for the task of modeling over many spatial scales. While the electron microscope resolution can be as low as a fraction of a nm, a typical 3D reconstruction may just cover $1/10^{15}$ of the volume of an optical microscope reconstruction. In order to bridge the gap between those two approaches, the available spatial range of an ET reconstruction has been expanded by various techniques. Large sensor arrays and wide-field camera assemblies have increased the field dimensions by a factor of ten over the past decade, and new techniques for serial tomography and montaging make possible the assembly of many three-dimensional reconstructions.

The number of tomographic volumes necessary to incorporate an average cell down to the protein assembly level is of the order of $10^4$, and given the imaging and algorithm requirements, the computational problem lies well in the exascale range. Tomographic reconstructions can be made parallel to a very high degree, and their associated algorithms can be mapped to the simplified processors comprising, for example, a graphics processor unit. Programming this on a GPU board yields a large speedup, but we expect that many more orders of magnitude improvement in computational capabilities will still be required in the coming decade. Exascale computing will raise a new set of problems, associated with component energy requirements (cost per operation and costs of data transfer) and heat dissipation issues. As energy per operation is driven down, reliability decreases, which in turn raises difficult problems in the validation of computer models (is the algorithmic approach faithful to physical reality) and the verification of codes (is the computation reliably correct and replicable). Leaving aside the hardware issues, many of these problems will require new mathematical and algorithmic approaches, including, potentially, a re-evaluation of the Turing model of computation.

Albert F. Lawrence, Sébastien Phan, Mark Ellisman

Contributed Papers

Constant-Time Approximation Algorithms for the Knapsack Problem

In this paper, we give a constant-time approximation algorithm for the knapsack problem. Using weighted sampling, with which we can sample items with probability proportional to their profits, our algorithm runs with query complexity $O(\varepsilon^{-4} \log \varepsilon^{-1})$, and it approximates the optimal profit with probability at least 2/3 up to error at most an $\varepsilon$-fraction of the total profit. For the subset sum problem, which is a special case of the knapsack problem, we can improve the query complexity to $O(\varepsilon^{-1} \log \varepsilon^{-1})$.

Hiro Ito, Susumu Kiyoshima, Yuichi Yoshida
Lower Bounds of Shortest Vector Lengths in Random NTRU Lattices

Finding the shortest vector of a lattice is one of the most important problems in computational lattice theory. For a random lattice, one can estimate the length of the shortest vector using the Gaussian heuristic. However, no rigorous proof can be provided for some classes of lattices, as the Gaussian heuristic may not hold for them. In this paper, we propose a general method to estimate lower bounds of the shortest vector lengths for random integral lattices in certain classes, which is based on the incompressibility method from the theory of Kolmogorov complexity. As an application, we can prove that for a random NTRU lattice, with an overwhelming probability, the ratio between the length of the shortest vector and the length of the target vector, which corresponds to the secret key, is at least a constant, independent of the rank of the lattice.

Jingguo Bi, Qi Cheng
Polynomial Time Construction of Ellipsoidal Approximations of Zonotopes Given by Generator Descriptions

We adapt Goffin’s Algorithm for construction of the Löwner-John ellipsoid for a full-dimensional zonotope given by the generator description.

Michal Černý, Miroslav Rada
Hardness and Approximation of the Asynchronous Border Minimization Problem
(Extended Abstract)

We study a combinatorial problem arising from microarray synthesis. The objective of the Border Minimization Problem (BMP) is to place a set of sequences in the array and to find an embedding of these sequences into a common supersequence such that the sum of the “border length” is minimized. A variant of the problem, called P-BMP, is one in which the placement is given and the concern is simply to find the embedding.

Approximation algorithms have been proposed for the problem [21] but it is unknown whether the problem is NP-hard or not. In this paper, we give a comprehensive study of different variations of BMP by presenting NP-hardness proofs and improved approximation algorithms. We show that P-BMP, 1D-BMP, and BMP are all NP-hard. In contrast with the result in [21] that 1D-P-BMP is polynomial time solvable, the interesting implications include (i) the array dimension (1D or 2D) differentiates the complexity of P-BMP; (ii) for 1D array, whether placement is given differentiates the complexity of BMP; (iii) BMP is NP-hard regardless of the dimension of the array. Another contribution of the paper is improving the approximation for BMP from $O(n^{1/2} \log^2 n)$ to $O(n^{1/4} \log^2 n)$, where $n$ is the total number of sequences.

Alexandru Popa, Prudence W. H. Wong, Fencol C. C. Yung
Asymptotic Limits of a New Type of Maximization Recurrence with an Application to Bioinformatics

We study the asymptotic behavior of a new type of maximization recurrence, defined as follows. Let $k$ be a positive integer and $p_k(x)$ a polynomial of degree $k$ satisfying $p_k(0) = 0$. Define $A_0 = 0$ and, for $n \geq 1$, let $A_{n} = \max_{0 \leq i < n} \{ A_{i} + n^{k} \, p_{k}(\frac{i}{n}) \}$. We prove that $\lim_{n \rightarrow \infty} \frac{A_{n}}{n^k} = \sup \{ \frac{p_k(x)}{1-x^k} : 0 \leq x < 1 \}$. We also consider two closely related maximization recurrences $S_n$ and $S'_n$, defined as $S_0 = S'_0 = 0$ and, for $n \geq 1$, $S_{n} = \max_{0 \leq i < n} \{ S_{i} + \frac{i(n-i)(n-i-1)}{2} \}$ and $S'_{n} = \max_{0 \leq i < n} \{ S'_{i} + {n-i \choose 3} + 2i {n-i \choose 2} + (n-i){i \choose 2} \}$. We prove that $\lim_{n \rightarrow \infty} \frac{S_{n}}{n^3} = \frac{2\sqrt{3}-3}{6} \approx 0.077350...$ and $\lim_{n \rightarrow \infty} \frac{S'_{n}}{3{n \choose 3}} = \frac{2(\sqrt{3}-1)}{3} \approx 0.488033...$, resolving an open problem from bioinformatics about rooted triplets consistency in phylogenetic networks.
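
As a quick numerical sanity check (not part of the paper), one can evaluate the recurrence for $S_n$ directly and compare $S_n/n^3$ with the stated limit $(2\sqrt{3}-3)/6$:

```python
# Evaluate S_n = max_{0<=i<n} { S_i + i(n-i)(n-i-1)/2 } and compare with the limit.
import math

N = 400
S = [0] * (N + 1)
for n in range(1, N + 1):
    S[n] = max(S[i] + i * (n - i) * (n - i - 1) // 2 for i in range(n))

limit = (2 * math.sqrt(3) - 3) / 6          # ~ 0.077350
print(S[N] / N**3, limit)                   # the ratio approaches the limit
```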

Kun-Mao Chao, An-Chiang Chu, Jesper Jansson, Richard S. Lemence, Alban Mancheron
Computing Bits of Algebraic Numbers

We initiate the complexity theoretic study of the problem of computing the bits of (real) algebraic numbers. This extends the work of Yap on computing the bits of transcendental numbers like $\pi$, in Logspace.

Our main result is that computing a bit of a fixed real algebraic number is in $\mathsf{C}_{=}\mathsf{NC}^1 \subseteq \mathsf{L}$ when the bit position has a verbose (unary) representation and in the counting hierarchy when it has a succinct (binary) representation.

Our tools are drawn from elementary analysis and numerical analysis, and include the Newton-Raphson method. The proof of our main result is entirely elementary, preferring to use the elementary Liouville’s theorem over the much deeper Roth’s theorem for algebraic numbers.

We leave the possibility of proving non-trivial lower bounds for the problem of computing the bits of an algebraic number given the bit position in binary, as our main open question. In this direction we show very limited progress by proving a lower bound for rationals.
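
As a concrete illustration of the task (a toy sketch, not the paper's $\mathsf{C}_{=}\mathsf{NC}^1$ procedure), the $k$-th bit after the binary point of a fixed algebraic number such as $\sqrt{2}$ can be read off with exact integer arithmetic:

```python
# k-th binary digit of sqrt(2) after the point, via exact integer arithmetic:
# floor(sqrt(2) * 2^k) mod 2 = isqrt(2 * 4^k) mod 2.
from math import isqrt

def sqrt2_bit(k):
    return isqrt(2 * 4**k) % 2

print("".join(str(sqrt2_bit(k)) for k in range(1, 21)))
# sqrt(2) = 1.01101010000010011110... in binary
```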

Samir Datta, Rameshwar Pratap
Approximating MAX SAT by Moderately Exponential and Parameterized Algorithms

We study approximation of the max sat problem by moderately exponential algorithms. The general goal of moderately exponential approximation is to catch up on polynomial inapproximability by providing algorithms that achieve, with worst-case running times significantly smaller than those needed for exact computation, approximation ratios that are unachievable in polynomial time. We develop several approximation techniques that can be applied to max sat in order to get approximation ratios arbitrarily close to 1.

Bruno Escoffier, Vangelis Th. Paschos, Emeric Tourniaire
Computing Error Distance of Reed-Solomon Codes

Under polynomial time reduction, the maximum likelihood decoding of a linear code is equivalent to computing the error distance of a received word. It is known that the decoding of standard Reed-Solomon codes at certain radii is at least as hard as the discrete logarithm problem over certain large finite fields. This implies that computing the error distance is hard for standard Reed-Solomon codes. Using the Weil bound and a new sieve for distinct coordinates counting, we are able to compute the error distance for a large class of received words. This significantly improves previous results in this direction. As a corollary, we also improve the existing results on the Cheng-Murray conjecture about the complete classification of deep holes for standard Reed-Solomon codes.

Guizhen Zhu, Daqing Wan
Coordination Mechanisms for Selfish Parallel Jobs Scheduling
(Extended Abstract)

A set of parallel jobs must be scheduled on a grid, which has multiple clusters consisting of many identical processors, so as to minimize the global objective function, the makespan. A parallel job requires several processors simultaneously for its execution, and it needs some number of time units to finish. In practice, each parallel job is owned by an independent agent, which is selfish and selects a cluster to minimize its own completion time. This scenario can be represented as a coordination mechanism game, in which the players are the job owners, the strategies are the clusters, and a player’s disutility is the completion time of its job in the corresponding schedule.

In this work, we design and analyze coordination mechanisms for machines, which aim to minimize the price of anarchy. We study two classes of scheduling policies, the Bottom-Left based algorithms and the Shelf-Packing based algorithms, both in a homogeneous grid and in a heterogeneous grid. We derive upper and lower bounds on the price of anarchy of these coordination mechanisms. We also show that such games are potential games that converge in a linear number of rounds.

Deshi Ye, Guochuan Zhang
Computationally-Fair Group and Identity-Based Key-Exchange

In this work, we re-examine some fundamental group key-exchange and identity-based key-exchange protocols, specifically the Burmester-Desmedt group key-exchange protocol [7] (referred to as the BD-protocol) and the Chen-Kudla identity-based key-exchange protocol [9] (referred to as the CK-protocol). We identify some new attacks on these protocols, showing in particular that these protocols are not computationally fair. Specifically, with our attacks, an adversary can do the following damage:

It can compute the session-key output with much less computational complexity than that of the victim honest player, and can maliciously nullify the contributions from the victim honest players.

It can set the session-key output to be some pre-determined value, which can be efficiently and publicly computed without knowing any secrecy supposed to be held by the attacker.

We remark that these attacks are beyond the traditional security models for group key-exchange and identity-based key-exchange, yet they bring some new perspectives to the literature on group and identity-based key-exchange. We then present some fixing approaches, and prove that the fixed protocols are computationally fair.

Andrew C. Yao, Yunlei Zhao
Timed Encryption with Application to Deniable Key Exchange

We propose a new notion of timed encryption, in which the security holds within time $t$ while it is totally insecure after some time $T > t$. We are interested in the case where $t$ and $T$ are both polynomial, and we propose two schemes (with and without random oracles). We apply this primitive to construct a new deniable key exchange that allows two parties to securely agree on a secret while either of them can deny the fact of communication and hence avoid an undesirable trace from it. Our protocol achieves adaptive deniability and secrecy in the concurrent and non-eraser model that allows session-state reveal attacks and eavesdropping attacks. Here a session-state reveal attack in the non-eraser model means that a user cannot erase his intermediate data (e.g., due to system backup or recovery) and, when compromised, will give it to the attacker. An eavesdropping attack, one of the major concerns in deniability, allows an adversary to eavesdrop on transcripts between honest users, without knowing the randomness inside. Our protocol does not assume random oracles (if the underlying timed encryption does not do so). The only price we pay is a timing restriction. However, this restriction is rather weak: it essentially asks a user to answer a message as soon as possible, and it can be satisfied by almost all online protocols.

Shaoquan Jiang
Online Makespan Scheduling of Linear Deteriorating Jobs on Parallel Machines

Traditional scheduling assumes that the processing time of a job is fixed. Yet there are numerous situations in which the processing time increases (deteriorates) as the start time increases. Examples include scheduling cleaning or maintenance, fire fighting, steel production and financial management. Scheduling of deteriorating jobs was first introduced on a single machine by Browne and Yechiali, and by Gupta and Gupta, independently. In particular, a lot of work has been devoted to jobs with linear deterioration. The processing time $p_j$ of job $J_j$ is a linear function of its start time $s_j$, precisely, $p_j = a_j + b_j s_j$, where $a_j$ is the normal or basic processing time and $b_j$ is the deteriorating rate. The objective is to minimize the makespan of the schedule.

We first consider simple linear deterioration, i.e., $p_j = b_j s_j$. It has been shown that on $m$ parallel machines, in the online-list model, LS (List Scheduling) is $(1+b_{\max})^{1-\frac{1}{m}}$-competitive. We extend the study to the online-time model where each job is associated with a release time. We show that for two machines, no deterministic online algorithm is better than $(1+b_{\max})$-competitive, implying that the problem is more difficult in the online-time model than in the online-list model. We also show that LS is $(1+b_{\max})^{2(1-\frac{1}{m})}$-competitive, meaning that it is optimal when $m = 2$.
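
A small simulation sketch of List Scheduling (LS) under simple linear deterioration; the deteriorating rates and the positive common start time below are made-up assumptions (with $p_j = b_j s_j$, a job started at time $s_j$ completes at $s_j(1+b_j)$):

```python
# List Scheduling (LS) with simple linear deterioration p_j = b_j * s_j.
# Jobs are taken in list order; each goes to the machine that is free earliest.
import heapq

def ls_makespan(deterioration_rates, m, start_time=1.0):
    # Machines start at a positive time, otherwise p_j = b_j * 0 = 0 for every job.
    machines = [start_time] * m                 # current completion time per machine
    heapq.heapify(machines)
    for b in deterioration_rates:
        s = heapq.heappop(machines)             # earliest available machine
        heapq.heappush(machines, s * (1 + b))   # job occupies [s, s * (1 + b)]
    return max(machines)

rates = [0.5, 0.2, 0.9, 0.1, 0.4, 0.7]          # made-up deteriorating rates b_j
print(ls_makespan(rates, m=2))
```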

Sheng Yu, Jude-Thaddeus Ojiaku, Prudence W. H. Wong, Yinfeng Xu
A Surprisingly Simple Way of Reversing Trace Distance via Entanglement

Trace distance (between two quantum states) can be viewed as a quantum generalization of statistical difference (between two probability distributions). On input a pair of quantum states (represented by quantum circuits), how can we construct another pair such that their trace distance is large (resp. small) if the original trace distance is small (resp. large)? That is, how can we reverse trace distance? This problem originally arose in the study of statistical zero-knowledge quantum interactive proofs. We discover a surprisingly simple way to do this job. In particular, our construction has two interesting features: first, entanglement plays a key role underlying our construction; second, strictly speaking, our construction is non-black-box.

Jun Yan
Constructions for Binary Codes Correcting Asymmetric Errors from Function Fields

Binary asymmetric error-correcting codes play an important role in communication systems modeled by the binary asymmetric channel. In this paper, we study binary asymmetric error-correcting codes and give a general construction for binary asymmetric error-correcting codes. The construction makes an improvement on the lower bounds for binary asymmetric error-correcting codes in [17].

Jun Zhang, Fang-Wei Fu
Stopping Set Distributions of Algebraic Geometry Codes from Elliptic Curves

The stopping sets and stopping set distribution of a binary linear code play an important role in the iterative decoding of the linear code over a binary erasure channel. In this paper, we study stopping sets and stopping distributions of some residue algebraic geometry (AG) codes. For the simplest AG code, i.e., generalized Reed-Solomon code, it is easy to determine all the stopping sets. Then we consider AG codes from elliptic curves. We use the group structure of rational points of elliptic curves to present a complete characterization of stopping sets. Then the stopping sets, the stopping set distribution and the stopping distance of the AG code from an elliptic curve are reduced to the search, computing and decision versions of the subset sum problem in the group of rational points of the elliptic curve, respectively.

Jun Zhang, Fang-Wei Fu, Daqing Wan
Energy-Efficient Network Routing with Discrete Cost Functions

Energy consumption is an important issue in the design and use of networks. In this paper, we explore energy savings in networks via a rate adaptation model. This model can be represented by a cost-minimization network routing problem with discrete cost functions. We formulate this problem as an integer program, which is proved to be NP-hard. Then a constant approximation algorithm is developed. In our proposed method, we first transform the program into a continuous-cost network routing problem, and then we approximate the optimal solution by a two-step rounding process. We show by analysis that, for uniform demands, our method provides a constant approximation for the uniform network routing problem with discrete costs. A bicriteria network routing problem is also developed so that a trade-off can be made between energy consumption and network delay. Analytical results for this latter model are also presented.

Lin Wang, Antonio Fernández Anta, Fa Zhang, Chenying Hou, Zhiyong Liu
An Algorithmic View on Multi-Related-Segments: A Unifying Model for Approximate Common Interval

A set of genes that are proximately located on multiple chromosomes often implies their origin from the same ancestral genomic segment or their involvement in the same biological process. Among the numerous studies devoted to modeling and inferring these gene sets, the recently introduced approximate common interval (ACI) models capture gene loss events in addition to the gene insertion, duplication and inversion events already incorporated by earlier models. However, the computational tractability of the corresponding problems remains open in most cases. In this contribution, we propose an algorithmic study of a unifying model for ACI, namely Multi-related-segments, and demonstrate that capturing gene losses induces intractability in many cases.

Xiao Yang, Florian Sikora, Guillaume Blin, Sylvie Hamel, Romeo Rizzi, Srinivas Aluru
The Worst Case Behavior of Randomized Gossip

This paper considers the quasi-random rumor spreading model introduced by Doerr, Friedrich, and Sauerwald in [SODA 2008], hereafter referred to as the list-based model. Each node is provided with a cyclic list of all its neighbors, chooses a random position in its list, and from then on calls its neighbors in the order of the list. This model is known to perform asymptotically at least as well as the random phone-call model, for many network classes. Motivated by potential applications of the list-based model to live streaming, we are interested in its worst case behavior.

Our first main result is the design of an $O(m + n \log n)$-time algorithm that, given any $n$-node $m$-edge network $G$, and any source-target pair $s, t \in V(G)$, computes the maximum number of rounds it may take for a rumor to be broadcast from $s$ to $t$ in $G$, in the list-based model. This algorithm yields an $O(n(m + n \log n))$-time algorithm that, given any network $G$, computes the maximum number of rounds it may take for a rumor to be broadcast from any source to any target, in the list-based model. Hence, the list-based model is computationally easy to tackle in its basic version.

The situation is radically different when one considers variants of the model in which nodes are aware of the status of their neighbors, i.e., are aware of whether or not they have already received the rumor, at any point in time. Indeed, our second main result states that, unless $\mbox{P}=\mbox{NP}$, the worst case behavior of the list-based model with the additional feature that every node is perpetually aware of which of its neighbors have already received the rumor cannot be approximated in polynomial time within a $(\frac{1}{n})^{\frac{1}{2}-\epsilon}$ multiplicative factor, for any $\epsilon > 0$. As a byproduct of this latter result, we can show that, unless $\mbox{P}=\mbox{NP}$, there is no PTAS for approximating the worst case behavior of the list-based model whenever every node perpetually keeps track of the subset of its neighbors which have sent the rumor to it so far.
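
A tiny simulation of the list-based model itself (illustrative only; this is not the worst-case algorithm of the paper, and the example graph and seed are made up):

```python
# One run of list-based rumor spreading: every informed node has a cyclic list
# of its neighbors, starts at a random position, and calls one neighbor per round.
import random

def rounds_to_reach(adj, s, t, rng=random.Random(0)):
    # Each node picks a random starting position in its cyclic neighbor list.
    pos = {v: rng.randrange(len(nbrs)) for v, nbrs in adj.items()}
    informed = {s}
    for r in range(1, 10 * len(adj) ** 2):
        for v in list(informed):                     # nodes informed before this round
            nbrs = adj[v]
            informed.add(nbrs[pos[v] % len(nbrs)])   # call the next neighbor on the list
            pos[v] += 1
        if t in informed:
            return r
    return None

ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}   # a 6-node cycle (illustrative)
print(rounds_to_reach(ring, s=0, t=3))
```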

H. Baumann, P. Fraigniaud, H. A. Harutyunyan, R. de Verclos
Holographic Algorithms on Domain Size k > 2

An essential problem in the design of holographic algorithms is to decide whether the required signatures can be realized by matchgates under a suitable basis. For domain size two, [1,3] characterized all functions directly realizable as matchgate signatures without a basis transformation, and [7] gave a polynomial time algorithm for the realizability problem for symmetric signatures under basis transformations. We generalize this to arbitrary domain size $k$. Specifically, we give a polynomial time algorithm for the realizability problem on domain size $k \geq 3$. Using this, one can decide whether suitable signatures for a holographic algorithm on domain size $k$ are realizable and, if so, find a suitable linear basis to realize these signatures by an efficient algorithm.

Zhiguo Fu, Jin-Yi Cai
A Refined Exact Algorithm for Edge Dominating Set

We present an $O^*(1.3160^n)$-time algorithm for the edge dominating set problem in an $n$-vertex graph, which improves previous exact algorithms for this problem. The algorithm is analyzed by using the “Measure and Conquer method.” We design new branching rules based on conceptually simple local structures, called “clique-producing vertices/cycles,” which significantly simplify the algorithm and its running time analysis, attaining an improved time bound at the same time.

Mingyu Xiao, Hiroshi Nagamochi
Finite Automata over Structures
(Extended Abstract)

We introduce a finite automata model for performing computations over an arbitrary structure $\mathcal{S}$. The automaton processes sequences of elements in $\mathcal{S}$. While processing the sequence, the automaton tests atomic relations, performs atomic operations of the structure $\mathcal{S}$, and makes state transitions. In this setting, we study several problems such as closure properties, the validation problem and emptiness problems. We investigate the dependence of deciding these problems on the underlying structures and the number of registers of our model of automata. Our investigation demonstrates that some of these properties are related to the existential first order fragments of the underlying structures.

Aniruddh Gandhi, Bakhadyr Khoussainov, Jiamou Liu
Deterministic Distributed Data Aggregation under the SINR Model

Given a set of nodes $\mathcal{V}$, where each node has some data value, the goal of data aggregation is to compute some aggregate function in the fewest timeslots possible. Aggregate functions compute the aggregated value from the data of all nodes; common examples include maximum or average. We assume the realistic physical (SINR) interference model and no knowledge of the network structure or the number of neighbors of any node; our model also uses physical carrier sensing. We present a distributed protocol to compute an aggregate function in $O(D + \Delta \log n)$ timeslots, where $D$ is the diameter of the network, $\Delta$ is the maximum number of neighbors within a given radius and $n$ is the total number of nodes. Our protocol contributes an exponential improvement in running time compared to that in [18].

Nathaniel Hobbs, Yuexuan Wang, Qiang-Sheng Hua, Dongxiao Yu, Francis C. M. Lau
Tensor Rank and Strong Quantum Nondeterminism in Multiparty Communication

In this paper we study quantum nondeterminism in multiparty communication. There are three (possibly) different types of nondeterminism in quantum computation: i) strong, ii) weak with classical proofs, and iii) weak with quantum proofs. Here we focus on the first one. A strong quantum nondeterministic protocol accepts a correct input with positive probability, and rejects an incorrect input with probability 1. In this work we relate strong quantum nondeterministic multiparty communication complexity to the rank of the communication tensor in the Number-On-Forehead and Number-In-Hand models. In particular, by extending the definition proposed by de Wolf to nondeterministic tensor-rank ($\mathrm{nrank}$), we show that for any boolean function $f$, 1) in the Number-On-Forehead model, the cost is upper-bounded by the logarithm of $\mathrm{nrank}(f)$; 2) in the Number-In-Hand model, the cost is lower-bounded by the logarithm of $\mathrm{nrank}(f)$. This naturally generalizes previous results in the field and relates for the first time the concept of (high-order) tensor rank to quantum communication. Furthermore, we show that strong quantum nondeterminism can be exponentially stronger than classical multiparty nondeterministic communication. We do so by applying our results to the matrix multiplication problem.

Marcos Villagra, Masaki Nakanishi, Shigeru Yamashita, Yasuhiko Nakashima
Speed Scaling Problems with Memory/Cache Consideration

We study speed scaling problems with memory/cache consideration. Each job needs some time for its memory operation when it is fetched from the memory/cache. Two models are investigated, the non-cache model and the with-cache model. The objective is to minimize the energy consumption while satisfying the time constraints of the jobs. The non-cache model is a variant of the ideal model where each job $i$ needs a fixed $c_i$ time for its memory operation. The with-cache model further considers the case where a cache (a memory device with much faster access time but limited space) is provided. The uniform with-cache model is the special case in which all $c_i$ values are the same. We prove that the optimal solution of the non-cache model can be computed in polynomial time. For the with-cache model, we show that it is NP-complete to compute the optimal solution. For aligned jobs (where later released jobs do not have earlier deadlines) in the uniform with-cache model, we derive an $O(n^4)$ time algorithm to compute the optimal schedule. For general jobs in the with-cache model with resource augmentation, where the memory operation time speeds up by at most $s$ times, we propose a $(2\alpha \frac{s}{s-1})^{\alpha}/2$-approximation algorithm.

Weiwei Wu, Minming Li, He Huang, Enhong Chen
On the Amount of Nonconstructivity in Learning Formal Languages from Positive Data

Nonconstructive computations by various types of machines and automata have been considered by, e.g., Karp and Lipton [18] and Freivalds [9, 10]. They allow one to regard more complicated algorithms from the viewpoint of more primitive computational devices. The amount of nonconstructivity is a quantitative characterization of the distance between types of computational devices with respect to solving a specific problem.

This paper studies the amount of nonconstructivity needed to learn classes of formal languages from positive data. Different learning types are compared with respect to the amount of nonconstructivity needed to learn indexable classes and recursively enumerable classes, respectively, of formal languages from positive data. Matching upper and lower bounds for the amount of nonconstructivity needed are shown.

Sanjay Jain, Frank Stephan, Thomas Zeugmann
Computing in the Fractal Cloud: Modular Generic Solvers for SAT and Q-SAT Variants

Abstract geometrical computation can solve hard combinatorial problems efficiently: we showed previously how Q-SAT —the satisfiability problem of quantified boolean formulae— can be solved in bounded space and time using instance-specific signal machines and fractal parallelization. In this article, we propose an approach for constructing a particular generic machine for the same task. This machine deploys the Map/Reduce paradigm over a discrete fractal structure. Moreover, our approach is modular: the machine is constructed by combining modules. In this manner, we can easily create generic machines for solving satisfiability variants, such as SAT, #SAT, and MAX-SAT.

Denys Duchier, Jérôme Durand-Lose, Maxime Senot
Online Optimization of Busy Time on Parallel Machines
(Extended Abstract)

We consider the following online scheduling problem, in which the input consists of $n$ jobs to be scheduled on identical machines of bounded capacity $g$ (the maximum number of jobs that can be processed simultaneously on a single machine). Each job is associated with a release time and a completion time between which it is supposed to be processed. When a job is released, the online algorithm has to make a decision without changing it afterwards. We consider two versions of the problem. In the minimization version, the goal is to minimize the total busy time of the machines used to schedule all jobs. In the resource allocation maximization version, the goal is to maximize the number of jobs that are scheduled under a budget constraint given in terms of busy time. This is the first study of online algorithms for these problems. We show a rather large lower bound on the competitive ratio for general instances. This motivates us to consider special families of input instances, for which we show constant competitive algorithms. Our study has applications in power-aware scheduling, cloud computing and optimizing the switching costs of optical networks.

Mordechai Shalom, Ariella Voloshin, Prudence W. H. Wong, Fencol C. C. Yung, Shmuel Zaks
Bisection (Band)Width of Product Networks with Application to Data Centers

The bisection width of interconnection networks has always been important in parallel computing, since it bounds the amount of information that can be moved from one side of a network to the other, i.e., the bisection bandwidth. The problem of finding the exact bisection width of the multidimensional torus was posed by Leighton and has remained open for 20 years. In this paper we provide the exact value of the bisection width of the torus, as well as of several $d$-dimensional classical parallel topologies that can be obtained by application of the Cartesian product of graphs. To do so, we first provide two general results that allow one to obtain upper and lower bounds on the bisection width of a product graph as a function of some properties of its factor graphs. We also apply these results to obtain bounds for the bisection bandwidth of a $d$-dimensional BCube network, a recently proposed topology for data centers.

Jordi Arjona Aroca, Antonio Fernández Anta
Implicit Computation of Maximum Bipartite Matchings by Sublinear Functional Operations

The maximum bipartite matching problem, an important problem in combinatorial optimization, has been studied for a long time. In order to solve problems for very large structured graphs in reasonable time and space, implicit algorithms have been investigated. Any object to be manipulated is binary encoded, and problems have to be solved mainly by functional operations on the corresponding Boolean functions. OBDDs are a popular data structure for Boolean functions; therefore, OBDD-based algorithms have been used as a heuristic approach to handle large input graphs. Here, two OBDD-based maximum bipartite matching algorithms are presented, which are the first ones using only a sublinear number of operations (with respect to the number of vertices of the input graph) for a problem not known to be in NC, the complexity class that contains all problems computable in deterministic polylogarithmic time with polynomially many processors. Furthermore, the algorithms are experimentally evaluated.

Beate Bollig, Marc Gillé, Tobias Pröger
A Game-Theoretic Approach for Balancing the Tradeoffs between Data Availability and Query Delay in Multi-hop Cellular Networks

In this paper, the selfish caching problem in multi-hop cellular networks (MCNs) is formulated as a non-cooperative game: the data caching game. To balance the tradeoffs between data availability and query delay in MCNs, an incentive mechanism based upon a payment model is set up for the data caching game. The data caching game is proved to be a potential game. Thus, for this game, pure Nash equilibria can be obtained and the best response dynamics converge to a pure Nash equilibrium. Moreover, by properly setting the payoff distribution rule, caching proxies have an incentive to cache, or not to cache, the same data to some extent. Thereby, the performance of data availability and query delay can be tuned in an adjustable way, which results in the desirable service performance that the service provider intends to achieve.

Jin Li, Weiyi Liu, Kun Yue
Proving Liveness Property under Strengthened Compassion Requirements

Deductive rules are useful for proving properties with fairness constraints and there have been many studies on such rules with justice and compassion constraints. This paper focuses on system specifications with strengthened compassion that impose constraints on transitions involving states and their successors. A deductive rule for proving liveness properties under strengthened compassion is presented, and proofs of the soundness and the relative completeness of the rule are also presented.

Teng Long, Wenhui Zhang
Realizing Monads in Interaction Nets via Generic Typed Rules

Interaction net systems are a model of computation based on graph rewriting. We extend interaction rules with generic rules, thus adding a form of higher-order functions. In addition, we propose a simple type system in order to appropriately restrict the matching of generic rules. Finally, we show how the combination of these features, i.e., generic typed rules, can be used to model impure functions in interaction nets via monads in an intuitive and simple manner.

Eugen Jiresch, Bernhard Gramlich
Towards an Axiomatization of Simple Analog Algorithms

We propose a formalization of analog algorithms, extending the framework of abstract state machines to continuous-time models of computation.

Olivier Bournez, Nachum Dershowitz, Evgenia Falkovich
Multiple Usage of Random Bits in Finite Automata

Finite automata with random bits written on a separate 2-way readable tape can recognize languages not recognizable by probabilistic finite automata. This shows that repeated reading of random bits by finite automata can have big advantages over one-time reading of random bits.

Rūsiņš Freivalds
Minimum Certificate Dispersal with Tree Structures

Given an $n$-vertex graph $G = (V, E)$ and a set $R \subseteq \{\{x, y\} \mid x, y \in V\}$ of requests, we consider assigning a set of edges to each vertex in $G$ so that, for every request $\{u, v\}$ in $R$, the union of the edge sets assigned to $u$ and $v$ contains a path from $u$ to $v$. The Minimum Certificate Dispersal Problem (MCD) is defined as the problem of finding an assignment that minimizes the sum of the cardinalities of the edge sets assigned to the vertices; it is originally motivated by the design of secure communications in distributed computing. This problem has been shown to be LOGAPX-hard for general directed topologies of $G$ and $R$. In this paper, we consider the complexity of MCD for more practical topologies of $G$ and $R$, that is, when $G$ or $R$ forms an (undirected) tree; tree structures are frequently adopted to construct efficient communication networks. We first show that MCD is still APX-hard when $R$ is a tree, even a star. We then explore the problem from the viewpoint of the maximum degree $\Delta$ of the tree: MCD for a tree request set with constant $\Delta$ is solvable in polynomial time, while with $\Delta = \Omega(n)$ it is 2.78-approximable in polynomial time but hard to approximate within 1.01 unless P=NP. As for the structure of $G$ itself, we show that if $G$ is a tree, the problem can be solved in $O(n^{1+\varepsilon}|R|)$ time, where $\varepsilon$ is an arbitrarily small positive constant.
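
A small checker for the object being optimized (a hedged sketch; the graph, the request and the assignment below are made up): it verifies that an assignment of edge sets to vertices serves every request and reports its total cost.

```python
# Check an MCD assignment: for each request {u, v}, the union of the edge sets
# assigned to u and v must contain a path from u to v. Cost = sum of set sizes.
from collections import defaultdict, deque

def serves(assign, requests):
    cost = sum(len(es) for es in assign.values())
    for u, v in requests:
        edges = assign.get(u, set()) | assign.get(v, set())
        adj = defaultdict(list)
        for a, b in edges:                       # undirected edges
            adj[a].append(b)
            adj[b].append(a)
        seen, queue = {u}, deque([u])
        while queue:                             # BFS restricted to the union
            x = queue.popleft()
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    queue.append(y)
        if v not in seen:
            return False, cost
    return True, cost

# Path graph 1-2-3-4 (made-up); request {1, 4} served by splitting the path.
assign = {1: {(1, 2), (2, 3)}, 4: {(3, 4)}}
print(serves(assign, [(1, 4)]))    # (True, 3)
```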

Taisuke Izumi, Tomoko Izumi, Hirotaka Ono, Koichi Wada
Improved FPT Algorithms for Rectilinear k-Links Spanning Path

Given $n$ points in $\mathbb{R}^d$ and a positive integer $k$, the Rectilinear $k$-Links Spanning Path problem is to find a piecewise linear path through these $n$ points having at most $k$ line segments (links), where these line segments are axis-parallel. This problem is known to be NP-complete when $d \geq 3$; we first prove that it is also NP-complete in 2 dimensions. Under the assumption that one line segment in the spanning path covers all the points on the same line, we propose a new FPT algorithm with running time $O(d^{k+1} 2^k k^2 + d^k n)$, which greatly improves the previous best result and is the first FPT algorithm that runs in $O^*(2^{O(k)})$ time. When $d = 2$, we further improve this result to $O(3.24^k k^2 + 1.62^k n)$. For the Rectilinear $k$-Bends TSP problem, the NP-completeness proof in 2 dimensions and FPT algorithms are also given.

Jianxin Wang, Jinyi Yao, Qilong Feng, Jianer Chen
FPT Results for Signed Domination

A function $f : V \rightarrow \{-1, +1\}$ defined on the vertices of a graph $G$ is a signed dominating function if the sum of its function values over any closed neighborhood is at least one. The weight of a signed dominating function is $f(V) = \sum f(v)$, over all vertices $v \in V$. The signed domination number of a graph $G$, denoted by $\gamma_s(G)$, equals the minimum weight of a signed dominating function of $G$. The decision problem corresponding to the problem of computing $\gamma_s$ is an important NP-complete problem derived from social networks. A signed dominating set is the set of vertices assigned the value +1 under the function $f$ in the graph. In this paper, we give some fixed-parameter tractable results for the signed dominating set problem, specifically kernels for the signed dominating set problem on general and special graphs. These results generalize the parameterized algorithm for this problem. Furthermore, we propose a parameterized algorithm for the signed dominating set problem on planar graphs.
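
A short checker for the definition (the 4-cycle and the two assignments below are made-up examples): it verifies that every closed-neighborhood sum is at least one and returns the weight $f(V)$.

```python
# Check whether f: V -> {-1, +1} is a signed dominating function of a graph,
# i.e. the sum of f over every closed neighborhood N[v] is >= 1; also return f(V).
def check_signed_dominating(adj, f):
    ok = all(f[v] + sum(f[u] for u in adj[v]) >= 1 for v in adj)
    return ok, sum(f.values())

# A 4-cycle 0-1-2-3 (made-up example).
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
f_all_plus = {v: +1 for v in adj}
f_mixed = {0: +1, 1: +1, 2: +1, 3: -1}
print(check_signed_dominating(adj, f_all_plus))  # (True, 4)
print(check_signed_dominating(adj, f_mixed))     # (True, 2): weight 2 also works here
```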

Ying Zheng, Jianxin Wang, Qilong Feng, Jianer Chen
Submodular Minimization via Pathwidth

In this paper, we present a submodular minimization algorithm based on a new relationship between minimizers of a submodular set function and pathwidth defined on submodular set functions. Given a submodular set function $f$ on a finite set $V$ with $n \geq 2$ elements and an ordered pair $s, t \in V$, let $\lambda_{s,t}$ denote the minimum of $f(X)$ over all sets $X$ with $s \in X \subseteq V - \{t\}$. The pathwidth $\Lambda(\sigma)$ of a sequence $\sigma$ of all $n$ elements in $V$ is defined to be the maximum of $f(V(\sigma'))$ over all nonempty and proper prefixes $\sigma'$ of $\sigma$, where $V(\sigma')$ denotes the set of elements occurring in $\sigma'$. The pathwidth $\Lambda_{s,t}$ of $f$ from $s$ to $t$ is defined to be the minimum pathwidth $\Lambda(\sigma)$ over all sequences $\sigma$ of $V$ which start with element $s$ and end up with $t$. Given a real $k \geq f(\{s\})$, our algorithm checks whether $\Lambda_{s,t} \leq k$ or not, and computes $\lambda_{s,t}$ (when $\Lambda_{s,t} \leq k$) in $O(n^{\Delta(k)+1})$ oracle time, where $\Delta(k)$ is the number of distinct values of $f(X)$ with $f(X) \leq k$ over all sets $X$ with $s \in X \subseteq V - \{t\}$.
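
A brute-force illustration of $\lambda_{s,t}$ and $\Lambda_{s,t}$ on a toy submodular function (the cut function of a small made-up graph); this is only meant to make the definitions concrete, not to reflect the paper's algorithm.

```python
# Brute-force lambda_{s,t} and Lambda_{s,t} for a toy submodular function:
# f(X) = number of edges of a small graph crossing the cut (X, V - X).
from itertools import combinations, permutations

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]          # made-up graph on V = {0,1,2,3}
V = {0, 1, 2, 3}
f = lambda X: sum((a in X) != (b in X) for a, b in edges)  # cut function (submodular)

def lam(s, t):      # lambda_{s,t} = min f(X) over s in X, X subset of V - {t}
    rest = V - {s, t}
    return min(f({s} | set(c)) for r in range(len(rest) + 1)
               for c in combinations(rest, r))

def Lam(s, t):      # Lambda_{s,t} = min over orderings s ... t of the max prefix value
    mid = V - {s, t}
    return min(max(f(set((s,) + p[:i])) for i in range(len(p) + 1))
               for p in permutations(mid))

print(lam(0, 3), Lam(0, 3))   # -> 2 3
```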

Hiroshi Nagamochi
A Detailed Study of the Dominating Cliques Phase Transition in Random Graphs

A subset of nodes $S \subseteq V$ of a graph $G = (V, E)$ is a dominating clique if $S$ is a dominating set and a clique of $G$. The phase transition of dominating cliques in the Erdős-Rényi random graph model is investigated in this paper. Lower and upper bounds on the edge probability $p$ for the existence of an $r$-node dominating clique are established. We prove that, given an $n$-node random graph $G$ from this model, for $r = c \log_{1/p} n$ with $1 \leq c \leq 2$ it holds that: (1) if $p > 1/2$ then an $r$-clique is dominating in $G$ with high probability, and (2) if $p \leq (3 - \sqrt{5})/2$ then an $r$-clique is not dominating in $G$ with high probability. The remaining range of the probability $p$ is discussed with more attention. Within such a range, we provide intervals of $r$ where the dominating clique existence probability is zero, positive but less than one, and one, respectively.
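
A quick empirical sketch of the object involved (illustrative only; the parameters and candidate set are made up): sample an Erdős-Rényi graph and test whether a given vertex subset is a dominating clique.

```python
# Sample G(n, p) and check whether a vertex subset S is a dominating clique:
# S must induce a clique and every vertex outside S must have a neighbor in S.
import random
from itertools import combinations

def gnp(n, p, rng):
    return {frozenset(e) for e in combinations(range(n), 2) if rng.random() < p}

def is_dominating_clique(n, edges, S):
    clique = all(frozenset((u, v)) in edges for u, v in combinations(S, 2))
    dominating = all(any(frozenset((u, v)) in edges for u in S)
                     for v in range(n) if v not in S)
    return clique and dominating

rng = random.Random(1)
n, p = 30, 0.6                        # made-up parameters
edges = gnp(n, p, rng)
S = [0, 1, 2, 3]                      # made-up candidate subset
print(is_dominating_clique(n, edges, S))
```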

Martin Nehéz, Daniel Olejár, Michal Demetrian
An Application of 1-Genericity in the $\Pi^0_2$ Enumeration Degrees

Using results from the local structure of the enumeration degrees we show the existence of prime ideals of $\Pi^0_2$ enumeration degrees. We begin by showing that there exists a 1-generic enumeration degree which is noncuppable—and so properly downwards $\Sigma^0_2$—and low$_2$. The notion of enumeration 1-genericity appropriate to positive reducibilities is introduced, and a set $A$ is defined to be symmetric enumeration 1-generic if both $A$ and $\overline{A}$ are enumeration 1-generic. We show that, if a set is 1-generic then it is symmetric enumeration 1-generic, and we prove that for any enumeration 1-generic set $B$ the class $\{X \mid X \leq_{\mathrm{e}} B\}$ is uniform. Thus, picking a 1-generic set (from above) and defining the corresponding ideal, it follows that the ideal only contains $\Pi^0_2$ sets. Since the associated degree is properly $\Sigma^0_2$, we deduce that the ideal contains no $\Delta^0_2$ sets and so is itself properly $\Pi^0_2$.

Liliana Badillo, Charles M. Harris
Backmatter
Metadata
Title
Theory and Applications of Models of Computation
Edited by
Manindra Agrawal
S. Barry Cooper
Angsheng Li
Copyright Year
2012
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-29952-0
Print ISBN
978-3-642-29951-3
DOI
https://doi.org/10.1007/978-3-642-29952-0