
About this Book

This book constitutes the refereed proceedings of the 10th Conference on Computability in Europe, CiE 2014, held in Budapest, Hungary, in June 2014. The 42 revised papers presented were carefully reviewed and selected from 78 submissions; they are included in this volume together with 15 invited papers. The conference had six special sessions: computational linguistics, bio-inspired computation, history and philosophy of computing, computability theory, online algorithms, and complexity in automata theory.



Computability and Categoricity of Ultrahomogeneous Structures

This paper investigates the effective categoricity of ultrahomogeneous structures. It is shown that any computable ultrahomogeneous structure is ${\Delta^0_2}$ categorical. A structure ${\mathcal A}$ is said to be weakly ultrahomogeneous if there is a finite (exceptional) set of elements such that ${\mathcal A}$ becomes ultrahomogeneous when constants representing these elements are added to the language. Characterizations are obtained for the weakly ultrahomogeneous linear orderings, equivalence structures, and injection structures, and compared with characterizations of the computably categorical and ${\Delta^0_2}$ categorical structures.

Francis Adams, Douglas Cenzer

Parameterized Inapproximability of Target Set Selection and Generalizations

In this paper, we consider the Target Set Selection problem: given a graph and a threshold value ${\rm thr}(v)$ for each vertex $v$ of the graph, find a minimum-size vertex subset to "activate" such that all the vertices of the graph are activated at the end of the propagation process. A vertex $v$ is activated during the propagation process if at least ${\rm thr}(v)$ of its neighbors are activated. This problem models several practical issues such as faults in distributed networks or word-of-mouth recommendations in social networks. We show that for any functions $f$ and $\rho$ this problem cannot be approximated within a factor of $\rho(k)$ in $f(k) \cdot n^{O(1)}$ time, unless ${\rm FPT} = {\rm W[P]}$, even for restricted thresholds (namely constant and majority thresholds). We also study the cardinality-constrained maximization and minimization versions of the problem, for which we prove similar hardness results.
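The propagation process described above is easy to state in code. A minimal sketch follows; the names (`propagate`, `graph`, `thresholds`) and the example instance are ours, chosen for illustration only:

```python
# Sketch of the activation propagation in Target Set Selection.

def propagate(graph, thresholds, seed):
    """Activate `seed`; repeatedly activate any vertex whose number of active
    neighbors reaches its threshold; return the final set of active vertices."""
    active = set(seed)
    changed = True
    while changed:
        changed = False
        for v, nbrs in graph.items():
            if v not in active and sum(n in active for n in nbrs) >= thresholds[v]:
                active.add(v)
                changed = True
    return active

# A path a-b-c-d where b needs two active neighbors: the seed {a} gets stuck,
# while the seed {a, c} activates the whole graph.
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
thresholds = {"a": 1, "b": 2, "c": 1, "d": 1}
stuck = propagate(graph, thresholds, {"a"})        # only {"a"}
full = propagate(graph, thresholds, {"a", "c"})    # all four vertices
```

The example also shows why minimising the seed set is the interesting question: the same graph needs two seeds under these thresholds but only one if every threshold is 1.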

Cristina Bazgan, Morgan Chopin, André Nichterlein, Florian Sikora

How can Grammatical Inference Contribute to Computational Linguistics?

Grammatical Inference refers to the process of learning grammars and languages from data. Although there are clear connections between Grammatical Inference and Computational Linguistics, there has been little interaction between these two fields. The goals of this article are: i) to introduce Grammatical Inference to computational linguists; ii) to explore how Grammatical Inference can contribute to Computational Linguistics.

Leonor Becerra-Bonache

Algorithms and Their Explanations

By analysing the explanation of the classical heapsort algorithm via the method of levels of abstraction, due mainly to Floridi, we give a concrete and precise example of how to deal with algorithmic knowledge. To do so, we introduce a concept already implicit in the method, the 'gradient of explanations'. Analogously to the gradient of abstractions, a gradient of explanations is a sequence of discrete levels of explanation, each one refining the previous by varying the formalisation, and thus providing progressive evidence for hidden information. Because of this sequential and coherent uncovering of the information that explains a level of abstraction (the heapsort algorithm in our guiding example), the notion of a gradient of explanations allows us to classify precisely the purposes in writing software according to the informal criterion of 'depth', and to give a precise meaning to the notion of 'concreteness'.
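For reference, the guiding example itself, the classical in-place heapsort, can be stated compactly. This is the textbook algorithm, not the authors' formalisation of its explanation:

```python
# Plain in-place heapsort: build a max-heap, then repeatedly swap the maximum
# to the end of the unsorted prefix and restore the heap property.

def heapsort(a):
    def sift_down(start, end):
        root = start
        while 2 * root + 1 <= end:
            child = 2 * root + 1                      # left child
            if child + 1 <= end and a[child] < a[child + 1]:
                child += 1                            # pick the larger child
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return
    n = len(a)
    for start in range(n // 2 - 1, -1, -1):           # heapify
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):                   # extract maxima
        a[0], a[end] = a[end], a[0]
        sift_down(0, end - 1)
    return a
```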

Marco Benini, Federico Gobbo

Gene Tree Correction by Leaf Removal and Modification: Tractability and Approximability

The reconciliation of a gene tree and a species tree is a well-known method for understanding the evolution of a gene family, identifying which evolutionary events (speciations, duplications and losses) occurred during gene evolution. Since reconciliation is usually affected by errors in the gene trees, these have to be preprocessed before the reconciliation process. One method to tackle this problem aims to correct a gene tree by removing the minimum number of leaves (Minimum Leaf Removal). In this paper we show that Minimum Leaf Removal is not approximable within factor $2^{\log^{\delta} m}$, where $m$ is the number of leaves of the species tree and $\delta > 0$ is a constant. Furthermore, we introduce a new variant of the problem, where the goal is the correction of a gene tree with the minimum number of leaf modifications. We show that this problem, unlike the removal version, is ${\rm W[1]}$-hard when parameterized by the number of leaf modifications.

Stefano Beretta, Riccardo Dondi

Uniform Schemata for Proof Rules

Motivated by the desire to facilitate the implementation of interactive proof systems with rich sets of proof rules, we present a uniform system of rule schemata to generate proof rules for different styles of logical calculi. The system requires only one schema for each logical operator to generate introduction and elimination rules in natural deduction and sequent calculus style. In addition, the system supports program extraction from proofs by generating realizers for the proof rules automatically.

Ulrich Berger, Tie Hou

Graph Polynomials Motivated by Gene Rearrangements in Ciliates

Gene rearrangements within the process of gene assembly in ciliates can be represented using a 4-regular graph. Based on this observation, Burns et al. [Discrete Appl. Math., 2013] propose a graph polynomial abstracting basic features of the assembly process, like the number of segments excised. We show that this assembly polynomial is essentially (i) a single-variable case of the transition polynomial by Jaeger and (ii) a special case of the bracket polynomial introduced for simple graphs by Traldi and Zulli.

Robert Brijder, Hendrik Jan Hoogeboom

On the Equivalence of Automata for KAT-expressions

Kleene algebra with tests (KAT) is a decidable equational system for program verification that uses both Kleene and Boolean algebras. In spite of its elegance and success in providing theoretical solutions for several problems, not many efforts have been made towards obtaining tractable decision procedures that could be used in practical software verification tools. The main drawback of the existing methods lies in the explicit use of all possible assignments to boolean variables. Recently, Silva introduced an automata model that extends Glushkov's construction for regular expressions. Broda et al. also extended Mirkin's equation automata to KAT expressions and studied the state complexity of both algorithms. Contrary to other automata constructions from KAT expressions, these two constructions enjoy the same descriptional complexity behaviour as their counterparts for regular expressions, both in the worst case and in the average case. In this paper, we generalize, for these automata, the classical methods of subset construction for nondeterministic finite automata and the Hopcroft and Karp algorithm for testing deterministic finite automata equivalence. As a result, we obtain a decision procedure for KAT equivalence in which dealing with boolean expressions avoids the explicit enumeration of all possible assignments to the boolean variables. Finally, we specialize the decision procedure to test the equivalence of KAT expressions without explicitly constructing the automata, by introducing a new notion of derivative and a new method of constructing the equation automaton.
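For ordinary DFAs, the Hopcroft and Karp equivalence test that the paper generalises merges reachable state pairs with a union-find structure and rejects on the first merged pair that disagrees on acceptance. A plain-DFA sketch (the encoding and names are ours; the KAT extension is not attempted here):

```python
# Hopcroft and Karp style equivalence test for two complete DFAs.
# Each DFA is (delta, start, finals) with delta[(state, symbol)] -> state;
# the two automata are assumed to use disjoint state names.

def dfa_equivalent(dfa1, dfa2):
    (t1, s1, f1), (t2, s2, f2) = dfa1, dfa2
    alphabet = {a for (_, a) in t1} | {a for (_, a) in t2}
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    stack = [(s1, s2)]
    while stack:
        p, q = stack.pop()
        rp, rq = find(p), find(q)
        if rp == rq:
            continue
        if (p in f1) != (q in f2):          # merged states disagree: not equivalent
            return False
        parent[rp] = rq                     # merge, then explore successors
        for a in alphabet:
            stack.append((t1[(p, a)], t2[(q, a)]))
    return True

# Two automata for "even number of a's" (2 and 4 states) and one for
# "length divisible by 4".
even2 = ({("A", "a"): "B", ("B", "a"): "A"}, "A", {"A"})
even4 = ({(0, "a"): 1, (1, "a"): 2, (2, "a"): 3, (3, "a"): 0}, 0, {0, 2})
mult4 = ({(0, "a"): 1, (1, "a"): 2, (2, "a"): 3, (3, "a"): 0}, 0, {0})
```

The point of the union-find is that each pair is inspected at most once, giving the almost-linear running time that makes the method attractive as a basis for generalisation.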

Sabine Broda, António Machiavelo, Nelma Moreira, Rogério Reis

Algorithmic Randomness for Infinite Time Register Machines

A concept of randomness for infinite time register machines (ITRMs), resembling Martin-Löf randomness, is defined and studied. In particular, we show that for this notion of randomness, computability from mutually random reals implies computability, and that an analogue of van Lambalgen's theorem holds.

Merlin Carl

Constraint Logic Programming for Resolution of Relative Time Expressions

Translating time expressions into absolute time points or durations is a challenge for natural language processing tasks such as text mining and text understanding in general. We present a constraint logic language tailored to text usages concerned with time and calendar. It provides a simple and flexible formalism to express relationships between different time expressions in a text, thereby giving a recipe for resolving them into absolute time. A constraint solver is developed which, as opposed to some earlier approaches, is independent of the order in which temporal information is introduced, and which can give meaningful output even when no exact reference time is available.

Henning Christiansen

Maximal Parallelism in Membrane Systems with Generated Membrane Boundaries

In membrane systems, maximal parallelism is a useful tool for modeling real biotic/chemical interactions. Even so, there have been many attempts to relax maximal parallelism at the definition level, e.g. minimal parallelism, bounded parallelism, etc., or even at the system level, as in metabolic P systems. With the help of topological means, membrane computations and maximal parallelism can be controlled. Moreover, in natural processes, the events represented by communication rules take place in the vicinity of the membranes. The authors, motivated by natural phenomena, propose a framework in which the abstract notion of boundaries along membranes is modeled. In this paper, behaviors of communication rules restricted to these membrane boundaries are presented, in particular showing how these restrictions affect maximal parallelism.

Zoltán Ernő Csajbók, Tamás Mihálydeák

Learnability Thesis Does Not Entail Church’s Thesis

We consider the notion of intuitive learnability and its relation to intuitive computability. We briefly discuss Church's Thesis and formulate the Learnability Thesis. We then analyse the proof of Church's Thesis presented by M. Mostowski, indicating which assumptions of Mostowski's argument implicitly presuppose that Church's Thesis holds. The impossibility of this kind of argument is strengthened by showing that the Learnability Thesis does not imply Church's Thesis. Specifically, we exhibit an interpretation of intuitive computability under which the intuitively learnable sets are exactly the algorithmically learnable sets, but the intuitively computable sets form a proper superset of the recursive sets.

Marek Czarnecki, Michał Tomasz Godziszewski, Dariusz Kalociński

Phase Transitions Related to the Pigeonhole Principle

Since Jeff Paris introduced them in the late seventies [Par78], densities have turned out to be useful for studying independence results. Motivated by their simplicity and surprising strength, we investigate the combinatorial complexity of two such densities which are strongly related to the pigeonhole principle. The aim is to miniaturise Ramsey's Theorem for 1-tuples. The first principle uses an unlimited amount of colours, whereas the second has a fixed number of two colours. We show that these principles give rise to Ackermannian growth. After parameterising these statements with respect to a function $f : \mathbb{N} \to \mathbb{N}$, we investigate for which functions $f$ Ackermannian growth is still preserved.

Michiel De Smet, Andreas Weiermann

Generic Parallel Algorithms

We develop a nature-inspired generic programming language for parallel algorithms, one that works for all data structures and control structures. Any parallel algorithm satisfying intuitively-appealing postulates can be modeled by a collection of cells, each of which is an abstract state machine, augmented with the ability to spawn new cells. All cells run the same algorithm and communicate via a shared global memory.

Nachum Dershowitz, Evgenia Falkovich

Isomorphisms of Non-Standard Fields and Ash’s Conjecture

Cohesive sets play an important role in computability theory. Here we use cohesive sets to build nonstandard versions of the rationals. We use Koenigsmann’s work on Hilbert’s Tenth Problem to establish that these nonstandard fields are rigid. As a consequence we obtain results about automorphisms of the lattices of computably enumerable vector spaces arising in the context of Ash’s conjecture.

Rumen Dimitrov, Valentina Harizanov, Russell Miller, K. J. Mourad

Modeling Life as Cognitive Info-computation

This article presents a naturalist approach to cognition understood as a network of info-computational, autopoietic processes in living systems. It provides a conceptual framework for the unified view of cognition as evolved from the simplest to the most complex organisms, based on new empirical and theoretical results. It addresses three fundamental questions: what cognition is, how cognition works and what cognition does at different levels of complexity of living organisms. By explicating the info-computational character of cognition, its evolution, agent-dependency and generative mechanisms, we can better understand its life-sustaining and life-propagating role. The info-computational approach contributes to rethinking cognition as a process of natural computation in living beings that can be applied for cognitive computation in artificial systems.

Gordana Dodig-Crnkovic

Deciding the Borel Complexity of Regular Tree Languages

We show that it is decidable whether a given regular tree language belongs to the class ${\bf \Delta^0_2}$ of the Borel hierarchy, or equivalently whether the Wadge degree of a regular tree language is countable.

Alessandro Facchini, Henryk Michalewski

Chemical Production and Molecular Computing in Addressable Reaction Compartments

Biological systems employ compartmentalisation in order to orchestrate a multitude of biochemical processes by simultaneously enabling "data hiding" and modularisation. In this paper, we present recent research projects that embrace compartmentalisation as an organisational programmatic principle in synthetic biological and biomimetic systems. In these systems, artificial vesicles and synthetic minimal cells are envisioned as nanoscale reactors for programmable biochemical synthesis and as chassis for molecular information processing. We present P systems, brane calculi, and the recently developed chemtainer calculus as formal frameworks providing data hiding and modularisation, thus enabling the representation of highly complicated, hierarchically organised, compartmentalised reaction systems. We demonstrate how compartmentalisation can greatly reduce the complexity required to implement computational functionality, and how addressable compartments permit the scaling-up of programmable chemical synthesis.

Harold Fellermann, Natalio Krasnogor

Visual Modelling of Complex Systems: Towards an Abstract Machine for PORGY

PORGY is a visual modelling tool, where a system is defined by a strategic graph program. In this paper, we provide an operational semantics for strategic graph programs by means of an abstract machine. The semantics specifies the valid transformation steps, providing a link between the model and its implementation in PORGY.

Maribel Fernández, Hélène Kirchner, Ian Mackie, Bruno Pinaud

Fixed Points and Attractors of Reaction Systems

We investigate the computational complexity of deciding the occurrence of many different dynamical behaviours in reaction systems, with an emphasis on biologically relevant problems (e.g., the existence of fixed points and fixed-point attractors). We show that the decision problems of recognising these dynamical behaviours span a number of complexity classes, ranging up to ${\Pi_2^{\rm P}}$-completeness, with several intermediate problems being either ${\rm NP}$-complete or ${\rm coNP}$-complete.
Enrico Formenti, Luca Manzoni, Antonio E. Porreca

Fit-Preserving Data Refinement of Mass-Action Reaction Networks

The systematic development of large biological models can benefit from an iterative approach based on a refinement process that gradually adds more details regarding the reactants and/or reactions of the model. We focus here on data refinement, where the species of the initial model are substituted with several subspecies in the refined one, each with its own individual behavior in the model. In this context, we distinguish between structural refinement, where the aim is to generate meaningful refined reactions, and quantitative refinement, where one looks for a data fit at least as good as that of the original model. The latter generally requires refitting the model and additional experimental data, a computationally expensive process. A fit-preserving refinement, i.e. one that captures the same species dynamics as the original model, can serve as a suitable alternative or as an initialization for parameter estimation routines. We focus in this paper on the problem of finding all numerical setups that yield fit-preserving refinements of a given model and formulate a sufficient condition for it. Our result suggests a straightforward, computationally efficient automation of the quantitative model refinement process. We illustrate the use of our approach through a discussion of the Lotka-Volterra model for prey-predator dynamics.
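As a small illustration of fit-preserving data refinement, consider the Lotka-Volterra model mentioned above: splitting the prey species into two subspecies that keep the original per-capita rates reproduces the original dynamics exactly, so the refinement preserves any data fit. A minimal Euler-integration sketch, in which the rate constants, initial values, and function names are our own illustrative choices, not taken from the paper:

```python
# Mass-action Lotka-Volterra: dx/dt = k1*x - k2*x*y, dy/dt = k2*x*y - k3*y.

def simulate(k1, k2, k3, x0, y0, h=0.001, steps=5000):
    x, y = x0, y0
    for _ in range(steps):
        dx = k1 * x - k2 * x * y          # prey: birth minus predation
        dy = k2 * x * y - k3 * y          # predator: growth minus death
        x, y = x + h * dx, y + h * dy
    return x, y

def simulate_refined(k1, k2, k3, x1_0, x2_0, y0, h=0.001, steps=5000):
    """Prey split into subspecies x1 + x2, both keeping the original rates."""
    x1, x2, y = x1_0, x2_0, y0
    for _ in range(steps):
        total = x1 + x2
        dx1 = k1 * x1 - k2 * x1 * y
        dx2 = k1 * x2 - k2 * x2 * y
        dy = k2 * total * y - k3 * y
        x1, x2, y = x1 + h * dx1, x2 + h * dx2, y + h * dy
    return x1, x2, y

x, y = simulate(1.0, 0.5, 0.8, x0=2.0, y0=1.0)
x1, x2, y_r = simulate_refined(1.0, 0.5, 0.8, x1_0=1.5, x2_0=0.5, y0=1.0)
# The subspecies sum tracks the original prey variable step for step.
```

Here the refinement is fit-preserving by construction; the paper's contribution is a sufficient condition characterising all numerical setups with this property.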

Cristian Gratie, Ion Petre

On Maximal Block Functions of Computable η-like Linear Orderings

We prove the existence of a computable η-like linear ordering $\mathcal{A}$ such that, for any ${\Pi^0_2}$ function $F : \mathbb{Q} \to \mathbb{N} \setminus \{0\}$, the linear ordering $\mathcal{A}$ does not have order type $\tau = \sum \{ F(q) \mid q \in \mathbb{Q} \}$.

Charles M. Harris

Lossiness of Communication Channels Modeled by Transducers

We provide an automata-theoretic approach to analyzing an abstract channel modeled by a transducer and to characterizing its lossy rates. In particular, we look at related decision problems and show the boundaries between the decidable and undecidable cases.

Oscar H. Ibarra, Cewei Cui, Zhe Dang, Thomas R. Fischer

Predicate Characterizations in the Polynomial-Size Hierarchy


The polynomial-size hierarchy is the hierarchy of 'minicomplexity' classes which correspond to two-way alternating finite automata with polynomially many states and finitely many alternations. It is defined by analogy to the polynomial-time hierarchy of standard complexity theory, and it has recently been shown to be strict above its first level.

It is well known that, apart from their definition in terms of polynomial-time alternating Turing machines, the classes of the polynomial-time hierarchy can also be characterized in terms of polynomial-time predicates, polynomial-time oracle Turing machines, and formulas of second-order logic. It is natural to ask whether analogous alternative characterizations are possible for the polynomial-size hierarchy as well.

Here, we answer this question affirmatively for predicates. Starting with the first level of the hierarchy, we experiment with several natural ways of defining what a 'polynomial-size predicate' should be, so that existentially quantified predicates of this kind correspond to polynomial-size two-way nondeterministic finite automata. After reaching an appropriate definition, we generalize to every level of the hierarchy.

Christos A. Kapoutsis

Function Spaces for Second-Order Polynomial Time

In the context of second-order polynomial-time computability, we prove that there is no general function space construction. We proceed to identify restrictions on the domain or the codomain that do provide a function space with polynomial-time function evaluation containing all polynomial-time computable functions of that type.

As side results we show that a polynomial-time counterpart to admissibility of a representation is not a suitable criterion for natural representations, and that the Weihrauch degrees embed into the polynomial-time Weihrauch degrees.

Akitoshi Kawamura, Arno Pauly

Complexity of Operation Problems

The operation problem for several classes of automata and other language descriptors is addressed: fix an operation on formal languages; given a class of automata (or other language descriptors), is the result of applying this operation to languages of the class still a language represented by a device of that class? In particular, several aspects of complexity in connection with these problems are considered. Is the problem decidable or not? What is the computational complexity of the decision procedure, or what is its precise level in the arithmetic hierarchy? What is the blow-up of the size of the resulting device, if it exists, in terms of the sizes of the given ones? Otherwise, is there a so-called non-recursive trade-off between the representation by devices combined with the operation and the representation by just one device? We present some selected results on the computational and descriptional complexity of operation problems and draw attention to the overall picture and some of the main ideas involved.
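A concrete, entirely standard instance of such a closure question: regular languages are closed under intersection, witnessed by the product automaton, and the size blow-up is at most the product of the state counts. A minimal sketch (textbook construction, with an encoding of our own choosing):

```python
# Product construction for complete DFAs: the intersection of two regular
# languages is regular, with at most |Q1|*|Q2| states.

def product_dfa(d1, d2):
    """Each DFA is (delta, start, finals); delta maps (state, symbol) -> state."""
    (t1, s1, f1), (t2, s2, f2) = d1, d2
    alphabet = {a for (_, a) in t1} & {a for (_, a) in t2}
    delta, start = {}, (s1, s2)
    todo, seen = [start], {start}
    while todo:                              # explore only reachable pairs
        (p, q) = todo.pop()
        for a in alphabet:
            nxt = (t1[(p, a)], t2[(q, a)])
            delta[((p, q), a)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    finals = {(p, q) for (p, q) in seen if p in f1 and q in f2}
    return delta, start, finals

def accepts(dfa, word):
    delta, state, finals = dfa
    for a in word:
        state = delta[(state, a)]
    return state in finals

# "Even number of a's" intersected with "length divisible by 3".
even = ({(0, "a"): 1, (1, "a"): 0}, 0, {0})
mod3 = ({(0, "a"): 1, (1, "a"): 2, (2, "a"): 0}, 0, {0})
both = product_dfa(even, mod3)
```

For operations like complement on nondeterministic devices, by contrast, the blow-up can be exponential, which is exactly the kind of descriptional-complexity question the survey addresses.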

Martin Kutrib

A Computational Model of XACML–Based Access Control Management in Distributed Networks

In this paper, we propose a novel approach to enforcing eXtensible Access Control Markup Language (XACML) policy specifications in distributed environments. Our approach is based on a formal language theoretic construction, a variant of networks of parallel language processors. The language processors form teams, send and receive information through component and team level filters. The hierarchical nature of the network supports multiple levels of nesting. Consequently, different security needs can be defined at varying levels of granularity. We use various context conditions for filtering information, thus controlling information flow. Our theoretical contributions include establishing the connection between the growth of the number of strings at the components of the networks and the growth functions of developmental systems.

Katalin Anna Lázár

Early Machine Translation

Integration and Transfers between Computing and the Language Sciences

Early Machine Translation was devised as a war technology originating in the war sciences, intended to provide mass translations for the strategic purposes of the Cold War. Linguistics, which did not belong to the war sciences, did not play any role at the beginning of Machine Translation. However, thanks to machine translation, the language sciences became engaged in the second mathematization of language, which can be called the computational mathematization of language. In this paper, I examine how linguistics integrated such a technology and entered into this second mathematization, through a comparative study of two European traditions, the British and the Russian.

Jacqueline Léon

Lines Missing Every Random Point

We prove that there is, in every direction in Euclidean space, a line that misses every computably random point. We also prove that there exist, in every direction in Euclidean space, arbitrarily long line segments missing every double exponential time random point.

Jack H. Lutz, Neil Lutz

Natural Descriptions and Anthropic Bias: Extant Problems In Solomonoff Induction

According to some advocates, algorithmic information theory (a branch of theoretical computer science) promises to underwrite an ultimate formal theory of comprehensible patterns. The arguments have an intuitive appeal when expressed in terms of well-known computer languages, and can both inspire and explain practical results in machine learning. The theory of Solomonoff induction, which combines algorithmic information theory and Bayesian inference, has been suggested as a solution to the philosophical problem of induction and an idealisation of the scientific method; an extension of it forms part of a proposed mathematical theory of intelligence.

Unfortunately, the philosophical import of algorithmic information theory is undermined by its dependence on an arbitrary choice of language (reference machine). While the choice of reference machine is irrelevant in the infinite limit, I observe that considered over finite sets there are infinitely many reference machines which give arbitrary evaluations of simplicity. I also explain why, regardless of how much data has been observed, infinitely many reference machines will always give every conceivable “best guess” answer to finite questions in Solomonoff induction.

Finally, I argue that algorithmic information theory is philosophically incomplete because it pretends to a “God’s-eye view” and ignores relevant information in the structure of the observer. This issue has been raised before, but given relatively little focus. The question of anthropic bias - how to take the existence of the reasoner into account when reasoning - is still a subject of major disagreement in Bayesian inference, and is likely to be so in algorithmic information theory as well.

Simon McGregor

An Efficient Algorithm for the Equation Tree Automaton via the k-C-Continuations

Champarnaud and Ziadi, and Khorsi et al., show how to compute the equation automaton of a word regular expression via the $k$-C-continuations. Kuske and Meinecke extend the computation of the equation automaton to a regular tree expression $\mathcal{E}$ over a ranked alphabet $\Sigma$, producing an algorithm whose time and space complexity is expressed in terms of the maximal rank of a symbol occurring in $\Sigma$ and the size of $\mathcal{E}$. In this paper, we give a full description of an algorithm based on the acyclic minimization of Revuz. Our algorithm, whose time and space complexity is expressed in terms of the size of $\mathcal{E}$ and the number of states of the produced automaton, is more efficient than the one obtained by Kuske and Meinecke.

Ludovic Mignot, Nadia Ouali Sebti, Djelloul Ziadi

On the Effectiveness of Symmetry Breaking

Symmetry breaking involves coloring the elements of a structure so that the only automorphism which respects the coloring is the identity. We investigate how much information we would need to be able to compute a 2-coloring of a computable finite-branching tree under the predecessor function which eliminates all automorphisms except the trivial one; we also generalize to $n$-colorings for fixed $n$ and for variable $n$.
Russell Miller, Reed Solomon, Rebecca M. Steiner

On the Ramseyan Factorization Theorem

We study, in the context of reverse mathematics, the strength of the Ramseyan factorization theorem (${\rm RF}^{s}_{k}$), a Ramsey-type theorem used in automata theory. We prove that ${\rm RF}^s_k$ is equivalent to ${\rm RT}^2_2$ for all $s, k \ge 2$. We also consider a weak version of the Ramseyan factorization theorem and prove that it lies between ADS and CAC.

Shota Murakami, Takeshi Yamazaki, Keita Yokoyama

The Complexity of Satisfaction Problems in Reverse Mathematics

Satisfiability problems play a central role in computer science and engineering as a general framework for studying the complexity of various problems. Schaefer proved in 1978 that the satisfiability problem for propositional formulas over a given language of relations is either NP-complete or tractable. We classify the corresponding satisfying-assignment construction problems in the framework of Reverse Mathematics and show that the principles are either provable over ${\rm RCA}_0$ or equivalent to ${\rm ACA}_0$. We also formulate a Ramseyan version of the problems and state a different dichotomy theorem. However, the different classes arising from this classification are not known to be distinct.

Ludovic Patey

An Early Completion Algorithm: Thue’s 1914 Paper on the Transformation of Symbol Sequences

References to Thue’s 1914 paper on string transformation systems are based mainly on a small section of that work defining Thue systems. A closer study of the remaining parts of that paper highlight a number of important themes in the history of computing: the transition from algebra to formal language theory, the analysis of the “computational power” (in a pre-1936 sense) of rules, and the development of algorithms to generate rule-sets.

James F. Power

Metric-Driven Grammars and Morphogenesis (Extended Abstract)

Expansion of space, rather than the progress of time, drives many developmental processes in plants. Metric-driven grammars provide a formal method for specifying and simulating such processes. We illustrate their operation using cell division patterns, phyllotactic patterns, and several aspects of leaf development.

Przemyslaw Prusinkiewicz, Brendan Lane, Adam Runions

Hyperprojective Hierarchy of qcb0-Spaces

We extend the Luzin hierarchy of qcb0-spaces introduced in [ScS13] to all countable ordinals, obtaining in this way the hyperprojective hierarchy of qcb0-spaces. We generalize all main results of [ScS13] to this larger hierarchy. In particular, we extend the Kleene-Kreisel continuous functionals of finite types to the continuous functionals of countable types and relate them to the new hierarchy. We show that the category of hyperprojective qcb0-spaces has much better closure properties than the category of projective qcb0-spaces. As a result, there are natural examples of spaces that are hyperprojective but not projective.

Matthias Schröder, Victor Selivanov

Online Bin Packing: Old Algorithms and New Results

In the bin packing problem we are given an instance consisting of a sequence of items with sizes between 0 and 1. The objective is to pack these items into the smallest possible number of bins of unit size. First Fit and Best Fit are simple online algorithms introduced in the early seventies, when it was also shown that their asymptotic approximation ratio is equal to 1.7. We present a simple proof of this bound and survey recent developments that lead to the proof that the absolute approximation ratio of these algorithms is also exactly 1.7. More precisely, if the optimum needs ${\rm OPT}$ bins, the algorithms use at most $\lceil 1.7 \cdot {\rm OPT} \rceil$ bins, and for each value of ${\rm OPT}$ there are instances that actually need this many bins. We also discuss bounded-space bin packing, where the online algorithm is allowed to keep only a fixed number of bins open for future items. In this model, a variant of Best Fit also has asymptotic approximation ratio 1.7, although it is possible that the bound is significantly smaller if the offline solution is also required to satisfy the bounded-space restriction.
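First Fit itself is only a few lines: each item goes into the lowest-indexed open bin it fits in, or opens a new bin. The sketch below (our own, illustrative) also shows a sequence on which the online algorithm opens more bins than an offline optimum:

```python
# First Fit for online bin packing with unit-capacity bins.

def first_fit(items, capacity=1.0):
    bins = []                                    # current load of each bin
    for size in items:
        for i, load in enumerate(bins):
            if load + size <= capacity + 1e-9:   # tolerance for float sums
                bins[i] = load + size
                break
        else:
            bins.append(size)                    # no open bin fits: open one
    return bins

# Small items arriving first hurt First Fit: this sequence forces three bins,
# while an offline optimum pairs 0.4 with 0.6 and uses only two.
packed = first_fit([0.4, 0.4, 0.6, 0.6])
```

Adversarial sequences of this flavour, pushed much further, are what drive the lower-bound instances behind the 1.7 ratio.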

Jiří Sgall

Pluralism Ignored: The Church-Turing Thesis and Philosophical Practice

The Church-Turing thesis is widely stated in terms of three equivalent models of computation (Turing machines, the lambda calculus, and rewrite systems), and it says that the intuitive notion of a computable function is what is defined by any one of these models. Despite this well-established equivalence, the philosophical literature concentrates almost exclusively on the Turing machine model. We argue that this has been to the detriment of the philosophy of computation, and specifically that it ignores two issues: firstly, equivalence in the Church-Turing sense is extensional equivalence, whereas many of the delicate issues in the philosophy of mind, and in theoretical computer science, are to do with fine-grained intensional equivalence of algorithms. Secondly, real computers are not in any meaningful sense Turing machines: they are nondeterministic, their memory may fail to be in a determinate state due to cache coherence issues, and the boundaries between inside and outside are ill-defined and permeable. We explore the philosophical significance of these issues and give some examples.

G. Graham White

The FPGA-Based High-Performance Computer RIVYERA for Applications in Bioinformatics

Bioinformatics covers a wide field of applications with generally long runtimes or huge amounts of data to be processed, or even both. Typically, large computing clusters or special computing platforms are harnessed to solve problems in this field in reasonable time. One such platform is the FPGA-based high-performance computer RIVYERA, which was originally developed for problems in cryptanalysis. On the basis of three simple examples taken from our current research field, we show how RIVYERA can be applied to different kinds of problems in bioinformatics. RIVYERA is able to significantly speed up the process of exact sequence alignment using the Smith-Waterman [1] algorithm, querying protein sequence databases using BLASTp [2], and running genome-wide association studies (GWAS) using iLOCI [3] or similar methods based on contingency tables. Likewise, energy savings with RIVYERA are of the same order as the runtime reductions compared to standard PCs or computing clusters.
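As a software reference point for the first of these applications, here is a plain Smith-Waterman local-alignment score: standard dynamic programming, not the FPGA implementation, and with illustrative scoring parameters of our own:

```python
# Smith-Waterman local alignment score with linear gap penalty.

def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]    # DP matrix, zero boundary
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: scores never drop below zero.
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

The inner recurrence is what FPGA platforms like RIVYERA parallelise: anti-diagonals of the matrix can be computed simultaneously by a systolic array of cells.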

Lars Wienbrandt

Exploiting Membrane Features to Compute

P systems are a computational model inspired by the functioning of the cell and based upon the notion of cellular membrane. We show how different features of P systems with active membranes, a variant of the basic model where membranes can be multiplied by division, can be used to approach various problems in computation theory.

Claudio Zandron

Short Lists with Short Programs in Short Time – A Short Proof

Bauwens, Makhlin, Vereshchagin and Zimand [1] and Teutsch [6] have shown that, given a string $x$, it is possible to construct in polynomial time a list containing a short description of it. We simplify their technique and present a shorter proof of this result, which also achieves better values for the main parameters.

Marius Zimand


Weitere Informationen

BranchenIndex Online

Die B2B-Firmensuche für Industrie und Wirtschaft: Kostenfrei in Firmenprofilen nach Lieferanten, Herstellern, Dienstleistern und Händlern recherchieren.



Globales Erdungssystem in urbanen Kabelnetzen

Bedingt durch die Altersstruktur vieler Kabelverteilnetze mit der damit verbundenen verminderten Isolationsfestigkeit oder durch fortschreitenden Kabelausbau ist es immer häufiger erforderlich, anstelle der Resonanz-Sternpunktserdung alternative Konzepte für die Sternpunktsbehandlung umzusetzen. Die damit verbundenen Fehlerortungskonzepte bzw. die Erhöhung der Restströme im Erdschlussfall führen jedoch aufgrund der hohen Fehlerströme zu neuen Anforderungen an die Erdungs- und Fehlerstromrückleitungs-Systeme. Lesen Sie hier über die Auswirkung von leitfähigen Strukturen auf die Stromaufteilung sowie die Potentialverhältnisse in urbanen Kabelnetzen bei stromstarken Erdschlüssen. Jetzt gratis downloaden!