2012 | Book

Perspectives of Systems Informatics

8th International Andrei Ershov Memorial Conference, PSI 2011, Novosibirsk, Russia, June 27-July 1, 2011, Revised Selected Papers

Editors: Edmund Clarke, Irina Virbitskaite, Andrei Voronkov

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science

About this book

This book contains thoroughly refereed and revised papers from the 8th International Andrei Ershov Memorial Conference on Perspectives of System Informatics, PSI 2011, held in Akademgorodok, Novosibirsk, Russia, in June/July 2011. The 18 revised full papers and 10 revised short papers presented were carefully reviewed and selected from 60 submissions. The volume also contains 5 invited papers covering a range of hot topics in computer science and informatics. The papers are organized in topical sections on foundations of program and system development and analysis, partial evaluation, mixed computation, abstract interpretation, compiler construction, computer models and algorithms for bioinformatics, programming methodology and software engineering, information technologies, knowledge-based systems, and knowledge engineering.

Table of Contents

Frontmatter
Petri Net Distributability

A Petri net is distributed if, given an allocation of transitions to (geographical) locations, no two transitions at different locations share a common input place. A system is distributable if there is some distributed Petri net implementing it.

This paper addresses the question of which systems can be distributed, while respecting a given allocation. The paper states the problem formally and discusses several examples illuminating – to the best of the authors’ knowledge – the current status of this work.

Eike Best, Philippe Darondeau
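The distributedness condition in the abstract above is easy to state operationally. The following is a minimal sketch, assuming an illustrative net encoding (location and preset maps) that is not taken from the paper:

```python
# A minimal sketch of the distributedness condition: given an allocation of
# transitions to locations and each transition's set of input places, no two
# transitions at different locations may share an input place.
# The net encoding here is illustrative, not taken from the paper.

def is_distributed(location, preset):
    """location: transition -> location label; preset: transition -> input places."""
    ts = list(preset)
    for i, t in enumerate(ts):
        for u in ts[i + 1:]:
            if location[t] != location[u] and preset[t] & preset[u]:
                return False  # two remote transitions compete for one place
    return True

# t1 and t2 at different sites both consume from place p -> not distributed
print(is_distributed({"t1": "A", "t2": "B"}, {"t1": {"p"}, "t2": {"p"}}))  # False
# Allocating both transitions to the same site satisfies the condition
print(is_distributed({"t1": "A", "t2": "A"}, {"t1": {"p"}, "t2": {"p"}}))  # True
```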
Connector Algebras, Petri Nets, and BIP

In the area of component-based software architectures, the term "connector" has been coined to denote an entity (e.g. the communication network, middleware or infrastructure) that regulates the interaction of independent components. Hence, a rigorous mathematical foundation for connectors is crucial for the study of coordinated systems. In recent years, many different mathematical frameworks have been proposed to specify, design, analyse, compare, prototype and implement connectors rigorously. In this paper, we overview the main features of three notable frameworks and discuss their similarities, differences, mutual embedding and possible enhancements. First, we show that Sobocinski’s nets with boundaries are as expressive as Sifakis et al.’s BI(P), the BIP component framework without priorities. Second, we provide a basic algebra of connectors for BI(P) by exploiting Montanari et al.’s tile model and a recent correspondence result with nets with boundaries. Finally, we exploit the tile model as a unifying framework to compare BI(P) with other models of connectors and to propose suitable enhancements of BI(P).

Roberto Bruni, Hernán Melgratti, Ugo Montanari
Models of Provenance
(Abstract)

As more and more information is available to us on the Web, the understanding of its provenance – its source and derivation – is essential to the trust we place in that information. Provenance has become especially important to scientific research, which now relies on information that has been repeatedly copied, transformed and annotated.

Peter Buneman
End-to-End Guarantees in Embedded Control Systems
(Abstract)

Software implementations of controllers for physical subsystems form the core of many modern safety-critical systems such as aircraft flight control and automotive engine control. In the model-based approach to the design and implementation of these systems, the control designer starts with a mathematical model of the system and argues that key properties – such as stability and performance – are met for the model. Then, the model is compiled into code, where extensive simulations are used to informally justify that the implementation faithfully captures the model.

Rupak Majumdar
Mining Precise Specifications
(Abstract)

Recent advances in software validation and verification make it possible to widely automate checking whether a specification is satisfied. This progress is hampered, though, by the persistent difficulty of writing specifications. Are we facing a “specification crisis”? By mining specifications from existing systems, we can alleviate this burden, reusing and extending the knowledge of 60 years of programming, and bridging the gap between formal methods and real-world software. But mining specifications has its challenges: we need good usage examples to learn expected behavior; we need to cope with the approximations of static and dynamic analysis; and we need specifications that are readable and relevant to users. In this talk, I present the state of the art in specification mining, its challenges, and its potential, up to a vision of seamless integration of specification and programming.

Andreas Zeller
Detecting Entry Points in Java Libraries

When writing a Java library, it is very difficult to hide functionality that is not intended to be used by clients. The visibility concept of Java often forces the developer to expose implementation details. Consequently, we find a high number of public classes and methods in many Java libraries. Thus, client programmers must rely on documentation in order to identify the "entry points" of the library, i.e. the methods originally intended to be used by clients.

In this paper, we introduce a new metric, called the Method Weight, that assists in detecting entry points. Applying this metric to some well-known open-source Java libraries considerably supported the process of identifying their entry points. Furthermore, the metric provides a classification criterion to distinguish libraries with focused functionality from plain collections of utility classes.

Thomas Baar, Philipp Kumar
Static Analysis of Run-Time Modes in Synchronous Process Network

For modeling modern streaming-oriented applications, Process Networks (PNs) are used to describe systems with changing behavior, which must be mapped onto a concurrent architecture to meet the performance and energy constraints of embedded devices. Finding an optimal mapping of Process Networks to the constrained architecture presumes that the behavior of the Process Network is statically known. In this paper we present a static analysis for synchronous PNs that extracts different run-time modes by using polyhedral abstraction. The result is a Mealy machine whose states describe different run-time modes and whose edges represent the transitions between them. This machine can be used to guide optimizing backend mappings from PNs to concurrent architectures.

Michael Beyer, Sabine Glesner
Compositional Methods in Characterization of Timed Event Structures

Logic characteristic formulas up to the timed testing preorders are constructed for a model of timed event structures with discrete internal actions. Such formulas can be used to decide the problem of recognizing timed testing relations. Timed event structures can be considered as compositions of their parts, and to simplify the construction of a characteristic formula we can try to use the characteristic formulas of the parts. In this paper we use compositional methods to construct characteristic formulas in a model of timed event structures with discrete internal actions.

Elena Bozhenkova
Algorithmic Debugging of SQL Views

We present a general framework for debugging systems of correlated SQL views. The debugger locates an erroneous view by navigating a suitable computation tree. This tree contains the computed answer associated with every intermediate relation, and the debugger asks the user whether each answer is expected or not. The correctness and completeness of the technique are proven formally, using a general definition of SQL operational semantics. The theoretical ideas have been implemented in an available tool, which includes the possibility of employing trusted specifications to reduce the number of questions asked of the user.

Rafael Caballero, Yolanda García-Ruiz, Fernando Sáenz-Pérez
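The top-down navigation of a computation tree described in this abstract can be sketched briefly. The tree encoding and view names below are hypothetical; this only illustrates the declarative-debugging principle, not the paper's tool:

```python
# Toy sketch of navigating a computation tree: each node records a view's
# computed answer; an oracle (standing in for the user) says whether that
# answer is expected; an erroneous view is a node with an unexpected answer
# all of whose children have expected answers.

def find_buggy(tree, oracle):
    """tree: (view_name, answer, children); returns an erroneous view or None."""
    name, answer, children = tree
    if oracle(name, answer):
        return None                  # this subtree's answer is as expected
    for child in children:
        culprit = find_buggy(child, oracle)
        if culprit is not None:
            return culprit
    return name  # wrong answer, but every child is right: the bug is here

# Hypothetical hierarchy: view 'top' is built from views 'v1' and 'v2'.
tree = ("top", [1, 2], [("v1", [1], []), ("v2", [2], [])])
unexpected = {"top", "v2"}           # answers the user flags as wrong
print(find_buggy(tree, lambda name, ans: name not in unexpected))  # v2
```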
Timed Transition Systems with Independence and Marked Scott Domains: An Adjunction

The intention of this paper is to introduce a timed extension of transition systems with independence, and to study its categorical interrelations with other timed "true-concurrent" models. In particular, we show the existence of a chain of coreflections leading from a category of the model of timed transition systems with independence to a category of a specially defined model of marked Scott domains. As an intermediate semantics we use a model of timed event structures, able to properly capture the dependencies and conflicts among events that arise in the presence of time delays of the events.

Roman Dubtsov
An Extensible System for Enhancing Social Conference Experience

We combine Auto-ID and Web technologies in an extensible on-site event support system for enhancing the experience of conference organizers as well as participants. Our system enables users to authenticate themselves using RFID badges and to access interactive Web-based services via a touchscreen terminal. The developed services aim at supporting social interactions of participants, and thus validate promising usage directions for the combination of offline social networks and the online social Web. Technically, we have investigated the employment of Web 2.0 technologies in social, sensor- and mobile-technology-enabled systems at conferences and events. This paper gives an overview of the overall system and its evaluation via a user survey and usage log data collected during the Extended Semantic Web Conference (ESWC) 2010 and similar international conferences, which together had several hundred participants.

Michael A. H. Fried, Anna Fensel, Federico Michele Facca, Dieter Fensel
Exponential Acceleration of Model Checking for Perfect Recall Systems

We revise the model checking algorithm for the combination of Computation Tree Logic and Propositional Logic of Knowledge in finite multiagent systems with a perfect recall synchronous semantics. The new approach is based on data structures that are exponentially smaller than the structures used in previous versions of the model checking algorithm. This reduces the time complexity of the algorithm exponentially.

Natalia O. Garanina
Bootstrapping Compiler Generators from Partial Evaluators

This paper shows that bootstrapping of compiler generators from program specializers is a viable alternative to the third Futamura projection. To practically validate the technique, a novel partial evaluation-based compiler generator was designed and implemented for a recursive flowchart language. Three-step bootstrapping was found to be faster and to produce the same compiler generator that Gomard and Jones produced two decades ago by double self-application. Compiler-generator bootstrapping has distinct properties that are not present in the classic three Futamura projections, such as the ability to turn a specializer into a compiler generator in one step without self-application. Up to now, the approach of hand-writing compiler generators has only been used to avoid difficulties when specializing strongly-typed languages, not as a first step towards compiler-generator bootstrapping.

Robert Glück
A Logic Characteristic for Timed Extensions of Partial Order Based Equivalences

The intention of the paper is to provide a uniform logic characterization for timed extensions of partial order based equivalences (pomset trace equivalence, history preserving bisimulation and hereditary history preserving bisimulation) in the setting of timed event structures. For this purpose, we use open-maps-based characterizations of the equivalences, provided in [10], and the logics of path assertions from [6].

Natalya S. Gribovskaya
Proving the Correctness of Unfold/Fold Program Transformations Using Bisimulation

This paper shows that a bisimulation approach can be used to prove the correctness of unfold/fold program transformation algorithms. As an illustration, we show how our approach can be used to prove the correctness of positive supercompilation (due to Sørensen et al.). Traditional program equivalence proofs show the original and transformed programs are contextually equivalent, i.e., have the same termination behaviour in all closed contexts. Contextual equivalence can, however, be difficult to establish directly.

Gordon and Howe use an alternative approach: to represent a program’s behaviour by a labelled transition system whose bisimilarity relation is a congruence that coincides with contextual equivalence. Labelled transition systems are well-suited to represent global program behaviour.

On the other hand, unfold/fold program transformations use generalization and folding, and neither is easy to describe contextually, due to use of non-local information. We show that weak bisimulation on labelled transition systems gives an elegant framework to prove contextual equivalence of original and transformed programs. One reason is that folds can be seen in the context of corresponding unfolds.

Geoff W. Hamilton, Neil D. Jones
Secure Multi-execution in Haskell

Language-based information-flow security has emerged as a promising technology to guarantee confidentiality in on-line systems, where enforcement mechanisms are typically presented as run-time monitors, code transformations, or type systems. Recently, an alternative technique, called secure multi-execution, has been proposed. The main idea behind this novel approach consists in running a program multiple times, once for each security level, using special rules for I/O operations. Compared to run-time monitors and type systems, secure multi-execution does not require inspecting the full code of the application (only its I/O actions). In this paper, we propose the core of a library to provide non-interference through secure multi-execution. We present the code of the library as well as a running example for Haskell. To the best of our knowledge, this paper is the first work to consider secure multi-execution in a functional setting and provide this technology as a library.

Mauro Jaskelioff, Alejandro Russo
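The paper's library is in Haskell; as a language-neutral illustration, here is a toy Python sketch of the secure multi-execution idea itself (two totally ordered levels, illustrative names, not the paper's API):

```python
# Toy sketch of secure multi-execution: the program runs once per security
# level; the run at a given level sees only inputs at or below that level
# (higher inputs are replaced by a default), and only that run's own-level
# output is performed, so the low output can never depend on the high input.

DEFAULT = ""                    # default value for inputs above the run's level
ORDER = {"low": 0, "high": 1}   # security lattice: low <= high

def sme(program, inputs):
    """inputs: dict level -> value; returns dict level -> performed output."""
    performed = {}
    for run_level in ORDER:
        view = {lvl: (val if ORDER[lvl] <= ORDER[run_level] else DEFAULT)
                for lvl, val in inputs.items()}
        outputs = program(view)                    # program: dict -> dict
        performed[run_level] = outputs[run_level]  # keep this run's level only
    return performed

# A leaky program that tries to write the high input to the low channel.
def leaky(view):
    return {"low": view["high"], "high": view["high"]}

print(sme(leaky, {"low": "public", "high": "secret"}))
# {'low': '', 'high': 'secret'} -- the secret never reaches the low output
```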
Towards an Open Framework for C Verification Tools Benchmarking

The paper presents a twofold verification system that aims to be an open platform for experimentation with various verification techniques as well as an industrial-ready domain-specific verification tool for Linux device drivers. We describe the architecture of the verification system and discuss the prospects of building an open benchmarking suite on top of it.

Alexey Khoroshilov, Vadim Mutilin, Eugene Novikov, Pavel Shved, Alexander Strakh
Solving Coverability Problem for Monotonic Counter Systems by Supercompilation

We put the program transformation method known as supercompilation in the context of works on counter systems, well-structured transition systems, Petri nets, etc. Two classic versions of the supercompilation algorithm are formulated for counter systems, using notions and notation adopted from the literature on transition systems.

A procedure to solve the coverability problem for a counter system by iterative application of a supercompiler to the system, along with initial and target sets of states, is presented. Its correctness for monotonic counter systems, provided the target set is upward-closed and the initial set has a certain form, is proved.

The fact that a supercompiler can solve the coverability problem for many practically interesting counter systems was discovered by A. Nemytykh and A. Lisitsa when they performed experiments on verification of cache-coherence protocols and other models by means of the Refal supercompiler SCP4, and since then a theoretical explanation of why this was so successful has been an open problem. Here the solution for monotonic counter systems is given.

Andrei V. Klimov
Multi-result Supercompilation as Branching Growth of the Penultimate Level in Metasystem Transitions

The paper deals with some aspects of metasystem transitions in the context of supercompilation. We consider the manifestations of the law of branching growth of the penultimate level in the case of higher-level supercompilation and argue that this law provides some useful hints regarding the ways of constructing metasystems by combining supercompilers. In particular, we show the usefulness of multi-result supercompilation for proving the equivalence of expressions and in two-level supercompilation.

Ilya Klyuchnikov, Sergei A. Romanenko
Symbolic Loop Bound Computation for WCET Analysis

We present an automatic method for computing tight upper bounds on the iteration number of special classes of program loops. These upper bounds are further used in the WCET analysis of programs. To do so, we refine program flows using SMT reasoning and rewrite multi-path loops into single-path ones. Single-path loops are further translated into a set of recurrence relations over program variables. Recurrence relations are solved and iteration bounds of program loops are derived from the computed closed forms. For solving recurrences we deploy a pattern-based recurrence solving algorithm and compute closed forms only for a restricted class of recurrence equations. However, in practice, these recurrences describe the behavior of a large set of program loops. Our technique is implemented in the r-TuBound tool and was successfully tried out on a number of challenging WCET benchmarks.

Jens Knoop, Laura Kovács, Jakob Zwirchmayr
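The pattern-based recurrence step in the abstract above can be illustrated on the simplest loop shape. This is a sketch of the idea under stated assumptions (a counter incremented by a constant), not the r-TuBound implementation:

```python
import math

# Sketch of pattern-based recurrence solving for loop bounds: a single-path
# loop whose counter follows the recurrence i(n+1) = i(n) + c (c > 0), with
# guard i < b and initial value a, has closed form i(n) = a + n*c, so the
# iteration bound is ceil((b - a) / c).

def loop_bound(a, b, c):
    """Upper bound on iterations of: i = a; while i < b: ... i += c."""
    if a >= b:
        return 0          # guard fails immediately, loop body never runs
    return math.ceil((b - a) / c)

print(loop_bound(0, 10, 3))  # 4 -- the loop runs for i = 0, 3, 6, 9
```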
GoRRiLA and Hard Reality

We call a theory problem a conjunction of theory literals and a theory solver any system that solves theory problems. For implementing efficient theory solvers one needs benchmark problems, and especially hard ones. Unfortunately, hard benchmarks for theory solvers are notoriously difficult to obtain. In this paper we present two tools: Hard Reality, for generating theory problems from real-life problems with non-trivial boolean structure, and GoRRiLA, for generating random theory problems for linear arithmetic. Using GoRRiLA one can generate problems containing only a few variables, which are nevertheless difficult for all state-of-the-art solvers we tried. Such problems can be useful for debugging and evaluating solvers on small but hard problems. Using Hard Reality one can generate hard theory problems which are similar to problems found in real-life applications, for example, those taken from SMT-LIB [2].

Konstantin Korovin, Andrei Voronkov
Reachability in One-Dimensional Controlled Polynomial Dynamical Systems

In this paper we investigate a case of the reachability problem in controlled o-minimal dynamical systems. This problem can be formulated as follows: given a controlled o-minimal dynamical system and initial and target sets, find a finite choice of time points and control parameters applied at these points such that the target set is reachable from the initial set. We prove that the existence of a finite control strategy is decidable and construct a polynomial-complexity algorithm which generates finite control strategies for one-dimensional controlled polynomial dynamical systems. For this algorithm we also show an upper bound on the number of switches in finite control strategies.

Margarita Korovina, Nicolai Vorobjov
Insertion Modeling System

The paper relates to practical aspects of insertion modeling. An insertion modeling system is an environment for the development of insertion machines, which are used to represent insertion models of distributed systems. The architecture of insertion machines and the insertion modeling system IMS is presented. An insertion machine for program verification is specified as an example, and as a starting point of a ‘verifiable programming’ project.

Alexander A. Letichevsky, Olexandr A. Letychevskyi, Vladimir S. Peschanenko
Decidability of Modular Logics for Concurrency

The modular logics we approach in this paper are logics for concurrent systems that reflect both behavioural and structural properties of systems. They combine dynamic modalities that express behavioural properties of a system with polyadic modalities that join properties of subsystems. Spatial and Separation Logics are examples of modular logics. Owing to their complex algebraic-coalgebraic semantics, these logics have interesting decidability properties. In this paper we provide a taxonomy of the decision problems for a class of modular logics with semantics based on CCS processes.

Radu Mardare
Partial Evaluation of Janus Part 2: Assertions and Procedures

We continue earlier work on partial evaluation of the reversible language Janus. In this paper, we improve the handling of assertions and extend the partial evaluation method to handle procedure calls, which were omitted in the previous work. The partial evaluator now handles the full Janus language.

Torben Ægidius Mogensen
Scalable Parallel Interval Propagation for Sparse Constraint Satisfaction Problems

Multi-core processors have been broadly available to the public for the last five years, and parallelism has become a common design feature of computationally intensive algorithms. In this paper we present a parallel implementation of an algorithm called interval constraint propagation for solving constraint satisfaction problems over the real numbers. Unlike existing implementations of this algorithm, our implementation scales well to many CPU cores with shared memory for sparse constraint satisfaction problems. We present scalability data for a quad-core processor on a number of benchmarks for non-linear constraint solvers.

Evgueni Petrov
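Interval constraint propagation itself can be sketched on a single constraint. The following toy sequential version (the paper's contribution is making such propagation scale across cores) narrows intervals for x + y == z until a fixed point:

```python
# Toy sequential sketch of interval constraint propagation for x + y == z:
# each variable's interval (lo, hi) is narrowed using the intervals of the
# other two, and narrowing repeats until no interval changes.

def narrow_sum(x, y, z):
    """One narrowing pass for x + y == z over interval pairs (lo, hi)."""
    zn = (max(z[0], x[0] + y[0]), min(z[1], x[1] + y[1]))
    xn = (max(x[0], z[0] - y[1]), min(x[1], z[1] - y[0]))
    yn = (max(y[0], z[0] - x[1]), min(y[1], z[1] - x[0]))
    return xn, yn, zn

def propagate(x, y, z):
    while True:
        xn, yn, zn = narrow_sum(x, y, z)
        if (xn, yn, zn) == (x, y, z):
            return x, y, z            # fixed point: no interval changed
        x, y, z = xn, yn, zn

# x, y in [0, 10], z fixed at 15: propagation tightens x and y to [5, 10].
print(propagate((0, 10), (0, 10), (15, 15)))
```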
A Combined Technique for Automatic Detection of Backward Binary Compatibility Problems

This paper discusses the problem of ensuring backward compatibility between new and old versions of software components (e.g., libraries), with the Linux environment as the main example. Breaking compatibility in a new version of a software component may result in crashes or incorrect behavior (at the binary level), or in failure to build (at the source level), of applications targeted at a previous version of the component when those applications are used with the new version. The paper describes typical issues that cause backward compatibility problems at the binary level and presents a new method for automatic detection of such issues during component development, focusing on changes in the structure of interfaces; the C/C++ language is used as the main example. Unlike existing tools, the suggested method can verify a broad spectrum of backward compatibility problems by comparing function signatures and type definitions obtained from the component’s header files in addition to analyzing symbols in the component’s binaries. The paper also describes an automated checker tool that implements the suggested method, with some results of its practical usage.

Andrey Ponomarenko, Vladimir Rubanov
Weighted Lumpability on Markov Chains

This paper reconsiders Bernardo’s T-lumpability on continuous-time Markov chains (CTMCs). This notion allows for a more aggressive state-level aggregation than ordinary lumpability. We provide a novel structural definition of (what we refer to as) weighted lumpability, prove some elementary properties, and investigate its compatibility with linear real-time objectives. The main result is that the probability of satisfying a deterministic timed automaton specification coincides for a CTMC and its weighted lumped analogue. The same holds for metric temporal logic formulas.

Arpit Sharma, Joost-Pieter Katoen
Development of the Computer Language Classification Knowledge Portal

During the semicentennial history of Computer Science and Information Technologies, several thousand computer languages have been created. The computer language universe includes languages for different purposes: programming languages, specification languages, modeling languages, languages for knowledge representation, etc. In each of these branches of computer languages it is possible to track several approaches (imperative, declarative, object-oriented, etc.), disciplines of processing (sequential, non-deterministic, parallel, distributed, etc.), and formalized models, such as Turing machines or logic inference machines. Computer language paradigms are the basis for the classification of computer languages; they rest on joint attributes which allow us to differentiate branches in the computer language universe. Currently the number of essentially different paradigms is close to several dozen. The study and precise specification of computer language paradigms (including new ones) is intended to improve the choice of appropriate computer languages for new software projects and information technologies. This position paper presents an approach to computer language paradigmatization (i.e. paradigm specification) and classification that is based on a unified approach to formal semantics, and an open wiki-like ontology for pragmatics, formal syntax and informal “style”.

Nikolay V. Shilov, Alexander A. Akinin, Alexey V. Zubkov, Renat I. Idrisov
Justified Terminological Reasoning

Justification logics are epistemic logics that include explicit justifications for an agent’s knowledge. In the present paper, we introduce a justification logic $\mathcal{JALC}$ over the description logic $\mathcal{ALC}$. We provide a deductive system and a semantics for our logic and we establish soundness and completeness results. Moreover, we show that our logic satisfies the so-called internalization property, stating that it internalizes its own notion of proof. We then sketch two applications of $\mathcal{JALC}$: (i) the justification terms can be used to generate natural language explanations of why an $\mathcal{ALC}$ statement holds, and (ii) the terms can be used to study data privacy issues for description logic knowledge bases.

Thomas Studer
Implementing Conflict Resolution

The conflict resolution method, introduced by the authors in [4], is a new method for solving systems of linear inequalities over the rational and real numbers. This paper investigates various heuristics for optimisation of the method and presents an experimental evaluation. The method and heuristics are evaluated against various benchmarks and compared to other methods, such as the Fourier-Motzkin elimination method and the simplex method.

Konstantin Korovin, Nestan Tsiskaridze, Andrei Voronkov
Symbolic Tree Transducers

Symbolic transducers are useful in the context of web security as they form the foundation for sanitization of potentially malicious data. We define Symbolic Tree Transducers as a generalization of Regular Transducers as finite state input-output tree automata with logical constraints over a parametric background theory. We examine key closure properties of Symbolic Tree Transducers and we develop a composition algorithm and an equivalence decision procedure for linear single-valued transducers.

Margus Veanes, Nikolaj Bjørner
Probabilistic Concepts in Formal Contexts

We generalize the main notions of Formal Concept Analysis using the ideas of semantic probabilistic inference. We demonstrate that under standard restrictions, our definitions correspond exactly to the original notions of Formal Concept Analysis. From the point of view of applications, we propose a method for recovering concepts in formal contexts in the presence of noise in the data.

Alexander Demin, Denis Ponomaryov, Evgeny Vityaev
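For readers unfamiliar with the classical notions being generalized here, a minimal sketch of a formal context, the two derivation operators, and the formal-concept test (the example context is illustrative, not from the paper):

```python
# Minimal sketch of classical Formal Concept Analysis: a context maps objects
# to their attribute sets; intent/extent are the two derivation operators, and
# a pair (objects, attributes) is a formal concept iff each determines the other.

def intent(objs, context):
    """Attributes shared by all objects in objs."""
    all_attrs = set().union(*context.values())
    return {a for a in all_attrs if all(a in context[o] for o in objs)}

def extent(attrs, context):
    """Objects possessing all attributes in attrs."""
    return {o for o, has in context.items() if attrs <= has}

def is_concept(objs, attrs, context):
    return intent(objs, context) == attrs and extent(attrs, context) == objs

# Illustrative context
ctx = {"dog": {"four_legs", "fur"},
       "cat": {"four_legs", "fur"},
       "sparrow": {"wings"}}
print(is_concept({"dog", "cat"}, {"four_legs", "fur"}, ctx))  # True
```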
Backmatter
Metadata
Title
Perspectives of Systems Informatics
Editors
Edmund Clarke
Irina Virbitskaite
Andrei Voronkov
Copyright Year
2012
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-29709-0
Print ISBN
978-3-642-29708-3
DOI
https://doi.org/10.1007/978-3-642-29709-0
