2007 | Book

Perspectives of Systems Informatics

6th International Andrei Ershov Memorial Conference, PSI 2006, Novosibirsk, Russia, June 27-30, 2006. Revised Papers

Editors: Irina Virbitskaite, Andrei Voronkov

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


About this book

This volume contains the final proceedings of the Sixth International Andrei Ershov Memorial Conference on Perspectives of System Informatics (PSI 2006), held in Akademgorodok (Novosibirsk, Russia), June 27-30, 2006. The conference was held to honour the 75th anniversary of the birth of Andrei Ershov (1931–1988), a member of the Russian Academy of Sciences, and his outstanding contributions towards advancing informatics. The role of Andrei Ershov in the establishment and development of the theory of programming and systems programming in our country cannot be overestimated. Andrei was one of the founders of the Siberian Computer Science School. He guided and took active part in the development of the programming system ALPHA and the multi-language system BETA, and authored some of the most remarkable results in the theory of programming. Andrei is justly considered one of the founders of the theory of mixed computation. In 1974 he was nominated as Distinguished Fellow of the British Computer Society. In 1981 he received the Silver Core Award for services rendered to IFIP. Andrei Ershov's brilliant speeches were always in the focus of public attention. Especially notable were his lectures "Aesthetic and Human Factor in Programming" and "Programming, the Second Literacy." He was not only an extremely gifted scientist, teacher and fighter for his ideas, but also a bright and many-sided personality. He wrote poetry, translated the works of R. Kipling and other English poets, and enjoyed playing guitar and singing. Everyone who had the pleasure of knowing Andrei Ershov and working with him will always remember his great vision, eminent achievements and generous friendship.

Table of Contents

Frontmatter

Invited Papers

Separability in Conflict-Free Petri Nets

We study whether transition sequences that transform markings with multiples of a number k on each place can be separated into k sequences, each transforming one k-th of the original marking. We prove that such a separation is possible for marked graph Petri nets, and present an inseparable sequence for a free-choice net.

Eike Best, Javier Esparza, Harro Wimmel, Karsten Wolf
Grand Challenges of System Programming

In 1969 the Second All-Soviet Programming Conference took place here, in Akademgorodok. One of the hot issues discussed at that conference was the crisis of programming, a problem raised by Andrei Ershov. Indeed, programs were becoming bulky and complicated, and were swarming with errors; the programmer's labour efficiency was thus low, and the development process hardly manageable. The list of troubles could be continued. One can recall the dramatic story of the development of OS/360, which Brooks told in "The Mythical Man-Month". The world has changed drastically over the past years, largely due to advances in computer science. Even the mighty OS/360 is, by today's standards, an average program. But have all those troubles and problems been solved? No, and they have kept accumulating.

Victor Ivannikov
Specifying and Verifying Programs in Spec#

Spec# is a research programming system that aims to provide programmers with a higher degree of rigor than common languages do today. The Spec# language extends the object-oriented .NET language C#, adding features like non-null types, pre- and postconditions, and object invariants. The language has been designed to support an incremental path to using more specifications. Some of the new features of Spec# are checked by a static type checker, some give rise to compiler-emitted run-time checks, and all can be subjected to the Spec# static program verifier. The program verifier generates verification conditions from Spec# programs and then uses an automatic theorem prover to analyze them.

K. Rustan M. Leino
Basic Protocols: Specification Language for Distributed Systems

Verification of requirement specifications is an important stage of the software development process. Detection of inconsistency and incompleteness in requirement specifications, as well as the discovery of wrong decisions at early stages of the design process, reduces the cost of achieving software quality. An approach to requirement verification has been considered in the papers [5-8]. The language of basic protocols is used there for the specification of distributed concurrent systems and for formalizing requirements for them.

Alexander Letichevsky
Why Automata Models Are Sexy for Testers? (Invited Talk)

Formal methods traditionally aim at verifying and proving correctness (a typical academic activity), while testing can only show the presence of errors (that is what practitioners do). Recently, there has been increasing interest in the use of formal models and methods in testing. In this talk, we first present a traditional framework of model-based testing, considering a variety of state-oriented (automata) models, such as Finite State Machines (FSM), Communicating FSM, and Extended FSM, where input and output are coupled for each transition; and input/output automata (a.k.a. transition systems), where inputs and outputs are decoupled. We provide an overview of existing test derivation techniques based on automata models, paying special attention to the underlying testing assumptions and the fault detection capability of the resulting tests.

Alexandre Petrenko

Regular Papers

An Universal Resolving Algorithm for Inverse Computation of Lazy Languages

The Universal Resolving Algorithm was originally formulated for inverse computation of tail-recursive programs. We present an extension to general recursion that improves the efficiency and termination of inverse computation because partially produced output is used to reduce the search space. In addition, we present a transformation using a new unification-based equality operator. Examples demonstrate the advantages of the new technique. We found that these extensions can also improve inverse computation in the context of functional-logic languages.

Sergei Abramov, Robert Glück, Yuri Klimov
Program Generation in the Equivalent Transformation Computation Model Using the Squeeze Method

In the equivalent transformation (ET) computation model, a specification provides background knowledge in a problem domain, a program is a set of prioritized rewriting rules, and computation consists of the successive reduction of problems by rule application. As long as rewriting rules that preserve meaning with respect to the given background knowledge, called ET rules, are used, the computation results are guaranteed to be correct. In this paper, a general framework for program synthesis in the ET model is described. The framework comprises two main phases: (1) equivalent transformation of specifications, and (2) generation of a program from an obtained specification. A method for program generation in the second phase, called the squeeze method, is presented. It constructs a program by accumulating ET rules one by one on demand, with the goal of producing a correct, efficient, and non-redundant program.

Kiyoshi Akama, Ekawit Nantajeewarawat, Hidekatsu Koike
A Versioning and Evolution Framework for RDF Knowledge Bases

We present an approach to support the evolution of online, distributed, reusable, and extendable ontologies based on the RDF data model. The approach works on the basis of atomic changes, basically additions or deletions of statements to or from an RDF graph. Such atomic changes are aggregated to compound changes, resulting in a hierarchy of changes, thus facilitating the human reviewing process on various levels of detail. These derived compound changes may be annotated with meta-information and classified as ontology evolution patterns. The introduced ontology evolution patterns in conjunction with appropriate data migration algorithms enable the automatic migration of instance data in distributed environments.
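The atomic/compound change model described above can be sketched in a few lines. The class names and the set-of-triples graph representation below are illustrative assumptions, not the authors' actual data structures:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AtomicChange:
    """Addition or deletion of a single RDF statement (triple)."""
    op: str          # "add" or "del"
    triple: tuple    # (subject, predicate, object)

@dataclass
class CompoundChange:
    """Aggregates atomic (or nested compound) changes into a
    hierarchy, easing human review at various levels of detail."""
    label: str
    parts: list = field(default_factory=list)

    def apply(self, graph: set) -> set:
        # Apply each part in order; an RDF graph is modelled as a set of triples.
        for p in self.parts:
            if isinstance(p, CompoundChange):
                graph = p.apply(graph)
            elif p.op == "add":
                graph = graph | {p.triple}
            else:
                graph = graph - {p.triple}
        return graph

# A compound change built from two atomic ones (a toy "rename class" pattern):
rename = CompoundChange("rename class ex:Person -> ex:Agent", [
    AtomicChange("del", ("ex:Person", "rdf:type", "owl:Class")),
    AtomicChange("add", ("ex:Agent", "rdf:type", "owl:Class")),
])
g = {("ex:Person", "rdf:type", "owl:Class")}
print(rename.apply(g))  # {('ex:Agent', 'rdf:type', 'owl:Class')}
```

Classifying such compound changes as named evolution patterns is what enables the automatic instance-data migration the abstract mentions.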

Sören Auer, Heinrich Herre
A Graphical Approach to Prove the Semantic Preservation of UML/OCL Refactoring Rules

Refactoring is a powerful technique to improve the quality of software models including implementation code. The software developer successively applies so-called refactoring rules to the current software model and transforms it into a new model. Ideally, the application of a refactoring rule preserves the semantics of the model on which it is applied. In this paper, we present a simple criterion and a proof technique for the semantic preservation of refactoring rules that are defined for UML class diagrams and OCL constraints. Our approach is based on a novel formalization of the OCL semantics in the form of graph transformation rules. We illustrate our approach using the refactoring rule MoveAttribute.

Thomas Baar, Slaviša Marković
On the Usage of Concrete Syntax in Model Transformation Rules

Graph transformations are one of the best known approaches for defining transformations in model-based software development. They are defined over the abstract syntax of source and target languages, described by metamodels. Since graph transformations are defined on the abstract syntax level, they can be hard to read and require an in-depth knowledge of the source and target metamodels. In this paper we investigate how graph transformations can be made much more compact and easier to read by using the concrete syntax of the source and target languages. We illustrate our approach by defining model refactorings.

Thomas Baar, Jon Whittle
TTCN-3 for Distributed Testing Embedded Software

TTCN-3 is a standardized language for specifying and executing test suites that is particularly popular for testing embedded systems. Prior to testing embedded software in a target environment, the software is usually tested in the host environment. Executing in the host environment often affects the real-time behavior of the software and, consequently, the results of real-time testing.

Here we provide a semantics for host-based testing with simulated time, and a simulated-time solution for distributed testing with TTCN-3.
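The simulated-time idea (letting a logical clock jump to the next due event rather than waiting in real time, so host-based runs do not distort real-time behaviour) can be sketched as follows. The scheduler and all names are hypothetical, not part of TTCN-3 or the paper's solution:

```python
import heapq

class SimClock:
    """Minimal simulated-time scheduler: time advances only by
    jumping to the next scheduled event, never by real waiting."""
    def __init__(self):
        self.now = 0.0
        self.queue = []  # entries: (due_time, seq, action)
        self.seq = 0     # tie-breaker so actions never get compared

    def schedule(self, delay, action):
        heapq.heappush(self.queue, (self.now + delay, self.seq, action))
        self.seq += 1

    def run(self):
        while self.queue:
            due, _, action = heapq.heappop(self.queue)
            self.now = max(self.now, due)  # jump, don't wait in real time
            action(self)

log = []
clock = SimClock()
clock.schedule(5.0, lambda c: log.append(("timeout", c.now)))
clock.schedule(1.0, lambda c: log.append(("msg", c.now)))
clock.run()
print(log)  # [('msg', 1.0), ('timeout', 5.0)]
```

A timeout of 5 simulated seconds fires instantly in wall-clock terms, which is exactly why host-based testing with simulated time avoids the distortions described above.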

Stefan Blom, Thomas Deiß, Natalia Ioustinova, Ari Kontio, Jaco van de Pol, Axel Rennoch, Natalia Sidorova
Chase of Recursive Queries

In this work, we present a semantic query optimization technique to improve the efficiency of the evaluation of a subset of SQL:1999 recursive queries.

Using datalog notation, we can state our main contribution as an algorithm that builds a program P′ equivalent to a given program P, when both are applied over a database d satisfying a set of functional dependencies. The input program P is a linear recursive datalog program. The new program P′ has fewer distinct variables and, sometimes, fewer atoms in its rules, and is thus cheaper to evaluate. Using coral, P′ is empirically shown to be more efficient than the original program.

Nieves R. Brisaboa, Antonio Fariña, Miguel R. Luaces, José R. Paramá
Improving Semistatic Compression Via Pair-Based Coding

In recent years, new semistatic word-based byte-oriented compressors, such as Plain and Tagged Huffman and the Dense Codes, have been used to improve the efficiency of text retrieval systems while reducing the compressed collections to 30–35% of their original size.

In this paper, we present a new semistatic compressor, called Pair-Based End-Tagged Dense Code (PETDC). PETDC compresses English texts to 27–28% of their original size, beating the optimal 0-order prefix-free semistatic compressor (Plain Huffman) by more than 3 percentage points. Moreover, PETDC also permits random decompression and direct searches using fast Boyer-Moore algorithms.

PETDC builds a vocabulary with both words and pairs of words. The basic idea behind PETDC is that, since each symbol in the vocabulary is given a codeword, compression is improved by replacing two words of the source text with a single codeword.
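The pair-replacement idea can be illustrated with a toy sketch. This is an assumed simplification that only counts vocabulary symbols; the real PETDC assigns byte-oriented end-tagged dense codes and handles many details omitted here:

```python
from collections import Counter

def pair_based_symbols(text, min_pair_freq=2):
    """Greedily merge frequent adjacent word pairs into single
    vocabulary symbols, so each occurrence of a merged pair costs
    one codeword instead of two (toy version of the PETDC idea)."""
    words = text.split()
    pair_counts = Counter(zip(words, words[1:]))
    frequent = {p for p, c in pair_counts.items() if c >= min_pair_freq}
    symbols, i = [], 0
    while i < len(words):
        if i + 1 < len(words) and (words[i], words[i + 1]) in frequent:
            symbols.append(words[i] + " " + words[i + 1])  # one pair symbol
            i += 2
        else:
            symbols.append(words[i])
            i += 1
    return symbols

text = "to be or not to be that is the question to be or not"
syms = pair_based_symbols(text)
print(len(text.split()), len(syms))  # 14 words become 9 codewords
```

Every merged pair saves one codeword per occurrence, which is where the extra compression over a words-only vocabulary comes from.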

Nieves R. Brisaboa, Antonio Fariña, Gonzalo Navarro, José R. Paramá
An Agent-Based Architecture for Dialogue Systems

Research in dialogue systems has been moving towards reusable and adaptable architectures for managing dialogue execution and integrating heterogeneous subsystems. In this paper we present a formalisation of Admp, an agent-based architecture which supports the development of dialogue applications. It features a central data structure shared between software agents, allows the integration of external systems, and includes a meta-level in which heuristic control can be embedded.

Mark Buckley, Christoph Benzmüller
Encoding Mobile Ambients into the π-Calculus

We present an encoding of mobile ambients without communication into a subset of the π-calculus, namely the localized sum-free synchronous π-calculus. We prove the operational correspondence between the two formalisms. A key idea of the encoding is the separation of the spatial structure of mobile ambients from their operational semantics. The operational semantics is given by a universal π-process Ruler which communicates with a π-calculus term Structure_A simulating the spatial structure of a mobile ambient A by means of channels. We consider the presented encoding a first step towards designing a fully abstract translation of the calculus of mobile ambients into the π-calculus, and thus developing a uniform framework for the theory of mobile computations.

Gabriel Ciobanu, Vladimir A. Zakharov
Characterizations of CD Grammar Systems Working in Competence Mode by Multicounter Machine Resources

In this paper we focus on characterizations of cooperating distributed grammar systems (henceforth CDGSs) working in {≤k, =k, ≥k}-competence mode, k ≥ 1, defined in [1], by means of time, space and several other resources of a multicounter machine, such as the number of counters, reversals or 0-tests. We show that CDGSs working in {≤k, =k, ≥k}-competence mode, k ≥ 1, can be simulated by one-way nondeterministic multicounter machines in linear time and space, with a linearly bounded number of reversals. If we impose restrictions on the capacity of each counter to perform 0-tests, we find that CDGSs working in {≤1, =1} ∪ {≥k | k ≥ 1}-competence mode can be simulated by one-way partially blind multicounter machines in quasirealtime. It is known from [8] that quasirealtime partially blind multicounter machines accept the family of Petri net languages. Consequently, various decision problems for partially blind multicounter machines are decidable. With respect to them, several decidability results that emerge from the above simulations are also presented, for CDGSs working in competence mode and for grammars and systems with regulated rewriting.

Liliana Cojocaru
Deriving State-Based Implementations of Interactive Components with History Abstractions

The external behaviour of an interactive component refers to the communication histories on the input and output channels. The component's implementation employs an internal state where each input effects an output and an update of the state. The black-box view is modelled by a stream processing function from input to output streams, the glass-box view by a state transition machine. We present a formal method for implementing a stream processing function with several arguments by a state transition machine in a correctness-preserving way. The transformation involves two important steps, called differentiation and history abstraction. The differentiation localizes the effect of a single input on one of the input channels with respect to the previous input histories. The history abstraction introduces states as congruence classes of input histories. We extend our previous results from interactive components with one input channel to components with several input channels. The generalization employs a 'diamond property' for states and outputs which ensures the confluence of the resulting state transition system.
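The two views can be illustrated with a running-sum example. The function and class below are our own toy instance of the black-box/glass-box correspondence, not the paper's general construction: here the history abstraction maps every input history to its sum, which is exactly the state the machine needs.

```python
def stream_sum(history):
    """Black-box view: a stream processing function mapping an
    input stream to the stream of its prefix sums."""
    out, acc = [], 0
    for x in history:
        acc += x
        out.append(acc)
    return out

class SumMachine:
    """Glass-box view: a state transition machine whose state is
    the congruence class of input histories (here, their sum)."""
    def __init__(self):
        self.state = 0

    def step(self, x):
        # One transition: consume input x, update state, emit output.
        self.state += x
        return self.state

m = SumMachine()
hist = [1, 2, 3]
assert stream_sum(hist) == [m.step(x) for x in hist]  # the two views agree
print(stream_sum(hist))  # [1, 3, 6]
```

Two histories with the same sum are congruent: no future input can distinguish them, so one state per sum suffices.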

Walter Dosch, Annette Stümpel
Introducing Debugging Capabilities to Natural Semantics

Reasoning about functional programs is simpler than reasoning about their imperative counterparts. However, finding bugs in lazy functional languages was, until quite recently, more complex. The reason was that little work had been done on developing practical debuggers. Fortunately, several debuggers exist nowadays. One of the easiest Haskell debuggers to use is Hood, whose behaviour is based on the concept of observation of intermediate data structures. However, although using Hood can be simple when observing some structures, it is known that it can be hard to understand how it works when dealing with complex situations.

In this paper, we formalize the behaviour of the Hood debugger by extending Sestoft's natural semantics. Moreover, we also indicate how to derive an abstract machine including such debugging information. By doing so, we not only provide a formal foundation but also an alternative method for implementing debuggers. In fact, we have already implemented a prototype of the abstract machine described in this paper.
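The observation concept can be approximated in a few lines; `observe` below is a hypothetical sketch of the idea (record what flows through a labelled point without changing the result), not Hood's actual API or semantics:

```python
observations = []

def observe(label):
    """Hood-style observation combinator (sketch): wraps a value or
    function and records what passes through it, while returning
    results unchanged."""
    def wrap(value):
        if callable(value):
            def traced(*args):
                result = value(*args)
                observations.append((label, args, result))
                return result
            return traced
        observations.append((label, value))
        return value
    return wrap

double = observe("double")(lambda x: 2 * x)
ys = [double(x) for x in [1, 2, 3]]
print(ys)            # [2, 4, 6]
print(observations)  # each call recorded as (label, args, result)
```

In lazy Haskell the subtlety the paper formalizes is that Hood records only the demanded parts of a structure; in this strict Python sketch everything observed is fully evaluated, which is precisely the distinction a formal semantics has to capture.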

Alberto de la Encina, Luis Llana, Fernando Rubio
Solution Strategies for Multi-domain Constraint Logic Programs

We integrated a logic programming language into Meta-S, a flexible and extendable constraint solver cooperation system, by treating the language's evaluation mechanism, resolution, as a constraint solver. This new approach easily yields a CLP language with support for solver cooperation that fits nicely into our cooperation framework.

Applying the strategy definition framework of Meta-S, we define classical and more sophisticated search strategies, and discuss their effects on the efficient evaluation of multi-domain constraint logic programs using illustrative examples.

Stephan Frank, Petra Hofstedt, Peter Pepper, Dirk Reckmann
Information-Flow Attacks Based on Limited Observations

Two formal models for the description of timing attacks are presented, studied and compared with other security concepts. The models are based on a timed process algebra and on a concept of observations which make visible only a part of a system's behaviour. An intruder tries to deduce private system activities from this partial information, which also contains the timing of actions. To obtain realistic security characterizations, some limitations on the observational power of the intruder are applied. It is assumed that the intruder has only a limited time window in which to perform the attack, or that the time of action occurrences can be measured only with a given limited precision.
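The two limitations on the intruder can be sketched as a filter on timed traces; the function and its parameters below are illustrative assumptions, not the paper's formal process-algebraic models:

```python
def observe_timed(trace, precision=None, window=None):
    """Limited-observation sketch: the intruder sees only action
    occurrences inside a time window, with occurrence times rounded
    to a given measurement precision."""
    out = []
    for t, action in trace:
        if window and not (window[0] <= t <= window[1]):
            continue  # outside the intruder's time window
        if precision:
            t = round(t / precision) * precision  # blur the measurement
        out.append((t, action))
    return out

trace = [(0.13, "req"), (0.58, "key"), (1.72, "resp")]
print(observe_timed(trace, precision=0.5))   # times snapped to a 0.5s grid
print(observe_timed(trace, window=(0.5, 2.0)))  # first action invisible
```

Whether private activity can be deduced is then a question about what remains distinguishable after this lossy observation.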

Damas P. Gruska
Verifying Generalized Soundness of Workflow Nets

We improve the decision procedure from [10] for the problem of generalized soundness of workflow nets. A workflow net is generalized sound iff every marking reachable from an initial marking with k tokens on the initial place terminates properly, i.e. it can reach a marking with k tokens on the final place, for an arbitrary natural number k. Our new decision procedure not only reports whether the net is sound or not, but also returns a counterexample in case the workflow net is not generalized sound. We report on experimental results obtained with the prototype we made and explain how the procedure can be used for the compositional verification of large workflows.
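The property being decided can be made concrete with a brute-force sketch for small bounded nets. This is only an illustration of the definition of k-soundness for one fixed k; the paper's decision procedure for generalized soundness (all k at once) is far more sophisticated:

```python
from collections import deque

def reachable_markings(m0, transitions, limit=10_000):
    """All markings reachable from m0. A marking is a tuple of token
    counts per place; a transition is a (consume, produce) pair."""
    seen, frontier = {m0}, deque([m0])
    while frontier and len(seen) < limit:
        m = frontier.popleft()
        for consume, produce in transitions:
            if all(m[i] >= consume[i] for i in range(len(m))):
                m2 = tuple(m[i] - consume[i] + produce[i] for i in range(len(m)))
                if m2 not in seen:
                    seen.add(m2)
                    frontier.append(m2)
    return seen

def k_sound(transitions, places, initial, final, k):
    """Every marking reachable from k tokens on the initial place can
    still reach the marking with exactly k tokens on the final place."""
    start = tuple(k if p == initial else 0 for p in range(places))
    goal = tuple(k if p == final else 0 for p in range(places))
    return all(goal in reachable_markings(m, transitions)
               for m in reachable_markings(start, transitions))

# Toy workflow net with places 0=i, 1=p, 2=o and transitions i->p->o:
ts = [((1, 0, 0), (0, 1, 0)), ((0, 1, 0), (0, 0, 1))]
print(all(k_sound(ts, 3, 0, 2, k) for k in (1, 2, 3)))  # True
```

A counterexample, as returned by the paper's procedure, would be a reachable marking from which proper termination is impossible.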

Kees van Hee, Olivia Oanea, Natalia Sidorova, Marc Voorhoeve
Specifying Monogenetic Specializers by Means of a Relation Between Source and Residual Programs

A specification of a class of specializers for a tiny functional language, in the form of natural semantics inference rules, is presented. The specification defines a relation between source and residual programs with respect to an initial configuration (a set of input data). The specification expresses the idea of what it is to be a specialized program, avoiding where possible the details of how a specializer builds it. In particular, it abstracts from the difference between on-line and off-line specialization.

The class of specializers specified here is limited to monogenetic specializers, which includes specializers based upon partial evaluation as well as restricted supercompilation. The specification captures such supercompilation notions as configuration, driving, generalization of a configuration, and a simple case of splitting a configuration.

The proposed specification is an intensional definition of equivalence between source and residual programs. It provides a shorter route to proving the correctness and other properties of specializers than the usual reduction to the extensional equivalence of programs does.

Andrei Klimov
Satisfiability of Viability Constraints for Pfaffian Dynamics

We investigate the behavior of a Pfaffian dynamical system with respect to viability constraints and invariants. For Pfaffian dynamical systems we construct an algorithm with an elementary (doubly-exponential) upper complexity bound for checking satisfiability of viability constraints. This algorithm also provides a useful tool for checking invariance properties of given sets.

Margarita Korovina, Nicolai Vorobjov
On the Importance of Parameter Tuning in Text Categorization

Text categorization algorithms have a large number of parameters that determine their behaviour; the effect of these parameters is not easily predicted objectively or intuitively and may very well depend on the corpus or on the document representation. Their values are usually taken over from previously published results, which may lead to less than optimal accuracy in experiments on particular corpora.

In this paper we investigate the effect of parameter tuning on the accuracy of two text categorization algorithms: the well-known Rocchio algorithm and the lesser-known Winnow. We show that the optimal parameter values for a specific corpus are sometimes very different from those found in the literature. We show that the effect of individual parameters is corpus-dependent, and that parameter tuning can greatly improve the accuracy of both Winnow and Rocchio.

We argue that the dependence of the categorization algorithms on experimentally established parameter values makes it hard to compare the outcomes of different experiments and propose the automatic determination of optimal parameters on the train set as a solution.
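The proposed automatic determination of optimal parameters on the training set amounts to a search over a parameter grid. The sketch below uses a made-up scoring function in place of a real Rocchio or Winnow implementation; the helper names are our own:

```python
import itertools

def tune(train, validate, param_grid, train_and_score):
    """Exhaustively try every parameter combination and keep the one
    with the best score on held-out training data. Any categorizer
    can plug in via the train_and_score callback."""
    best_params, best_score = None, float("-inf")
    for combo in itertools.product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), combo))
        score = train_and_score(train, validate, **params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy "accuracy" surface peaking at beta=4, threshold=0.5:
score = lambda tr, va, beta, threshold: -((beta - 4) ** 2 + (threshold - 0.5) ** 2)
grid = {"beta": [1, 2, 4, 8], "threshold": [0.3, 0.5, 0.7]}
best_params, best_score = tune(None, None, grid, score)
print(best_params)  # {'beta': 4, 'threshold': 0.5}
```

Reporting the tuned values alongside the results is what makes experiments on different corpora comparable, which is the point the abstract argues.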

Cornelis H. A. Koster, Jean G. Beney
Preconditions for Structural Synthesis of Programs

This paper presents an extension to the logical language used in Structural Synthesis of Programs (SSP) and describes a modified synthesis algorithm that handles branching in program synthesis. The extension originates from practical experience and introduces statements with preconditions into the logical language of SSP. Using these preconditions, one can describe variable domain restrictions in domain-oriented models and gain higher flexibility in engineering modelling.

Vahur Kotkas
How to Verify and Exploit a Refinement of Component-Based Systems

In order to deal with the verification of large systems, compositional approaches postpone in part the problem of combinatorial explosion during model exploration. The purpose of the work presented in this paper is to establish a compositional framework in which verification may proceed through refinement-based specification and component-based verification.

First, a constraint-synchronised product operator enables automated compositional verification of a refinement relation for component-based systems. Second, safety LTL properties of the whole system are checked from local safety LTL properties of its components. The main advantage of our specification and verification approach is that LTL properties are preserved through composition and refinement.

Olga Kouchnarenko, Arnaud Lanoix
Refinements in Typed Abstract State Machines

While Abstract State Machines (ASMs) provide a general purpose development method, it is advantageous to provide extensions that ease their use in particular application areas. This paper focuses on such extensions for the benefit of a “refinement calculus” in the area of data warehouses and on-line analytical processing (OLAP). We show that providing typed ASMs helps to exploit the existing logical formalisms used in data-intensive areas to define a ground model and refinement rules. We also note that the extensions do not increase the expressiveness of ASMs, as each typed ASM will be equivalent to an “ordinary” one.

Sebastian Link, Klaus-Dieter Schewe, Jane Zhao
Report on an Implementation of a Semi-inverter

Semi-inversion is a generalisation of inversion: A semi-inverse of a program takes some of the inputs and outputs of the original program and returns the remaining inputs and outputs.

We report on an implementation of a semi-inversion method, show some examples of semi-inversions produced by the implementation, and discuss limitations and possible extensions.

Torben Ægidius Mogensen
Loop Parallelization in Multi-dimensional Cartesian Space

Loop parallelization is of great importance to the automatic translation of sequential code into parallel code. We apply Diophantine equations to compute the basic dependency vector sets covering all possible non-uniform dependencies between loop iterations. To partition the resulting dependency space into multi-dimensional tiles of suitable shape and size, a new genetic algorithm is proposed in this article. Also, a new scheme based on multi-dimensional wave-fronts is developed to convert the multi-dimensional parallelepiped tiles into parallel loops. The problem of determining optimal tiles is NP-hard. By presenting a new constrained genetic algorithm, this paper solves the tiling problem, for the first time, in Cartesian spaces of any dimensionality.
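The wave-front scheme can be illustrated for a two-dimensional loop nest. This is a toy sketch under an assumed uniform (1,1)-style dependency, not the paper's multi-dimensional tiling algorithm:

```python
def wavefronts(n, m):
    """Group the iterations of a doubly nested loop into wave-fronts
    (anti-diagonals): under a dependency from (i-1, j) and (i, j-1),
    all iterations with the same i + j are mutually independent and
    can execute in parallel."""
    fronts = {}
    for i in range(n):
        for j in range(m):
            fronts.setdefault(i + j, []).append((i, j))
    # Fronts must run in increasing diagonal order; within a front,
    # iterations may run concurrently.
    return [fronts[d] for d in sorted(fronts)]

print(wavefronts(2, 3))
# [[(0, 0)], [(0, 1), (1, 0)], [(0, 2), (1, 1)], [(1, 2)]]
```

Replacing single iterations with parallelepiped tiles gives the coarser-grained version the article converts into parallel loops.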

Saeed Parsa, Shahriar Lotfi
An ASM Semantics of Token Flow in UML 2 Activity Diagrams

The token flow semantics of UML 2 activity diagrams is formally defined using Abstract State Machines. Interruptible activity regions and multiplicity bounds for pins are considered for the first time in a comprehensive and rigorous way. The formalisation provides insight into problems with the UML specification, and their solutions. It also serves as a basis for an integrated environment supporting the simulation and debugging of activity diagrams.

Stefan Sarstedt, Walter Guttmann
Well-Structured Model Checking of Multiagent Systems

We address the model checking problem for the combination of Computation Tree Logic (CTL) and Propositional Logic of Knowledge (PLK) in finite systems with the perfect recall synchronous semantics. We have already published an (update+abstraction) algorithm for model checking with a detailed upper time bound. This algorithm reduces model checking of the combined logic to model checking of CTL in a finite abstract space (consisting of certain finite trees). Unfortunately, the known upper bound on the size of the abstract space (i.e. the number of trees) is a non-elementary function of the size of the background system. Thus a straightforward use of a CTL model checker for model checking the combined logic seems to be infeasible. Hence it makes sense to apply techniques that have been developed for infinite-state model checking. In the present paper we demonstrate that the abstract space, provided with a partial order on trees, is a well-structured labeled transition system in which every property expressible in the propositional μ-Calculus can be characterized by a finite computable set of maximal elements. We tested the feasibility of this approach to model checking the combined logic in a perfect recall synchronous environment by automatically model checking a parameterized example.

N. V. Shilov, N. O. Garanina
Development of a Robust Data Mining Method Using CBFS and RSM

Data mining (DM) has emerged as one of the key features of many information system applications. While data analysis (DA) represents a significant advance in the type of analytical tools currently available, there are limitations to its capability. In order to address one of these limitations, the identification of causal relationships, we propose an integrated approach, called robust data mining (RDM), which can reduce the dimensionality of large data sets and provide detailed statistical relationships among the factors as well as robust factor settings. The primary objective of this paper is two-fold. First, we show how DM techniques can be effectively applied to a wastewater treatment process design by applying a correlation-based feature selection (CBFS) method. This method may be far more effective than other methods when a large number of input factors is considered in a process design procedure. Second, we show how DM results can be integrated into a robust design (RD) paradigm based on the selected significant factors. Our numerical example clearly shows that the proposed RDM method can efficiently find significant factors and the optimal settings by reducing dimensionality.

Sangmun Shin, Yi Guo, Yongsun Choi, Myeonggil Choi, Charles Kim
Pushout: A Mathematical Model of Architectural Merger

Although there are many formal representations of architecture, actually determining what an architecture should be when systems are merged is largely based on context and human intuition. The goal of this paper is to find a mathematical model which supports this context and determines the architecture when the systems have been merged. A category of architectural models is presented, and the pushout in this category provides the unique minimal merger of two architectures by way of an abstraction of the desired intersection. We conclude by identifying deeper aspects of architectural type which should be incorporated into this theory, and how the whole model might be automated.

Andrew Solomon
A Formal Model of Data Privacy

Information systems support data privacy by constraining users' access to public views, thereby hiding the non-public underlying data. The privacy problem is to prove that none of the private data can be inferred from the information that is made public. We present a formal definition of the privacy problem based on the notion of certain answers. We then investigate the privacy problem in the contexts of relational databases and ontology-based information systems.

Phiniki Stouppa, Thomas Studer
Linear Complementarity and P-Matrices for Stochastic Games

We define the first nontrivial polynomially recognizable subclass of P-matrix Generalized Linear Complementarity Problems (GLCPs) with a subexponential pivot rule. No such classes/rules were previously known. We show that a subclass of Shapley turn-based stochastic games, subsuming Condon’s simple stochastic games, is reducible to the new class of GLCPs. Based on this we suggest the new strongly subexponential combinatorial algorithms for these games.

Ola Svensson, Sergei Vorobyov

Short Papers

RapidOWL — An Agile Knowledge Engineering Methodology

An analysis of the application of existing knowledge engineering methodologies and tools shows that, up to now, they are virtually unused in practice (see [13, page 16]). This stands in contrast to the often-proclaimed necessity of knowledge engineering. What can be the reason for this discrepancy? Most of the existing knowledge engineering methodologies adopt techniques and apply process models from software engineering. However, in many scenarios the required knowledge engineering tasks exhibit specific characteristics of which a knowledge engineering methodology should be aware. In the following, we briefly describe some specific characteristics of knowledge engineering that are important for RapidOWL.

Sören Auer, Heinrich Herre
BURS-Based Instruction Set Selection

Application-specific processors (ASIPs) look very promising as a platform for embedded systems, since they combine the flexibility of a programmable device with the efficiency of application-specific hardware. A number of approaches to designing application-specific instruction sets have been introduced in recent years. We apply the BURS (Bottom-Up Rewrite System) technique, commonly used for retargetable code generation, to this problem. As a result, a simple algorithm is presented that generates both an instruction set and assembly code from the source program; this algorithm can be used for retargetable code generation as well.

Dmitri Boulytchev
Improved Resolution-Based Method for Satisfiability Checking Formulas of the Language L

The language L is used for specifying finite automata, and is a fragment of a first-order language with monadic predicates. Checking specifications for satisfiability plays an important role in the development of reactive algorithms. The restricted syntax of this language, together with its interpretation over the integers, makes it possible to substantially improve resolution-based methods for satisfiability checking. This has been done in previous papers devoted to R- and S-resolution. In this paper, we present yet another improvement based on restricting the type of atoms upon which resolution is allowed.

Anatoly Chebotarev, Sergey Krivoi
Real-Time Stable Event Structures and Marked Scott Domains: An Adjunction

Event structures constitute a major branch of models for concurrency. Their advantage is that they explicitly exhibit the interplay between concurrency and nondeterminism. In a seminal work, Winskel showed that categories of prime and stable event structures can be related to a category of Scott domains by adjunctions. The intention of this note is to show the applicability of the category-theoretic framework to a real-time extension of stable event structures, in order to identify suitable semantic domains for the models. To that end, we first introduce a category ${\mathbf{\underline{TSES}}}$ of real-time stable event structures and a category ${\mathbf{\underline{MDom}}}$ of a particular class of Scott domains, called marked Scott domains, and then define an adjunction between ${\mathbf{\underline{TSES}}}$ and ${\mathbf{\underline{MDom}}}$.

R. S. Dubtsov
Streaming Networks for Coordinating Data-Parallel Programs

A new coordination language for distributed data-parallel programs, called SNet, is presented. The intention of SNet is to introduce advanced structuring techniques into a coordination language: stream processing and various forms of subtyping. The talk will present the organisation of SNet and its major type inference algorithms, and will briefly discuss the current state of the implementation and possible applications.

Clemens Grelck, Sven-Bodo Scholz, Alex Shafarenko
Formal Methods in Industrial Software Standards Enforcement

The article presents an approach to developing an infrastructure for software standards usage. The approach is based on the formalization of standards and the automated derivation of conformance tests from the resulting formal specifications. Strong technological support of the engineering aspects of this process makes it applicable to software standards of real-life complexity, as illustrated by its application to the Linux Standard Base. The work stands in line with the goals of the international initiative Grand Challenge 6: Dependable Systems Evolution [1].

Alexey Grinevich, Alexey Khoroshilov, Victor Kuliamin, Denis Markovtsev, Alexander Petrenko, Vladimir Rubanov
Visual Rules Modeling

Rules are widely recognized to play an important role in the Semantic Web. They are a critical technology component for the early adoption and application of knowledge-based techniques in e-business, especially enterprise integration and B2B e-commerce. This includes, in particular, markup languages for integrity and derivation rules, such as the Semantic Web Rule Language (SWRL) [5], which has recently been proposed as an extension of the Web ontology language OWL [4]. Rules also play an important role in information systems engineering, especially in the specification of functional requirements, where business rules are the foundation for capturing and modeling business application logic.

Sergey Lukichev, Gerd Wagner
Security for Multithreaded Programs Under Cooperative Scheduling

Information flow exhibited by multithreaded programs is subtle because the attacker may exploit scheduler properties when deducing secret information from publicly observable outputs. Volpano and Smith have introduced a protect command that prevents the scheduler from observing the sensitive timing behavior of protected commands and therefore prevents undesired information flows. While a useful construct, protect is nonstandard and difficult to implement. This paper presents a transformation that eliminates the need for protect under cooperative scheduling. We show that both termination-insensitive and termination-sensitive security can be enforced by variants of the transformation in a language with dynamic thread creation.

Alejandro Russo, Andrei Sabelfeld
A Fully Dynamic Algorithm for Recognizing and Representing Chordal Graphs

This paper considers the problem of recognizing and representing dynamically changing chordal graphs. The input to the problem consists of a series of modifications to be performed on a graph, where modifications can be additions or deletions of complete r-vertex graphs. The purpose is to maintain a representation of the graph as long as it remains chordal and to detect when it ceases to be so.
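As background, and not the paper's dynamic algorithm (which maintains the representation incrementally), the static property being maintained can be sketched via the standard simplicial-elimination characterization: a graph is chordal iff its vertices can be removed one by one, each having a clique as its remaining neighbourhood.

```python
from itertools import combinations

def is_chordal(adj):
    """Simple (non-optimal) chordality test by greedy simplicial
    elimination: repeatedly remove a vertex whose remaining
    neighbours are pairwise adjacent; succeed iff none is stuck."""
    remaining = set(adj)
    while remaining:
        simplicial = next(
            (v for v in remaining
             if all(u in adj[w]
                    for u, w in combinations(adj[v] & remaining, 2))),
            None)
        if simplicial is None:
            return False  # no simplicial vertex left: not chordal
        remaining.discard(simplicial)
    return True

# A 4-cycle has no chord, hence is not chordal;
# adding the chord 1-3 makes it chordal.
c4 = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
c4_chord = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2, 4}, 4: {1, 3}}
print(is_chordal(c4))        # False
print(is_chordal(c4_chord))  # True
```

This naive test re-examines the whole graph on every query; the point of the paper's fully dynamic algorithm is precisely to avoid such recomputation under the stated modification operations.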

Tursunbay kyzy Yrysgul
A Knowledge Portal for Cultural Information Resources: Towards an Architecture

The paper presents the concept and architecture of a specialized Internet portal providing semantic access to cultural knowledge and information resources (i.e., electronic collections). The information basis of such a portal is an ontology that integrates information resources relevant to the portal's subject domain into a uniform information space and provides content-based (semantic) access to these resources. The portal is adaptable to the needs of a variety of users: it provides multilingual access and supports a user profile with personal preferences and areas of interest. The portal supports the formulation of queries in terms of the chosen subject domain and ontology-driven navigation through the portal's information space. The information space is extended both by experts and by a specialized subsystem that searches for and automatically indexes relevant resources.

Yury Zagorulko, Jawed Siddiqi, Babak Akhgar, Olesya Borovikova
Backmatter
Metadata
Title
Perspectives of Systems Informatics
Editors
Irina Virbitskaite
Andrei Voronkov
Copyright Year
2007
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-70881-0
Print ISBN
978-3-540-70880-3
DOI
https://doi.org/10.1007/978-3-540-70881-0
