
2012 | Book

Mathematical and Engineering Methods in Computer Science

7th International Doctoral Workshop, MEMICS 2011, Lednice, Czech Republic, October 14-16, 2011, Revised Selected Papers

Edited by: Zdeněk Kotásek, Jan Bouda, Ivana Černá, Lukáš Sekanina, Tomáš Vojnar, David Antoš

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This volume constitutes the thoroughly refereed post-conference proceedings of the 7th International Doctoral Workshop on Mathematical and Engineering Methods in Computer Science, MEMICS 2011, held in Lednice, Czech Republic, on October 14-16, 2011.

The 13 revised full papers presented together with 6 invited talks were carefully reviewed and selected from 38 submissions. The papers address all current issues of mathematical and engineering methods in computer science, especially: software and hardware dependability, computer security, computer-aided analysis and verification, testing and diagnostics, simulation, parallel and distributed computing, grid computing, computer networks, modern hardware and its design, non-traditional computing architectures, software engineering, computational intelligence, quantum information processing, computer graphics and multimedia, signal, text, speech, and image processing, and theoretical computer science.

Table of Contents

Frontmatter
Rigorous System Design: The BIP Approach
Abstract
Rigorous system design requires the use of a single powerful component framework allowing the designed system to be represented at different levels of detail, from application software to its implementation. This is essential for ensuring overall coherency and correctness. The paper introduces a rigorous design flow based on the BIP (Behavior, Interaction, Priority) component framework [1]. This design flow relies on several tool-supported source-to-source transformations that progressively and correctly transform high-level application software into efficient implementations for specific platforms.
Ananda Basu, Saddek Bensalem, Marius Bozga, Paraskevas Bourgos, Joseph Sifakis
Natural Born Computing
Abstract
Nature is not afraid of complexity. Her solutions exploit the unpredictable and messy nature of reality. But our technology seems to be very different. Instead of exploiting its environment it is more frequently damaged by that environment. In this article I describe how we can learn from natural systems and create new technologies that exploit natural principles. I describe our investigations into the technologies of the future – devices that can adapt, be fault tolerant, and even assemble themselves. Examples of a self-repairing robot and physical self-assembling systems are shown, and I describe my systemic computer concept which aims to be the first parallel fault tolerant computer that is based on general biological systems. Through examples such as these, I argue that while we may never be able to predict exactly what a natural system may do, that does not prevent such systems from being extremely useful for us – after all, we are unpredictable natural systems ourselves.
Peter J. Bentley
Games and Markov Decision Processes with Mean-Payoff Parity and Energy Parity Objectives
Abstract
In this paper we survey results on two-player games on graphs and Markov decision processes with parity, mean-payoff and energy objectives, and on the combination of mean-payoff and energy objectives with parity objectives. These problems have applications in the verification and synthesis of reactive systems in resource-constrained environments.
Krishnendu Chatterjee, Laurent Doyen
Assessing System Vulnerability Using Formal Verification Techniques
Abstract
Hardware systems are becoming more and more vulnerable to soft errors caused by radiation or process variations. Design techniques to cope with these problems are built into the system. But how can one verify that the final system is as resilient as expected? The paper covers modeling issues related to assessing fault tolerance and reliability. Existing approaches are reviewed that analyze transient faults on the electrical as well as the logical level. Trade-offs regarding resource requirements and quality of results are discussed, and the individual advantages are highlighted.
Görschwin Fey
Information Security in a Quantum World
Abstract
It is well known that classical computationally-secure cryptosystems may be susceptible to quantum attacks, i.e., attacks by adversaries able to process quantum information. A prominent example is the RSA public key cryptosystem, whose security is based on the hardness of factoring; it can be broken using a quantum computer running Shor’s efficient factoring algorithm. In this extended abstract, we review an argument which shows that a similar problem can arise even if a cryptosystem provides information-theoretic security. As long as its security analysis is carried out within classical information theory, attacks by quantum adversaries cannot in general be excluded.
Renato Renner
Computer Memory: Why We Should Care What Is under the Hood
Abstract
The memory subsystems of contemporary computer architectures are increasingly complex; in fact, so much so that it becomes difficult to estimate the performance impact of many coding constructs, and some long-known coding patterns are even discovered to be wrong in principle. In contrast, many researchers still reason about algorithmic complexity in simple terms, where memory operations are sequential and of equal cost. The goal of this talk is to give an overview of some memory subsystem features that violate this assumption significantly, with the ambition of motivating the development of algorithms tailored to contemporary computer architectures.
Vlastimil Babka, Petr Tůma
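The assumption that all memory operations cost the same is easy to violate in practice. The short Python/NumPy sketch below is an illustration, not material from the talk: it sums the same number of elements twice, once with unit stride and once with a large stride so that almost every access touches a different cache line; the strided variant is typically several times slower, with the exact ratio depending on the machine.

    import time
    import numpy as np

    data = np.zeros(16_000_000)            # ~128 MB of float64
    n = 1_000_000                          # same number of additions in both cases

    start = time.perf_counter()
    data[:n].sum()                         # contiguous: unit stride, streams through memory
    t_contiguous = time.perf_counter() - start

    start = time.perf_counter()
    data[::16].sum()                       # strided: 128-byte steps, roughly one cache line per element
    t_strided = time.perf_counter() - start

    print(f"contiguous: {t_contiguous:.4f}s  strided: {t_strided:.4f}s")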
Frequency Prediction of Functions
Abstract
Prediction of functions is one of the processes considered in inductive inference. There is a “black box” with a given total function f in it. The result of the inductive inference machine F(⟨f(0), f(1), …, f(n)⟩) is expected to be f(n + 1). Deterministic and probabilistic prediction of functions has been widely studied. Frequency computation is a mechanism used to combine features of deterministic and probabilistic algorithms. Frequency computation has been used for several types of inductive inference, especially for learning via queries. We study frequency prediction of functions and show that there exists an interesting hierarchy of predictable classes of functions.
Kaspars Balodis, Ilja Kucevalovs, Rūsiņš Freivalds
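To make the prediction setting concrete: the machine F reads the values ⟨f(0), …, f(n)⟩ and outputs a guess for f(n + 1). The minimal Python sketch below shows one deterministic strategy, finite-difference extrapolation, which predicts correctly for all polynomial sequences; it is an illustration of the setting only and is not taken from the paper.

    def predict_next(values):
        # Toy deterministic predictor F: given <f(0), ..., f(n)>, guess f(n+1)
        # by repeated finite differences (exact for polynomial sequences).
        diffs = [list(values)]
        while len(diffs[-1]) > 1 and any(diffs[-1]):
            row = diffs[-1]
            diffs.append([row[i + 1] - row[i] for i in range(len(row) - 1)])
        guess = diffs[-1][-1]
        for row in reversed(diffs[:-1]):
            guess = row[-1] + guess
        return guess

    print(predict_next([0, 1, 4, 9]))   # f(x) = x^2, prints 16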
Timed Automata Approach to Verification of Systems with Degradation
Abstract
We focus on systems that naturally incorporate a degrading quality, such as electronic devices with a degrading electric charge or broadcasting networks with decreasing power or quality of the transmitted signal. For such systems, we introduce an extension of linear temporal logic with quantitative constraints (Linear Temporal Logic with Degradation Constraints, or DLTL for short) that provides a user-friendly formalism for specifying properties involving quantitative requirements on the level of degradation. The syntax of DLTL resembles that of Metric Interval Temporal Logic (MITL), which was designed for reasoning about timed systems. We therefore investigate their relationship and the possibility of translating the DLTL verification problem for systems with degradation into the previously solved MITL verification problem for timed automata. We show that, through this translation, the DLTL model checking problem can be solved with limited, yet arbitrary, precision.
Further, we show that probability in Markov Decision Processes can be viewed as a degrading quality and DLTL as a probabilistic linear temporal logic with quantitative operators. We discuss the expressiveness of DLTL compared with that of probabilistic temporal logics.
Jiří Barnat, Ivana Černá, Jana Tůmová
Basic Operations on Binary Suffix-Free Languages
Abstract
We give a characterization of nondeterministic automata accepting suffix-free languages, and a sufficient condition on deterministic automata to accept suffix-free languages. Then we investigate the state complexity of basic operations on binary suffix-free regular languages. In particular, we show that the upper bounds on the state complexity of all the boolean operations as well as of Kleene star are tight in the binary case. On the other hand, we prove that the bound for reversal cannot be met by binary languages. This solves several open questions stated by Han and Salomaa (Theoret. Comput. Sci. 410, 2537-2548, 2009).
Roland Cmorik, Galina Jirásková
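For reference, a language is suffix-free if no word in it is a proper suffix of another word in it. The toy Python check below works on a finite list of words only and merely illustrates the definition; the paper works with the accepting automata themselves.

    def is_suffix_free(words):
        # True iff no word is a proper suffix of another word in the set.
        words = set(words)
        return not any(u != v and v.endswith(u) for u in words for v in words)

    print(is_suffix_free(["ab", "bab"]))   # False: "ab" is a proper suffix of "bab"
    print(is_suffix_free(["ab", "ba"]))    # True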
Efficient Data Representation of Large Job Schedules
Abstract
The increasing popularity of advanced schedule-based techniques designed to solve Grid scheduling problems requires the use of efficient data structures to represent the constructed job schedules. Based on our previous research in the area of advanced scheduling algorithms, we have developed a data representation designed to maintain large job schedules. We provide new details of the applied representation, especially concerning the binary heap data structure. The heap guarantees good efficiency of the crucial schedule update procedure, which is used to keep the schedule consistent and up-to-date with respect to the dynamically changing state of the system. We prove the time complexity related to the use of such a structure and, using an experimental evaluation, we demonstrate the performance of this structure even for very large job schedules.
Dalibor Klusáček, Hana Rudová
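The abstract does not spell out how the binary heap is used, so the sketch below is only a generic illustration of the idea, assuming a min-heap keyed by job start times with lazy invalidation: an update to a single job then costs O(log n) instead of a rebuild of the whole structure.

    import heapq

    class ScheduleHeap:
        # Illustrative min-heap over (start_time, job_id) entries with lazy invalidation.
        def __init__(self):
            self._heap = []        # entries (start_time, job_id), possibly stale
            self._current = {}     # job_id -> currently valid start_time

        def push(self, job_id, start_time):
            self._current[job_id] = start_time
            heapq.heappush(self._heap, (start_time, job_id))

        def update(self, job_id, new_start_time):
            # The old entry stays in the heap and is skipped when popped.
            self.push(job_id, new_start_time)

        def pop_earliest(self):
            while self._heap:
                start_time, job_id = heapq.heappop(self._heap)
                if self._current.get(job_id) == start_time:
                    del self._current[job_id]
                    return job_id, start_time
            raise IndexError("empty schedule")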
Prefix-Free Regular Languages: Closure Properties, Difference, and Left Quotient
Abstract
We show that the class of prefix-free languages is closed under intersection, difference, concatenation, square, and the k-th power, and is not closed under complement, union, symmetric difference, Kleene star, reversal, cyclic shift, shuffle, and left quotient. Then, we study the state complexity of difference and left quotient of prefix-free regular languages. In both cases we get tight bounds. In the case of difference, the tight bound is mn − m − 2n + 4 and is met by binary languages. In the case of left quotient, the tight bound is 2^(n−1). This bound is met by languages over an (n − 1)-letter alphabet and cannot be met using smaller alphabets.
Monika Krausová
Noise Injection Heuristics for Concurrency Testing
Abstract
Testing of concurrent software is difficult due to the non-determinism present in the scheduling of concurrent threads. Existing testing approaches tackle this problem either by using a modified scheduler, which allows possible scheduling alternatives to be explored systematically, or by using random or heuristic noise injection, which allows different scheduling scenarios to be observed. In this paper, we experimentally compare several existing noise injection heuristics, both from the point of view of coverage of possible behaviours and from the point of view of error discovery probability. Moreover, we propose a new noise injection heuristic which uses concurrency coverage information to decide where to inject noise, and we show that it can outperform the existing approaches in certain cases.
Bohuslav Křena, Zdeněk Letko, Tomáš Vojnar
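As a rough illustration of coverage-based noise placement (the concrete heuristics compared in the paper are not reproduced here), the Python sketch below injects a sleep or a yield before an instrumented access, with a probability that decreases as the access point gets exercised more often. The point identifier and the probability formula are hypothetical choices.

    import random
    import threading
    import time

    _seen = {}                 # access-point id -> number of times exercised so far
    _seen_lock = threading.Lock()

    def noise_point(point_id, base_prob=0.1, max_sleep_ms=5):
        # Called by instrumentation before a shared-memory access or lock operation.
        with _seen_lock:
            count = _seen.get(point_id, 0)
            _seen[point_id] = count + 1
        prob = base_prob + (1.0 - base_prob) / (1 + count)   # rarely covered points get more noise
        if random.random() < prob:
            if random.random() < 0.5:
                time.sleep(random.uniform(0, max_sleep_ms) / 1000.0)
            else:
                time.sleep(0)                                # just yield the time slice

A test harness would call noise_point("Account.withdraw:42") before each instrumented access; the identifier shown is purely illustrative.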
Low GPU Occupancy Approach to Fast Arithmetic Coding in JPEG2000
Abstract
Arithmetic coding, and especially the adaptive MQ-coding of JPEG2000, is a serial process that does not match well the nature of GPUs as massively parallel processors. In this paper we study and evaluate several approaches to accelerating MQ-coding on commodity GPU hardware, including our proposal of a new enhanced renormalization procedure. We conclude with a “low occupancy approach” and a 5.6–16× average speedup compared to state-of-the-art multi-threaded CPU implementations.
Jiří Matela, Martin Šrom, Petr Holub
Using Dimensionality Reduction Method for Binary Data to Questionnaire Analysis
Abstract
In this paper we introduce a modified version of an existing dimensionality reduction method for binary data, weighted logistic principal component analysis (WLPCA). We propose to fit the basis vectors of the latent natural-parameter subspace in a successive procedure instead of fitting them all at once, so that the vectors are sorted by their explanatory power in terms of model likelihood. Based on our modified WLPCA model, we present a methodology for analyzing binary (true/false) questionnaires. The purpose of the methodology is to give the authors of questionnaires a global overview of relationships between questions based on the correlations of binary answers. In the experiment we employ our proposed model to analyze a psychiatric questionnaire, namely the Junior Temperament and Character Inventory (JTCI). The results suggest that our methodology can reveal interesting relationships between questions and that our modified model is better suited for such an analysis than the existing versions of the logistic principal component analysis model.
Jakub Mažgut, Martina Paulinyová, Peter Tiňo
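The successive-fitting idea can be sketched as follows, assuming a simplified, unweighted logistic PCA trained by plain gradient ascent (the weighting, the bias term, and the exact optimization procedure of the paper are not reproduced): each new rank-1 term of the natural-parameter matrix is trained while the previously fitted terms stay fixed, so earlier components account for more of the likelihood than later ones.

    import numpy as np

    def fit_successive(X, n_components, n_iter=500, lr=0.1, seed=0):
        # X: binary data matrix (respondents x questions), entries in {0, 1}.
        # Simplified sketch: fits rank-1 terms of the natural-parameter matrix one at a time.
        N, D = X.shape
        rng = np.random.default_rng(seed)
        theta_fixed = np.zeros((N, D))            # contribution of already-fitted components
        components = []
        for _ in range(n_components):
            a = rng.normal(scale=0.01, size=N)    # per-respondent scores
            b = rng.normal(scale=0.01, size=D)    # per-question loading
            for _ in range(n_iter):
                theta = theta_fixed + np.outer(a, b)
                resid = X - 1.0 / (1.0 + np.exp(-theta))   # gradient of the Bernoulli log-likelihood wrt theta
                a += lr * (resid @ b) / D
                b += lr * (resid.T @ a) / N
            theta_fixed += np.outer(a, b)
            components.append(b / np.linalg.norm(b))
        return components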
Generalized Maneuvers in Route Planning
Abstract
We study an important practical aspect of the route planning problem in real-world road networks: maneuvers. Informally, maneuvers represent various irregularities of the road network graph such as turn prohibitions, traffic light delays, roundabouts, forbidden passages, and so on. We propose a generalized model which can handle arbitrarily complex (and even negative) maneuvers, and we outline how to enhance Dijkstra’s algorithm in order to answer route planning queries in this model without prior adjustments of the underlying road network graph.
Petr Hliněný, Ondrej Moriš
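The paper’s generalized maneuver model is not given in the abstract; the sketch below only shows the common special case of turn penalties and turn prohibitions, handled by running Dijkstra’s algorithm over (node, incoming edge) states instead of plain nodes, and it assumes non-negative costs.

    import heapq
    import itertools

    def route_with_turns(graph, turn_cost, source, target):
        # graph: {u: [(v, edge_cost), ...]}
        # turn_cost(in_edge, out_edge): extra cost of the maneuver, or None if it is forbidden.
        counter = itertools.count()              # tie-breaker so the heap never compares edges
        queue = [(0, next(counter), source, None)]
        settled = set()
        while queue:
            cost, _, node, in_edge = heapq.heappop(queue)
            if node == target:
                return cost
            if (node, in_edge) in settled:
                continue
            settled.add((node, in_edge))
            for succ, edge_cost in graph.get(node, []):
                out_edge = (node, succ)
                extra = 0 if in_edge is None else turn_cost(in_edge, out_edge)
                if extra is None:                # prohibited maneuver
                    continue
                heapq.heappush(queue, (cost + edge_cost + extra, next(counter), succ, out_edge))
        return float("inf")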
STANSE: Bug-Finding Framework for C Programs
Abstract
Stanse is a free (available under the GPLv2 license) modular framework for finding bugs in C programs using static analysis. Its two main design goals are (1) the ability to process large software projects such as the Linux kernel and (2) extensibility with new bug-finding techniques with minimal effort. Four bug-finding algorithms are currently implemented within Stanse: AutomatonChecker checks properties described in an automata-based formalism, ThreadChecker detects deadlocks among multiple threads, LockChecker finds locking errors based on statistics, and ReachabilityChecker looks for unreachable code. Stanse has been tested on the Linux kernel, where it has found dozens of previously undiscovered bugs.
Jan Obdržálek, Jiří Slabý, Marek Trtík
Introducing the FPGA-Based Hardware Architecture of Systemic Computation (HAoS)
Abstract
This paper presents HAoS, the first Hardware Architecture of the bio-inspired computational paradigm known as Systemic Computation (SC). SC was designed to support the modelling of biological processes inherently by defining a massively parallel non-conventional computer architecture and a model of natural behaviour. In this work we describe a novel custom digital design, which addresses the SC architecture parallelism requirement by exploiting the inbuilt parallelism of a Field Programmable Gate Array (FPGA) and by using the highly efficient matching capability of a Ternary Content Addressable Memory (TCAM). Basic processing capabilities are embedded in HAoS, in order to minimize time-demanding data transfers, while the optional use of a CPU provides high-level processing support. We demonstrate a functional simulation-verified prototype, which takes into consideration programmability and scalability. Analysis shows that the proposed architecture provides an effective solution in terms of efficiency versus flexibility trade-off and can potentially outperform prior implementations.
Christos Sakellariou, Peter J. Bentley
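A TCAM matches a word against stored ternary patterns in which every position is 0, 1, or a don’t-care. The short Python check below illustrates just that matching rule, as a software stand-in; in HAoS the comparison runs in hardware, in parallel over all stored entries.

    def tcam_match(pattern, word):
        # Ternary match: '?' positions accept either bit.
        return len(pattern) == len(word) and all(
            p == '?' or p == w for p, w in zip(pattern, word))

    print(tcam_match("1?0", "110"))   # True: the middle bit is a wildcard
    print(tcam_match("1?0", "101"))   # False: the last bit differs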
A Parallel Compact Hash Table
Abstract
We present the first parallel compact hash table algorithm. It delivers high performance and scalability thanks to a dynamic region-based locking scheme, while requiring only a fraction of the memory of a regular hash table.
Steven van der Vegt, Alfons Laarman
Four Authorization Protocols for an Electronic Payment System
Abstract
Over recent years, Czech law has become quite liberal towards electronic payment systems dealing with low-value goods (“micropayments”) [1]. As of today, few businesses make use of the new legal facilities. This paper concerns a project aiming to support the proliferation of micropayment-based operations by creating a feature-rich, secure payment system with an open specification. More precisely, the key focus here will be payment authorization: as payments are carried out by an electronic device on a customer’s behalf, there is a clear need to prevent its abuse in case of theft. We introduce four original authorization protocols, each suited to a different environment, that prescribe the behavior of all relevant communicating devices and hopefully allow for secure and sound authorization.
Roman Žilka, Vashek Matyáš, Libor Kyncl
Backmatter
Metadata
Title
Mathematical and Engineering Methods in Computer Science
Edited by
Zdeněk Kotásek
Jan Bouda
Ivana Černá
Lukáš Sekanina
Tomáš Vojnar
David Antoš
Copyright Year
2012
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-25929-6
Print ISBN
978-3-642-25928-9
DOI
https://doi.org/10.1007/978-3-642-25929-6
