2013 | Book

New Results in Dependability and Computer Systems

Proceedings of the 8th International Conference on Dependability and Complex Systems DepCoS-RELCOMEX, September 9-13, 2013, Brunów, Poland

Edited by: Wojciech Zamojski, Jacek Mazurkiewicz, Jarosław Sugier, Tomasz Walkowiak, Janusz Kacprzyk

Publisher: Springer International Publishing

Book series: Advances in Intelligent Systems and Computing

About this book

DepCoS – RELCOMEX is an annual series of conferences organized since 2006 by the Institute of Computer Engineering, Control and Robotics (CECR), Wrocław University of Technology. Its idea came from the heritage of two other cycles of events: the RELCOMEX conferences (1977–89) and the Microcomputer Schools (1985–95), which were organized by the Institute of Engineering Cybernetics, the previous name of CECR. In contrast to those preceding meetings, which focused on conventional reliability analysis, the DepCoS mission is to develop a more comprehensive approach to computer system performability, now commonly called dependability. Contemporary technical systems are integrated unities of technical, information, organization, software and human resources. The diversity of the processes realized in a system, their concurrency and their reliance on in-system intelligence significantly impede the construction of strict mathematical models and call for the application of intelligent and soft computing methods. The submissions included in this volume illustrate the variety of problems that need to be explored in dependability analysis: methodologies and practical tools for modeling, design and simulation of systems; security and confidentiality in information processing; specific issues of heterogeneous, today often wireless, computer networks; and management of transportation networks.

Table of Contents

Frontmatter
Application Level Execution Model for Transparent Distributed Computing

Writing a distributed application involves using a number of different protocols and libraries such as CORBA, MPI, OpenMP, or portable virtual machines like the JVM or .NET. These are independent pieces of software, and gluing them together adds complexity which can be error-prone. Still, some issues, such as transparent creation and synchronization of parallel distributed threads, code replication, data communication, and hardware and software platform abstraction, are not yet fully addressed. For these reasons a programmer must still manually handle tasks that should be done automatically and transparently by the system. In this work we propose a novel computing model especially designed to abstract and automate distributed computing requirements while ensuring the dependability and scalability of the system. Our model is designed for a portable virtual machine suitable for implementation both on native hardware instruction sets and within other virtual machines like the JVM or .NET, to ensure its portability across hardware and software frameworks.

Razvan-Mihai Aciu, Horia Ciocarlie
Software Support of the Risk Reduction Assessment in the ValueSec Project Flood Use Case

The chapter presents information about the first stage of validation of the OSCAD tool for risk reduction assessment within the decision support process. First, general information about risk management and risk assessment is given, and the relation of risk assessment to the flood issue is described. Basic information about the ValueSec project and its relation to risk assessment is presented. Next, the results of the first experiments on using OSCAD as one of the possible elements supporting the Risk Reduction Assessment (RRA) software pillar in the ValueSec project are described. The possibility of using OSCAD for the RRA pillar was validated on the example of the so-called “flood use case” of the ValueSec project. This use case relates to the assessment and selection of flood countermeasures. The main objective of the validation is to find out whether the risk assessment method implemented in OSCAD can be used for the flood issue.

Jacek Bagiński
Risk Assessment Aspects in Mastering the Value Function of Security Measures

The chapter presents the risk management approach applied in the EC FP7 ValueSec project. The security measures selection process is based on three pillars: Risk Reduction Assessment (RRA), Cost-Benefit Analysis (CBA) and Qualitative Criteria Assessment (QCA). The ValueSec tool set, which is elaborated in the project, should be equipped with components corresponding to these pillars. The chapter reviews the project research focused on elaborating the decision model and on selecting existing methods to be implemented, or existing tools to be integrated, in the ValueSec framework. Risk management is a broad issue, especially in the five contexts assumed in the project. For this reason more specialized components are allowed for the RRA pillar. Currently the project is moving to the implementation and use case experimentation phase. The chapter shows the general architecture as currently implemented, and an example of the RRA component.

Andrzej Białas
Reduction of Computational Cost in Mutation Testing by Sampling Mutants

The objective of this chapter is to explore the reduction of computational costs of mutation testing by randomly sampling mutants. Several experiments were conducted in the Eclipse environment using the MuClipse and CodePro plugins and specially designed and implemented tools: Mutants Remover and Console Output Analyser. Six types of mutant subsets were generated and examined. Mutation score and source code coverage were used to evaluate the effectiveness of mutation testing with subsets of mutants. The ability to detect errors introduced “on purpose” in the source code was also examined.
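
The sampling idea is simple to sketch. The following toy Python fragment (not taken from the chapter; the mutant names, sampling fraction and kill results are made up for illustration) samples a fixed fraction of a mutant set and computes the mutation score on the sample only:

```python
import random

def sample_mutants(mutants, fraction, seed=0):
    """Randomly sample a fraction of the generated mutants."""
    rng = random.Random(seed)
    k = max(1, round(len(mutants) * fraction))
    return rng.sample(mutants, k)

def mutation_score(killed, sampled):
    """Mutation score = killed mutants / mutants examined."""
    return len(killed & set(sampled)) / len(sampled)

mutants = [f"m{i}" for i in range(100)]           # hypothetical mutant ids
subset = sample_mutants(mutants, 0.10)            # examine only 10% of them
killed = {m for m in mutants if int(m[1:]) % 3}   # toy test-run kill results
score = mutation_score(killed, subset)
```

Examining only the sampled subset trades some mutation score accuracy for a proportional reduction in test executions, which is the cost/accuracy trade-off such experiments evaluate.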

Ilona Bluemke, Karol Kulesza
Use of Neural Network Algorithms in Prediction of XLPE HV Insulation Properties under Thermal Aging

Artificial neural network algorithms have been used to predict the properties of high-voltage electrical insulation under thermal aging, in order to reduce the aging experiment time. In this work we present a short comparison of the results obtained in the case of cross-linked polyethylene (XLPE). The theoretical and experimental results are concordant. As a neural network application, we propose a new method based on a Radial Basis Function Gaussian network (RBFG) trained by two algorithms: the Random Optimization Method (ROM) and Back-propagation (BP).

Boukezzi Larbi, Boubakeur Ahmed
Computer Simulation Analysis of Cluster Model of Totally-Connected Flows on the Chain Mail

This article considers a cluster model of totally-connected flows, together with analysis software. The model is motivated by the need for appropriate methods describing traffic behavior in physical terms (Kerner), where some elements appear in implicit form. The model was first formulated at ITSC-2011 (Bugaev et al.) and in other publications in Russia. It combines the limit conditions of the leader-following model, the Lighthill-Whitham hydrodynamic approach, and generalized solutions with Rankine-Hugoniot conditions. Experiments based on each model were carried out, results were obtained, and software was developed.

Alexander P. Buslaev, Pavel M. Strusinskiy
Assessment of Network Coding Mechanism for the Network Protocol Stack 802.15.4/6LoWPAN

The 6LoWPAN (IPv6 over Low-power Wireless Personal Area Networks) protocol stack is a minimal-resource protocol suite that has been implemented in embedded operating systems. It can, however, be improved with a mechanism for more efficient use of the limited radio bandwidth: network coding, to which this article is devoted. The purpose of this article is to evaluate the effectiveness of a network coding mechanism implemented on a network component that uses the Atmel AVR Raven with the IEEE 802.15.4/6LoWPAN protocol stack. The network coding mechanism was implemented in the Contiki environment on a sample of the representative Atmel AVR hardware platform. An interim step enabling the implementation of the mechanism was an analysis of the coding method for the 802.15.4/6LoWPAN protocol stack.

Michał Byłak, Dariusz Laskowski
Reliability Analysis of Discrete Transportation Systems Using Critical States

The paper presents a resource-constrained model of a discrete transportation system that can be used to simulate its operation in the presence of faults. The simulation results are used to determine the initial level of resources that ensures seamless operation of the system. The simulator is also used to assess the conditional probability of system failure after reaching a specific set of reliability states. This is used to determine the set of critical states, i.e. the states in which the system is still operational but the probability of failure in the near future is unacceptably high. These states can be used as an indicator that the system is degrading dangerously.
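
The critical-state idea can be illustrated with a minimal sketch. The toy degradation process below is entirely invented (a resource level that drops by one with a fixed per-step probability), not the paper's model; it estimates the conditional failure probability by simulation and flags states where it exceeds a threshold:

```python
import random

def p_fail_within(level, horizon, trials=20_000, seed=1):
    """Estimate P(failure within `horizon` steps | current resource level).
    Toy model: each step the level drops by one with probability 0.3;
    the system fails when the level reaches zero (all parameters invented)."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(trials):
        lv = level
        for _ in range(horizon):
            if rng.random() < 0.3:
                lv -= 1
            if lv <= 0:
                fails += 1
                break
    return fails / trials

# A state is "critical" when the system still operates but the conditional
# failure probability is unacceptably high (threshold 0.5 chosen arbitrarily).
critical = [lv for lv in range(1, 6) if p_fail_within(lv, horizon=10) > 0.5]
```

In this toy setting the low resource levels come out as critical, mirroring the paper's use of simulation to separate operational states from dangerously degraded ones.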

Dariusz Caban, Tomasz Walkowiak
A Reference Model for the Selection of Open Source Tools for Requirements Management

The aim of this study is to build a reference model for the selection of Open Source tools for the management of customer requirements in IT projects. The construction of the reference model results from the needs of companies producing software which are also interested in streamlining the process of managing requirements using Open Source tools. This interest in Open Source tools is, in turn, a consequence of licensing costs, integration with the rest of the portfolio, and support costs. The advantage of Open Source tools is their low license cost and the ease of their adaptation, provided that there is access to a reference model for their adaptation. The problem of the IT market is the lack of such reference models for selecting Open Source tools. Therefore, the authors undertook to build such a model and to apply it in supporting the requirements development process in IT projects.

To achieve the objective, the study was divided into four main parts. The first elaborates on the issue of selecting tools in the software development cycle, indicating the need for departments creating IT systems to use appropriate tools for the given organization. The second part is devoted to the approach to the selection of tools supporting the requirements development process. The purpose of this section is to diagnose the state of IT projects and the lack of support for the requirements development process. The third (the main) part presents the idea to construct a reference model for the selection of tools to support the requirements development process. The structure and development prospects of the model are also discussed here. The fourth part is entirely devoted to examples of the application of the reference model in several IT projects.

Bartosz Chrabski, Cezary Orłowski
A Probabilistic Approach to the Count-To-Infinity Problem in Distance-Vector Routing Algorithms

The count-to-infinity problem is characteristic of routing algorithms based on a distributed implementation of the classical Bellman-Ford algorithm. In this paper a probabilistic solution to this problem is proposed. It is argued that with a Bloom filter added to the routing message, routing loops will, with high probability, not form. An experimental analysis of this solution for practical use in Wireless Sensor Networks is also included.
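
The mechanism can be sketched generically (a small Bloom filter, not the paper's parameters or message format): each route advertisement carries a filter of the nodes the route has already traversed, and a node rejects any advertisement whose filter probably contains the node itself, so loops can form only with the filter's false-positive probability.

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: k hash positions over an m-bit array."""
    def __init__(self, m=64, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))

# The filter travels with the route advertisement, recording the path so far.
path = BloomFilter()
for node in ["A", "B", "C"]:
    path.add(node)

reject_loop = "B" in path        # a node already on the path: loop detected
accept_from_D = "D" not in path  # a fresh node is (almost surely) accepted
```

The appeal in sensor networks is that the filter has a fixed, small size regardless of path length, unlike carrying the full path in the message.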

Adam Czubak
A Quality Estimation of Mutation Clustering in C# Programs

Mutation testing tasks are expensive in time and resources. Different cost reduction methods have been developed to cope with this problem. In this chapter an experimental evaluation of mutation clustering is presented. The approach was applied to object-oriented and standard mutation testing of C# programs. A quality metric was used to compare different solutions; it calculates a trade-off between mutation score accuracy and mutation costs in terms of the number of mutants and the number of tests. The results show a substantial decrease in the number of mutants and tests with only a small decline in mutation score accuracy. However, the outcome is not superior to other cost reduction methods, such as selective mutation or mutant sampling.

Anna Derezińska
Using Virtualization Technology for Fault-Tolerant Replication in LAN

We present an architecture and an algorithm for Byzantine fault-tolerant state machine replication. Our algorithm explores the advantages of virtualization to reliably detect and tolerate faulty replicas, allowing the transformation of Byzantine faults into omission faults. Our approach reduces the total number of physical replicas from 3f+1 to 2f+1. It is based on the concept of twin virtual machines, where there are two virtual machines in each physical host, each one acting as a failure detector of its twin.

Fernando Dettoni, Lau Cheuk Lung, Aldelir Fernando Luiz
Quantification of Simultaneous-AND Gates in Temporal Fault Trees

Fault Tree Analysis has been a cornerstone of safety-critical systems for many years. It has seen various extensions to enable it to analyse dynamic behaviours exhibited by modern systems with redundant components. However, none of these extended FTA approaches provide much support for modelling situations where events have to be "nearly simultaneous", i.e., where events must occur within a certain interval to cause a failure. Although one such extension, Pandora, is unique in providing a "Simultaneous-AND" gate, it does not allow such intervals to be represented. In this work, we extend the Simultaneous-AND gate to include a parameterized interval – referred to as pSAND – such that the output event occurs if the input events occur within a defined period of time. This work then derives an expression for the exact quantification of pSAND for exponentially distributed events and provides an approximation using Monte Carlo simulation which can be used for other distributions.
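
The "within a defined period" condition is easy to estimate by Monte Carlo, and for exponential inputs it can be checked against a closed form. The sketch below is a generic illustration of that idea, not the chapter's pSAND derivation; the rates and interval are arbitrary. For independent X ~ Exp(l1) and Y ~ Exp(l2), P(|X − Y| ≤ d) = 1 − (l1·e^(−l2·d) + l2·e^(−l1·d)) / (l1 + l2):

```python
import math
import random

def psand_exact(l1, l2, d):
    """P(|X - Y| <= d) for independent X ~ Exp(l1), Y ~ Exp(l2)."""
    return 1 - (l1 * math.exp(-l2 * d) + l2 * math.exp(-l1 * d)) / (l1 + l2)

def psand_mc(sample1, sample2, d, trials=100_000, seed=42):
    """Monte Carlo estimate of P(|X - Y| <= d) for arbitrary samplers,
    usable when no closed form is available."""
    rng = random.Random(seed)
    hits = sum(abs(sample1(rng) - sample2(rng)) <= d for _ in range(trials))
    return hits / trials

l1, l2, d = 0.5, 0.8, 1.0  # arbitrary failure rates and interval
mc = psand_mc(lambda r: r.expovariate(l1), lambda r: r.expovariate(l2), d)
# mc approximates psand_exact(l1, l2, d) to within Monte Carlo error
```

Swapping the two lambdas for samplers of any other distribution gives the kind of simulation-based approximation the abstract mentions for non-exponential events.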

Ernest Edifor, Martin Walker, Neil Gordon
Improving of Non-Interactive Zero-Knowledge Arguments Using Oblivious Transfer

We study non-interactive zero-knowledge (NIZK) arguments using oblivious transfer (OT) that correspond to interactive proof protocols, but assuming that the prover is computationally bounded. As opposed to single-theorem NIZK proof protocols using a common random string, NIZK argument protocols using OT are «multilingual», that is, the language L or the one-way function can be chosen and declared by the prover in non-interactive mode. These protocols use m-out-of-n OT with public keys given by the verifier to the prover in the initialization phase, and a common element whose pre-image is unknown to both prover and verifier. It is shown that, due to the usage of different verifier secret encryption keys, the implementation of NIZK argument protocols can be simplified using a single randomizer for p successive elementary transactions. For systems using 1-out-of-2 OT, the proposal allows increasing the information rate by approximately 5p/(3p+1) times, or reducing the soundness probability of NIZK arguments to the same degree. The above factor for single-use NIZK is about two, which corresponds to an almost quadratic decrease of the soundness probability. For a NIZK argument using (t+1)-out-of-2t OT (t>1), it is shown that its soundness probability for small t is essentially lower in comparison with the soundness probability of NIZK arguments using 1-out-of-2 OT.

Alexander Frolov
Virtual Environment for Implementation and Testing Private Wide Area Network Solutions

In this paper the concept of a virtual environment for implementing and testing private Wide Area Network (WAN) solutions is presented. The VMware vSphere virtualization platform is used. The paper presents the ability to reflect the structure of any given WAN topology using Vyatta software routers and the VMware virtualization platform, and verifies its reliability with regard to data transfer. The paper includes a number of performance tests to verify the dependability of the proposed solution and to provide a proof of concept for the network topology during the Design phase of the PPDIOO methodology, right before the Implementation phase.

Mariusz Gola, Adam Czubak
Optimization of Privacy Preserving Mechanisms in Mining Continuous Patterns

This work presents a new Apriori-like algorithm called CPMPP (Continuous Pattern Mining with Privacy Preservation) for exploring long continuous sequences in a distributed environment with privacy preservation. Given the fact that a cryptography-based technique may consume a considerable amount of time when enciphering data using a commutative cipher, a short form of locally frequent sequences has been applied to improve the system performance. In this approach, the length of locally frequent sequences does not exceed two items. In consequence, the proposed algorithm reduces the number of expensive cryptographic operations required to obtain the result. The conducted experiments show a significant performance increase with the increase of the length of generated patterns.

Marcin Gorawski, Pawel Jureczek
Technical and Program Aspects on Monitoring of Highway Flows (Case Study of Moscow City)

The article describes the validation and testing of traffic monitoring technology for model creation and forecasting of traffic characteristics on a city traffic network (based on a case study of Moscow). We discuss one data collection method based on a network of SSHD microwave radars installed in Moscow last year. The acquired data allow analyzing the causes of traffic jams and approximately evaluating the economic losses at typical junctions of the road network.

M. G. Gorodnichev, A. N. Nigmatulin
Integral Functionals of semi-Markov Processes in Reliability Problems

The possibility of calculating reliability characteristics and parameters on the basis of stochastic process models of an object's degradation is considered in the paper. Integral functionals of semi-Markov processes, called cumulative processes, have been used for the construction of stochastic models of degradation.

Franciszek Grabski
Generating Repair Rules for Database Integrity Maintenance

A repair system has two essential components, which are closely related to each other. When an update operation is executed, the first component detects the erroneous state, if any, and the second component repairs this state by finding the changes to the update operation that would repair it. Without the second component, the repair action, the user is forced to manually correct and re-enter an erroneous update operation. Our approach takes advantage of the integrity holding before the update operation, which limits the detection to the database state after the update operation only. The repair component also takes advantage of the integrity before the update operation and of the integrity violation after the update operation but before the repair. The focus of this paper is to generate repairs for all first-order constraints, using only substitution with no resolution search. Multiple constraints can be satisfied in parallel, without a sequential process and with no possibility of cyclic violation.

Feras Hanandeh, Yaser Quasmeh
Optimization Algorithm for the Preservation of Sensor Coverage

Sensor networks that collect traffic information for supervision and immediate intervention are widely used. The domain of real-time coverage algorithms is still a subject of debate. In order to achieve good coverage, the simulation phase is of major importance. Simulation can partially validate the performance of the algorithms and their dependability. This chapter presents a novel algorithm for coverage maintenance using driving prediction, together with the simulated results that validate its performance. Issues such as driving behavior have also been considered.

Codruta-Mihaela Istin, Horia Ciocarlie, Razvan Aciu
Methods for Detecting and Analyzing Hidden FAT32 Volumes Created with the Use of Cryptographic Tools

The article describes theoretical and practical methods for detecting and analyzing hidden volumes created with the use of cryptographic tools. The presented method is based on an analysis of the differences that result from the use of a hidden volume in FAT32 file systems. The method is effective both when the password to the host container is known and when it is not. Potential computer forensic applications of this methodology range from standard investigations to advanced analysis of network and cloud data storage.

Ireneusz Jóźwiak, Michał Kędziora, Aleksandra Melińska
Critical Infrastructures Safety Assessment Combining Fuzzy Models and Bayesian Belief Network under Uncertainties

The complexity of critical infrastructures (CI) and of systems safety assessment calls for the integration of different methods that use input data of different qualimetric natures (deterministic, stochastic, linguistic). Application of one specific group of risk methods might lead to the loss and/or disregard of a part of the safety-related information. Bayesian Belief Networks (BBN) and fuzzy logic (FL) form a basis for the development of a hybrid approach that captures all the information required for safety assessment of complex dynamic systems under uncertainties. Integration of FL-based methods and BBNs allows decreasing the amount of input information (measurements) required for safety assessment, compared to when these methods are used independently, outside the proposed integration framework. The measurement of CI parameters might be technically difficult and expensive, and the instrumentation layer's operation might be compromised in emergency situations due to its dependence on power supply. The hybrid methods might be considered as a basis for an expert system helping the operator make decisions; their application makes the operator less dependent on information from the instrumentation and control (I&C) system. An illustrative example of Nuclear Power Plant (NPP) reactor safety assessment is considered in this chapter.

Vyacheslav Kharchenko, Eugene Brezhniev, Vladimir Sklyar, Artem Boyarchuk
Towards Evolution Methodology for Service-Oriented Systems

Modern organisations are forced to evolve their IT systems to keep up with ever-changing business requirements. Service-Oriented Architecture addresses the challenge of boosting a system's modifiability by composing new functionality out of existing, independent, loosely-coupled services. This makes SOA a promising design paradigm for rapidly evolving systems. However, existing development methodologies for SOA, such as IBM's SOMA, focus more on the transition from legacy non-SOA to SOA systems, and less on their subsequent evolution. This makes the development of an evolution methodology suitable for service-oriented systems an open research problem. The presented evolution methodology comprises an evolution process and an evolution documentation model. The process is compliant with the popular ISO 20000 norm, and its artefacts have been defined in terms of the evolution documentation model. Business-driven changes are documented with architectural decisions that capture changes made to the system at various levels of scope, together with their motivation. To facilitate the change-making process, a set of typical change scenarios has been defined, comprising typical sequences of architectural decisions for the most important changes. The entire approach is illustrated with a real-world example of an internet payment system.

Szymon Kijas, Andrzej Zalewski
The LVA-Index in Clustering

In this work we describe the application of the LVA-Index in the NBC algorithm and discuss the results of the relevant experiments. The LVA-Index is based on the idea of approximation vectors and the layer approach. NBC is considered an efficient density-based clustering algorithm, and its efficiency is strictly dependent on the efficiency of determining nearest neighbors. For this reason, the authors of NBC used a simplified implementation of the VA-File and the idea of layers for indexing points and determining nearest neighbors. We noticed that it is possible to speed up the clustering by applying the LVA-Index, which provides the means for determining nearest neighbors faster. The results of the experiments prove that incorporating the LVA-Index into NBC improves the efficiency of clustering.

Piotr Lasek
Three Different Approaches in Pedestrian Dynamics Modeling – A Case Study

In this work different approaches to crowd dynamics modeling are compared in terms of efficiency and accuracy. The authors analyze and test applicability of some characteristic microscopic models including: Generalized Centrifugal Force Model, Social Distances, as well as macroscopic model represented by hydrodynamic approach. Models were compared on a real life test case, for which precise empirical results were obtained, to find sufficient balance between dependability of results and computational effort.

Robert Lubaś, Janusz Miller, Marcin Mycek, Jakub Porzycki, Jarosław Wąs
The End-To-End Rate Adaptation Application for Real-Time Video Monitoring

In modern advanced monitoring solutions, video data are shared across IP networks. Such systems are used for connection with remote offices or other locations and provide information from multiple video sensors. However, the transmission of data from multiple cameras may lead to degradation of video quality and even to loss of transmission capability, especially in heterogeneous networks. This paper proposes a concept of an application with end-to-end rate adaptation for video monitoring systems, ensuring accessibility and retainability of the video service even with limited capacity of the transmission system.

P. Lubkowski, Dariusz Laskowski
Discrete Transportation Systems Quality Performance Analysis by Critical States Detection

The paper presents an analysis of the performance of discrete transportation systems (DTS). A formal model of the transportation system is presented, taking into consideration functional and reliability aspects. Monte Carlo simulation is used for estimating the system quality metric. The quality of the system is assessed at three levels: operational, critical and failed. The proposed solution allows predicting the system quality within a short time horizon. The paper includes numerical results for a real mail distribution system.

Jacek Mazurkiewicz, Tomasz Walkowiak
An Expanded Concept of the Borrowed Time as a Mean of Increasing the Average Speed Isotropy on Regular Grids

In this work it is investigated how the proper choice of the underlying grid for computer simulations of complex systems can increase their observed isotropy, thus increasing the dependability of the obtained results. The square lattice with both the von Neumann and the Moore neighborhood, as well as the hexagonal lattice, are considered. The average speed isotropy is examined with regard to different length scales. The concept of borrowed time is reintroduced and expanded for the square lattice with the Moore neighborhood. It is shown that such treatment can decrease the anisotropy of the average speed without increasing the complexity of the calculations.

Marcin Mycek
Freshness Constraints in the RT Framework

The work focuses on "time-domain" nonmonotonicity in the RT Framework, a Trust Management model. A freshness constraints propagation model that allows for credential revocation is presented. The solution introduces freshness constraints that make the nonmonotonic behavior temporally monotonic. The freshness graph and the propagation rules for freshness constraints are defined. The proposed solution allows policy authors to flexibly define constraints at the policy level. Finally, the model is evaluated against a real-life sample scenario.

Wojciech Pikulski, Krzysztof Sacha
Transformational Modeling of BPMN Business Process in SOA Context

A transformational method for modeling business processes is introduced in this paper. The business process is described in Business Process Modeling Notation (BPMN) using special constraints that embed the process in a SOA context. The transformational method supports a human designer by means of LOTOS language formalization.

Andrzej Ratkowski
Algorithmic and Information Aspects of the Generalized Transportation Problem for Linear Objects on a Two Dimensional Lattice

This work concerns logistics and the transportation problem. The first chapter describes modern logistics and its place among other knowledge domains. The second chapter concerns the common transportation problem and its modern particularities. The third chapter concerns the generalized transportation problem in two-dimensional space. In the fourth chapter the author describes his solution of the generalized transportation problem for linear objects. The fifth part concerns the future of this work and of transportation logistics in general.

Anton Shmakov
Reliability Assessment of Supporting Satellite System EGNOS

The paper presents issues related to satellite navigation systems that can be supported by the EGNOS system. It provides basic information on the structure and operation of these systems, especially EGNOS. This enabled an analysis of the reliability of both the satellite navigation systems and EGNOS. Values of the probabilities of the system staying in the distinguished states were obtained.

Mirosław Siergiejczyk, Adam Rosiński, Karolina Krzykowska
Vbam – Byzantine Atomic Multicast in LAN Based on Virtualization Technology

This work presents a BFT Atomic Multicast Protocol (Vbam) whose algorithm manages to implement a reliable consensus service with only 2f+1 servers, using only common technologies such as virtualization and data sharing abstractions. In order to achieve these goals, we chose to adopt a hybrid model, which means it has different synchrony assumptions between components, and two different local area networks (LANs): a payload LAN and a separate LAN where message ordering happens.

Marcelo Ribeiro Xavier Silva, Lau Cheuk Lung, Leandro Quibem Magnabosco, Luciana de Oliveira Rech
An Approach to Automated Verification of Multi-Level Security System Models

In this paper an approach to the verification of multi-level security (MLS) system models is presented. The MlsML profile was developed with the possibility of verifying confidentiality or integrity on the basis of the Bell-LaPadula or Biba models. The Bell-LaPadula and Biba models are formalized together with scenarios that represent possible run-time instances. Properties of the security policy model are expressed as constraints in the OCL language. The feasibility of the proposed approach is demonstrated by applying it to a non-trivial example.

Andrzej Stasiak, Zbigniew Zieliński
A Web Service-Based Platform for Distributed Web Applications Integration

Web applications built according to the MVC pattern are growing in popularity. At the same time, web services, either SOAP- or REST-based, are also gaining momentum, but their adoption in web applications is usually limited to a smaller scope and with little management of their behaviour and state. The approach presented in this work does not enforce cloud-based solutions; however, it is highly advisable to treat web services not only as web interfaces to other monolithic web applications, but to perceive them as preferably independent, stateless software components, easily scalable and deployable in the cloud. This could potentially lead to a dramatic reduction of the cost of software services with a simultaneous increase of on-demand performance. The presented platform confronts the discrepancies between the MVC pattern and service adoption in web applications, and is a proposition that increases the performance of both service-based applications and their design methods. It introduces mechanisms for increasing the scope of service integration and also of web application integration, at the same time making it manageable through specialized tools.

Paweł Stelmach, Łukasz Falas
Universal Platform for Composite Data Stream Processing Services Management

Constant delivery of data and information through streaming methods is growing in popularity. Streaming is widely used for video and sensor data delivery, usually directly to the client. In this work we propose the architecture of a platform for the composition of several distributed streaming intermediaries able to process the data stream on-line. The platform's features include the ability to create a composite stream on demand, as well as to update, delete, or read its current state. The platform is an ongoing research effort; the work presented here focuses on the separation of platform functionality into distributed software components for performance optimization. This distribution allows each component's behaviour to be optimized with regard to its usage characteristics. As a result, the platform's streaming service management functionality is offered as a stateless service.

Paweł Stelmach, Patryk Schauer, Adam Kokot, Maciej Demkiewicz
Proposal of Cost-Effective Tenant-Based Resource Allocation Model for a SaaS System

Software-as-a-Service (SaaS) is a software distribution paradigm in cloud computing and represents the highest, software layer in the cloud stack. Since most cloud service providers charge for resource use, it is important to create resource-efficient applications. One way to achieve this is the multi-tenant architecture of SaaS applications, which allows the application to self-manage its resources efficiently. In this paper the influence of a tenant-based resource allocation model on the cost-effectiveness of SaaS systems is investigated. The tenant-based resource allocation model is one method of tackling suboptimal resource utilization. Compared to traditional resource scaling, it can reduce the costs of running SaaS systems in cloud environments. The more tenant-oriented the SaaS systems are, the more benefits that model can provide.

Wojciech Stolarz, Marek Woda
Automatic Load Testing of Web Application in SaaS Model

The necessity of monitoring, in combination with the actual complexity of e-services, creates a need for constructing systems for active monitoring of various types of web services. Usually those systems are high-availability services that require, on the one hand, ingenious software solutions and, on the other hand, a reliable hardware architecture. The created systems need to be flexible enough to satisfy customers' requirements. This paper introduces an example of a system that implements functional monitoring of services provided in the SaaS model. The system allows checking certain functionalities, or the whole service, by running functional/load test scenarios that are automatically generated based on a specially prepared user model.

Emil Stupiec, Tomasz Walkowiak
Implementing Salsa20 vs. AES and Serpent Ciphers in Popular-Grade FPGA Devices

Salsa20 is a 256-bit stream cipher that was submitted to eSTREAM, the ECRYPT Stream Cipher Project, and is considered to be one of the most secure and relatively fastest proposals. This paper discusses hardware implementations of this cipher in two organizations – a fully unrolled, pipelined dataflow path and an iterative loop – in low-cost Field Programmable Gate Arrays, and compares them with equivalent realizations of the AES and Serpent block ciphers. The results demonstrate the potential of the algorithm when it is implemented in the specific FPGA environment and evaluate its effectiveness in contemporary popular programmable devices.
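The basic building block that both hardware organizations repeatedly instantiate is the Salsa20 quarter-round, shown here as a reference-style Python sketch (an illustration of the algorithm, not the FPGA implementation discussed in the paper):

```python
def rotl32(x: int, n: int) -> int:
    # 32-bit left rotation, as used throughout Salsa20.
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def quarter_round(a: int, b: int, c: int, d: int):
    # Salsa20 quarterround: each word is XORed with a rotated sum of the
    # two previously updated words (add-rotate-xor, no S-boxes).
    b ^= rotl32((a + d) & 0xFFFFFFFF, 7)
    c ^= rotl32((b + a) & 0xFFFFFFFF, 9)
    d ^= rotl32((c + b) & 0xFFFFFFFF, 13)
    a ^= rotl32((d + c) & 0xFFFFFFFF, 18)
    return a, b, c, d

# Test vector from the Salsa20 specification:
assert quarter_round(0x00000001, 0, 0, 0) == \
    (0x08008145, 0x00000080, 0x00010200, 0x20500000)
```

The absence of S-box lookups is what makes the cipher attractive for low-cost FPGAs: the whole round maps onto adders, XOR gates and fixed wiring for the rotations.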

Jarosław Sugier
On Testing Wireless Sensor Networks

Testing wireless sensor networks (WSNs) is not a trivial task due to the massively parallel communication between many independently running processors. The distributed mode of operation does not allow step-by-step execution and typical debugging, so other techniques have to be used, such as detailed packet logging and analysis. We provide a 2-way WSN-to-TCP proxy architecture which extends the typical Base Station software with packet sniffing and packet sending capabilities. This allows writing and executing WSN test scenarios and automatic test assessment using typical client-server applications written in any programming or scripting language. An example of such protocol testing is also shown.

Tomasz Surmacz, Bartosz Wojciechowski, Maciej Nikodem, Mariusz Słabicki
Towards Precise Architectural Decision Models

One of the modern approaches to documenting software architecture is to show the architectural design decisions that led an architect to the final form of the software architecture. However, decisions that have been made in such a process may need to be changed during further evolution and maintenance of the software architecture. The main reasons for these changes are new or changed requirements. Our team has developed a graphical modelling notation for documenting architectural decisions, called Maps of Architectural Decisions, which can support the process of making changes in the software architecture. In this work we define a formal background for the controlled process of making changes in architectural decision models documented using that notation.

Marcin Szlenk
Slot Selection Algorithms for Economic Scheduling in Distributed Computing with High QoS Rates

In this work, we address the problem of slot selection and co-allocation for parallel jobs in distributed computing with non-dedicated resources. A single slot is a time span that can be assigned to a task which is part of a job. A job launch requires the co-allocation of a specified number of slots starting synchronously. The challenge is that slots associated with different CPU nodes of distributed computational environments may have arbitrary start and finish points that do not match. Some existing algorithms assign a job to the first set of slots matching the resource request without any optimization (the first-fit type), while others are based on an exhaustive search. In this paper, slot selection algorithms of linear complexity are studied and compared with known solutions. The proposed algorithms allow an overall increase in the quality of service (QoS) for each of the considered rates: job start time, finish time, runtime, CPU usage time and total cost of job execution.
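The co-allocation constraint — a specified number of slots starting synchronously — can be illustrated with a simple first-fit-style scan in Python (names and data layout are hypothetical illustrations; the paper's linear-complexity algorithms additionally optimize the QoS rates listed above):

```python
def find_coallocation(windows, need, duration):
    """First-fit sketch: windows is a list of (start, end) availability
    intervals, one per CPU node. Find the earliest common start time at
    which at least `need` nodes can each host a slot of length `duration`,
    all starting synchronously."""
    candidates = sorted({s for s, e in windows})  # candidate start points
    for t in candidates:
        fitting = [i for i, (s, e) in enumerate(windows)
                   if s <= t and t + duration <= e]
        if len(fitting) >= need:
            return t, fitting[:need]
    return None  # no synchronous window of that width exists

windows = [(0, 5), (2, 10), (3, 12), (1, 4)]
print(find_coallocation(windows, 2, 6))  # -> (3, [1, 2])
```

Even this naive scan shows why mismatched start and finish points are the hard part: neither node 1 nor node 2 alone determines the feasible start time, only their overlap does.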

Victor Toporkov, Anna Toporkova, Alexey Tselishchev, Dmitry Yemelyanov
K-Induction Based Verification of Real-Time Safety Critical Systems

Nowadays, safety critical systems are often complex, real-time systems requiring formal methods to prove the correctness of their behavior. This work presents a framework that supports modeling and model checking such systems. We adapted an existing formalism to provide better modeling and model checking support. Using this formalism, we extended a k-induction based model checking approach: we defined a procedure to handle both safety and liveness properties, and developed methods to find invariants. We implemented a toolchain for this workflow and evaluated our methods in an industrial case study.

Tamás Tóth, András Vörös, István Majzik
Native Support for Modbus RTU Protocol in Snort Intrusion Detection System

The paper addresses the problem of intrusion detection in industrial networks. A novel approach to processing non-IP protocols in the Snort Intrusion Detection System is presented, based on the Snort Data Acquisition Module (DAQ). An example implementation for the industry-standard Modbus RTU protocol is presented, which allows Snort to natively process Modbus RTU frames without the need for external programs or hardware and without modification of the Snort code. The structure of the implementation and the frame processing path is outlined. The solution is compared against existing attempts to process Modbus family protocols in Snort IDS. Results of tests in a virtualised environment are given, together with indications of future work.
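Every Modbus RTU frame that such a processing path handles ends with a CRC-16 checksum that must be validated before the frame is parsed further. The standard CRC-16/MODBUS computation can be sketched in Python (a generic illustration of the protocol, not the paper's DAQ module):

```python
def modbus_crc16(frame: bytes) -> int:
    # CRC-16/MODBUS: init 0xFFFF, reflected polynomial 0xA001,
    # appended to the wire frame low byte first.
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

# Standard catalogue check value for CRC-16/MODBUS:
assert modbus_crc16(b"123456789") == 0x4B37

# A frame with its CRC appended (little-endian) re-checks to zero,
# which is the usual receive-side validation.
frame = b"\x01\x03\x00\x00\x00\x0A"
crc = modbus_crc16(frame)
assert modbus_crc16(frame + bytes([crc & 0xFF, crc >> 8])) == 0
```

The zero-residue property in the last assertion is what a frame-level intrusion detection component can use to cheaply reject corrupted or truncated RTU frames before deeper inspection.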

Wojciech Tylman
SCADA Intrusion Detection Based on Modelling of Allowed Communication Patterns

This work presents a network intrusion detection system (NIDS) for SCADA, developed as an extension to Snort NIDS, a popular open-source solution targeted at intrusion detection on the Internet. The concept of anomaly-based intrusion detection and its applicability to the specific situation of industrial network traffic is discussed. The idea of modelling allowed communication patterns for the Modbus RTU protocol is explained, and the system concept, utilising n-gram analysis of packet contents, statistical analysis of selected packet features and a Bayesian network as the data fusion component, is presented. The implementation details are outlined, including the concept of building the system as a preprocessor for the Snort NIDS. The chapter is concluded with the results of tests conducted in a simulated environment.
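The n-gram analysis of packet contents mentioned above can be illustrated with a minimal Python sketch (the payloads and the crude unseen-n-gram score below are hypothetical stand-ins for the chapter's statistical analysis and Bayesian fusion):

```python
from collections import Counter

def ngram_profile(payload: bytes, n: int = 2) -> Counter:
    # Frequency profile of byte n-grams in a packet payload.
    return Counter(payload[i:i + n] for i in range(len(payload) - n + 1))

def anomaly_score(profile: Counter, baseline: Counter) -> float:
    # Fraction of n-grams never seen in the baseline traffic model.
    total = sum(profile.values())
    unseen = sum(c for g, c in profile.items() if g not in baseline)
    return unseen / total if total else 0.0

baseline = ngram_profile(b"GET /index HTTP", 2)   # learned "allowed" traffic
assert anomaly_score(ngram_profile(b"GET /index", 2), baseline) == 0.0
assert anomaly_score(ngram_profile(b"\x90\x90\x90\x90", 2), baseline) == 1.0
```

SCADA traffic is a particularly good fit for this technique because allowed Modbus exchanges are highly regular, so the baseline profile stabilizes quickly and deviations stand out.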

Wojciech Tylman
System for Estimation of Patient’s State – Discussion of the Approach

This chapter presents work on a diagnosis support system with continuous estimation of sudden cardiac arrest risk. The concept is based on the analysis of signals received online from a medical monitor, possibly augmented by clinical data. The need for such a solution is motivated and existing approaches to quantitative estimation of a patient's health are examined. An overview of the system and its building blocks is presented, which makes it possible to define the requirements for the hardware platform and operating system. A discussion follows, explaining the choices made. The design of the communication path between the chosen medical monitor and the discussed system is explained. The prototyping microprocessor board being used is presented, together with a detailed description of the steps that led to the development of a fully-featured Linux distribution for this board.

Wojciech Tylman, Tomasz Waszyrowski, Andrzej Napieralski, Marek Kamiński, Zbigniew Kulesza, Rafał Kotas, Paweł Marciniak, Radosław Tomala, Maciej Wenerski
Dependability Aspects of Autonomic Cooperative Computing Systems

As the number of globally interconnected devices has become substantial, the resulting systems are getting prone to configuration issues and dependability becomes one of their key characteristics. Following the rationale behind self-management in autonomic computing, the main trend is to put emphasis on the ability of a computer system to self-configure, self-optimise, self-heal, and self-protect without any explicit need for external human intervention. This is crucial for complexity reasons, as complete automation appears to be the only reasonable and justified way forward. This paper advocates the possibility of adjusting the level of dependability with the aid of Autonomic Cooperative Behaviour, expressed through cooperative data processing and computing. In particular, devices may improve the related system robustness by sharing their computational resources for the purposes of cooperative data transmission. Autonomic system design, in turn, is expected to guarantee system stability and scalability, especially in the case of very large set-ups composed of a significant number of devices exposing such Autonomic Cooperative Behaviour simultaneously. For this reason, a properly adjusted overlay autonomic network architecture needs to be employed so that the overall system may be controlled by specific Decision Making Entities interacting among themselves and operating with the aid of control loops.

Michał Wódczak
Life Cycle Cost through Reliability

This work presents a novel approach using the dependability of sub-assemblies to compute the life cycle cost of complex products (i.e. products that are technologically more complex than average or have a higher life expectancy). It is composed of two parts: first a retrospective on life cycle cost and its challenges, then a description of the approach using reliability as the key element. This approach combines the usage of already well-known components or sub-assemblies alongside new, innovative ones in order to compute the life cycle cost of the system. The work is currently being conducted as a PhD thesis with industrial support.

Manac’h Yann-Guirec, Benfriha Khaled, Aoussat Améziane
Verification of Infocommunication System Components for Modeling and Control of Saturated Traffic in Megalopolis

The article presents the results of experimental research on testing and verification of the elements of an infocommunication system for modeling and control of saturated traffic in a megalopolis [1], developed at the Scientific and Educational Center on Intelligent Monitoring and Transport Control Systems of the Moscow Technical University of Communication and Informatics (MTUCI) and the Mathematical Modeling Department of the Moscow State Automobile & Road Technical University (MADI), headed by Professor A.P. Buslaev. Due to the high rates of automobilization worldwide, modeling and forecasting of traffic flow in the complex networks of megalopolises are pressing problems. The theory of traffic flows is developing actively, as are the means of observing and monitoring the characteristics of transport flows. However, theoretical traffic models require real information about the parameters and characteristics of such a complex and constantly changing socio-technical system as the transportation system of a megalopolis, including the urban road network and traffic flows. The experiments were carried out with a mobile laboratory of the authors' own design, equipped with a special monitoring system with processing on PCs and smartphones.

Marina V. Yashina, Andrew V. Provorov
Shuffle–Based Verification of Component Compatibility

An extension of earlier work on component compatibility is described in this paper. As before, the behavior of components is specified by component interface languages, and the shuffle operation is introduced to represent possible interleavings of service requests that originate at several concurrent components. The paper shows that the verification of component compatibility is possible without the exhaustive analysis of the state space of interacting components, which was the basis of earlier approaches to compatibility verification.
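The shuffle operation itself — all interleavings of two request sequences that preserve each sequence's internal order — can be sketched in Python (words stand in for interface-language strings; the recursive definition below is the textbook one, not the paper's verification algorithm):

```python
def shuffle(u: str, v: str) -> set:
    """All interleavings of u and v in which the letters of each word
    keep their original relative order."""
    if not u:
        return {v}
    if not v:
        return {u}
    # Either the next symbol comes from u, or it comes from v.
    return ({u[0] + w for w in shuffle(u[1:], v)} |
            {v[0] + w for w in shuffle(u, v[1:])})

print(sorted(shuffle("ab", "c")))  # -> ['abc', 'acb', 'cab']
```

The set of interleavings grows combinatorially with the lengths of the request sequences, which is exactly why avoiding an exhaustive state-space analysis matters for compatibility verification.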

W. M. Zuberek
Backmatter
Metadata
Title
New Results in Dependability and Computer Systems
Edited by
Wojciech Zamojski
Jacek Mazurkiewicz
Jarosław Sugier
Tomasz Walkowiak
Janusz Kacprzyk
Copyright Year
2013
Publisher
Springer International Publishing
Electronic ISBN
978-3-319-00945-2
Print ISBN
978-3-319-00944-5
DOI
https://doi.org/10.1007/978-3-319-00945-2