
2016 | Book

Dependable Software Engineering: Theories, Tools, and Applications

Second International Symposium, SETTA 2016, Beijing, China, November 9-11, 2016, Proceedings


About this book

This book constitutes the refereed proceedings of the Second International Symposium on Dependable Software Engineering: Theories, Tools, and Applications, SETTA 2016, held in Beijing, China, in November 2016.

The 17 full papers presented together with 3 short papers were carefully reviewed and selected from 58 submissions. The aim of the symposium is to bring together international researchers and practitioners in the field of software technology. Its focus is on formal methods and advanced software technologies, especially for engineering complex, large-scale artifacts like cyber-physical systems, networks of things, enterprise systems, or cloud-based services.

Table of Contents

Frontmatter
Place Bisimulation and Liveness for Open Petri Nets
Abstract
Petri nets are a well-known class of concurrent models for distributed and asynchronous systems. However, they can only model closed systems, not open ones. We extend Petri nets to model open systems. In open Petri nets, interaction is achieved by composing nets: some labelled places, called open or external, serve as an interface with the environment. Every external place is both an input and an output place. Two open Petri nets can be composed by joining the external places that carry the same label. In addition, we focus on the operational semantics of open nets and study observational properties, especially bisimulation properties. We define place bisimulations on nets with external places. It turns out that the largest bisimulation, i.e. the bisimilarity, is a congruence. A further result is that liveness is preserved by bisimilarity.
Xiaoju Dong, Yuxi Fu, Daniele Varacca
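The composition on shared labels described in the abstract can be illustrated in a minimal sketch. All names and the net representation below are my own illustrative assumptions, not taken from the paper: a net is a set of places, a label-to-place map for the external interface, and transitions with presets and postsets.

```python
from dataclasses import dataclass

@dataclass
class OpenNet:
    places: set        # all place names, including the external ones
    external: dict     # label -> place name (the interface with the environment)
    transitions: dict  # transition name -> (preset, postset), each a set of places

def compose(n1: OpenNet, n2: OpenNet) -> OpenNet:
    """Join n1 and n2 by fusing external places that carry the same label.
    Fused places become internal; the remaining external places stay open."""
    shared = n1.external.keys() & n2.external.keys()
    # rename each fused place of n2 to its counterpart in n1, making them one place
    renaming = {n2.external[lbl]: n1.external[lbl] for lbl in shared}

    def rn(p):
        return renaming.get(p, p)

    places = n1.places | {rn(p) for p in n2.places}
    external = {lbl: p for lbl, p in {**n1.external, **n2.external}.items()
                if lbl not in shared}
    transitions = dict(n1.transitions)
    for t, (pre, post) in n2.transitions.items():
        transitions[t] = ({rn(p) for p in pre}, {rn(p) for p in post})
    return OpenNet(places, external, transitions)
```

For example, composing a net whose external place `a` is labelled `x` with a net whose external place `b` carries the same label fuses `a` and `b` into a single internal place.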
Divergence Detection for CCSL Specification via Clock Causality Chain
Abstract
The Clock Constraint Specification Language (CCSL), first introduced as a companion language for the Modeling and Analysis of Real-Time and Embedded systems (MARTE) profile, has now evolved beyond the time specification of MARTE and become a full-fledged domain-specific modeling language widely used in many domains. A CCSL specification is a set of constraints that symbolically represents a set of valid clock schedules, where a schedule represents the order of the actions in a system. This paper proposes an algorithm to detect divergent behavior in the schedules that satisfy a given CCSL specification, i.e. to detect the presence of infinite but non-periodic schedules. We investigate divergence by constructing causality chains among the clocks resulting from the constraints of the specification. Based on the cycles in these causality chains, a bounded clock set built by our algorithm can be used to decide whether the given specification is divergence-free. The approach is illustrated on an example of architecture-driven analysis.
Qingguo Xu, Robert de Simone, Julien DeAntoni
Performance Evaluation of Concurrent Data Structures
Abstract
The speed-ups achieved by concurrent programming rely heavily on exploiting highly concurrent data structures. This has led to a variety of designs, ranging from coarse-grained and fine-grained locking to lock-free data structures. The performance of such data structures is typically analysed by simulation or implementation. We advocate a model-based approach using probabilistic model checking. The main benefit is that our models can also be used to check the correctness of the data structures. The paper details the approach and reports on experimental results for several concurrent stacks, queues, and lists. Our analysis yields worst- and best-case bounds on performance metrics such as expected time and the probability of finishing a given number of operations within a deadline.
Hao Wu, Xiaoxiao Yang, Joost-Pieter Katoen
GPU-Accelerated Steady-State Computation of Large Probabilistic Boolean Networks
Abstract
Computing steady-state probabilities is an important aspect of analysing biological systems modelled as probabilistic Boolean networks (PBNs). For small PBNs, efficient numerical methods can be applied using the state transition matrix of the Markov chain underlying the studied network. For large PBNs, however, numerical methods suffer from the state-space explosion problem, since the state-space size is exponential in the number of nodes. In fact, statistical and Monte Carlo methods remain the only feasible approach for large PBNs. Such methods usually rely on long simulations of a PBN; since slow simulation can impede the analysis, the efficiency of the simulation procedure becomes critical, and parallelising the simulation process is a natural way to accelerate the computation. Recent developments in general-purpose graphics processing units (GPUs) make it possible to massively parallelise the simulation. In this work, we propose a trajectory-level parallelisation framework to accelerate the computation of steady-state probabilities in large PBNs using GPUs. To maximise computational efficiency on a GPU, we develop a dynamic data arrangement mechanism for handling PBNs of different sizes, as well as a specific way of storing the predictor functions and the state of a PBN in GPU memory. Experimental results show that our GPU-based parallelisation gains a speedup of approximately 400 times for a real-life PBN.
Andrzej Mizera, Jun Pang, Qixia Yuan
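The trajectory-based estimation that the abstract builds on can be sketched in a few lines. This is a CPU-only toy with a hypothetical two-node PBN of my own; the paper's actual contribution, the GPU parallelisation and memory layout, is not modelled here. Each node is updated by one of several Boolean predictor functions, selected at random according to fixed probabilities, and steady-state probabilities are estimated from one long trajectory.

```python
import random

# Hypothetical 2-node PBN: node -> list of (probability, predictor) pairs.
predictors = {
    0: [(0.7, lambda s: s[1]), (0.3, lambda s: not s[0])],
    1: [(1.0, lambda s: s[0] and s[1])],
}

def step(state):
    """Synchronously update every node with a randomly selected predictor."""
    new = []
    for node, choices in predictors.items():
        r, acc = random.random(), 0.0
        f = choices[-1][1]
        for p, g in choices:           # pick a predictor by cumulative probability
            acc += p
            if r < acc:
                f = g
                break
        new.append(bool(f(state)))
    return tuple(new)

def estimate_steady_state(n_steps=100_000, burn_in=1_000, seed=1):
    """Estimate long-run state probabilities from a single long trajectory."""
    random.seed(seed)
    state, counts = (True, False), {}
    for i in range(n_steps):
        state = step(state)
        if i >= burn_in:               # discard the transient prefix
            counts[state] = counts.get(state, 0) + 1
    total = n_steps - burn_in
    return {s: c / total for s, c in counts.items()}
```

Trajectory-level parallelisation, as the name suggests, runs many such trajectories independently, which is why it maps well onto thousands of GPU threads.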
Behavioural Pseudometrics for Nondeterministic Probabilistic Systems
Abstract
For the model of probabilistic labelled transition systems that allow for the co-existence of nondeterminism and probabilities, we present two notions of bisimulation metrics: one is state-based and the other is distribution-based. We provide a sound and complete modal characterisation for each of them, using real-valued modal logics based on Hennessy-Milner logic. The logic for characterising the state-based metric is much simpler than an earlier logic by Desharnais et al. as it uses only two non-expansive operators rather than the general class of non-expansive operators. For the kernels of the two metrics, which correspond to two notions of bisimilarity, we give a comprehensive comparison with some typical distribution-based bisimilarities in the literature.
Wenjie Du, Yuxin Deng, Daniel Gebler
A Comparison of Time- and Reward-Bounded Probabilistic Model Checking Techniques
Abstract
In the design of probabilistic timed systems, requirements concerning behaviour that occurs within a given time or energy budget are of central importance. We observe that model-checking such requirements for probabilistic timed automata can be reduced to checking reward-bounded properties on Markov decision processes. This is traditionally implemented by unfolding the model according to the bound, or by solving a sequence of linear programs. Neither scales well to large models. Using value iteration in place of linear programming achieves scalability but accumulates approximation error. In this paper, we correct the value iteration-based scheme, present two new approaches based on scheduler enumeration and state elimination, and compare the practical performance and scalability of all techniques on a number of case studies from the literature. We show that state elimination can significantly reduce runtime for large models or high bounds.
Ernst Moritz Hahn, Arnd Hartmanns
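The unfolding baseline the abstract mentions is easy to state concretely: the maximum probability of reaching a goal within reward bound b satisfies the recursion P(s, b) = max over actions of the expected P(s', b - reward). The sketch below (a toy MDP of my own with unit rewards, not the paper's corrected scheme) memoises that recursion.

```python
from functools import lru_cache

# Toy MDP: state -> action -> list of (probability, next_state); unit reward per step.
mdp = {
    "s0": {"a": [(0.5, "s1"), (0.5, "s0")], "b": [(1.0, "s2")]},
    "s1": {"a": [(1.0, "goal")]},
    "s2": {"a": [(0.1, "goal"), (0.9, "s2")]},
    "goal": {},
}

@lru_cache(maxsize=None)
def p_max(state, budget):
    """Max probability of reaching 'goal' from `state` within `budget` steps.
    Memoisation over (state, budget) is exactly the unfolding of the model."""
    if state == "goal":
        return 1.0
    if budget == 0 or not mdp[state]:
        return 0.0
    return max(
        sum(p * p_max(succ, budget - 1) for p, succ in succs)
        for succs in mdp[state].values()
    )
```

The table has |S| x (b + 1) entries, which is why unfolding does not scale to high bounds, motivating the value-iteration and state-elimination techniques the paper compares.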
Computing Specification-Sensitive Abstractions for Program Verification
Abstract
To enable scalability and address the needs of real-world software, deductive verification relies on modularization of the target program and decomposition of its requirement specification. In this paper, we present an approach that, given a Java program and a partial requirement specification written using the Java Modeling Language, constructs a semantic slice. In the slice, the parts of the program irrelevant w.r.t. the partial requirements are replaced by an abstraction. The core idea of our approach is to use bounded program verification techniques to guide the construction of these slices. Our approach not only lessens the burden of writing auxiliary specifications (such as loop invariants) but also reduces the number of proof steps needed for verification.
Tianhai Liu, Shmuel Tyszberowicz, Mihai Herda, Bernhard Beckert, Daniel Grahl, Mana Taghdiri
Reducing State Explosion for Software Model Checking with Relaxed Memory Consistency Models
Abstract
Software model checking suffers from the so-called state explosion problem, and relaxed memory consistency models worsen this situation further. Worse still, parameterizing model checking by memory consistency models, that is, making the model checker flexible enough to accept definitions of memory consistency models as input, intensifies the state explosion. This paper explores the specific causes of state explosion in model checking with multiple memory consistency models, provides optimizations intended to mitigate the problem, and applies them to McSPIN, a model checker for memory consistency models that we are developing. The effects of the optimizations and the usefulness of McSPIN are demonstrated experimentally by verifying the copying protocols of concurrent copying garbage collection algorithms. To the best of our knowledge, this is the first model checking of concurrent copying protocols under relaxed memory consistency models.
Tatsuya Abe, Tomoharu Ugawa, Toshiyuki Maeda, Kousuke Matsumoto
Identifying XML Schema Constraints Using Temporal Logic
Abstract
Twig pattern minimization is an important aspect of XML query optimization. The minimization process usually needs to take advantage of the constraints of an XML Schema. The traditional method for identifying such constraints is to develop dedicated algorithms for each type of constraint. This is inflexible, because the relevant constraints may change as new twig pattern optimization rules are found. Since the constraints of an XML Schema mainly describe the sequential relationships between nodes, they are naturally expressed in temporal logic. Based on this observation, this paper proposes a method for identifying XML Schema constraints using temporal logic. Concretely, an XML Schema is modeled as a graph. To easily represent constraints related to parent and ancestor nodes, we extend Computation Tree Logic (CTL) with backward temporal operators and develop model checking algorithms for automatically identifying XML Schema constraints. Compared with traditional methods, our method is more flexible.
Ruifang Zhao, Ke Liu, Hongli Yang, Zongyan Qiu
Schedulability Analysis of Timed Regular Tasks by Under-Approximation on WCET
Abstract
Schedulability analysis is one of the most important issues in developing and analyzing real-time systems. Given a task system where each task is characterized by a worst-case execution time (WCET) and a relative deadline, schedulability analysis is decidable. In reality, however, it is difficult to calculate the WCET of a complex task, even after it is abstracted to a formal model, e.g., timed automata (TAs). This paper proposes a schedulability analysis method that does not require the WCET, by introducing a model named timed regular task automata (TRTAs). Each task is described by a TA, a starting point with a clock valuation, a status, and a relative deadline. A test is performed on each TA to obtain an under-approximation of the WCET. Since the system may still be unschedulable under this approximation, a further schedulability check is performed by encoding to the reachability problem of nested timed automata (NeTAs). The methodology is thus sound and complete.
Bingbing Fang, Guoqiang Li, Daniel Sun, Hongming Cai
Importance Sampling for Stochastic Timed Automata
Abstract
We present an importance sampling framework that combines symbolic analysis and simulation to estimate the probability of rare reachability properties in stochastic timed automata. By means of symbolic exploration, our framework first identifies states that cannot reach the goal. A state-wise change of measure is then applied on-the-fly during simulations, ensuring that dead ends are never reached. The change of measure is guaranteed by construction to reduce the variance of the estimator with respect to crude Monte Carlo, while experimental results demonstrate that we can achieve substantial computational gains.
Cyrille Jegourel, Kim G. Larsen, Axel Legay, Marius Mikučionis, Danny Bøgsted Poulsen, Sean Sedwards
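The state-wise change of measure has a simple shape, shown in the minimal example below. The chain and its numbers are my own invention, not the tool's implementation: successors known (by symbolic analysis) to be dead ends get probability zero under the proposal distribution, and each sample is reweighted by the likelihood ratio so the estimator stays unbiased.

```python
import random

# Toy chain: from "s" the goal is reached w.p. 0.01, a dead end w.p. 0.99.
P = {"s": [(0.01, "goal"), (0.99, "dead")]}
dead_ends = {"dead"}   # states that provably cannot reach the goal

def sample_weight():
    """One importance-sampling run: simulate under the modified measure
    (dead ends excluded) and return the accumulated likelihood ratio."""
    state, weight = "s", 1.0
    while state != "goal":
        succs = [(p, t) for p, t in P[state] if t not in dead_ends]
        norm = sum(p for p, _ in succs)  # mass surviving after pruning
        proposal = [p / norm for p, _ in succs]
        p, nxt = random.choices(succs, weights=proposal)[0]
        weight *= norm                   # original prob / proposal prob = norm
        state = nxt
    return weight

def estimate(n=1000, seed=0):
    random.seed(seed)
    return sum(sample_weight() for _ in range(n)) / n
```

In this degenerate example every run reaches the goal with weight exactly 0.01, so the estimator has zero variance; crude Monte Carlo would instead see a success only once in a hundred runs on average.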
Semipositivity in Separation Logic with Two Variables
Abstract
In recent work by Demri and Deters (CSL-LICS 2014), first-order separation logic restricted to two variables and separating implication was shown undecidable: even with only two variables, if the use of negation is unrestricted, negations can be nested with separating implication in a complex way to obtain undecidability. In this paper, we revisit the decidability and complexity of first-order separation logic with two variables and propose semi-positive separation logic with two variables (SPSL2), in which negation may occur only in front of atomic formulae. We prove that satisfiability of the fragment of SPSL2 where neither separating conjunction nor septraction (the dual of separating implication) occurs in the scope of universal quantifiers is \(\textsc {nexptime}\)-complete. As a byproduct of the proof, we show that the finite satisfiability problem of first-order logic with two variables and a bounded number of function symbols is \(\textsc {nexptime}\)-complete (the lower bound holds even with only one function symbol and without unary predicates), which may be of independent interest beyond the separation logic community.
Zhilin Wu
Distributed Computation of Fixed Points on Dependency Graphs
Abstract
A dependency graph is an abstract mathematical structure for representing complex causal dependencies among its vertices. Several equivalence and model checking questions, Boolean equation systems, and other problems can be reduced to fixed-point computations on dependency graphs. We develop a novel distributed algorithm for computing such fixed points, prove its correctness, and provide an efficient, open-source implementation. The algorithm works on-the-fly, eliminating the need to generate the entire dependency graph a priori. We evaluate the applicability of our approach in a number of experiments that verify weak simulation/bisimulation equivalences between CCS processes, and we compare its performance with the well-known CWB tool. Even though fixed-point computation, being a P-complete problem, is difficult to parallelize in theory, we achieve significant speed-ups, as demonstrated on a Linux cluster with several hundred cores.
Andreas Engelbredt Dalsgaard, Søren Enevoldsen, Kim Guldstrand Larsen, Jiří Srba
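The fixed-point computation in question can be sketched sequentially; the representation below is a common one for dependency graphs (each vertex has hyperedges to sets of target vertices), but it is my own minimal baseline, not the paper's distributed on-the-fly algorithm. A vertex gets value 1 in the minimum fixed point iff some hyperedge from it has all its targets at 1; an empty hyperedge forces 1.

```python
def min_fixed_point(edges):
    """edges: vertex -> list of hyperedges, each hyperedge a list of targets.
    Returns the set of vertices whose minimum fixed-point value is 1, by
    naive global iteration until no vertex changes."""
    assigned = set()
    changed = True
    while changed:
        changed = False
        for v, hyperedges in edges.items():
            if v not in assigned and any(
                all(t in assigned for t in e) for e in hyperedges
            ):
                assigned.add(v)   # some hyperedge has all targets satisfied
                changed = True
    return assigned
```

An on-the-fly algorithm improves on this by exploring only the portion of the graph that can influence the vertex of interest, which matters when the graph encodes a large state space.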
A Complete Approximation Theory for Weighted Transition Systems
Abstract
We propose a way of reasoning about minimal and maximal values of the weights of transitions in a weighted transition system (WTS). This perspective induces a notion of bisimulation that is coarser than the classic bisimulation: it relates states that exhibit transitions to bisimulation classes with the weights within the same boundaries. We propose a customized modal logic that expresses these numeric boundaries for transition weights by means of particular modalities. We prove that our logic is invariant under the proposed notion of bisimulation. We show that the logic enjoys the finite model property which allows us to prove the decidability of satisfiability and provide an algorithm for satisfiability checking. Last but not least, we identify a complete axiomatization for this logic, thus solving a long-standing open problem in this field. All our results are proven for a class of WTSs without the image-finiteness restriction, a fact that makes this development general and robust.
Mikkel Hansen, Kim Guldstrand Larsen, Radu Mardare, Mathias Ruggaard Pedersen, Bingtian Xue
Zephyrus2: On the Fly Deployment Optimization Using SMT and CP Technologies
Abstract
Modern cloud applications consist of software components deployed on multiple virtual machines. Deploying such applications is error prone and requires detailed system expertise. The deployment optimization problem is about how to configure and deploy applications correctly while at the same time minimizing resource cost on the cloud. This problem is addressed by tools such as Zephyrus, which take a declarative specification of the components and their configuration requirements as input and propose an optimal deployment. This paper presents Zephyrus2, a new tool which addresses deployment optimization by exploiting modern SMT and CP technologies to handle larger and more complex deployment scenarios. Compared to Zephyrus, Zephyrus2 can solve problems involving hundreds of components to be deployed on hundreds of virtual machines in a matter of seconds instead of minutes. This significant speed-up, combined with an improved specification format, enables Zephyrus2 to interactively support on the fly decision making.
Erika Ábrahám, Florian Corzilius, Einar Broch Johnsen, Gereon Kremer, Jacopo Mauro
Exploiting Symmetry for Efficient Verification of Infinite-State Component-Based Systems
Abstract
In this paper we present an efficient verification algorithm for infinite-state component-based systems modeled in the behavior-interaction-priority (\(\textsc {BIP}\)) framework. Our algorithm extends the persistent set partial order reduction by taking into account system symmetries, and further combines it with lazy predicate abstraction. We have implemented the new verification algorithm in our model checker for \(\textsc {BIP}\). The experimental evaluation shows that for systems exhibiting certain symmetries, our new algorithm outperforms the existing algorithms significantly.
Qiang Wang
Formalization of Fault Trees in Higher-Order Logic: A Deep Embedding Approach
Abstract
Fault Trees (FTs) are a standard failure modeling technique that has been used extensively to predict the reliability, availability, and safety of many complex engineering systems. To facilitate the formal analysis of FTs, a higher-order-logic formalization of FTs has recently been proposed. However, this formalization is quite limited in its handling of large systems and of the transformation of FT models into their corresponding Reliability Block Diagram (RBD) structures, a frequently used transformation in reliability and availability analyses. To overcome these limitations, we present a deep embedding based formalization of FTs. In particular, the paper formalizes the AND, OR, and NOT FT gates, which are in turn used to formalize other commonly used FT gates, namely NAND, NOR, XOR, Inhibit, Comparator, and majority Voting, and formally verifies their failure probability expressions. For illustration, we present a formal failure analysis of communication gateway software for the next-generation air traffic management system.
Waqar Ahmad, Osman Hasan
An Efficient Synthesis Algorithm for Parametric Markov Chains Against Linear Time Properties
Abstract
In this paper, we propose an efficient algorithm for the parameter synthesis of PLTL formulas with respect to parametric Markov chains. The PLTL formula is translated to an almost fully partitioned Büchi automaton, which is then composed with the parametric Markov chain. We reduce the problem to an optimisation problem, which allows us to decide the satisfaction of the formula using an SMT solver. The algorithm also works for interval Markov chains. The complexity is linear in the size of the Markov chain and exponential in the size of the formula. We provide a prototype and show the efficiency of our approach on a number of benchmarks.
Yong Li, Wanwei Liu, Andrea Turrini, Ernst Moritz Hahn, Lijun Zhang
Time-Bounded Statistical Analysis of Resource-Constrained Business Processes with Distributed Probabilistic Systems
Abstract
Business processes often incorporate stochastic decision points, either due to uncontrollable actions or because the control flow is not fully specified. Formal modeling of such business processes with resource constraints and multiple instances is hard because of the interplay among stochastic behavior, concurrency, real-time and resource contention. In this setting, statistical techniques are easier to use and more scalable than numerical methods to verify temporal properties. However, existing approaches towards simulation techniques of business processes typically rest on shaky theoretical foundations. In this paper, we propose a modular approach towards modeling stochastic resource-constrained business processes. We analyze such processes in presence of commonly used resource-allocation strategies. Our model, Distributed Probabilistic Systems (DPS), incorporates a set of probabilistic agents communicating among each other in fixed-duration real-time. Our methodology admits statistical analysis of business processes with provable error bounds. We also illustrate a number of real-life scenarios that can be modeled and verified using this approach.
Ratul Saha, Madhavan Mukund, R. P. Jagadeesh Chandra Bose
Failure Estimation of Behavioral Specifications
Abstract
Behavioral specifications are often employed for modeling complex systems at high levels of abstraction. Failure conditions of such systems can naturally be specified as assertions defined over system variables. In that way, such behavioral descriptions can be transformed to imperative programs with annotated failure assertions. In this paper, we present a scalable source code based framework for computing failure probability of such programs under the fail-stop model by applying formal methods. The imprecision in the estimation process resulting from coverage loss due to time, memory bounds and loop invariant synthesis, is also quantified using an upper bound computation. We further discuss the design and implementation of ProPFA (Probabilistic Path-based Failure Analyzer), an automated tool developed for this purpose.
Debasmita Lohar, Anudeep Dunaboyina, Dibyendu Das, Soumyajit Dey
Erratum to: Formalization of Fault Trees in Higher-Order Logic: A Deep Embedding Approach
Waqar Ahmad, Osman Hasan
Backmatter
Metadata
Title
Dependable Software Engineering: Theories, Tools, and Applications
Edited by
Martin Fränzle
Deepak Kapur
Naijun Zhan
Copyright year
2016
Electronic ISBN
978-3-319-47677-3
Print ISBN
978-3-319-47676-6
DOI
https://doi.org/10.1007/978-3-319-47677-3