
2015 | Book

Fundamentals of Software Engineering

6th International Conference, FSEN 2015, Tehran, Iran, April 22-24, 2015. Revised Selected Papers


About this Book

This book constitutes the thoroughly refereed post-conference proceedings of the 6th IPM International Conference on Fundamentals of Software Engineering, FSEN 2015, held in Tehran, Iran, in April 2015.

The 21 full papers presented in this volume were carefully reviewed and selected from 64 submissions. The topics of interest in FSEN span over all aspects of formal methods, especially those related to advancing the application of formal methods in software industry and promoting their integration with practical engineering techniques.

Table of Contents

Frontmatter
Towards Smart Systems of Systems
Abstract
Systems of Systems (SoS) have started to emerge as a consequence of the general trend toward the integration of previously isolated systems. To unleash their full potential, the contained systems must be able to operate as elements in open, dynamic, and deviating SoS architectures and to adapt to open and dynamic contexts while being developed, operated, evolved, and governed independently. We call the resulting advanced SoS smart, as they must be self-adaptive at the level of the individual systems and self-organizing at the SoS level to cope with the emergent behavior at that level. In this paper we analyze the open challenges for the envisioned smart SoS. In addition, we discuss our ideas for tackling this vision with our SMARTSOS approach that employs open and adaptive collaborations and models at runtime. In particular, we focus on preliminary ideas for the construction and assurance of smart SoS.
Holger Giese, Thomas Vogel, Sebastian Wätzoldt
Automated Integration of Service-Oriented Software Systems
Abstract
In the near future we will be surrounded by a virtually infinite number of software applications that provide services in the digital space. This situation radically changes the way software will be produced and used: (i) software is increasingly produced according to specific goals and by integrating existing software; (ii) the focus of software production will shift towards reuse of third-party software, typically black-box, that is often provided without machine-readable documentation. The evidence underlying this scenario is that the price to pay for this software availability is a lack of knowledge about the software itself, notably about its interaction behaviour. A producer will operate with software artefacts that are not completely known in terms of their functional and non-functional characteristics. The general problem therefore concerns the ability to interact with the artefacts to the extent that the goal is reached. This is not a trivial problem given the virtually infinite interaction protocols that can be defined at the application level. Different software artefacts with heterogeneous interaction protocols may need to interoperate in order to reach the goal. In this paper we focus on techniques and tools for integration code synthesis, which are able to deal with partial knowledge and automatically produce correct-by-construction service-oriented systems with respect to functional goals. The research approach we propose builds around two phases: elicit and integrate. The first concerns observation theories and techniques to elicit functional behavioural models of the interaction protocol of black-box services. The second deals with compositional theories and techniques to automatically synthesize appropriate integration means to compose the services together in order to realize a service choreography that satisfies the goal.
Marco Autili, Paola Inverardi, Massimo Tivoli
Software Architecture Modeling and Evaluation Based on Stochastic Activity Networks
Abstract
Quantitative and integrated evaluation of software quality attributes at the architectural design stage provides a sound basis for making objective decisions about design trade-offs and developing high-quality software. In this paper we introduce a formal method for modeling software architectures and evaluating their quality attributes quantitatively and in a unified manner. This method is based on stochastic activity networks (SANs), and the quality attributes considered include security, dependability, and performance.
Ali Sedaghatbaf, Mohammad Abdolahi Azgomi
Applicative Bisimulation and Quantum λ-Calculi
Abstract
Applicative bisimulation is a coinductive technique to check program equivalence in higher-order functional languages. It is known to be sound — and sometimes complete — with respect to context equivalence. In this paper we show that applicative bisimulation also works when the underlying language of programs takes the form of a linear λ-calculus extended with features such as probabilistic binary choice, but also quantum data, the latter being a setting in which linearity plays a role. The main results are proofs of soundness for the obtained notions of bisimilarity.
Ugo Dal Lago, Alessandro Rioli
Modeling and Efficient Verification of Broadcasting Actors
Abstract
Many distributed systems use broadcast communication for various reasons, such as saving energy or increasing throughput. However, the actor model for concurrent and distributed systems does not directly support this kind of communication. In such cases, a broadcast must be modeled as multiple unicasts, which leads to loss of modularity and state space explosion for any non-trivial system. In this paper, we extend Rebeca, an actor-based modeling language, to support asynchronous anonymous message broadcasting. Then, we apply counter abstraction to reduce the state space, which efficiently bypasses the constructive orbit problem by representing the global state as a vector of counters, one per local state. This makes model checking possible without further symmetry considerations. This approach is efficient for fully symmetric systems such as broadcasting environments. We use a couple of case studies to illustrate the applicability of our method and the way their state spaces are reduced in size.
Behnaz Yousefi, Fatemeh Ghassemi, Ramtin Khosravi
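As a rough illustration of the counter-abstraction idea described in the abstract (a hypothetical Python sketch, not the paper's Rebeca implementation): when processes are fully symmetric, permuting them does not change behavior, so a global state can be collapsed to a vector counting how many processes occupy each local state.

```python
from collections import Counter
from itertools import permutations

def abstract(global_state):
    """Map a concrete global state (tuple of per-process local states)
    to its counter abstraction: how many processes are in each state."""
    return frozenset(Counter(global_state).items())

# Hypothetical example: three identical actors, each either "idle" or
# "busy". All interleavings that merely permute which actor is busy
# collapse to a single abstract state.
concrete_states = set(permutations(["idle", "idle", "busy"]))
abstract_states = {abstract(s) for s in concrete_states}

print(len(concrete_states))  # 3 distinct concrete states
print(len(abstract_states))  # 1 abstract state
```

With n identical processes and k local states, the abstract state space grows polynomially in n rather than exponentially, which is the source of the reduction the abstract alludes to.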
A Theory of Integrating Tamper Evidence with Stabilization
Abstract
We propose the notion of tamper-evident stabilization, which combines stabilization with the concept of tamper evidence, for computing systems. At first glance, these notions are contradictory: stabilization requires that the system functionality is eventually fully restored, whereas tamper evidence requires that the system functionality is permanently degraded in the event of tampering. Tamper-evident stabilization captures the intuition that the system will tolerate perturbation up to a limit. In the event that it is perturbed beyond that limit, it will exhibit permanent evidence of tampering, where it may provide reduced (possibly no) functionality. We compare tamper-evident stabilization with (conventional) stabilization and with active stabilization, and propose an approach to verify tamper-evident stabilizing programs in polynomial time. We demonstrate tamper-evident stabilization with two examples and argue how approaches for designing stabilization can be used to design tamper-evident stabilization. We also study issues of composition in tamper-evident stabilization. Finally, we point out how tamper-evident stabilization can effectively be used to provide a tradeoff between fault prevention and fault tolerance.
Reza Hajisheykhi, Ali Ebnenasir, Sandeep S. Kulkarni
A Safe Stopping Protocol to Enable Reliable Reconfiguration for Component-Based Distributed Systems
Abstract
Despite the need for change, highly available software systems cannot be stopped to perform changes, because disruption of their services may result in irrecoverable losses. Current work on runtime evolution is either too disruptive, e.g., “blackouts” of unnecessary components in the quiescence criterion approach, or relies on restrictive assumptions such as the “black-box design” in the tranquility approach. In this paper, an architecture-based approach, called SAFER, is proposed, which improves timeliness by relaxing the preconditions required to start reconfiguration. We demonstrate the validity of SAFER through model checking and a realization of the approach on a component model.
Mohammad Ghafari, Abbas Heydarnoori, Hassan Haghighi
Efficient Architecture-Level Configuration of Large-Scale Embedded Software Systems
Abstract
Configuration is a recurring problem in many domains. In our earlier work, we focused on architecture-level configuration of large-scale embedded software systems and proposed a methodology that enables engineers to configure products by instantiating a given reference architecture model. Products have to satisfy a number of constraints specified in the reference architecture model. If not, the engineers have to backtrack their configuration decisions to rebuild a configured product that satisfies the constraints. Backtracking configuration decisions considerably slows the configuration process. In this paper, we improve our earlier work and propose a backtrack-free configuration mechanism. Specifically, given a cycle-free generic reference architecture model, we propose an algorithm that computes an ordering over configuration parameters that yields a consistent configuration without any need to backtrack. We evaluated our approach on a simplified model of an industrial case study. We show that our ordering approach eliminates backtracking. It reduces the overall configuration time both by reducing the required number of value assignments and by reducing the time that it takes to complete one configuration iteration. Furthermore, we show that the latter grows linearly with the size of the configuration problem.
Razieh Behjati, Shiva Nejati
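The ordering idea can be sketched with a toy dependency graph (the parameter names below are hypothetical, and the paper's algorithm operates on reference architecture models rather than a plain graph): if dependencies among configuration parameters are cycle-free, assigning parameters in a topological order guarantees that everything a constraint refers to is already fixed when the dependent parameter is assigned, so no earlier decision ever has to be revisited.

```python
from graphlib import TopologicalSorter

# Each parameter maps to the set of parameters its constraints depend
# on; those must be assigned first. The graph is cycle-free, as the
# paper requires of the generic reference architecture model.
dependencies = {
    "sensor_type":     set(),
    "signal_range":    {"sensor_type"},
    "alarm_threshold": {"signal_range"},
}

# A topological order is a backtrack-free assignment order.
order = list(TopologicalSorter(dependencies).static_order())
print(order)  # sensor_type is assigned before its dependents
```

A cycle in the dependency graph would make `static_order` raise `CycleError`, which mirrors why the approach is stated for cycle-free models only.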
Benchmarks for Parity Games
Abstract
We propose a benchmark suite for parity games that includes the benchmarks that have been used in the literature, and make it available online. We give an overview of the parity games, including a description of how they have been generated. We also describe structural properties of parity games, and using these properties we show that our benchmarks are representative. With this work we provide a starting point for further experimentation with parity games.
Jeroen J. A. Keiren
A Behavioural Theory for a π-calculus with Preorders
Abstract
We study the behavioural theory of πP, a π-calculus in the tradition of Fusions and Chi calculi. In contrast with such calculi, reduction in πP generates a preorder on names rather than an equivalence relation. We present two characterisations of barbed congruence in πP: the first is based on a compositional LTS, and the second is an axiomatisation. The results in this paper bring out basic properties of πP, mostly related to the interplay between the restriction operator and the preorder on names.
Consequently, πP is a calculus in the tradition of Fusion calculi, in which both types and behavioural equivalences can be exploited in order to reason rigorously about concurrent and mobile systems.
Daniel Hirschkoff, Jean-Marie Madiot, Xian Xu
Incremental Realization of Safety Requirements: Non-determinism vs. Modularity
Abstract
This paper investigates the impact of non-determinism and modularity on the complexity of incremental incorporation of safety requirements while preserving liveness (a.k.a. the problem of incremental synthesis). Previous work shows that realizing safety in non-deterministic programs under limited observability is an NP-complete problem (in the state space of the program), where limited observability imposes read restrictions on program components with respect to the local state of other components. In this paper, we present a surprising result that synthesizing safety remains an NP-complete problem even for deterministic programs! The results of this paper imply that non-determinism is not the source of the hardness of synthesizing safety in concurrent programs; instead, limited observability has a major impact on the complexity of realizing safety. We also provide a roadmap for future research on exploiting the benefits of modularization while keeping the complexity of incremental synthesis manageable.
Ali Ebnenasir
Analyzing Mutable Checkpointing via Invariants
Abstract
The well-known coordinated snapshot algorithm of mutable checkpointing [7,8,9] is studied. We equip it with a concise formal model and analyze its operational behavior via an invariant characterizing the snapshot computation. By this we obtain a clear understanding of the intermediate behavior and a correctness proof of the final snapshot based on a strong notion of consistency (reachability within the partial order representing the underlying computation). The formal model further enables a comparison with the blocking queue algorithm [13] introduced for the same scenario and with the same objective.
From a broader perspective, we advocate the use of formal semantics to formulate and prove correctness of distributed algorithms.
Deepanker Aggarwal, Astrid Kiehn
High Performance Computing Applications Using Parallel Data Processing Units
Abstract
The number of cores per chip in multicore processors continues to grow. In a parallel computation context, multicore platforms have several important features, such as exploiting multiple parallel processes, having access to a shared memory with noticeably lower cost than the distributed alternative, and optimizing different levels of parallelism. In this paper, we introduce the Parallel Data Processing Unit (PDPU), a group of objects that benefits from the shared memory of the multicore configuration and that consists of two parts: a shared memory for keeping data consistent, and a set of objects that process the data and concurrently produce and aggregate the results. We then implement two examples in Java that illustrate PDPU behavior, compare them with their actor-based counterparts, and show significant performance improvements. We also put forward the idea of integrating PDPU with the actor model, which would result in an optimization for a specific spectrum of problems in actor-based development.
Keyvan Azadbakht, Vlad Serbanescu, Frank de Boer
Improved Iterative Methods for Verifying Markov Decision Processes
Abstract
Value and policy iteration are powerful methods for verifying quantitative properties of Markov Decision Processes (MDPs). Many approaches have been proposed to accelerate these methods. Their performance depends on the graphical structure of the MDP: experimental results show that they do not perform much better than standard value/policy iteration when the graph of the MDP is dense. In this paper we present an algorithm that tries to reduce the number of updates in dense MDPs. Instead of performing unnecessary updates, the algorithm uses a graph partitioning method to prioritize the more important updates.
Jaber Karimpour, Ayaz Isazadeh, MohammadSadegh Mohagheghi, Khayyam Salehi
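For context, the baseline that such acceleration schemes build on is plain value iteration, which can be sketched as follows for maximal reachability probabilities (the MDP below is a hypothetical toy example, not the paper's partitioned variant).

```python
# Toy MDP: states 0 and 1 have actions; state 2 is the goal, state 3 a
# sink. Each action maps to a list of (probability, successor) pairs.
mdp = {
    0: {"a": [(0.5, 2), (0.5, 3)], "b": [(1.0, 1)]},
    1: {"a": [(0.9, 2), (0.1, 3)]},
}
goal, sink = 2, 3

# v[s] approximates the maximal probability of reaching the goal from s.
v = {0: 0.0, 1: 0.0, goal: 1.0, sink: 0.0}
for _ in range(100):  # iterate Bellman updates until convergence
    for s in mdp:
        v[s] = max(sum(p * v[t] for p, t in succ)
                   for succ in mdp[s].values())

print(v[1])  # 0.9 via action "a"
print(v[0])  # 0.9: choosing "b" then "a" beats choosing "a" directly
```

Every state is updated in every sweep regardless of whether its value can still change; the paper's contribution targets exactly this waste by ordering and grouping updates.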
A Pre-congruence Format for XY-simulation
Abstract
XY-simulation is a generalization of bisimulation that is parameterized with two subsets of actions. XY-simulation is known in the literature under different names such as modal refinement, partial bisimulation, and alternating simulation. In this paper, we propose a pre-congruence rule format for XY-simulation. The format allows for checking compositionality of XY-simulation for an arbitrary language with structural operational semantics, by performing very simple checks on the syntactic shape of the rules. We apply our format to derive concrete compositionality results for different notions of behavioral pre-order with respect to different process calculi in the literature.
Harsh Beohar, Mohammad Reza Mousavi
Tooled Process for Early Validation of SysML Models Using Modelica Simulation
Abstract
The increasing complexity and heterogeneity of systems require engineers to consider the verification and validation aspects in the earliest stages of the system development life cycle. To meet these expectations, Model-Based Systems Engineering (MBSE) is identified as a key practice for efficient system development while simulation is still widely used by engineers to evaluate the performance and conformance of complex systems regarding requirements. To bridge the gap between high-level modeling (from requirements) and simulation, the present paper proposes a Model-Driven Engineering (MDE) tooled approach to automate the system requirements validation using SysML models and Modelica simulation. The implementation of the related toolchain has been officially adopted by the OMG SysML-Modelica working group.
Jean-Marie Gauthier, Fabrice Bouquet, Ahmed Hammad, Fabien Peureux
Can High Throughput Atone for High Latency in Compiler-Generated Protocol Code?
Abstract
High-level concurrency constructs and abstractions have several well-known software engineering advantages when it comes to programming concurrency protocols among threads in multicore applications. To also explore their complementary performance advantages, in ongoing work, we are developing compilation technology for a high-level coordination language, Reo, based on this language’s formal automaton semantics. By now, as shown in our previous work, our tools are capable of generating code that can compete with carefully hand-crafted code, at least for some protocols. An important prerequisite to further advance this promising technology, now, is to gain a better understanding of how the significantly different compilation approaches that we developed so far, which vary in the amount of parallelism in their generated code, compare against each other. For instance, to better and more reliably tune our compilers, we must learn under which circumstances parallel protocol code, with high throughput but also high latency, outperforms sequential protocol code, with low latency but also low throughput.
In this paper, we report on an extensive performance comparison between these approaches for a substantial number of protocols, expressed in Reo. Because we have always formulated our compilation technology in terms of a general kind of communicating automaton (i.e., constraint automata), our findings apply not only to Reo but, in principle, to any language whose semantics can be defined in terms of such automata.
Sung-Shik T. Q. Jongmans, Farhad Arbab
Painless Support for Static and Runtime Verification of Component-Based Applications
Abstract
Architecture Description Languages (ADL) provide descriptions of a software system in terms of its structure. Such descriptions give a high-level overview and come from the need to cope with arbitrarily complex dependencies arising from software components.
In this paper we present Painless, a novel ADL with a declarative trait supporting parametrized specifications and architectural reconfigurations. Moreover, we demonstrate its reliability through its integration with ProActive, a middleware for distributed programming. This is achieved by building on top of Mefresa, a Coq framework for reasoning about software architectures. We inherit its strong guarantees by extracting certified code and subsequently integrating it into our toolchain.
Nuno Gaspar, Ludovic Henrio, Eric Madelaine
Linear Evolution of Domain Architecture in Service-Oriented Software Product Lines
Abstract
In service-oriented software product lines, when a change occurs in the business process variability model, designing the domain architecture from scratch imposes the costs of re-architecture, high costs of change in the domain-level assets, and high costs of change in the many products based on the new domain architecture. In this paper, focusing on the linear evolution scenario in service-oriented product lines, which refers to propagating changes from some assets to other assets, both at the domain level, we address the problem of propagating changes in domain requirements (the business process variability model) to the domain architecture level in a cost-optimal and consistency-preserving way. We present a method to suggest the optimal change propagation options to reach the aforementioned goals. The method shows promise in providing minimal change costs as well as fully preserving the consistency of the target models when no human intervention alters the change propagation results.
Sedigheh Khoshnevis, Fereidoon Shams
An Interval-Based Approach to Modelling Time in Event-B
Abstract
Our work was inspired by our modelling and verification of a cardiac pacemaker, which includes concurrent aspects and a set of interdependent and cyclic timing constraints. To model timing constraints in such systems, we present an approach based on the concept of timing interval. We provide a template-based timing constraint modelling scheme that could potentially be applicable to a wide range of modelling scenarios. We give a notation and Event-B semantics for the interval. The Event-B coding of the interval is decoupled from the application logic of the model, therefore a generative design of the approach is possible. We demonstrate our interval approach and its refinement through a small example. The example is verified, model-checked and animated (manually validated) with the ProB animator.
Gintautas Sulskus, Michael Poppleton, Abdolbaghi Rezazadeh
From Event-B Models to Dafny Code Contracts
Abstract
The constructive approach to software correctness aims at formal modelling and verification of the structure and behaviour of a system at different levels of abstraction. In contrast, the analytical approach to software verification focuses on code-level correctness and its verification. It would therefore seem that the constructive and analytical approaches should complement each other well. To demonstrate this idea we present a case for linking two existing verification methods, Event-B (constructive) and Dafny (analytical). This approach combines the power of Event-B abstraction and its stepwise refinement with the verification capabilities of Dafny. We present a small case study to demonstrate this approach and outline the rules for transforming Event-B events to Dafny contracts. Finally, a tool for automatic generation of Dafny contracts from Event-B formal models is presented.
Mohammadsadegh Dalvandi, Michael Butler, Abdolbaghi Rezazadeh
Backmatter
Metadata
Title
Fundamentals of Software Engineering
Edited by
Mehdi Dastani
Marjan Sirjani
Copyright Year
2015
Electronic ISBN
978-3-319-24644-4
Print ISBN
978-3-319-24643-7
DOI
https://doi.org/10.1007/978-3-319-24644-4