
2016 | Book

Leveraging Applications of Formal Methods, Verification and Validation: Discussion, Dissemination, Applications

7th International Symposium, ISoLA 2016, Imperial, Corfu, Greece, October 10-14, 2016, Proceedings, Part II

About this book

The two-volume set LNCS 9952 and LNCS 9953 constitutes the refereed proceedings of the 7th International Symposium on Leveraging Applications of Formal Methods, Verification and Validation, ISoLA 2016, held in Imperial, Corfu, Greece, in October 2016.

The papers presented in this volume were carefully reviewed and selected for inclusion in the proceedings. Featuring a track introduction to each section, the papers are organized in topical sections named: statistical model checking; evaluation and reproducibility of program analysis and verification; ModSyn-PP: modular synthesis of programs and processes; semantic heterogeneity in the formal development of complex systems; static and runtime verification: competitors or friends?; rigorous engineering of collective adaptive systems; correctness-by-construction and post-hoc verification: friends or foes?; privacy and security issues in information systems; towards a unified view of modeling and programming; formal methods and safety certification: challenges in the railways domain; RVE: runtime verification and enforcement, the (industrial) application perspective; variability modeling for scalable software evolution; detecting and understanding software doping; learning systems: machine-learning in software products and learning-based analysis of software systems; testing the internet of things; doctoral symposium; industrial track; RERS challenge; and STRESS.

Table of Contents

Frontmatter
Erratum to: Verification Techniques for Hybrid Systems
Pavithra Prabhakar, Miriam Garcia Soto, Ratan Lal

Towards a Unified View of Modeling and Programming

Frontmatter
Towards a Unified View of Modeling and Programming (Track Summary)

Since the 1960s we have seen a tremendous amount of scientific and methodological work in the fields of specification, design, and programming languages. In spite of the very high value of this work, however, its impact has been limited by the fact that we do not have a sufficient integration of these languages, nor tools that support the development engineer in applying the corresponding methods and techniques. A tighter integration between specification and verification logics, graphical modeling notations, and programming languages is needed.

Manfred Broy, Klaus Havelund, Rahul Kumar, Bernhard Steffen
Programming ⊂ Modeling ⊂ Engineering

The proven ability of some modeling languages to be used as both design and implementation languages has raised hopes of a seamless blending between design and implementation, a capability that only seems possible in the domain of software engineering. In turn, this has led to methodological questions on how best to take advantage of models and modeling technologies in the development of complex software systems. To provide some insight into this question, we start with a review of the role and types of models that are found in traditional engineering disciplines and compare that to current industrial practice in model-based software engineering. The conclusion reached is that, despite some unique characteristics of software, there does not seem to be any compelling reason to treat the handling of models in software engineering in a radically different way than what is done in engineering in general.

Bran Selić
On a Unified View of Modeling and Programming
Position Paper

In the software community, modeling and programming are generally considered to be different things. However, some software models specify behavior precisely enough that they can be executed in their own right. And all programs can be considered models, at least of the executions that they specify. So perhaps modeling and programming are not actually so different after all. Indeed, there is a modeling/programming convergence going on right now in the Unified Modeling Language (UML) community, with a recent series of specifications on precise execution semantics for a growing subset of UML. But the language design legacy of UML is largely grounded in the old view that sharply separates models and programs, complicating the new convergence. It is perhaps now time to move forward to a new generation of unified modeling/programming languages.

Ed Seidewitz
On the Feasibility of a Unified Modelling and Programming Paradigm

In this article, the feasibility of a unified modelling and programming paradigm is discussed from the perspective of large scale system development and verification in collaborative development environments. We motivate the necessity to utilise multiple formalisms for development and verification, in particular for complex cyber-physical systems or systems of systems. Though modelling, programming, and verification will certainly become more closely integrated in the future, we do not expect a single formalism to become universally applicable and accepted by the development and verification communities. The multi-formalism approach requires translating verification artefacts (assertions, test cases, etc.) between different representations, in order to allow for the verification of emergent properties based on local verification results established with different methods and modelling techniques. By means of a case study from the railway domain, it is illustrated how this can be achieved using concepts from the theory of institutions. This also enables the utilisation of verification tools in different formalisms, despite the fact that these tools are usually developed for one specific formal method.

Anne E. Haxthausen, Jan Peleska
Modeling Meets Programming: A Comparative Study in Model Driven Engineering Action Languages

Modeling and programming have often been considered two different activities. While this may be true when modeling is primarily meant for human communication and early design explorations, it is not the case when modeling is meant for execution. Some approaches have been specifically developed to address this latter case, with varying success. In this paper, we discuss two such approaches, namely ALF and Umple. ALF has evolved from the modeling community to provide a textual syntax for an executable subset of UML called Foundational UML (fUML). Umple has evolved from the academic community to introduce the abstractions of modeling into programming languages. We compare both approaches, highlight their critical differences, and discuss their contribution to the evolution of model-oriented programming languages.

Maged Elaasar, Omar Badreddin
Abstractions for Modeling Complex Systems

The ever-increasing popularity of model-based system and software engineering has resulted in more and more systems—and more and more complex systems—being modeled. Hence, the problem of managing the complexity of the models themselves has gained importance. This paper introduces three abstractions that are specifically targeted at improving the scalability of the modeling process and the system models themselves.

Zsolt Lattmann, Tamás Kecskés, Patrik Meijer, Gábor Karsai, Péter Völgyesi, Ákos Lédeczi
Specifying and Verifying Advanced Control Features

Advances in programming often revolve around key design patterns, which programming languages embody as new control features. These control features, such as higher-order functions, advice, and context dependence, use indirection to decrease coupling and enhance modularity. However, this indirection makes them difficult to verify, because it hides actions (and their effects) behind an abstraction barrier. Such abstraction barriers can be overcome in a modular way using greybox specification techniques, provided the programming language supports interfaces as a place to record specifications. These techniques have previously allowed specification and modular verification of higher-order functional and object-oriented programs, as well as aspect-oriented and context-oriented programs.

Gary T. Leavens, David Naumann, Hridesh Rajan, Tomoyuki Aotani
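
To make the idea of specifying behavior behind an abstraction barrier concrete, here is a minimal Python sketch, not the authors' greybox notation: a "model program" serves as the specification of a higher-order function, and a runtime check compares the implementation against it. All names are invented for illustration.

    # Toy illustration (not the authors' formalism): checking a higher-order
    # function against a "model program" that serves as its specification.

    def spec_map(f, xs):
        """Model program: the intended behavior of my_map."""
        return [f(x) for x in xs]

    def my_map(f, xs):
        """Implementation under scrutiny."""
        out = []
        for x in xs:
            out.append(f(x))
        return out

    def conforms(impl, model, *args):
        """Runtime check: the implementation must agree with the
        model program on the given inputs."""
        return impl(*args) == model(*args)

    assert conforms(my_map, spec_map, lambda x: x * 2, [1, 2, 3])
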
Simplifying OMG MOF-Based Metamodeling

This paper advocates for a unification of modeling & programming from the perspective of normalized, implementation-neutral database schemas: representing programs and models in terms of irreducible and independent tables. This idea departs from the mainstream of modeling & programming, which typically revolves around Application Program Interface (API) ecosystems for operational needs and external serialization for interchange needs. Instead, this idea emphasizes an information-centric architecture to separate the structural aspects of language syntax via normalized schema tables from the operational aspects of language syntax and semantics via programs operating on normalized tables or derived table views. Such tables constitute the basis of a functional information architecture unifying modeling and programming as a radical departure from standardizing APIs in a programming fashion or standardizing serialization interchange in a modeling fashion. This paper focuses on the current API-less serialization-centric modeling paradigm because it is the farthest from a unified functional information architecture, compared to functional programming languages, where thinking about programs as pure functions and models as pure data is closest to this kind of unification. This paper first deconstructs the multi-level, reflective architecture for modeling languages defined at the Object Management Group (OMG) based on the Meta-Object Facility (MOF) and the Unified Modeling Language (UML), and subsequently reconstructs several normalized schemas accounting for the information content and organization of different kinds of resources involved in modeling: libraries of datatypes, metamodels like UML, profiles like the Systems Modeling Language (SysML) that extend metamodels, and models that conform to metamodels optionally extended with applied profiles.

Nicolas F. Rouquette
Modelling and Testing of Real Systems

Modelling and Programming are often used together in system development. However, there is typically a large conceptual gap between modelling and programming. This leads to problems in unified handling and in the transition between the two, such that extra work is required when combining modelling and programming. This paper develops a common understanding that can unify modelling and programming in system development.

Andreas Prinz, Birger Møller-Pedersen, Joachim Fischer
Unifying Modelling and Programming: A Systems Biology Perspective

Despite significant research progress made and methodological experience gained over the past few decades, a tight integration between programming and modelling, guided and supported by intuitive yet rigorous formal reasoning and verification methods that ensure high reliability and quality of the developed system remains challenging and is still far from becoming mainstream. We suggest that recent developments in the area of computational systems biology could allow us to gain some new perspectives about the challenges involved in developing pragmatic solutions unifying programming and modelling.

Hillel Kugler
Formally Unifying Modeling and Design for Embedded Systems - A Personal View

Based on the author’s academic and industrial experience, we discuss the smooth relation between model-based design and programming realized by synchronous languages in the embedded systems field. These languages are used to develop high-quality embedded software, in particular for safety-critical applications in avionics, railway, etc., subject to the strongest software certification processes in industry. They have also been used for the efficient model-based development of production hardware circuits. One of their main characteristics is their well-defined formal semantics, which is the basis of their simulation and compiling processes and is also fundamental for their link to automatic formal verification systems and other tools related to model-based design. We briefly discuss their current limitations and some ideas to lift them.

G. Berry
Interactive Model-Based Compilation Continued – Incremental Hardware Synthesis for SCCharts

The Single-Pass Language-Driven Incremental Compilation (slic) strategy uses a series of model-to-model (M2M) transformations to compile a model or program to a specified target. Tool developers and modelers can inspect the result of each transformation step, using a familiar, graphical syntax of the successively transformed model, which is made possible by harnessing automatic layout. Previous work (presented at ISoLA’14) introduced the basics of the slic approach and illustrated it with a compiler that translated SCCharts, a synchronous, deterministic statechart language developed for safety-critical systems, to software. The compiler is implemented in the Kiel Integrated Environment for Layout Eclipse Rich Client (KIELER), an open-source development framework based on Eclipse. This paper proposes two extensions to slic. First, we extend the M2M transformation mechanism with a tracing capability that keeps track of model elements during transformations. Second, we make use of the tracing capability for an interactive simulation, where we not only observe a model’s input/output behavior during execution, but can inspect the run-time behavior of each model component, at any transformation stage. We illustrate these concepts with new transformations in the KIELER SCCharts compiler, which allow hardware circuits to be synthesized, and with a simulator that executes an intermediate-level software model and visualizes the simulation at the high-level model as well as the low-level circuit.

Francesca Rybicki, Steven Smyth, Christian Motika, Alexander Schulz-Rosengarten, Reinhard von Hanxleden
Towards Semantically Integrated Models and Tools for Cyber-Physical Systems Design

We describe an approach to the model-based engineering of embedded and cyber-physical systems, based on the semantic integration of diverse discipline-specific notations and tools. Using the example of a small unmanned aerial vehicle, we explain the need for multiple notations and collaborative modelling. Learning from experience with binary co-modelling based on a bespoke operational semantics, we describe current work delivering an extended approach that enables integration of multiple models and tools in a consistent tool chain, founded on an extensible semantic framework exploiting the Unifying Theories of Programming.

Peter Gorm Larsen, John Fitzgerald, Jim Woodcock, René Nilsson, Carl Gamble, Simon Foster
Merging Modeling and Programming Using Umple

We discuss how Umple merges modeling and programming by adding modeling constructs to programming languages and vice-versa. Umple has what we call model-code duality; we show how it fulfills key attributes of being both a programming language and a modeling technology. Umple also has what we call text-diagram duality in that the model or code can be updated by editing the textual or diagram form. We give an example of Umple, and explain how key benefits of textual programming languages are found in Umple, as are important features of modeling technology.

Timothy C. Lethbridge, Vahdat Abdelzad, Mahmoud Husseini Orabi, Ahmed Husseini Orabi, Opeyemi Adesina
Systems Modeling and Programming in a Unified Environment Based on Julia

A new approach for modeling cyber-physical systems is proposed that combines the modeling power of a Modelica-like equation-based language with the powerful features of Julia, a programming language with a strong focus on scientific computing, meta-programming and just-in-time compilation. The modeling language is directly defined and implemented with Julia’s meta-programming constructs and is designed tightly together with the symbolic and numeric algorithms. This approach is very well suited for experimenting with evolutions of modeling capabilities.

Hilding Elmqvist, Toivo Henningsson, Martin Otter
Meta-Level Reuse for Mastering Domain Specialization

We reflect on the distinction between modeling and programming in terms of what and how and emphasize the importance of perspectives: what is a model (a what) for one may well be a program (a how) for another. In fact, attempts to pinpoint technical criteria like executability or abstraction for clearly separating modeling from programming do not seem to survive modern technical developments. Rather, the underlying conceptual cores continuously converge. What remains is the distinction of what and how, separating true purpose from its realization, i.e. providing the possibility of formulating the primary intent without being forced to over-specify. We argue that no unified general-purpose language can adequately support this distinction in general, and propose a meta-level framework for mastering the wealth of required domain-specific languages in a bootstrapping fashion.

Stefan Naujokat, Johannes Neubauer, Tiziana Margaria, Bernhard Steffen
Towards a Unified View of Modeling and Programming

In this paper we argue that there is value in providing a unified view of modeling and programming. Models are meant to describe a system at a high level of abstraction for the purpose of human understanding and analysis. Programs, on the other hand, are meant for execution. However, programming languages are becoming increasingly higher-level, with convenient notation for concepts that in the past would be reserved for formal specification languages. This leads to the observation that programming languages could be used for modeling, if only appropriate modifications were made to these languages. At the same time, model-based engineering formalisms such as UML and SysML are highly popular in engineering communities due to their graphical nature. However, due to the complex nature of these formalisms, these communities are struggling to find a grounding in textual formalisms with proper semantics. A unified view of modeling and programming may provide a common ground. The paper illustrates these points with selected examples comparing models and programs.

Manfred Broy, Klaus Havelund, Rahul Kumar

Formal Methods and Safety Certification: Challenges in the Railways Domain

Frontmatter
Formal Methods and Safety Certification: Challenges in the Railways Domain

The railway signalling sector has historically been a source of success stories about the adoption of formal methods in the certification of software safety of computer-based control equipment.

Alessandro Fantechi, Alessio Ferrari, Stefania Gnesi
On the Use of Static Checking in the Verification of Interlocking Systems

In the formal methods community, the correctness of interlocking tables is typically verified by model checking. This paper suggests using a static checker for this purpose and demonstrates, for the RobustRailS verification tool set, that the execution time and memory usage of its static checker are much lower than those of its model checker. Furthermore, the error messages of the static checker are much more informative than the counterexamples produced by classical model checkers.

Anne E. Haxthausen, Peter H. Østergaard
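
The following Python sketch illustrates the general flavor of such a static check; it is not the RobustRailS checker, and the mini "interlocking table" and its rule are invented. Unlike model checking, the check inspects the table directly, with no state-space exploration, and can emit a targeted message rather than a counterexample trace.

    # Hypothetical mini interlocking table: each route claims a set of
    # track sections. The static rule: routes sharing a section must be
    # marked as mutually exclusive.

    routes = {
        "R1": {"sections": {"T1", "T2"}, "excludes": {"R2"}},
        "R2": {"sections": {"T2", "T3"}, "excludes": {"R1"}},
        "R3": {"sections": {"T3", "T4"}, "excludes": set()},  # missing exclusion!
    }

    def check_exclusions(routes):
        errors = []
        names = sorted(routes)
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                shared = routes[a]["sections"] & routes[b]["sections"]
                if shared and (b not in routes[a]["excludes"]
                               or a not in routes[b]["excludes"]):
                    errors.append(f"{a} and {b} share {sorted(shared)} "
                                  f"but do not exclude each other")
        return errors

    for msg in check_exclusions(routes):
        print("ERROR:", msg)   # informative message, not a counterexample
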
Compositional Verification of Multi-station Interlocking Systems

Because interlocking systems are highly safety-critical complex systems, their automated safety verification is an active research topic investigated by several groups, employing verification techniques to produce important cost and time savings in their certification. However, such systems also pose a big challenge to current verification methodologies, due to the explosion of state space size as soon as medium-sized, let alone large, multi-station systems have to be controlled. For these reasons, verification techniques that exploit locality principles related to the topological layout of the controlled system to split the state space in different ways have been investigated. In particular, compositional approaches divide the controlled track network into regions that can be verified separately, once proper assumptions are considered on the way the pieces are glued together. Building on a method successfully used to verify networks of rather large size, we propose a compositional approach that is particularly suitable for multi-station interlocking systems which control a whole line composed of stations linked by mainline tracks. Indeed, it turns out that for such networks, and for the adopted verification approach, the verification effort amounts just to the sum of the verification efforts for each intermediate station and for each connecting line.

Hugo D. Macedo, Alessandro Fantechi, Anne E. Haxthausen
OnTrack: The Railway Verification Toolset
Extended Abstract

The verification of railway interlocking systems is a challenging task for which a number of different modelling, simulation and verification approaches have been proposed. In this paper, we present the OnTrack toolset. In OnTrack, application data for the railway domain is represented using a domain-specific language. This data can be entered manually or imported from standard data formats. OnTrack comprises a number of different model transformations that allow the user to automatically generate formal models for a specific approach, e.g., in CASL, CSP, or CSP||B. Other transformations offer abstractions on the application data to address scalability.

Phillip James, Faron Moller, Hoang Nga Nguyen, Markus Roggenbach, Helen Treharne, Xu Wang
Experiments in Formal Modelling of a Deadlock Avoidance Algorithm for a CBTC System

This paper presents a set of experiments in formal modelling and verification of a deadlock avoidance algorithm of an Automatic Train Supervision System (ATS). The algorithm is modelled and verified using four formal environments, namely UMC, Promela/SPIN, NuSMV, and mCRL2. The experience gained in these multiple modelling/verification experiments is described. We show that the algorithm design, structured as a set of concurrent activities cooperating through a shared memory, can be replicated in all the formal frameworks taken into consideration with reasonable effort. In addition, we highlight specific peculiarities of the various tools and languages, which emerged in the course of our experiments.

Franco Mazzanti, Alessio Ferrari, Giorgio O. Spagnolo
Tuning Energy Consumption Strategies in the Railway Domain: A Model-Based Approach

Cautious usage of energy resources is gaining great attention nowadays, from both an environmental and an economic point of view. Therefore, studies devoted to analyzing and predicting energy consumption in a variety of application sectors are becoming increasingly important, especially in combination with other non-functional properties, such as reliability, safety and availability. This paper focuses on energy consumption strategies in the railway sector, addressing in particular railroad switches through which trains are guided from one track to another. Given the criticality of their task, the temperature of these devices needs to be kept above certain levels to assure their correct functioning. By applying a stochastic model-based approach, we analyse a family of energy consumption strategies based on thresholds that trigger the activation/deactivation of energy supply. The goal is to offer an assessment framework through which appropriate tuning of threshold-based energy supply solutions can be achieved, so as to select the most appropriate one, resulting in a good compromise between energy consumption and reliability level.

Davide Basile, Felicita Di Giandomenico, Stefania Gnesi
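
A minimal Python sketch of the kind of trade-off being tuned, not the paper's stochastic model: heating switches on below one threshold and off above another, and simulation estimates energy use against time spent below a critical temperature. All constants and the noise model are invented.

    # Threshold-based heating strategy for a railroad switch (toy model).
    import random

    def simulate(t_on, t_off, steps=10_000, seed=0):
        rng = random.Random(seed)
        temp, heating = 5.0, False
        energy = unsafe = 0
        for _ in range(steps):
            temp += rng.gauss(-0.05, 0.2) + (0.5 if heating else 0.0)
            if heating:
                energy += 1
            if temp < t_on:
                heating = True
            elif temp > t_off:
                heating = False
            if temp < 0.0:          # below critical temperature: risk
                unsafe += 1
        return energy / steps, unsafe / steps

    for t_on, t_off in [(1.0, 3.0), (2.0, 6.0)]:
        e, u = simulate(t_on, t_off)
        print(f"on<{t_on} off>{t_off}: heating fraction={e:.2f}, unsafe fraction={u:.4f}")

Higher thresholds burn more energy but shrink the unsafe fraction; an assessment framework like the paper's makes that compromise explicit.
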

RVE: Runtime Verification and Enforcement, the (Industrial) Application Perspective

Frontmatter
Runtime Verification and Enforcement, the (Industrial) Application Perspective (Track Introduction)

During the last decade, the runtime verification and enforcement (RVE) community has been incredibly prolific in producing theories, tools and techniques aimed at the efficient analysis of systems’ executions and at guaranteeing their correctness w.r.t. some desired properties. Despite the major strides made in recent years, much effort is still needed to make RVE an attractive and viable methodology for industrial use. In addition to industry, numerous other domains, such as security, bio-health monitoring, etc., can gain from RVE. The purpose of the “Runtime Verification and Enforcement: the (industrial) application perspective” track at ISoLA’16 is to bring together RVE experts and potential application domains to try and advance the state of the art on how to make RVE more usable and attractive to industry and other disciplines.

Ezio Bartocci, Ylies Falcone
What Is a Trace? A Runtime Verification Perspective

Runtime Monitoring or Verification deals with traces. In its most simple form a monitoring system takes a trace produced by a system and a specification of correct behaviour and checks if the trace conforms to the specification. More complex applications may introduce notions of feedback and reaction. The notion that unifies the field is that we can abstract the runtime behaviour of a system by an execution trace and check this for conformance. However, there is little uniform understanding of what a trace is. This is most keenly seen when comparing theoretical and practical work. This paper surveys the different notions of trace and reflects on the related issues.

Giles Reger, Klaus Havelund
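
In its simplest form, as the abstract says, a monitor is just a predicate over a finite trace. A minimal Python sketch (the property is a toy example, not from the paper):

    # A trace is a finite sequence of events; the specification here is
    # "every 'open' is eventually matched by a 'close'".
    def conforms(trace):
        open_count = 0
        for event in trace:
            if event == "open":
                open_count += 1
            elif event == "close" and open_count > 0:
                open_count -= 1
        return open_count == 0   # nothing left dangling at end of trace

    print(conforms(["open", "read", "close"]))          # True
    print(conforms(["open", "read", "open", "close"]))  # False
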
Execution Trace Analysis Using LTL-FO+

We explore the use of the tool BeepBeep, a monitor for the temporal logic LTL-FO+, in interpreting assembly traces, focusing on security-related applications. LTL-FO+ is an extension of LTL that includes first-order quantification. We show that LTL-FO+ is a sufficiently expressive formalism to state a number of interesting program behaviors, and demonstrate experimentally that BeepBeep can verify the validity of the properties on assembly traces in tractable time.

Raphaël Khoury, Sylvain Hallé, Omar Waldmann
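
To illustrate what first-order quantification adds to plain LTL, here is a hedged Python sketch of the kind of property LTL-FO+ can express; this is not BeepBeep's input syntax, and the event names are invented: "for every value x carried by an 'alloc' event, a 'free' event with the same x occurs later".

    def every_alloc_freed(trace):
        allocated = set()
        for name, value in trace:
            if name == "alloc":
                allocated.add(value)     # quantified variable x bound here
            elif name == "free":
                allocated.discard(value)
        return not allocated             # all bound values eventually freed

    trace = [("alloc", 0x10), ("alloc", 0x20), ("free", 0x10)]
    print(every_alloc_freed(trace))      # False: 0x20 is never freed

Plain LTL over a fixed alphabet cannot track the unbounded set of data values; the quantifier over event data is exactly what the first-order extension buys.
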
Challenges in Fault-Tolerant Distributed Runtime Verification

Runtime Verification is a lightweight method for monitoring the formal specification of a system (usually in some form of temporal logic) at execution time. In a setting where a set of distributed monitors have only a partial view of a large system and may be subject to different types of faults, the literature on runtime verification falls short in answering many fundamental questions. Examples include techniques to reason about the soundness and consistency of the collective set of verdicts computed by the set of distributed monitors. In this paper, we discuss open research problems on fault-tolerant distributed monitoring that stem from different design choices and implementation platforms.

Borzoo Bonakdarpour, Pierre Fraigniaud, Sergio Rajsbaum, Corentin Travers
The HARMONIA Project: Hardware Monitoring for Automotive Systems-of-Systems

The verification of complex mixed-signal integrated circuit products in the automotive industry accounts for around 60%–70% of the total development time. In such a scenario, any effort to reduce design and verification costs and to improve time-to-market and product quality will play an important role in boosting the competitiveness of the automotive industry. The aim of the HARMONIA project is to provide a framework for assertion-based monitoring of automotive systems-of-systems with mixed criticality. It will enable a uniform way to reason about both safety-critical correctness and non-critical robustness properties of such systems. Observers embedded on FPGA hardware will be generated from assertions, and used for monitoring automotive designs emulated on hardware. The project outcome will improve the competitiveness of the automotive application oriented nano- and microelectronics industry by reducing verification time and cost in the development process.

Thang Nguyen, Ezio Bartocci, Dejan Ničković, Radu Grosu, Stefan Jaksic, Konstantin Selyunin
Runtime Verification for Interconnected Medical Devices

In this tool paper we present a software development kit (SDK) for the Open Surgical Communication Protocol (OSCP) that supports the development of interconnected medical devices according to the recent IEEE 11073 standards for interoperable medical device communication. Building on service-oriented architecture (SOA), dynamically interconnected medical devices publish their connectivity interface, via which these systems provide data and can be controlled. To achieve the safety requirements necessary for medical devices, our tool, the OSCP Device Modeler, allows the specification of temporal assertions for the respective data streams of the systems and automatically generates corresponding monitors that may be used during testing, but also in the field to ensure adherence to the interface specification. A further tool, the OSCP Swiss Army Knife, allows subscribing to the services provided via the interfaces of the system under development and thereby supports its debugging. The whole OSCP SDK makes heavy use of runtime verification techniques and shows their advantages in this application area.

Martin Leucker, Malte Schmitz, Danilo à Tellinghusen
Dynamic Analysis of Regression Problems in Industrial Systems: Challenges and Solutions

This paper presents the result of our experience with the application of runtime verification, testing and static analysis techniques to several industrial projects. We discuss the eight most relevant challenges that we experienced, and the strategies that we elaborated to face them.

Fabrizio Pastore, Leonardo Mariani
Towards a Logic for Inferring Properties of Event Streams

We outline the background, motivation, and requirements of an approach to create abstractions of event streams, which are time-tagged sequences of events generated by an executing software system. Our work is motivated by the need to process event streams with millions of events that are generated by a spacecraft and must be processed quickly after they are received on the ground. Our approach involves building a tool that adds hierarchical labels to a received event stream. The labels add contextual information to the event stream, and thus make it easier to build tools for visualizing and analyzing telemetry. We describe a notation for writing hierarchical labeling rules; the notation is based on a modification of Allen Temporal Logic, augmented with rule definitions and features for referring to data in data-parameterized events. We illustrate our notation and its use with an example.

Sean Kauffman, Rajeev Joshi, Klaus Havelund
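
A toy Python sketch of the labeling idea, not the paper's notation: a rule pairs a start event with its matching end event and emits a labeled interval, layering context on top of the raw time-tagged stream. Event names are invented.

    def label_intervals(stream, start, end, label):
        pending, labels = [], []
        for t, name in stream:
            if name == start:
                pending.append(t)
            elif name == end and pending:
                labels.append((label, pending.pop(), t))
        return labels

    stream = [(1, "dl_begin"), (4, "pkt"), (9, "dl_end"),
              (12, "dl_begin"), (15, "dl_end")]
    print(label_intervals(stream, "dl_begin", "dl_end", "DOWNLINK"))
    # [('DOWNLINK', 1, 9), ('DOWNLINK', 12, 15)]

Because emitted intervals are themselves events over time, rules can be stacked to label intervals of intervals, which is where the hierarchy comes from.
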
Runtime Verification for Stream Processing Applications

Runtime verification (RV) has long been applied beyond its strict delineation of verification, through the notion of monitor-oriented programming. In this paper we present a portfolio of real-life case studies where RV is used to program stream-processing systems directly — where all the logic of the implemented system is defined in terms of monitors. The systems include the processing of Facebook events for business intelligence, analysing users’ activity log for detecting UI usability issues, video frame analysis for human movement detection, and telescope signals processing for pulsar identification.

Christian Colombo, Gordon J. Pace, Luke Camilleri, Claire Dimech, Reuben Farrugia, Jean Paul Grech, Alessio Magro, Andrew C. Sammut, Kristian Zarb Adami
On the Runtime Enforcement of Evolving Privacy Policies in Online Social Networks

Online Social Networks have increased the need to understand well and extend the expressiveness of privacy policies, in particular the need to define and enforce dynamic (and recurrent) policies that are activated or deactivated by context (events) or timeouts. We propose an automaton-based approach to define and enforce such policies using runtime verification techniques. In this paper we discuss how our proposed solution addresses this problem, without focussing on concrete technical details.

Gordon J. Pace, Raúl Pardo, Gerardo Schneider
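
A minimal sketch of an automaton-based dynamic policy (illustrative only; the events and the restriction are invented, and this is not the authors' formalism): context events move the automaton between states, and the current state decides what is permitted.

    class PolicyAutomaton:
        def __init__(self):
            self.state = "inactive"

        def on_event(self, event):
            # context events activate/deactivate the policy
            if event == "leave_work":
                self.state = "active"
            elif event == "arrive_home":
                self.state = "inactive"

        def allow(self, post_tag):
            # while active, location posts are suppressed
            return not (self.state == "active" and post_tag == "location")

    p = PolicyAutomaton()
    p.on_event("leave_work")
    print(p.allow("location"))   # False: policy currently enforced
    p.on_event("arrive_home")
    print(p.allow("location"))   # True: restriction deactivated
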
On the Specification and Enforcement of Privacy-Preserving Contractual Agreements

We are here concerned with the enforcement at runtime of contractual agreements (e.g., Terms of Service) that respect users’ privacy policies. We do not provide a technical solution to the problem but rather give an overview of a framework for such an enforcement, and briefly discuss related work and ideas on how to address part of the framework.

Gerardo Schneider

Variability Modeling for Scalable Software Evolution

Frontmatter
Introduction to the Track on Variability Modeling for Scalable Software Evolution

Information and communication technology today is increasingly integrated into the environment we live in, distributed across cars, appliances and smart infrastructures. The software running on these devices is increasingly individualized, adapted to the preferences and needs of the specific customer, and must be able to evolve after deployment by means of software patches. Upgrades are becoming individualized: software patches used to upgrade the software are selected and adapted depending on the configuration and external constraints of the host device. The objective of the European project HyVar is to develop techniques and tools for fast and customizable software design, for the management of highly distributed applications, for continuous software evolution of remote devices, and a scalable infrastructure to accommodate a large number of devices. The track Variability Modeling for Scalable Software Evolution aims to foster cooperation opportunities and create synergies between related research directions to address challenges stemming from software variability, evolution, and cloud technology for highly distributed applications in heterogeneous environments. This paper introduces the track and its individual contributions.

Ferruccio Damiani, Christoph Seidl, Ingrid Chieh Yu
Towards Incremental Validation of Railway Systems

We propose to formally model requirements and interoperability constraints among components of a railway system to enable automated, incremental analysis and validation mechanisms. The goal is to provide the basis for a technology that can drastically reduce the time and cost for certification by making it possible to trace changes from requirements via design to implementation.

Reiner Hähnle, Radu Muschevici
Modeling and Optimizing Automotive Electric/Electronic (E/E) Architectures: Towards Making Clafer Accessible to Practitioners

Modern automotive electric/electronic (E/E) architectures are growing to the point where architects can no longer manually predict the effects of their design decisions. Thus, in addition to applying an architecture reference model to decompose their architectures, they also require tools for synthesizing and evaluating candidate architectures during the design process. Clafer is a modeling language, which has been used to model variable multi-layer, multi-perspective automotive system architectures according to an architecture reference model. Clafer tools allow architects to synthesize optimal candidates and evaluate effects of their design decisions. However, since Clafer is a general-purpose structural modeling language, it does not help the architects in building models conforming to the given architecture reference model. In this paper, we present an E/E architecture domain-specific language (DSL) built on top of Clafer, which embodies the reference model and which guides the architects in correctly applying it. We evaluate the DSL and its implementation by modeling two existing automotive systems, which were originally modeled in plain Clafer. The evaluation showed that by using the DSL, an evaluator obtained correct models by construction because the DSL helped prevent typical errors that are easy to make in plain Clafer. The evaluator was also able to synthesize and evaluate candidate architectures as with plain Clafer.

Eldar Khalilov, Jordan Ross, Michał Antkiewicz, Markus Völter, Krzysztof Czarnecki
Variability-Based Design of Services for Smart Transportation Systems

A smart transportation system can be seen as an aggregate of transportation opportunities and services, accompanied by advanced management services that make the access to the system easier for the user. In this paper, we exploit the product line paradigm to address the variability of an exemplary smart transportation system: a bike-sharing system. Improving the satisfaction of a user of a bike-sharing system includes providing information at runtime on the filling degree of the docking stations in the near future. To fulfill this expectation, a prediction service is needed to infer the probability that at a certain time of the day a user will return a bike to or take one from a station. In earlier studies, several possible advanced smart predictive services were identified. The choice of which services to offer to users by the managers of a bike-sharing system is influenced by minimizing the costs while maximizing customer satisfaction. To aid the managers, we modeled a family of smart bike-sharing services, after which an attributed feature model was used to augment the model with quantitative attributes related to cost and customer satisfaction, allowing for a multi-objective optimization by dedicated tools. We observe that the performance of the smart prediction services, and therefore of the related customer satisfaction, is highly dependent on the amount of collected historical data on which the predictive analysis is based. Therefore the result of the optimization also depends on this factor, which evolves over time.

Maurice H. ter Beek, Alessandro Fantechi, Stefania Gnesi, Laura Semini
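
A toy Python rendering of the optimization step described above, with invented services and numbers (this is not the paper's attributed feature model or its dedicated tools): each optional service carries a cost and a satisfaction attribute, and the Pareto-optimal configurations are the candidates worth offering.

    from itertools import combinations

    # (cost, satisfaction) per optional service -- invented figures
    services = {"occupancy_prediction": (40, 7),
                "trip_planner": (25, 5),
                "rebalancing_alerts": (30, 6)}

    def pareto_front(options):
        front = []
        for cfg, cost, sat in options:
            dominated = any(c2 <= cost and s2 >= sat and (c2, s2) != (cost, sat)
                            for _, c2, s2 in options)
            if not dominated:
                front.append((cfg, cost, sat))
        return front

    configs = []
    for r in range(len(services) + 1):
        for combo in combinations(services, r):
            cost = sum(services[s][0] for s in combo)
            sat = sum(services[s][1] for s in combo)
            configs.append((combo, cost, sat))

    for cfg, cost, sat in pareto_front(configs):
        print(cfg, "cost:", cost, "satisfaction:", sat)
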
Comparing AWS Deployments Using Model-Based Predictions

Cloud computing provides on-demand resource provisioning for scalable applications with a pay-as-you-go pricing model. However, the cost-efficient use of virtual resources requires the application to exploit the available resources efficiently. Will an application perform equally well on fewer or cheaper resources? Will the application successfully finish on these resources? We have previously proposed a model-centric approach, ABS-YARN, for prototyping deployment decisions to answer such questions during the design of an application. In this paper, we make model-centric predictions for applications on Amazon Web Services (AWS), which is a prominent platform for cloud deployment. To demonstrate how ABS-YARN can help users make deployment decisions with a high cost-performance ratio on AWS, we design several workload scenarios based on MapReduce benchmarks and execute these scenarios on ABS-YARN by considering different AWS resource purchasing options.

Einar Broch Johnsen, Jia-Chun Lin, Ingrid Chieh Yu
A Toolchain for Delta-Oriented Modeling of Software Product Lines

Software is increasingly individualized to the needs of customers and may have to be adapted to changing contexts and environments after deployment. Therefore, individualized software adaptations may have to be performed. As a large number of variants for affected systems and domains may exist, the creation and deployment of the individualized software should be performed automatically based on the software’s configuration and context. In this paper, we present a toolchain to develop and deploy individualized software adaptations based on Software Product Line (SPL) engineering. In particular, we contribute a description and technical realization of a toolchain ranging from variability modeling over variability realization to variant derivation for the automated deployment of individualized software adaptations. To capture the variability within realization artifacts, we employ delta modeling, a transformational SPL implementation approach. As we aim to fulfill requirements of industrial practice, we employ model-driven engineering using statecharts as realization artifacts. Particular statechart variants are further processed by generating C/C++ code, linking to external code artifacts, compiling and deploying to the target device. To allow for flexible and parallel execution, the toolchain is provided within a cloud environment. This way, required variants can automatically be created and deployed to target devices. We show the feasibility of our toolchain by developing the industry-related case of emergency response systems.

Cristina Chesta, Ferruccio Damiani, Liudmila Dobriakova, Marco Guernieri, Simone Martini, Michael Nieke, Vítor Rodrigues, Sven Schuster
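
The core of delta modeling can be shown in a few lines of Python; this is a hedged sketch of the general technique, not the project's toolchain, and all artifact and feature names are invented. A variant is derived by applying the deltas selected by the feature configuration, in order, to a core product.

    core = {"pump": "run_basic()", "display": "show_status()"}

    deltas = {
        "Emergency": [("add", "siren", "sound_alarm()"),
                      ("modify", "display", "show_alert()")],
        "Lite":      [("remove", "display", None)],
    }

    def derive(core, deltas, features):
        variant = dict(core)
        for feature in features:
            for op, name, body in deltas[feature]:
                if op in ("add", "modify"):
                    variant[name] = body
                elif op == "remove":
                    variant.pop(name, None)
        return variant

    print(derive(core, deltas, ["Emergency"]))
    # {'pump': 'run_basic()', 'display': 'show_alert()', 'siren': 'sound_alarm()'}
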
A Technology-Neutral Role-Based Collaboration Model for Software Ecosystems

In large-scale software ecosystems, many developers contribute extensions to a common software platform. Due to the independent development efforts and the lack of a central steering mechanism, similar functionality may be developed multiple times by different developers. We tackle this problem by contributing a role-based collaboration model for software ecosystems to make such implicit similarities explicit and to raise awareness among developers during their ongoing efforts. We extract this model based on realization artifacts in a specific programming language located in a particular source code repository and present it in a technology-neutral way. We capture five essential collaborations as independent role models that may be composed to present developer collaborations of a software ecosystem in their entirety, which fosters an overview of the software ecosystem, analysis of duplicated development efforts, and awareness of ongoing development efforts. Finally, using the collaborations defined in the formalism, we model real artifacts from Marlin, a firmware for 3D printers, and we show that for the selected scenarios, the five collaborations were sufficient to raise awareness and make implicit information explicit.

Ştefan Stănciulescu, Daniela Rabiser, Christoph Seidl
Adaptable Runtime Monitoring for the Java Virtual Machine

Nowadays, many programming language implementations and programming frameworks target the Java Virtual Machine (JVM). Examples include the Java and Scala compilers, Oracle’s Truffle framework and the interpreters built on top of it for a variety of dynamic programming languages, as well as big-data frameworks such as Apache Spark, Apache Flink, and Apache Storm. Unfortunately, the JVM provides only limited support for runtime monitoring. The JVM Tool Interface (JVMTI) offers only a very low-level programming model, often introduces high overhead, and does not guarantee proper isolation of the monitoring logic from the observed program. Aspect-Oriented Programming (AOP), in particular AspectJ, is often used to implement runtime monitoring tools. While offering a convenient programming model, the use of such technologies exacerbates performance- and isolation-related problems. In this paper, we advocate the use of our dynamic program analysis framework DiSL for runtime monitoring on the JVM. DiSL reconciles an AOP-based programming model, full coverage of all executed bytecodes, optimizations of the monitoring logic, and support for isolation of the monitoring logic. Moreover, DiSL offers an API to deploy, adapt, and remove monitoring logic at runtime, and it provides seamless support for monitoring applications running on Android devices as well.

Andrea Rosà, Yudi Zheng, Haiyang Sun, Omar Javed, Walter Binder
Identifying Variability in Object-Oriented Code Using Model-Based Code Mining

A large set of object-oriented programming (OOP) languages exists to realize software for different purposes. Companies often create variants of their existing software by copying and modifying it to meet changed requirements. While these so-called clone-and-own approaches save money in the short term, they expose the company to severe risks regarding long-term evolution and product quality. The main reason is the high manual maintenance effort that is needed due to the unknown relations between variants. In this paper, we introduce a model-based approach to identify variability information for OOP code, allowing companies to better understand and manage variability between their variants. This information allows maintenance of the variants to be improved and enables the transition from single-variant development to more elaborate reuse strategies such as software product lines. We demonstrate the applicability of our approach by means of a case study analyzing variants generated from an existing software product line and comparing our findings to the managed reuse strategy.

David Wille, Michael Tiede, Sandro Schulze, Christoph Seidl, Ina Schaefer
User Profiles for Context-Aware Reconfiguration in Software Product Lines

Software Product Lines (SPLs) are a mechanism to capture families of closely related software systems by modeling commonalities and variability. Although user customization has a growing importance in software systems and is a vital sales argument, SPLs currently only allow user customization at deploy-time. In this paper, we extend the notion of context-aware SPLs by means of user profiles, containing a linearly ordered set of preferences. Preferences have priorities, meaning that a low priority preference can be neglected in favor of a higher prioritized one. We present a reconfiguration engine checking the validity of the current configuration and, if necessary, reconfiguring the SPL while trying to fulfill the preferences of the active user profile. Thus, users can be assured about the reconfiguration engine providing the most suitable configuration for them. Moreover, we demonstrate the feasibility of our approach using a case study based on existing car customizability.

Michael Nieke, Jacopo Mauro, Christoph Seidl, Ingrid Chieh Yu
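
A toy Python sketch of such a reconfiguration engine (all feature names and the constraint are invented; this is not the paper's engine): preferences are ordered by priority, and the engine drops the lowest-priority preferences first until the configuration becomes valid.

    def valid(cfg):
        # hypothetical constraint: "climate_eco" and "sport_mode" conflict
        return not ({"climate_eco", "sport_mode"} <= cfg)

    def reconfigure(base, preferences):
        # preferences: highest priority first
        for keep in range(len(preferences), -1, -1):
            cfg = base | set(preferences[:keep])
            if valid(cfg):
                return cfg, preferences[keep:]
        return base, preferences

    profile = ["sport_mode", "heated_seats", "climate_eco"]  # priority order
    cfg, dropped = reconfigure({"radio"}, profile)
    print(cfg, "dropped:", dropped)   # climate_eco neglected for sport_mode
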
Refactoring Delta-Oriented Product Lines to Enforce Guidelines for Efficient Type-Checking

A Software Product Line (SPL) is a family of similar programs generated from a common code base. Delta-Oriented Programming (DOP) is a flexible and modular approach to construct SPLs. Ensuring type safety in an SPL (i.e., ensuring that all its programs are well-typed) is a computationally expensive task. Recently, five guidelines to address the complexity of type checking delta-oriented SPLs have been proposed. This paper presents algorithms to refactor delta-oriented SPLs in order to follow the five guidelines. Complexity and correctness of the refactoring algorithms are stated.

Ferruccio Damiani, Michael Lienhardt

Detecting and Understanding Software Doping

Frontmatter
Facets of Software Doping

This paper provides an informal discussion of the formal aspects of software doping.

Gilles Barthe, Pedro R. D’Argenio, Bernd Finkbeiner, Holger Hermanns
Software that Meets Its Intent

Software is widely used, and society increasingly depends on its reliability. However, software has become so complex and it evolves so quickly that we fail to keep it under control. Therefore, we propose intents: fundamental laws that capture a software system’s intended behavior (resilient, secure, safe, sustainable, etc.). The realization of this idea requires novel theories, algorithms, tools, and techniques to discover, express, verify, and evolve software intents. Thus, future software systems will be able to verify themselves that they meet their intents. Moreover, they will be able to respond to deviations from intents through self-correction. In this article we propose a research agenda, outlining which novel theories, algorithms and tools are required.

Marieke Huisman, Herbert Bos, Sjaak Brinkkemper, Arie van Deursen, Jan Friso Groote, Patricia Lago, Jaco van de Pol, Eelco Visser
Compliance, Functional Safety and Fault Detection by Formal Methods

With the increasing complexity of today’s cars, functional safety and compliance guarantees are more and more difficult to obtain. During the lifetime of a vehicle, the detection of malfunctioning non-mechanical components meanwhile requires more attention than the maintenance of its mechanical counterparts. A full-fledged formal verification of the overall car is not realistic and is hard to obtain even for single non-trivial components such as assistant systems. Furthermore, it does not support fault detection at run time. We suggest an approach towards formal safety, compliance and fault detection at run time via an auditor. The auditor is automatically fed from the engineering and production process by a suitable abstract specification and a respective model of the car, and can then detect violations and faulty components.

Christof Fetzer, Christoph Weidenbach, Patrick Wischnewski
What the Hack Is Wrong with Software Doping?

Today we often deal with hybrid products, i.e. physical devices containing embedded software. Sometimes, e.g. in the VW emission scandal, such hybrid systems aim at fulfilling the interests of the manufacturers rather than those of the customers. This often happens hidden from and unbeknown to the owners or users of these devices, and especially unbeknown to supervisory authorities. While examples of such software doping can easily be found, the phenomenon itself is not yet well understood. Not only do we lack a proper definition of the term “software doping”; the moral status of software doping also seems vague and unclear. In this paper, I try, in the tradition of computer ethics, to first understand what software doping is and then to examine its moral status. I argue that software doping is at least pro tanto morally wrong. I locate problematic features of software doping that are in conflict with moral rights that come with device ownership. Furthermore, I argue for the stronger claim that, in general, software doping is also morally wrong all things considered – at least from the point of view of some normative theories. Explicitly, the VW emission scandal is adduced as a significant specimen of software doping that unquestionably is morally wrong all things considered. Finally, I conclude that we ought to develop software doping detection, if only for moral reasons, and point towards the implications my work might have for the development of future software doping detection methods.

Kevin Baum

Learning Systems: Machine-Learning in Software Products and Learning-Based Analysis of Software Systems

Frontmatter
Learning Systems: Machine-Learning in Software Products and Learning-Based Analysis of Software Systems
Special Track at ISoLA 2016

We are entering the age of learning systems! On the one hand, we are surrounded by devices that learn from our behavior [3]: household appliances, smart phones, wearables, cars, etc.—the most recent prominent example being Tesla Motors’ Autopilot, which learns from human drivers. On the other hand, man-made systems are becoming ever more complex, requiring us to learn the behavior of these systems: learning-based testing [8, 13, 17], e.g., has been proposed as a method for testing the behavior of systems systematically without models and at a high level of abstraction. Promising results have been obtained here using active automata learning technology in verification [6, 16] and testing [1, 8]. At the same time, active automata learning has been extended to support the inference of program structures [5, 10] (it was first introduced for regular languages).

Falk Howar, Karl Meinke, Andreas Rausch
ALEX: Mixed-Mode Learning of Web Applications at Ease

In this paper, we present ALEX, a web application that enables non-programmers to fully automatically infer models of web applications via active automata learning. It guides the user in setting up dedicated learning scenarios, and invites her to experiment with the available options in order to infer models at adequate levels of abstraction. In the course of this process, characteristics that go beyond a mere “site map” can be revealed, such as hidden states that are often either specifically designed or indicate errors in the application logic. Characteristic for ALEX is its support for mixed-mode learning: REST and web services can be executed simultaneously in one learning experiment, which is ideal when trying to compare back-end and front-end functionality of a web application. ALEX has been evaluated in a comparative study with 140 undergraduate students, which impressively highlighted its potential to make formal methods like active automata learning more accessible to a non-expert crowd.

Alexander Bainczyk, Alexander Schieweck, Malte Isberner, Tiziana Margaria, Johannes Neubauer, Bernhard Steffen
Assuring the Safety of Advanced Driver Assistance Systems Through a Combination of Simulation and Runtime Monitoring

Autonomous vehicles will share the road with human drivers within the next couple of years. One of the big open challenges is the lack of established and cost-efficient approaches for assuring the safety of Advanced Driver Assistance Systems and autonomous driving. Product liability regulations impose high standards on manufacturers regarding the safe operation of such systems. Today’s conventional engineering methods are not adequate for providing such guarantees in a cost-efficient way. One strategy for reducing the cost of quality assurance is transferring a significant part of the testing effort from road tests to (system-level) simulations. It is not clear, however, how results obtained from simulations transfer to the road. In this paper, we present a method for ensuring that an Advanced Driver Assistance System satisfies its safety requirements at runtime and operates within safe limits that were tested in simulations. Our approach utilizes runtime monitors that are generated from safety requirements and trained using simulated test cases. We evaluate our approach using an industrial prototype of a lane change assistant and data recorded in road tests on German highways.

Malte Mauritz, Falk Howar, Andreas Rausch
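
An illustrative Python monitor in the spirit of the approach described above; it is not the paper's generated code, and the requirement, signal names, and threshold are invented. The idea is that the monitored limit stands in for a bound that was exercised in simulation, so a violation at runtime also flags operation outside the tested envelope.

    # Safety requirement (invented): never initiate a lane change when the
    # gap to the rear vehicle in the target lane is below a tested bound.
    TESTED_MIN_GAP_M = 12.0   # bound covered by simulated test cases

    def monitor(step):
        if step["lane_change_initiated"]:
            if step["rear_gap_m"] < TESTED_MIN_GAP_M:
                return "VIOLATION: lane change below tested gap"
            return "ok: within simulated envelope"
        return "ok"

    trace = [{"lane_change_initiated": False, "rear_gap_m": 30.0},
             {"lane_change_initiated": True,  "rear_gap_m": 8.5}]
    for step in trace:
        print(monitor(step))
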
Enhancement of an Adaptive HEV Operating Strategy Using Machine Learning Algorithms

For vehicle manufacturers as well as for many drivers, saving fuel has long been an important issue. In order to maximize the potential consumption savings, optimization of the operating strategy of vehicles, particularly of hybrid electric vehicles (HEVs), becomes an increasingly important task. To enhance the current rule-based operating strategy of HEVs, an adaptive heuristic operating strategy has been developed that identifies driving patterns and chooses the best parameter set for the control strategy from a database. However, this strategy does not adapt to the driving behavior of individual drivers. Therefore, a further knowledge-based approach that independently optimizes the operating strategy has been developed, using multigene symbolic regression and supervised machine learning. The investigation showed that the knowledge-based approach is able to save about 18.3 % CO2 and fuel compared to a basic heuristic strategy.

Mark Schudeleit, Meng Zhang, Xiaofei Qi, Ferit Küçükay, Andreas Rausch

Testing the Internet of Things

Frontmatter
Data Science Challenges to Improve Quality Assurance of Internet of Things Applications

With the increasing importance and complexity of Internet of Things (IoT) applications, the development of adequate quality assurance techniques also becomes essential. Due to the massive amount of data generated in workflows of IoT applications, data science plays a key role in their quality assurance. In this paper, we present data science challenges to improve the quality assurance of Internet of Things applications. Based on an informal literature review, we first outline quality assurance requirements evolving with the IoT, grouped into six categories (Environment, User, Compliance/Service Level Agreement, Organizational, Security and Data Management). We then present data science challenges to improve the quality assurance of Internet of Things applications, sub-divided into four categories (Defect prevention, Defect analysis, User incorporation and Organizational) derived from the six quality assurance requirement categories.

Harald Foidl, Michael Felderer
Model-Based Testing as a Service for IoT Platforms

The Internet of Things (IoT) has increased its footprint, globally becoming a ‘must have’ for today’s most innovative companies. Applications extend to a multitude of domains, such as smart cities, healthcare, logistics, manufacturing, etc. The Gartner Group estimates an increase to up to 21 billion connected things by 2020. To manage the heterogeneity of ‘things’ and data streams over large-scale and secured deployments, IoT and data platforms are becoming a central part of the IoT. In response to this fast-growing demand, more and more platforms are being developed, requiring systematic testing. Combining the Model-Based Testing (MBT) technique with a service-oriented solution, we present Model-Based Testing As A Service (MBTAAS) for testing data and IoT platforms. In this paper, we present a first step towards MBTAAS for data and IoT platforms, with experimentation on FIWARE, one of the EU’s emerging IoT-enabled platforms.

Abbas Ahmad, Fabrice Bouquet, Elizabeta Fourneret, Franck Le Gall, Bruno Legeard

Doctoral Symposium

Frontmatter
Handling Domain Knowledge in Formal Design Models: An Ontology Based Approach
Kahina Hacid

Industrial Track

Frontmatter
Effective and Efficient Customization Through Lean Trans-Departmental Configuration

Today’s organizations tend to offer customized products and services to satisfy challenging customer needs. In this context, product customization spans product variant configuration as well as actual custom engineering. Particularly custom engineering induces various obstacles that organizations must overcome in order to effectively and efficiently realize inter-departmental communication and smooth internal order processing. Both are mandatory to achieve customer satisfaction and outcompete competitors. These obstacles span topics in the fields of recommendation-driven and knowledge-centered customization, knowledge engineering and maintenance, organizational learning, as well as intra-departmental process alignment. Based on first-hand insights and expert interviews, we identify the main business requirements when faced with process innovations and technological challenges in this context and discuss shortcomings of available approaches in common practice. We also report preliminary results that include the conceptual design of a knowledge-centric configuration framework designed to overcome the obstacles raised.

Barbara Steffen, Steve Boßelmann, Axel Hessenkämper
A Fully Model-Based Approach to Software Development for Industrial Centrifuges

We present a model-based approach to software development for industrial process automation, overall aiming at decreased development efforts, increased quality, and reduced time to market. Key to this approach is the high-level specification of all required aspects using dedicated domain-specific modeling languages. This abstraction provides both a standardized framework supporting the communication between experts on industrial processes and PLC software developers, as well as the reduction of implementation overhead through full code generation. As a proof of concept, processes for industrial centrifuges are considered. We present the standardized models and tools used to fully generate code for Beckhoff TwinCAT 3. The presented work is the result of a cooperation project at TU Dortmund with the GEA Group AG involving 12 students.

Nils Wortmann, Malte Michel, Stefan Naujokat

RERS Challenge

Frontmatter
RERS 2016: Parallel and Sequential Benchmarks with Focus on LTL Verification

The 5th challenge of Rigorous Examination of Reactive Systems (RERS 2016) once again provided generated and tailored benchmarks suited for comparing the effectiveness of automatic software verifiers. RERS is the only software verification challenge that features problems with linear temporal logic (LTL) properties in larger sizes that are available in different programming languages. This paper describes the revised rules and the refined profile of the challenge, which lowers the entry hurdle for new participants. The challenge format with its three tracks, their benchmarks, and the related LTL and reachability properties are explained. Special emphasis is put on changes that were implemented in RERS compared to former challenges. The competition comprised 18 sequential and 20 parallel benchmarks. The 20 benchmarks from the new parallel track feature LTL properties and a compact representation as labeled transition systems and Promela code.

Maren Geske, Marc Jasper, Bernhard Steffen, Falk Howar, Markus Schordan, Jaco van de Pol

STRESS

Frontmatter
DIME: A Programming-Less Modeling Environment for Web Applications

We present DIME, an integrated solution for the rigorous model-driven development of sophisticated web applications based on the Dynamic Web Application (DyWA) framework, which is designed to accelerate the realization of requirements in agile development environments. DIME provides a family of Graphical Domain-Specific Languages (GDSLs), each of which is tailored towards a specific aspect of typical web applications, including persistent entities (i.e., a data model), business logic in form of various types of process models, the structure of the user interface, and access control. They are modeled at a high level of abstraction in a simplicity-driven fashion that focuses on describing what application is sought, instead of how the application is realized. The choice of platform, programming language, and frameworks is moved to the corresponding (full) code generator.

Steve Boßelmann, Markus Frohme, Dawid Kopetzki, Michael Lybecait, Stefan Naujokat, Johannes Neubauer, Dominic Wirkner, Philip Zweihoff, Bernhard Steffen
Verification Techniques for Hybrid Systems

A brief introduction to the state-of-the-art techniques in verification of hybrid systems is presented. In particular, the hybrid automaton model is introduced, important correctness properties are discussed and a brief overview of the analysis techniques and tools is presented.

Pavithra Prabhakar, Miriam Garcia Soto, Ratan Lal
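
The classic textbook example of the hybrid automaton model introduced in this tutorial is a thermostat: discrete modes with continuous dynamics in each mode, guards triggering mode switches. A minimal Python sketch (not from the tutorial's text; dynamics and constants are the standard toy values), simulated with a simple Euler step:

    def thermostat(t_end=10.0, dt=0.01):
        mode, temp, t = "OFF", 72.0, 0.0
        while t < t_end:
            # flow: continuous evolution depends on the current mode
            dtemp = (-0.1 * temp) if mode == "OFF" else (10.0 - 0.1 * temp)
            temp += dtemp * dt
            # jumps: guards trigger discrete mode switches
            if mode == "OFF" and temp <= 68.0:
                mode = "ON"
            elif mode == "ON" and temp >= 72.0:
                mode = "OFF"
            t += dt
        return mode, round(temp, 2)

    print(thermostat())   # oscillates between roughly 68 and 72 degrees

A typical correctness property for such a model is a safety invariant, e.g. that the temperature always stays within given bounds; verification tools check this over all behaviors rather than a single simulated run.
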
On the Power of Statistical Model Checking

This paper contains material for our tutorial presented at STRESS 2016. This includes an introduction to Statistical Model Checking algorithms and their rare event extensions, as well as an introduction to two well-known SMC tools: Plasma and Uppaal.

Kim G. Larsen, Axel Legay
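
At its core, statistical model checking estimates the probability of a property by sampling independent runs; Hoeffding's inequality gives the number of runs n >= ln(2/delta) / (2*eps^2) needed for an estimate within eps of the true probability with confidence 1-delta. A minimal Python sketch in that spirit (not Plasma or Uppaal code; the random-walk model and all parameters are invented):

    import math
    import random

    def run_once(rng, bound=50, goal=10):
        pos = 0
        for _ in range(bound):
            pos += 1 if rng.random() < 0.55 else -1
            if pos >= goal:
                return True    # property "goal reached within bound" holds
        return False

    def smc(eps=0.02, delta=0.01, seed=1):
        # runs needed so that P(|estimate - p| > eps) <= delta (Hoeffding)
        n = math.ceil(math.log(2 / delta) / (2 * eps ** 2))
        rng = random.Random(seed)
        hits = sum(run_once(rng) for _ in range(n))
        return hits / n, n

    p, n = smc()
    print(f"P(property) ~ {p:.3f} from {n} runs (+/-0.02, 99% confidence)")

Rare-event extensions, as covered in the tutorial, replace this plain Monte Carlo loop with techniques such as importance sampling or splitting, since the required n explodes when the true probability is tiny.
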
Backmatter
Metadata
Title
Leveraging Applications of Formal Methods, Verification and Validation: Discussion, Dissemination, Applications
edited by
Tiziana Margaria
Bernhard Steffen
Copyright Year
2016
Electronic ISBN
978-3-319-47169-3
Print ISBN
978-3-319-47168-6
DOI
https://doi.org/10.1007/978-3-319-47169-3