
2016 | Book

Leveraging Applications of Formal Methods, Verification and Validation: Foundational Techniques

7th International Symposium, ISoLA 2016, Imperial, Corfu, Greece, October 10–14, 2016, Proceedings, Part I


About this book

The two-volume set LNCS 9952 and LNCS 9953 constitutes the refereed proceedings of the 7th International Symposium on Leveraging Applications of Formal Methods, Verification and Validation, ISoLA 2016, held in Imperial, Corfu, Greece, in October 2016.


The papers presented in this volume were carefully reviewed and selected for inclusion in the proceedings. Featuring a track introduction to each section, the papers are organized in topical sections named: statistical model checking; evaluation and reproducibility of program analysis and verification; ModSyn-PP: modular synthesis of programs and processes; semantic heterogeneity in the formal development of complex systems; static and runtime verification: competitors or friends?; rigorous engineering of collective adaptive systems; correctness-by-construction and post-hoc verification: friends or foes?; privacy and security issues in information systems; towards a unified view of modeling and programming; formal methods and safety certification: challenges in the railways domain; RVE: runtime verification and enforcement, the (industrial) application perspective; variability modeling for scalable software evolution; detecting and understanding software doping; learning systems: machine-learning in software products and learning-based analysis of software systems; testing the internet of things; doctoral symposium; industrial track; RERS challenge; and STRESS.

Table of Contents

Frontmatter
Erratum to: Leveraging Applications of Formal Methods, Verification and Validation (Part I)
Tiziana Margaria, Bernhard Steffen

Statistical Model Checking

Frontmatter
Statistical Model Checking: Past, Present, and Future

Statistical Model Checking (SMC) is a compromise between verification and testing in which executions of the system are monitored until a statistical algorithm can produce an estimate of the probability that the system satisfies a given property. The objective of this introduction is to summarize SMC as well as a series of challenges for which contributors at ISoLA propose solutions. Contributions include new SMC toolsets, new flexible SMC algorithms for larger classes of systems, and new applications.

Kim G. Larsen, Axel Legay
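To make the idea above concrete, the following is a minimal, illustrative Monte Carlo sketch of SMC in Python (not taken from any of the tools in this track): it estimates the probability that a simple random walk satisfies a bounded reachability property, with the sample size chosen via the Okamoto/Chernoff-Hoeffding bound. The random walk and all parameter values are invented for illustration.

```python
import math
import random

def simulate_once(p_up=0.4, target=3, horizon=20):
    """One bounded run: does the walk reach `target` within `horizon` steps?"""
    position = 0
    for _ in range(horizon):
        position += 1 if random.random() < p_up else -1
        if position >= target:
            return True
    return False

def smc_estimate(eps=0.01, delta=0.01):
    # Okamoto/Chernoff-Hoeffding bound: this many independent runs guarantee
    # |estimate - true probability| <= eps with confidence at least 1 - delta.
    n = math.ceil(math.log(2 / delta) / (2 * eps ** 2))
    successes = sum(simulate_once() for _ in range(n))
    return successes / n, n

estimate, runs = smc_estimate()
print(f"P(reach +3 within 20 steps) ~ {estimate:.3f} from {runs} runs")
```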
Hypothesis Testing for Rare-Event Simulation: Limitations and Possibilities

One of the main applications of probabilistic model checking is to decide whether the probability of a property of interest is above or below a threshold. Using statistical model checking (SMC), this is done using a combination of stochastic simulation and statistical hypothesis testing. When the probability of interest is very small, one may need to resort to rare-event simulation techniques, in particular importance sampling (IS). However, IS simulation does not yield 0/1-outcomes, as assumed by the hypothesis tests commonly used in SMC, but likelihood ratios that are typically close to zero, but which may also take large values. In this paper we consider two possible ways of combining IS and SMC. One involves a classical IS-scheme from the rare-event simulation literature that yields likelihood ratios with bounded support when applied to a certain (nontrivial) class of models. The other involves a particular hypothesis testing scheme that does not require a priori knowledge about the samples, only that their variance is estimated well.

Daniël Reijsbergen, Pieter-Tjerk de Boer, Werner Scheinhardt
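For context, the sketch below illustrates the kind of hypothesis test the abstract refers to: Wald's sequential probability ratio test over independent 0/1 outcomes. It is a generic textbook sketch (the example system and parameters are invented), and it shows exactly the assumption that importance sampling breaks, since IS returns likelihood ratios rather than Bernoulli outcomes.

```python
import math
import random

def sprt(sample, theta=0.5, indifference=0.05, alpha=0.01, beta=0.01):
    """Decide between H0: p >= theta + indifference and H1: p <= theta - indifference."""
    p0, p1 = theta + indifference, theta - indifference
    accept_h1 = math.log((1 - beta) / alpha)        # Wald's decision thresholds
    accept_h0 = math.log(beta / (1 - alpha))
    llr, runs = 0.0, 0
    while accept_h0 < llr < accept_h1:
        outcome = sample()                          # one 0/1 verdict per simulation run
        runs += 1
        llr += math.log(p1 / p0) if outcome else math.log((1 - p1) / (1 - p0))
    return ("probability is below theta" if llr >= accept_h1
            else "probability is at least theta"), runs

# Example: runs of a system whose true probability of satisfying the property is 0.6.
verdict, runs = sprt(lambda: random.random() < 0.6)
print(verdict, "after", runs, "runs")
```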
Survey of Statistical Verification of Linear Unbounded Properties: Model Checking and Distances

We survey statistical verification techniques aiming at linear properties with unbounded or infinite horizon, as opposed to properties of runs of fixed length. We discuss statistical model checking of Markov chains and Markov decision processes against reachability, unbounded-until, LTL and mean-payoff properties. Moreover, the respective strategies can be represented efficiently using statistical techniques. Further, we also discuss when it is possible to statistically estimate linear distances between Markov chains.

Jan Křetínský
Feedback Control for Statistical Model Checking of Cyber-Physical Systems

We introduce feedback-control statistical system checking (FC-SSC), a new approach to statistical model checking that exploits principles of feedback control for the analysis of cyber-physical systems (CPS). FC-SSC uses stochastic system identification to learn a CPS model, importance sampling to estimate the CPS state, and importance splitting to control the CPS so that the probability that the CPS satisfies a given property can be efficiently inferred. We illustrate the utility of FC-SSC on two example applications, each of which is simple enough to be easily understood, yet complex enough to exhibit all of FC-SSC’s features. To the best of our knowledge, FC-SSC is the first statistical system checker to efficiently estimate the probability of rare events in realistic CPS applications or in any complex probabilistic program whose model is either not available, or is infeasible to derive through static-analysis techniques.

K. Kalajdzic, C. Jegourel, A. Lukina, E. Bartocci, A. Legay, S. A. Smolka, R. Grosu
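As background for the importance-splitting ingredient of FC-SSC, the following hedged sketch shows fixed-effort multilevel splitting on an invented biased random walk: the rare event of reaching a high level is decomposed into intermediate levels and the conditional probabilities are multiplied. It is not the FC-SSC implementation.

```python
import random

P_UP = 0.35                      # probability of a +1 step (downward drift)
LEVELS = [2, 4, 6, 8, 10]        # intermediate levels on the way to the rare event
RUNS_PER_LEVEL = 1000

def advance(position, level):
    """Run the walk until it reaches `level` (success) or falls to -5 (failure)."""
    while -5 < position < level:
        position += 1 if random.random() < P_UP else -1
    return position if position >= level else None

def splitting_estimate():
    states = [0] * RUNS_PER_LEVEL          # every run starts at position 0
    estimate = 1.0
    for level in LEVELS:
        reached = [s for s in (advance(random.choice(states), level)
                               for _ in range(RUNS_PER_LEVEL)) if s is not None]
        if not reached:
            return 0.0                     # no run made it to this level
        estimate *= len(reached) / RUNS_PER_LEVEL
        states = reached                   # the next stage restarts from these states
    return estimate

print(f"P(reach +10 before -5) ~ {splitting_estimate():.2e}")
```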
Probabilistic Model Checking of Incomplete Models

It is crucial for accurate model checking that the model be a complete and faithful representation of the system. Unfortunately, this is not always possible, mainly for two reasons: (i) the model is still under development and (ii) the correctness of the implementation of some modules is not established. In such circumstances, is it still possible to get correct answers for some model checking queries? This paper is a step towards answering this question. We formulate the problem for the Discrete Time Markov Chains (DTMC) modeling formalism and the Probabilistic Computation Tree Logic (PCTL) query language. We then propose a simple solution by modifying DTMC and PCTL to accommodate three-valued logic. The technique builds on existing model checking algorithms and tools, obviating the need for new ones to account for three-valued logic. Finally, we provide an experimental demonstration of our approach.

Shiraj Arora, M. V. Panduranga Rao
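The three-valued flavour of the proposed solution can be illustrated with the standard Kleene connectives, sketched below in Python; the example verdicts are invented and the snippet is not the authors' tool.

```python
TRUE, FALSE, UNKNOWN = "true", "false", "unknown"

def k_not(a):
    return {TRUE: FALSE, FALSE: TRUE, UNKNOWN: UNKNOWN}[a]

def k_and(a, b):
    if FALSE in (a, b):
        return FALSE        # one definite 'false' decides the conjunction
    if UNKNOWN in (a, b):
        return UNKNOWN      # otherwise incompleteness propagates
    return TRUE

def k_or(a, b):
    return k_not(k_and(k_not(a), k_not(b)))

# Verdicts for two sub-properties of a PCTL-style query: the first module is
# fully modelled, the second is still under development.
complete_module, incomplete_module = TRUE, UNKNOWN
print("conjunction:", k_and(complete_module, incomplete_module))   # -> unknown
print("disjunction:", k_or(complete_module, incomplete_module))    # -> true
```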
Plasma Lab: A Modular Statistical Model Checking Platform

We present an overview of Plasma Lab, a modular statistical model checking (SMC) platform that facilitates multiple SMC algorithms, multiple modelling and query languages and has multiple modes of use. Plasma Lab may be used as a stand-alone tool with a graphical development environment or invoked from the command line for high performance scripting applications. Plasma Lab is written in Java for maximum cross-platform compatibility, but it may interface with tools and libraries written in arbitrary programming languages. Plasma Lab’s API also allows it to be incorporated as a library within other tools. We first describe the motivation and architecture of Plasma Lab, then proceed to describe some of its important algorithms, including those for rare events and nondeterminism. We conclude with a number of industrially-relevant case studies and applications.

Axel Legay, Sean Sedwards, Louis-Marie Traonouez
Synthesizing Energy-Optimal Controllers for Multiprocessor Dataflow Applications with Uppaal Stratego

Streaming applications for mobile platforms impose high demands on a system’s throughput and energy consumption. Dynamic system-level techniques have been introduced to reduce power consumption at the expense of performance. We consider DPM (Dynamic Power Management) and DVFS (Dynamic Voltage and Frequency Scaling). The complex programming task now includes mapping and scheduling every task onto a heterogeneous multi-processor hardware platform. Moreover, DPM and DVFS parameters must be controlled to meet all throughput constraints while minimizing the energy consumption. Previous work proposed to automate this process by modeling streaming applications in SDF (Synchronous Data Flow), modeling the processor platform, translating both models to PTA (Priced Timed Automata, where prices model energy), and using Uppaal Cora to compute energy-optimal schedules that adhere to the throughput constraints. In this paper, we experiment with an alternative approach, based on stochastic hybrid games. We investigate the applicability of Uppaal Stratego to first synthesize a permissive controller satisfying a throughput constraint, and then select a near-optimal strategy that additionally minimizes the energy consumption. Our goal is to compare the Uppaal Cora and Uppaal Stratego approaches in terms of modeling effort, results and computation times, and to reveal potential limitations.

Waheed Ahmad, Jaco van de Pol
Statistical Model Checking for Product Lines

We report on the suitability of statistical model checking for the analysis of quantitative properties of product line models by an extended treatment of earlier work by the authors. The type of analysis that can be performed includes the likelihood of specific product behaviour, the expected average cost of products (in terms of the attributes of the products’ features) and the probability of features to be (un)installed at runtime. The product lines must be modelled in QFLan, which extends the probabilistic feature-oriented language PFLan with novel quantitative constraints among features and on behaviour and with advanced feature installation options. QFLan is a rich process-algebraic specification language whose operational behaviour interacts with a store of constraints, neatly separating product configuration from product behaviour. The resulting probabilistic configurations and probabilistic behaviour converge in a discrete-time Markov chain semantics, enabling the analysis of quantitative properties. Technically, a Maude implementation of QFLan, integrated with Microsoft’s SMT constraint solver Z3, is combined with the distributed statistical model checker MultiVeStA, developed by one of the authors. We illustrate the feasibility of our framework by applying it to a case study of a product line of bikes.

Maurice H. ter Beek, Axel Legay, Alberto Lluch Lafuente, Andrea Vandin
Towards Adaptive Scheduling of Maintenance for Cyber-Physical Systems

Scheduling and control of Cyber-Physical Systems (CPS) are becoming increasingly complex, requiring the development of new techniques that can effectively lead to their advancement. This is also the case for failure detection and scheduling component replacements. The large number of factors that influence how failures occur during operation of a CPS may result in maintenance policies that are time-monitoring based, which can lead to suboptimal scheduling of maintenance. This paper investigates how to improve maintenance scheduling of such complex embedded systems, by means of monitoring in real-time the critical components and dynamically adjusting the optimal time between maintenance actions. The proposed technique relies on machine learning classification models in order to classify component failure cases vs. non-failure cases, and on real-time updating of the maintenance policy of the sub-system in question. The results obtained from the domain of printers show that a model that is responsive to the environmental changes can enable consumable savings, while keeping the same product quality, and thus be relevant for industrial purposes.

Alexis Linard, Marcos L. P. Bueno
Better Railway Engineering Through Statistical Model Checking

Maintenance is essential to ensuring the dependability of a technical system. Periodic inspections, repairs, and renewals can prevent failures and extend a system’s lifespan. At the same time, maintenance incurs cost and planned downtime. It is therefore important to find a maintenance policy that balances cost and dependability. This paper presents a framework, fault maintenance trees (FMTs), integrating maintenance into the industry-standard formalism of fault trees. By translating FMTs to priced timed automata and applying statistical model checking, we can obtain system dependability metrics such as system reliability and mean time to failure, as well as costs of maintenance and failures over time, for different maintenance policies. Our framework is flexible and can be extended to include effects specific to the system being analysed. We demonstrate that our framework can be used in practice using two case studies from the railway industry: electrically insulated joints, and pneumatic compressors.

Enno Ruijters, Mariëlle Stoelinga
On Creation and Analysis of Reliability Models by Means of Stochastic Timed Automata and Statistical Model Checking: Principle

The paper presents a method for the creation and analysis of reliability models by means of stochastic timed automata and the statistical model checking approach available in the UPPAAL SMC tool; its application is expected in, but not limited to, the area of electronic systems. The method can be seen as an alternative to classical analytic approaches based on instruments such as fault-tree or Markov reliability models of the above-specified systems. By means of the method, reliability analysis of systems can be facilitated even for adverse conditions such as an inconstant failure (hazard) rate of system components, various fault scenarios or dependencies among components and/or faults. Moreover, the method is applicable to dynamic, evolvable/reconfigurable systems able to add or remove their components and/or change their parameters at run-time; last but not least, it can be utilized to analyze and study reliability in the context of further system parameters such as liveness, safety and/or timing, power and other, application-specific, constraints. Solving such problems is largely beyond the scope of existing methods.

Josef Strnadel
Automatic Synthesis of Code Using Genetic Programming

Correct-by-design automatic system construction can relieve both programmers and quality engineers from part of their tasks. Classical program synthesis involves a series of transformations, starting with the given formal specification. However, this approach is often prohibitively intractable, and in some cases undecidable. Model-checking-based genetic programming provides a method for software synthesis; it uses randomization, together with model checking, to heuristically search for code that satisfies the given specification. We present model-checking-based genetic programming as an alternative to classical transformational synthesis and study its weaknesses and strengths.

Doron Peled
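As a rough illustration of the genetic-programming loop described above, the sketch below evolves small arithmetic expressions against a specification; for simplicity it uses input/output examples as a stand-in for the model-checking-based fitness used in the paper, and all names and parameters are invented.

```python
import random

OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}

def random_tree(depth=3):
    """Generate a random expression tree over x and small integer constants."""
    if depth <= 0 or random.random() < 0.3:
        return random.choice(["x", random.randint(-2, 2)])
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

# Stand-in "specification": input/output pairs playing the role that a model
# checker plays in the paper; the target behaviour here is f(x) = x*x + 1.
SPEC = [(x, x * x + 1) for x in range(-5, 6)]

def fitness(tree):
    return sum(1 for x, y in SPEC if evaluate(tree, x) == y)

def mutate(tree, depth=3):
    """Replace a random subtree with a freshly generated one."""
    if random.random() < 0.2 or not isinstance(tree, tuple):
        return random_tree(depth)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left, depth - 1), right)
    return (op, left, mutate(right, depth - 1))

def synthesize(generations=200, pop_size=100):
    population = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(SPEC):
            return population[0]              # candidate satisfies the full spec
        survivors = population[:pop_size // 4]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)       # best candidate found so far

print(synthesize())
```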

Evaluation and Reproducibility of Program Analysis and Verification

Frontmatter
Evaluation and Reproducibility of Program Analysis and Verification (Track Introduction)

Manual inspection of complex software is costly and error prone. Techniques and tools that do not require manual inspection are sorely needed as our software systems grow at a rapid rate. This track is concerned with methods for the comparative evaluation of program analyses and the tools that implement them. It also addresses the question of how program properties that have been verified can be represented such that they remain reproducible and reusable as intermediate results for other analyses and verification phases. In particular, it is of interest how different tools can be combined to achieve better results than with only one of those tools alone.

Markus Schordan, Dirk Beyer, Jonas Lundberg
Symbolic Execution with CEGAR

Symbolic execution, a standard technique in program analysis, is a particularly successful and popular component in systems for test-case generation. One of the open research problems is that the approach suffers from the path-explosion problem. We apply abstraction to symbolic execution, and refine the abstract model using counterexample-guided abstraction refinement (Cegar), a standard technique from model checking. We also use refinement selection with existing and new heuristics to influence the behavior and further improve the performance of our refinement procedure. We implemented our new technique in the open-source software-verification framework CPAchecker. Our experimental results show that the implementation is highly competitive.

Dirk Beyer, Thomas Lemberger
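For readers unfamiliar with the baseline technique, the following hedged sketch shows plain symbolic execution on a three-line example: each program path yields a path condition that an SMT solver checks for feasibility (it assumes the z3-solver Python package and is not CPAchecker code). The exponential growth in the number of such paths is the path-explosion problem that the CEGAR-based abstraction above targets.

```python
# Assumes the z3-solver package (pip install z3-solver); illustrative only.
from z3 import Int, Not, Solver, sat

x, y = Int("x"), Int("y")

# Program under analysis:
#     if x > 0:
#         if x + y == 10:
#             error()
paths = [
    ([x > 0, x + y == 10], "error() is reachable"),
    ([x > 0, Not(x + y == 10)], "safe path"),
    ([Not(x > 0)], "safe path"),
]

for condition, label in paths:
    solver = Solver()
    solver.add(*condition)          # the path condition for this branch choice
    if solver.check() == sat:
        print(label, "e.g. with inputs", solver.model())
```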
Multi-core Model Checking of Large-Scale Reactive Systems Using Different State Representations

Model checking software systems makes it possible to formally verify that their behavior adheres to certain properties. The state explosion problem presents a major obstacle to model checking due to the implied large concrete state spaces. We present an approach to efficient model checking of large-scale reactive systems that aims at a trade-off between the number of verifiable and falsifiable properties and the required analysis time. Our two-phase approach is based on a parallel state space exploration with explicit states for falsifying linear temporal logic (LTL) properties, and an abstract phase reasoning on the entire state space for verifying LTL properties. This two-phase approach enabled us to win the Rigorous Examination of Reactive Systems (RERS) Challenge in 2014 and 2015. We present a detailed evaluation based on 30 different RERS benchmarks regarding both our verification results and the obtainable parallel speedup.

Marc Jasper, Markus Schordan
Sparse Analysis of Variable Path Predicates Based upon SSA-Form

Static Single Assignment Form benefits data flow analysis by its static guarantees on the definitions and uses of variables. In this paper, we show how to exploit these guarantees to enable a sparse data flow analysis of variable predicates, for gaining a rich predicate-based and path-oriented characterization of the values of program variables.

Thomas S. Heinze, Wolfram Amme
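A toy illustration of the sparse idea, with an invented five-definition SSA program: because every SSA variable has a unique definition, a path predicate can be attached to each definition and propagated along def-use edges only, as sketched below (this is not the authors' analysis).

```python
# var -> (branch guard at its definition site, operand definitions, is it a phi?)
#
#   x1 = input()
#   if x1 > 0:  y1 = x1 + 1   else:  y2 = 0 - x1
#   y3 = phi(y1, y2);  z1 = y3 * 2
ssa_defs = {
    "x1": ("true", [], False),
    "y1": ("x1 > 0", ["x1"], False),
    "y2": ("not (x1 > 0)", ["x1"], False),
    "y3": ("true", ["y1", "y2"], True),
    "z1": ("true", ["y3"], False),
}

def path_predicate(var):
    """Predicate characterising the paths on which this definition receives its value."""
    guard, operands, is_phi = ssa_defs[var]
    operand_preds = [path_predicate(o) for o in operands]
    if is_phi:                                   # a phi node merges incoming paths
        return "(" + " or ".join(operand_preds) + ")"
    parts = [p for p in [guard] + operand_preds if p != "true"]
    return " and ".join(parts) if parts else "true"

for var in ssa_defs:                             # sparse: one result per definition
    print(var, "=>", path_predicate(var))
```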
A Model Interpreter for Timed Automata

In the model-centric approach to model-driven development, the models used are sufficiently detailed to be executed. Being able to execute the model directly, without any intermediate model-to-code translation, has a number of advantages. The model is always up-to-date and runtime updates of the model are possible. This paper presents a model interpreter for timed automata, a formalism often used for modeling and verification of real-time systems. The model interpreter supports real-time system features like simultaneous execution, system wide signals, a ticking clock, and time constraints. Many existing formal representations can be verified, and many existing DSMLs can be executed. It is the combination of being both verifiable and executable that makes our approach rather unique.

M. Usman Iftikhar, Jonas Lundberg, Danny Weyns

ModSyn-PP: Modular Synthesis of Programs and Processes

Frontmatter
ModSyn-PP: Modular Synthesis of Programs and Processes Track Introduction

It is an old and beautiful dream of computer science to synthesize software applications from specifications. The beginnings can be traced back at least to 1957 (Summer Institute of Symbolic Logic, Cornell), when Alonzo Church proposed to consider the problem of automatically constructing a finite-state procedure implementing a given input/output relation over infinite bitstreams specified as a logical formula. The problem, since then widely known as “Church’s Problem”, gave rise to a major branch of theoretical computer science which has been concerned with many different forms of synthesis.

Boris Düdder, George T. Heineman, Jakob Rehof
Combinatory Process Synthesis

We report on a type-theoretic method for functional synthesis of processes from repositories of components. Our method relies on the existing framework for composition synthesis based on combinatory logic, (CL)S. Simple types for BPMN 2.0 components and a taxonomy of domain specific concepts are used to assign types to BPMN 2.0 fragments and functional fragment constructors. Both serve as input for the automatic creation of meaningful processes. Staging synthesis into two levels provides a separation of concerns between the easy task of extracting fragments from existing processes and the more sophisticated task of deducing functional fragment transformations.We study the applicability of the described approach by synthesizing control processes for LEGO® Mindstorms® NXT robots deployed on the Activiti platform. We evaluate experimental results analyzing synthesized processes regarding correctness, variability and the time consumed for their creation by the (CL)S framework. Additionally, the steps necessary to target a different application domain are described.

Jan Bessai, Andrej Dudenhefner, Boris Düdder, Moritz Martens, Jakob Rehof
Synthesis from a Practical Perspective

Based on a very liberal understanding of synthesis as a generic term for techniques that generate programming artifacts from higher-level specifications, the paper discusses several corresponding facets from a practical perspective. The synthesis examples we consider comprise variations of code generation, model synthesis from temporal logic descriptions, and metamodel-based tool generation. Although very different, they can all be regarded as means to “factor out” predefined aspects of the envisioned product or the production environment so that developers/designers can simply focus on the remaining issues. This “factoring out” of what is pre-agreed or predefined is a primary goal of domain-specific language design, and it is applicable to both modeling and programming languages. Leading synthesis techniques elegantly achieve this factoring by combining forms of substitution/partial evaluation for those steps that can be determined locally, as is typically the case for most parts of code generation, with (heuristic) search for those parts where more global patterns need to be matched, as is the case, e.g., for temporal-logic synthesis. The paper presents our experience with a variety of synthesis approaches and corresponding design and synthesis tools. It also discusses the synergetic potential of their combination, e.g., to control the computational complexity by reducing the underlying search space. This is, in our opinion, a viable path to achieve a fundamentally higher simplicity in IT system design.

Sven Jörges, Anna-Lena Lamprecht, Tiziana Margaria, Stefan Naujokat, Bernhard Steffen
A Long and Winding Road Towards Modular Synthesis

This paper offers a personal reflection on a number of attempts over the past decade to apply a variety of approaches to construct a product line for solitaire card games implemented in Java. A product line shares a common set of features developed from a common set of software artifacts. A feature is a unit of functionality within a system that is visible to an end-user and can be used to differentiate members of the product line. The ultimate research goal is to assemble a product line by selecting a configuration of a set of pre-designed modular units and developing new units as necessary for individual members; in short, incorporating configuration into routine development. A secondary goal was to develop a suitable tool chain that could be integrated with existing IDEs to achieve widespread acceptance of the approach. We compare progress against by-hand development in Java. During this period we investigated a number of approaches from the research literature, including components, aspects, and layers; these efforts led to a productive collaboration supported by type theory.

George T. Heineman, Jan Bessai, Boris Düdder, Jakob Rehof

Semantic Heterogeneity in the Formal Development of Complex Systems

Frontmatter
Semantic Heterogeneity in the Formal Development of Complex Systems: An Introduction

Nowadays, the formal development of complex systems (including hardware and/or software) implies the writing, synthesis and analysis of many kinds of models on which properties are expressed and then formally verified. These models provide not only separation of concerns but also the appropriate level of abstraction to ease formal verification. However, building such heterogeneous models can introduce gaps and information loss between the various models, as elements that are explicit in the whole integrated model are explicit only in some concerns and implicit in others. Fully correct development should thus be conducted only on the whole integrated model, whereas separate development is mandatory for the scalability of system development. More precisely, parts of these systems can be defined within contexts, imported and/or instantiated. Such contexts usually represent the implicit elements and associated semantics for these systems. Several relevant properties are defined on these implicit parts according to the formal technique being used. When considering these properties in their context with the associated explicit semantics, they may not be provable, or may even be satisfiable in the limited explicit semantics while being unsatisfiable in the whole semantics including the implicit part. Therefore, the development activities need to be revisited in order to facilitate handling of both the explicit and implicit semantics.

J. Paul Gibson, Idir Aït-Sadoune, Marc Pantel
On the Use of Domain and System Knowledge Modeling in Goal-Based Event-B Specifications

When using formal methods, one of the main difficulties is to elaborate the initial formal specification from informal descriptions obtained during the requirements analysis phase. For that purpose, we propose a goal-based approach in which the building of an initial formal model (in Event-B) is driven by a goal-oriented requirements engineering model (SysML/KAOS). In a previous work, we have defined a set of rules to derive a partial Event-B specification from a goal model. In this paper, we propose to enhance the goal model in order to obtain a more complete formal specification. First, we advocate the specification of a domain ontology in order to share a common understanding of the structure of the different applications of the underlying domain. This is particularly useful for complex systems, as it makes the domain knowledge explicit and clearer. For a specific system, a class diagram and an object diagram are then specified to detail its components and their relationships. Finally, we describe how the ontology and the structural model are translated into Event-B. The proposed approach is illustrated through a landing gear system.

Amel Mammar, Régine Laleau
Strengthening MDE and Formal Design Models by References to Domain Ontologies. A Model Annotation Based Approach

Critical systems run in heterogeneous domains. This heterogeneity is rarely considered explicitly when describing and validating processes. Handling such domain knowledge explicitly increases the robustness of design models thanks to the expression and validation of new properties mined from the domain models. This paper proposes a stepwise approach to enrich design models describing complex information systems with domain knowledge. We use ontologies to model such domain knowledge. Design models are annotated by references to domain ontologies. The resulting annotated models are checked. It becomes possible to verify domain-related properties and obtain strengthened models. The approach is deployed for two design model development approaches: a Model Driven Engineering (MDE) approach and a correct-by-construction formal modeling approach based on refinement and proof using the Event-B method. A case study illustrates both approaches (this work is partially supported by the French ANR-IMPEX project).

Kahina Hacid, Yamine Ait-Ameur
Towards Functional Requirements Analytics

In the era of sharing, several efforts have been launched to construct repositories (referentials/warehouses) storing entities used in enterprise projects (e.g., data, processes, models, APIs, etc.). These repositories augment the business value of the enterprises in terms of reuse, sharing, traceability and analysis. Exploring the literature, we observe the absence of appropriate warehouses dedicated to functional requirements (FR) for analytical purposes. In large-scale software developed in the context of global enterprises involving numerous partners, FR may be very heterogeneous in terms of the vocabularies and formalisms used to model them. Other aspects that have to be handled when constructing an FR warehouse are: (a) the management of interdependencies that may exist among FR and (b) their scheduling. These aspects complicate the construction of such a warehouse. In this paper, we first propose a complete and comprehensive semantic-driven methodology to design FR warehouses. Secondly, all steps of our approach leveraged from traditional warehouse design are highlighted: (i) the definition of the multidimensional model, (ii) adapting the existing ETL (Extract, Transform, Load) operators to deal with FR, where ETL uses reasoning capabilities to eliminate conflicting requirements, (iii) translating the multidimensional model to its corresponding logical model, and (iv) evaluating the performance of the final warehouse. Finally, a proof of concept for our proposal is presented using the Oracle DBMS and the QB4OLAP vocabulary proposed by the W3C Government Linked Data Working Group to facilitate the manipulation of semantic warehouses.

Zouhir Djilani, Nabila Berkani, Ladjel Bellatreche
Heterogeneous Semantics and Unifying Theories

Model-driven development is being used increasingly in the development of modern computer-based systems. In the case of cyber-physical systems (including robotics and autonomous systems) no single modelling solution is adequate to cover all aspects of a system, such as discrete control, continuous dynamics, and communication networking. Instead, a heterogeneous modelling solution must be adopted. We propose a theory engineering technique involving Isabelle/HOL and Hoare & He’s Unifying Theories of Programming. We illustrate this approach with mechanised theories for building a contractual theory of sequential programming, a theory of pointer-based programs, and the reactive theory underpinning CSP’s process algebra. Galois connections provide the mechanism for linking these theories.

Jim Woodcock, Simon Foster, Andrew Butterfield

Static and Runtime Verification: Competitors or Friends?

Frontmatter
Static and Runtime Verification, Competitors or Friends? (Track Summary)

In recent years, significant progress has been made on both static and runtime program verification techniques, focusing on increasing the quality of software. Within this track, we would like to investigate how we can leverage these techniques by combining them. Questions that will be addressed are, for example: what can static verification bring to runtime verification to reduce the impact on execution time and memory use, and what can runtime verification bring to static verification to take over where static verification fails to either scale or provide precise results? One can to some extent consider these two views (static verification supporting runtime verification, and runtime verification supporting static verification) as fundamentally representing the same scenario: prove what can be proved statically, and dynamically analyze the rest.

Dilian Gurov, Klaus Havelund, Marieke Huisman, Rosemary Monahan
StaRVOOrS — Episode II
Strengthen and Distribute the Force

Static and runtime techniques for the verification of programs are complementary. They both have their advantages and disadvantages, and a natural question is whether they may be combined in such a way as to get the advantages of both without inheriting too much from their disadvantages. In a previous contribution to ISoLA’12, we have proposed StaRVOOrS (‘Static and Runtime Verification of Object-Oriented Software’), a unified framework for combining static and runtime verification in order to check data- and control-oriented properties. Returning to ISoLA here, we briefly report on advances since then: a unified specification language for data- and control-oriented properties, a tool for combined static and runtime verification, and experiments. On that basis, we discuss two future research directions to strengthen the power, and broaden the scope, of combined static and runtime verification: (i) to use static analysis techniques to further optimise the runtime monitor, and (ii) to extend the framework to the distributed case.

Wolfgang Ahrendt, Gordon J. Pace, Gerardo Schneider
A Model-Based Approach to Combining Static and Dynamic Verification Techniques

Given the complementary nature of static and dynamic analysis, there has been much work on identifying means of combining the two. In particular, static analysis has been used as a means of alleviating the overheads induced by dynamic analysis, typically by trying to prove parts of the properties, which then need not be verified at runtime. In this paper, we propose a novel framework which combines static with dynamic verification using a model-based approach. The approach allows the support of applications running on untrusted devices whilst using centralised sensitive services whose use is to be tightly regulated. In particular, we discuss how this approach is being adopted in the context of the Open Payments Ecosystem (OPE) — an ecosystem meant to support the development of payment and financial transaction applications with strong compliance verification to enable adoption by payment institutions.

Shaun Azzopardi, Christian Colombo, Gordon Pace
Information Flow Analysis for Go

We present the current state of the art of information flow analyses for Go applications. Based on our findings, we discuss future directions in which static analysis information can be used at runtime to, for example, achieve higher precision or optimise runtime checks. We focus specifically on outstanding language features such as closures and message-based communication via channels.

Eric Bodden, Ka I. Pun, Martin Steffen, Volker Stolz, Anna-Katharina Wickert
Challenges in High-Assurance Runtime Verification

Safety-critical systems are growing more complex and becoming increasingly autonomous. Runtime Verification (RV) has the potential to provide protections when a system cannot be assured by conventional means, but only if the RV itself can be trusted. In this paper, we present a number of challenges to realizing high-assurance RV and illustrate how we have addressed them in our research. We argue that high-assurance RV provides a rich target for automated verification tools, in the hope of fostering closer collaboration among the communities.

Alwyn Goodloe
Static versus Dynamic Verification in Why3, Frama-C and SPARK 2014

Why3 is an environment for static verification, generic in the sense that it is used as an intermediate tool by different front-ends for the verification of Java, C or Ada programs. Yet, the choices made when designing the specification languages provided by those front-ends differ significantly, in particular with respect to the executability of specifications. We review these differences and the issues that result from these choices. We emphasize the specific feature of ghost code which turns out to be extremely useful for both static and dynamic verification. We also present techniques, combining static and dynamic features, that help users understand why static verification fails.

Nikolai Kosmatov, Claude Marché, Yannick Moy, Julien Signoles
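The ghost-code feature highlighted above can be illustrated with a hedged Python analogue: a field that exists only to state the specification, usable both for proof attempts and, as here, for plain dynamic checking. The class and its invariants are invented (Why3 and SPARK erase ghost declarations from the compiled program).

```python
class BoundedStack:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self._ghost_pushes = 0            # ghost: counts every successful push

    def push(self, value):
        assert len(self.items) < self.capacity            # precondition
        self.items.append(value)
        self._ghost_pushes += 1                           # ghost update
        assert self._ghost_pushes >= len(self.items)      # ghost-based invariant

    def pop(self):
        assert self.items                                 # precondition
        value = self.items.pop()
        assert self._ghost_pushes > len(self.items)       # still holds after the pop
        return value

stack = BoundedStack(capacity=2)
stack.push(1)
stack.push(2)
assert stack.pop() == 2
```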
Considering Typestate Verification for Quantified Event Automata

This paper discusses how the existing static analyses developed for typestate properties may be extended to a more expressive class of properties expressible in a specification formalism originally developed for runtime verification. The notion of typestate was introduced as a refinement of the notion of type and captures the allowed operations in certain contexts (states) as a subset of those operations allowed on the type. Typestates therefore represent per-object safety properties. There exist effective static analysis techniques for checking typestate properties, and this has been an area of research since typestates were first introduced in 1986. It has already been observed that common properties monitored in runtime verification activities take the form of typestate properties. Additionally, the notion of typestate has been extended to reflect the more expressive properties seen in this area, and additional static and dynamic analyses have been introduced. This paper considers a highly expressive specification language for runtime verification, quantified event automata, and discusses how these could be viewed as typestate properties and if/how the static analysis techniques could be updated accordingly. The details have not been worked out yet and are not presented; this is intended for later work.

Giles Reger
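As a reminder of what a typestate property looks like operationally, the sketch below monitors the classic Iterator discipline (call hasNext before next) with one small automaton per object, which is roughly what a quantified event automaton does when quantifying over the iterator instance; the encoding is invented for illustration.

```python
VIOLATION, UNSAFE, SAFE = "violation", "unsafe", "safe"

# Transition table of the typestate automaton.
TRANSITIONS = {
    (UNSAFE, "hasNext_true"): SAFE,
    (UNSAFE, "hasNext_false"): UNSAFE,
    (UNSAFE, "next"): VIOLATION,       # next() without a successful hasNext()
    (SAFE, "next"): UNSAFE,            # the guarantee is consumed by next()
    (SAFE, "hasNext_true"): SAFE,
    (SAFE, "hasNext_false"): UNSAFE,
}

class TypestateMonitor:
    def __init__(self):
        self.state = {}                # one automaton state per monitored object

    def event(self, obj, name):
        current = self.state.get(obj, UNSAFE)
        if current == VIOLATION:
            return VIOLATION           # violations are sticky
        self.state[obj] = TRANSITIONS.get((current, name), current)
        return self.state[obj]

monitor = TypestateMonitor()
it = object()                          # stands in for one iterator instance
print(monitor.event(it, "hasNext_true"))   # safe
print(monitor.event(it, "next"))           # unsafe
print(monitor.event(it, "next"))           # violation
```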
Combining Static and Runtime Methods to Achieve Safe Standing-Up for Humanoid Robots

Due to its complexity, the standing-up task for robots is highly challenging, and often implemented by hand-scripting the strategy that the robot should execute. In this paper we aim at improving on the approach of a scripted stand-up strategy by making it more stable and safe. To achieve this aim, we apply both static and runtime methods by integrating reinforcement learning, static analysis and runtime monitoring techniques.

Francesco Leofante, Simone Vuotto, Erika Ábrahám, Armando Tacchella, Nils Jansen
On Combinations of Static and Dynamic Analysis – Panel Introduction

Model checking [1] deals with the problem of deciding whether all runs of a system under scrutiny satisfy a given specification. As typically infinite runs of the system are of interest, a dynamic analysis of complete runs is not possible. Hence, model checking is necessarily a static analysis technique. Runtime verification [2], on the other hand, concentrates on prefixes of runs of the system, as it typically considers the actual, necessarily finite run of the system. So while both model checking and runtime verification consider runs of the system under scrutiny, they have a different focus and are applied at a different level of abstraction. In this presentation we elaborate on the similarities and differences of model checking and runtime verification, yet we mainly introduce questions to be studied in the future. We recall the work pointed out in [3] putting runtime verification and (LTL-based) model checking into the same formal framework.

Martin Leucker
Safer Refactorings

Refactorings often require semantic correctness conditions that amount to software model checking. However, IDEs such as Eclipse’s Java Development Tools implement far simpler checks on the structure of the code. This leads to the phenomenon that a seemingly innocuous refactoring can change the behaviour of the program. In this paper we demonstrate our technique of introducing runtime checks for two particular refactorings for the Java programming language: Extract And Move Method, and Extract Local Variable. These checks can, in combination with unit tests, detect changed behaviour and allow identification of which specific refactoring step introduced the deviant behaviour.

Anna Maria Eilertsen, Anya Helene Bagge, Volker Stolz
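The behaviour-changing refactoring discussed above can be reproduced in a few lines; the following hedged Python analogue (the paper itself targets Java and Eclipse JDT) shows Extract Local Variable altering a result because of an intervening side effect, together with the kind of runtime check the authors propose.

```python
def original():
    items = [1, 2, 3]
    total = len(items)            # first occurrence of the expression
    items.append(0)               # side effect between the two occurrences
    return total + len(items)     # second occurrence sees 4 elements -> 3 + 4 = 7

def refactored():
    items = [1, 2, 3]
    n = len(items)                # Extract Local Variable applied to len(items)
    total = n
    items.append(0)
    return total + n              # both occurrences now see 3 elements -> 3 + 3 = 6

def refactored_with_runtime_check():
    items = [1, 2, 3]
    n = len(items)
    total = n
    items.append(0)
    # re-evaluate the extracted expression where it originally occurred and
    # flag any deviation -- the style of check proposed in the paper
    assert n == len(items), "Extract Local Variable changed observable behaviour"
    return total + n

print(original(), refactored())   # 7 6 -- the refactoring changed the result
```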

Rigorous Engineering of Collective Adaptive Systems

Frontmatter
Rigorous Engineering of Collective Adaptive Systems Track Introduction

Today’s software systems are becoming increasingly distributed and decentralized, and it is important that they adapt autonomously to dynamically changing, open-ended environments. Often the nodes of such systems have their own individual properties and objectives; interactions with other nodes or with humans may lead to the emergence of unexpected phenomena. We call such systems collective adaptive systems. Examples of collective adaptive systems are robot swarms, smart cities, voluntary peer-to-peer clouds as well as socio-technical systems and the internet of things.

Stefan Jähnichen, Martin Wirsing
Programming of CAS Systems by Relying on Attribute-Based Communication

In most distributed systems, named connections (i.e., channels) are used as the means for programming interaction between communicating partners. These kinds of connections are low level and usually totally independent of the knowledge, the status, the capabilities, ..., in one word, of the attributes of the interacting partners. We have recently introduced a calculus, called AbC, in which interactions among agents are dynamically established by taking into account “connection” as determined by predicates over agent attributes. In this paper, we present a Java run-time environment that has been developed to support modeling and programming of collective adaptive systems by relying on the communication primitives of the AbC calculus. Systems are described as sets of parallel components; each component is equipped with a set of attributes, and communications among components take place in an implicit multicast fashion. By means of a number of examples, we also show how opportunistic behaviors, achieved by run-time attribute updates, can be exploited to express different communication and interaction patterns and to program challenging case studies.

Yehia Abd Alrahman, Rocco De Nicola, Michele Loreti
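A minimal sketch of attribute-based interaction in the spirit of AbC (not the authors' run-time environment): a send is addressed by a predicate over receiver attributes rather than by channels or names, so delivery is an implicit multicast. The components and attributes below are invented.

```python
class Component:
    def __init__(self, **attributes):
        self.attributes = attributes
        self.inbox = []

    def receive(self, message):
        self.inbox.append(message)

class Ensemble:
    def __init__(self, components):
        self.components = components

    def send(self, sender, message, predicate):
        """Implicit multicast: deliver to every component whose attributes match."""
        for component in self.components:
            if component is not sender and predicate(component.attributes):
                component.receive(message)

robots = [Component(role="explorer", battery=80),
          Component(role="explorer", battery=15),
          Component(role="collector", battery=60)]
ensemble = Ensemble(robots)

# "Send to every explorer with enough battery", with no channels or names involved.
ensemble.send(robots[2], "assist me",
              lambda attrs: attrs["role"] == "explorer" and attrs["battery"] > 50)

print([c.inbox for c in robots])   # only the first explorer received the message
```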
Towards Static Analysis of Policy-Based Self-adaptive Computing Systems

For supporting the design of self-adaptive computing systems, the PSCEL language offers a principled approach that relies on declarative definitions of adaptation and authorisation policies enforced at runtime. Policies permit managing system components by regulating their interactions and by dynamically introducing new actions to accomplish task-oriented goals. However, the runtime evaluation of policies and their effects on system components make the prediction of system behaviour challenging. In this paper, we introduce the construction of a flow graph that statically points out the policy evaluations that can take place at runtime and exploit it to analyse the effects of policy evaluations on the progress of system components.

Andrea Margheri, Hanne Riis Nielson, Flemming Nielson, Rosario Pugliese
A Calculus for Open Ensembles and Their Composition

We consider specifications of dynamically evolving ensembles consisting of entities which collaborate through message exchange. Each ensemble specification defines a set of messages, a set of process type declarations and an initial ensemble state. An ensemble state is given by a set of process instances that can trigger the creation of further process instances during ensemble evolution. We distinguish between internal and external messages of an ensemble. Internal messages are exchanged between the participants of a single ensemble while the external messages can be considered as ensemble interfaces which give rise to a composition operator for open ensemble specifications. A structural operational semantics for open ensemble specifications is provided based on two levels: a process and an ensemble level. We define an equivalence relation for ensemble specifications which generalizes bisimulation to dynamic architectures. As a main result we prove that equivalence of ensemble specifications is preserved by ensemble composition. We also introduce a semantic composition operator on the level of labeled transition systems and show that it is compatible with the syntactic composition of ensemble specifications; i.e. our semantics is compositional.

Rolf Hennicker
Logic Fragments: Coordinating Entities with Logic Programs

Rigorous engineering of self-organising and self-adaptive systems is a challenging activity. Interactions with humans and unexpected entities, and dependence on contextual information for self-organisation and adaptation, represent just some of the factors complicating the coordination process among the multiple entities of the system. Recently we proposed a coordination model based on logic inference named the Logic Fragments Coordination Model. Logic Fragments are combinations of logic programs defining interactions among agents distributed over the nodes of the system. They are able to accommodate various types of logics, ranging from classical up to many-valued paraconsistent ones. The logical formalisation makes it possible to express coordination in a rigorous and predictable way, both at design-time and run-time. In this paper we define, in the form of an evaluation algorithm, the semantics of Logic Fragments, introducing logical predicates used to manage and reason about local and remote information. By associating specific semantics with the symbols inferred during the evaluation of logic programs it is possible to make the effects of logical inference on the system unambiguous; such an approach turns Logic Fragments into a coordination-oriented logic-based programming model. We conclude the paper by discussing three examples showing the use of Logic Fragments to implement on-the-fly ad-hoc coordination mechanisms, as well as design-time and run-time verification of spatial properties.

Francesco Luca De Angelis, Giovanna Di Marzo Serugendo
Mixed-Critical Systems Design with Coarse-Grained Multi-core Interference

Autonomic concurrent systems that are timing-critical and compute-intensive need special resource managers in order to ensure adaptation to unexpected situations in terms of compute resources. So-called mixed-criticality managers may be required that adapt system resource usage to critical run-time situations (e.g., overheating, overload, hardware errors) by giving the highly critical subset of system functions priority over low-critical ones in emergency situations. Another challenge comes from the fact that modern platforms – multi- and many-cores – make the scheduling problem more complicated because of their inherent parallelism and because of “parasitic” interference between the cores due to shared hardware resources (buses, FPUs, DMAs, etc.). In our work-in-progress design flow we provide a so-called concurrency language for expressing, at a high abstraction level, new emerging custom resource management policies that can handle these challenges. We compile the application into a representation in this language and combine the result with a resource manager into a joint software design used to deploy the given system on the target platform. In this context, we discuss our work in progress on a scheduler that aims to handle the interference in mixed-critical applications by controlling it at the task level.

Peter Poplavko, Rany Kahil, Dario Socci, Saddek Bensalem, Marius Bozga
A Library and Scripting Language for Tool Independent Simulation Descriptions

In modeling and simulation it is often necessary to simulate a model with a variety of settings and evaluate the simulation results against measured data or previously acquired results. As doing this manually is error-prone and ineffective, scripting languages are often used to automate this process. In general a simulation description is tool and model dependent. Therefore, simulating the same model with the same simulation description in different simulation tools, or comparing two different models with the same settings, is often not easily achieved. We propose an object-oriented, tool-independent, easy-to-use, domain-specific scripting language to describe simulations in an exchangeable and uniform manner. Through this simulation description the simulation settings and the simulation environment can easily be changed while syntax and sequence of commands remain the same. The language is Python based and is designed to be simple, readable and intuitive even with marginal programming experience while maintaining Python’s strengths. The language uses an in-house Python library which provides interfaces to different simulation environments (so far Dymola, OpenModelica, Simulink). This library can also be used directly in Python, enabling experienced Python users to keep describing their simulations in Python while benefiting from our efforts to achieve tool independence.

Alexandra Mehlhase, Stefan Jähnichen, Amir Czwink, Robert Heinrichs
Adaptation to the Unforeseen: Do we Master our Autonomous Systems? Questions to the Panel – Panel Introduction

This short paper gives an introduction to a panel held as part of the track on “Rigorous Engineering of Collective Adaptive Systems” at ISOLA 2016. The moderator Stefan Jähnichen (TU Berlin) and the panelists Saddek Bensalem (VERIMAG), Michele Loreti (University of Florence), Giovanna di Marzo Serugendo (University of Geneva), and Emil Vassev (LERO) discussed how to master the engineering of autonomous systems that have to cope with unforeseen events and situations. The discussion was structured along 14 questions ranging from the evolution and universality of autonomous systems to correctness, reliability, and legal issues.

Stefan Jähnichen, Martin Wirsing
Smart Coordination of Autonomic Component Ensembles in the Context of Ad-Hoc Communication

Smart Cyber-Physical Systems (sCPS) are complex distributed decentralized systems that typically operate in an uncertain environment and thus have to be resilient to both network and individual node failures. At the same time, sCPS are commonly required to exhibit complex smart coordination while being limited in terms of resources such as network. However, optimizing network usage in a general sCPS coordination framework while maintaining the system function is complex. To better enable this, we allow incorporating key network parameters and constraints into the architecture, realized as an extension of the autonomic component ensembles paradigm. We show that when chosen well, these parameters make it possible to improve network resource usage without hampering the system utility too much. We demonstrate the parameter selection on a mobile gossip-based sCPS coordination scenario and use simulation to show the impact on overall system utility.

Tomas Bures, Petr Hnetynka, Filip Krijt, Vladimir Matena, Frantisek Plasil
A Tool-Chain for Statistical Spatio-Temporal Model Checking of Bike Sharing Systems

Prominent examples of collective systems are often encountered when analysing smart cities and smart transportation systems. We propose a novel modelling and analysis approach combining statistical model checking, spatio-temporal logics, and simulation. The proposed methodology is applied to modelling and statistical analysis of user behaviour in bike sharing systems. We present a tool-chain that integrates the statistical analysis toolkit MultiVeStA, the spatio-temporal model checker topochecker, and a bike sharing systems simulator based on Markov renewal processes. The obtained tool allows one to estimate, up to a user-specified precision, the likelihood of specific spatio-temporal formulas, such as the formation of clusters of full stations and their temporal evolution.

Vincenzo Ciancia, Diego Latella, Mieke Massink, Rytis Paškauskas, Andrea Vandin
Rigorous Graphical Modelling of Movement in Collective Adaptive Systems

Formal modelling provides valuable intellectual tools which can be applied to the problem of analysis and optimisation of systems. In this paper we present a novel software tool which provides a graphical approach to modelling of Collective Adaptive Systems (CAS) with constrained movement. The graphical description is translated into a model that can be analysed to understand the dynamic behaviour of the system. This generated model is expressed in CARMA, a modern feature-rich modelling language designed specifically for modelling CAS. We demonstrate the use of the software tool with an example scenario representing carpooling, in which travellers group together and share a car in order to reach a common destination. This can reduce their travel time and travel costs, whilst also ameliorating traffic congestion by reducing the number of vehicles on the road.

N. Zoń, S. Gilmore, J. Hillston
Integration and Promotion of Autonomy with the ARE Framework

The integration and promotion of autonomy in software-intensive systems is an extremely challenging task. Among the many challenges the engineers must overcome are those related to the elicitation and expression of autonomy requirements. Striving to solve this problem, Lero, the Irish Software Engineering Research Center, has developed an Autonomy Requirements Engineering (ARE) approach within the mandate of a joint project with ESA, the European Space Agency. The approach is intended to help system engineers tackle the integration and promotion of autonomy in software-intensive systems, e.g., space-exploration robots. To handle autonomy requirements, ARE provides a requirements engineering baseline in which, despite their principal differences in application domain and functionality, all autonomous and self-adaptive systems are expected to extend the regular software-intensive systems upstream with special self-managing objectives (self-* objectives). Basically, the self-* objectives provide the system’s ability to automatically discover, diagnose, and cope with various problems. ARE emphasizes this ability as being driven by the system’s degree of autonomicity, quality and quantity of knowledge, awareness and monitoring capabilities, and quality attributes such as adaptability, dynamicity, robustness, resilience, and mobility. As part of its successful validation, ARE was applied to capture the autonomy requirements for ESA’s BepiColombo unmanned space exploration mission.

Emil Vassev, Mike Hinchey
Safe Artificial Intelligence and Formal Methods
(Position Paper)

In one aspect of our life or another, today we all live with AI. For example, the mechanisms behind the search engines operating on the Internet do not just retrieve information, but also constantly learn how to respond more rapidly and usefully to our requests. Although framed by its human inventors, this AI is getting stronger and more powerful every day, and may in the future go beyond the original human intentions. One of the major questions emerging along with the propagation of AI in both technology and life is that of safety in AI. This paper presents the author’s view of how formal methods can assist us in building safer and more reliable AI.

Emil Vassev
Engineering Adaptivity, Universal Autonomous Systems Ethics and Compliance Issues
ISOLA’2016 - Panel Discussion Position Paper

This paper summarises some of the discussion held during the panel of the ISOLA’2016 conference on whether artificial systems actually adapt to unforeseen situations and whether we master autonomous adaptive systems. We focus here on three questions: (1) What is a collective adaptive system and what are the elements to consider when engineering a collective adaptive system? (2) What type of universal autonomous system can we envision and what for? and finally (3) How are we considering and integrating ethics, trust, privacy and compliance to laws and regulations in adaptive systems?

Giovanna Di Marzo Serugendo

Correctness-by-Construction and Post-hoc Verification: Friends or Foes?

Frontmatter
Correctness-by-Construction and Post-hoc Verification: Friends or Foes?

While correctness-by-construction and post-hoc verification are traditionally considered to provide two opposing views on proving software systems to be free from errors, nowadays numerous techniques and application fields witness initiatives that try to integrate elements of both ends of the spectrum. The ultimate aim is not merely to improve the correctness of software systems but also to improve their time-to-market, and to do so at a reasonable cost. This track brings together researchers and practitioners interested in the inherent ‘tension’ that is usually felt when trying to balance the pros and cons of correctness-by-construction versus post-hoc verification.

Maurice H. ter Beek, Reiner Hähnle, Ina Schaefer
Correctness-by-Construction and Post-hoc Verification: A Marriage of Convenience?

Correctness-by-construction (CbC), traditionally based on weakest precondition semantics, and post-hoc verification (PhV) aspire to ensure functional correctness. We argue for a lightweight approach to CbC where lack of formal rigour increases productivity. In order to mitigate the risk of accidentally introducing errors during program construction, we propose to complement lightweight CbC with PhV. We introduce lightweight CbC by example and discuss strength and weaknesses of CbC and PhV and their combination, both conceptually and using a case study.

Bruce W. Watson, Derrick G. Kourie, Ina Schaefer, Loek Cleophas
Deductive Verification of Legacy Code

Deductive verification is about proving that a piece of code conforms to a given requirement specification. For legacy code, this task is notoriously hard for three reasons: (1) writing specifications post hoc is much more difficult than producing code and its specification simultaneously, (2) verification does not scale because legacy code is often badly modularized, and (3) legacy code may be written in a way that makes verification require frequent user interaction. We give examples of characteristics of (imperative) legacy code that impede the specification and verification effort. We also discuss how to handle the challenges of legacy code verification and suggest a strategy for post-hoc verification, together with possible improvements to existing verification approaches. We draw our experience from two case studies on the verification of imperative implementations (in Java and C) in which we verified legacy software, i.e., code that was provided by a third party and was not designed to be verified.
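
The following hypothetical Java fragment (not taken from the paper’s case studies) illustrates the first difficulty: a post-hoc JML-style contract has to make the implicit assumptions of legacy code, here an unchecked parallel-array table, explicit before a deductive verifier such as KeY can be applied.

```java
/** Hypothetical legacy-style class: the post-hoc contract must spell out
 *  assumptions (non-null arrays, equal lengths) that the original author
 *  never wrote down. */
public class LegacyLookup {
    private int[] keys;    // implicitly assumed non-null
    private int[] values;  // implicitly assumed same length as keys

    //@ requires keys != null && values != null && keys.length == values.length;
    //@ ensures (\exists int i; 0 <= i && i < keys.length; keys[i] == k)
    //@     ==> (\exists int i; 0 <= i && i < keys.length;
    //@              keys[i] == k && \result == values[i]);
    public int lookup(int k) {
        for (int i = 0; i < keys.length; i++) {
            if (keys[i] == k) return values[i];
        }
        return -1;  // error code instead of exception: a typical legacy idiom
    }
}
```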

Bernhard Beckert, Thorsten Bormer, Daniel Grahl
Correctness-by-Construction ∧ Taxonomies ⇒ Deep Comprehension of Algorithm Families

Correctness-by-construction (CbC) is an approach for developing algorithms in line with rigorous correctness arguments. A high-level specification is evolved into an implementation in a sequence of small, tractable refinement steps, guaranteeing that the resulting implementation is correct. CbC facilitates the design of algorithms that are more efficient and more elegant than code that is hacked into correctness. In this paper, we discuss another benefit of CbC, namely that it supports the deep comprehension of algorithm families. We organise the different refinements of the algorithms carried out during CbC-based design in a taxonomy. The constructed taxonomy provides a classification of the commonality and variability of the algorithm family and, hence, provides deep insights into the structural relationships within the family. Such taxonomies, together with the implementation of the algorithms as toolkits, provide an excellent starting point for extractive and proactive software product line engineering.

Loek Cleophas, Derrick G. Kourie, Vreda Pieterse, Ina Schaefer, Bruce W. Watson
Conditions for Compatibility of Components
The Case of Masters and Slaves

We consider systems composed of reactive components that collaborate through synchronised execution of common actions. These multi-component systems are formally represented as team automata, a model that allows a wide spectrum of synchronisation policies to combine components into higher-level systems. We investigate the correct-by-construction engineering of such systems of systems from the point of view of correct communications between the components (no message loss or deadlocks due to indefinite waiting). This leads to a proposal for a generic definition of compatibility of components relative to the adopted synchronisation policy. This definition appears to be particularly appropriate for so-called master-slave synchronisations by which input actions (for ‘slaves’) are driven by output actions (from ‘masters’).
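
The sketch below is a much-simplified, hedged rendering of such a compatibility check in Java: in every reachable composite state, every output a master offers must be receivable by at least one slave, otherwise the output would be lost. The data types, the fixed broadcast-style policy, and the exploration strategy are our simplifications; the paper’s definition is parametric in the synchronisation policy.

```java
import java.util.*;

/** Simplified master-slave compatibility check: no output of the master may
 *  ever be lost because no slave can receive it. */
final class Compat {
    // transitions.get(state).get(action) = successor state (deterministic sketch)
    record Component(Map<String, Map<String, String>> transitions,
                     Set<String> outputs, String initial) {
        Set<String> enabled(String state) {
            return transitions.getOrDefault(state, Map.of()).keySet();
        }
    }

    static boolean compatible(Component master, List<Component> slaves) {
        Deque<List<String>> todo = new ArrayDeque<>();
        Set<List<String>> seen = new HashSet<>();
        List<String> init = new ArrayList<>();
        init.add(master.initial());
        slaves.forEach(s -> init.add(s.initial()));
        todo.push(init);
        while (!todo.isEmpty()) {
            List<String> state = todo.pop();
            if (!seen.add(state)) continue;
            for (String a : master.enabled(state.get(0))) {
                if (!master.outputs().contains(a)) continue;  // only outputs drive slaves
                List<String> next = new ArrayList<>(state);
                next.set(0, master.transitions().get(state.get(0)).get(a));
                boolean received = false;
                for (int i = 0; i < slaves.size(); i++) {
                    String succ = slaves.get(i).transitions()
                            .getOrDefault(state.get(i + 1), Map.of()).get(a);
                    if (succ != null) { received = true; next.set(i + 1, succ); }
                }
                if (!received) return false;  // output would be lost: incompatible
                todo.push(next);
            }
        }
        return true;
    }
}
```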

Maurice H. ter Beek, Josep Carmona, Jetty Kleijn
A Logic for the Statistical Model Checking of Dynamic Software Architectures

Dynamic software architectures emerge when addressing important features of contemporary systems, which often operate in dynamic environments subject to change. Such systems are designed to be reconfigured over time while maintaining important properties, e.g., availability, correctness, etc. Verifying that reconfiguration operations make the system meet the desired properties remains a major challenge. First, the verification process itself often becomes difficult when using exhaustive formal methods (such as model checking) due to the potentially infinite state space. Second, it is necessary to express the properties to be verified in a notation able to cope with the dynamic nature of these systems. Aiming to tackle these issues, we introduce DynBLTL, a new logic tailored to express both structural and behavioral properties of dynamic software architectures. Furthermore, we propose using statistical model checking (SMC) to support an efficient analysis of these properties by estimating the probability of meeting them through a number of simulations. In this paper, we describe the main features of DynBLTL and how it was implemented as a plug-in for PLASMA, a statistical model checker.
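
DynBLTL’s concrete syntax is not reproduced here; the Java sketch below only illustrates the generic statistical estimation that a PLASMA-style checker performs once a checker for individual simulation traces is available. The bound used for the number of runs and all names are our assumptions.

```java
import java.util.function.Supplier;

/** Generic SMC sketch: estimate the probability that a simulated trace
 *  satisfies a property, using a Chernoff-Hoeffding bound on the number of
 *  simulations needed for precision epsilon with confidence 1 - delta. */
final class SmcEstimator {

    /** Runs needed so that P(|estimate - p| > epsilon) <= delta. */
    static long requiredRuns(double epsilon, double delta) {
        return (long) Math.ceil(Math.log(2.0 / delta) / (2.0 * epsilon * epsilon));
    }

    static double estimate(Supplier<Boolean> simulateAndCheck,
                           double epsilon, double delta) {
        long n = requiredRuns(epsilon, delta);
        long successes = 0;
        for (long i = 0; i < n; i++) {
            if (simulateAndCheck.get()) successes++;  // one simulation + property check
        }
        return (double) successes / n;
    }

    public static void main(String[] args) {
        java.util.Random rng = new java.util.Random(42);
        // Toy "system": a reconfiguration satisfies the property with probability 0.9.
        double p = estimate(() -> rng.nextDouble() < 0.9, 0.01, 0.05);
        System.out.printf("estimated probability = %.3f%n", p);
    }
}
```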

Jean Quilbeuf, Everton Cavalcante, Louis-Marie Traonouez, Flavio Oquendo, Thais Batista, Axel Legay
On Two Friends for Getting Correct Programs
Automatically Translating Event B Specifications to Recursive Algorithms in Rodin

We report on our progress to date in implementing a software development environment that integrates the efforts of two formal software engineering techniques: program refinement as supported by Event B, and program verification as supported by the Spec# programming system. Our objective is to improve the usability of formal verification tools by providing a general framework for integrating these two approaches to software verification. We show how the two approaches, Correctness-by-Construction and Post-hoc Verification, can be used in a productive way. Here, we focus on the final steps in this process, where the final concrete specification is transformed into an executable algorithm. We present EB2RC, a plug-in for the Rodin platform that reads in an Event B model and uses the control framework introduced during its refinement to generate a graphical representation of the executable algorithm. EB2RC also generates a recursive algorithm that is easily translated into executable code. We illustrate our technique through case studies and their analysis.

Zheng Cheng, Dominique Méry, Rosemary Monahan
Proof-Carrying Apps: Contract-Based Deployment-Time Verification

For extensible software platforms in safety-critical domains, it is important that deployed plug-ins work as specified. This is especially true with the prospect of allowing third parties to add plug-ins. We propose a contract-based approach for deployment-time verification. Every plug-in guarantees its functional behavior under a specific set of assumptions about its environment. With proof-carrying apps, we generalize proof-carrying code from proofs to artifacts that facilitate deployment-time verification, where the expected behavior is specified by means of design-by-contract. With proof artifacts, the conformance of apps to environment assumptions is checked during deployment, even on resource-constrained devices. This procedure prevents unsafe operation caused by unintended programming mistakes as well as intentional malicious behavior. We discuss which criteria a formal verification technique has to fulfill to be applicable to proof-carrying apps and evaluate the verification tools KeY and Soot for this purpose.
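
The following Java sketch (all types are placeholders, not an actual KeY or Soot API) outlines the deployment-time step: the platform installs a plug-in only if the shipped proof artifact certifies the plug-in’s contract under the platform’s environment assumptions.

```java
/** Hypothetical deployment-time check for a proof-carrying app. */
interface ProofArtifact {
    /** Cheap re-check of the shipped proof; far cheaper than re-verification. */
    boolean certifies(String contract, String environmentAssumptions);
}

record PluginPackage(byte[] code, String contract, ProofArtifact proof) {}

final class Deployer {
    private final String environmentAssumptions;

    Deployer(String environmentAssumptions) {
        this.environmentAssumptions = environmentAssumptions;
    }

    boolean install(PluginPackage app) {
        // Reject the app if the proof does not cover this platform's assumptions;
        // this blocks both honest mistakes and malicious plug-ins.
        if (!app.proof().certifies(app.contract(), environmentAssumptions)) {
            return false;
        }
        // ... load app.code() into the platform ...
        return true;
    }
}
```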

Sönke Holthusen, Michael Nieke, Thomas Thüm, Ina Schaefer
Supervisory Controller Synthesis for Product Lines Using CIF 3

Using the CIF 3 toolset, we illustrate the general idea of controller synthesis for product line engineering for a prototypical example of a family of coffee machines. The challenge is to integrate a number of given components into a family of products such that the resulting behaviour is guaranteed to respect an attributed feature model as well as additional behavioural requirements. The proposed correctness-by-construction approach incrementally restricts the composed behaviour by subsequently incorporating feature constraints, attribute constraints and temporal constraints. The procedure as presented focusses on synthesis, but leaves ample opportunity to handle e.g. uncontrollable behaviour, dynamic reconfiguration, and product- and family-based analysis.
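
The toy Java check below (feature names, attributes, and constraints are invented) only illustrates the kind of attributed feature-model and attribute constraints that the synthesised supervisor must respect; the paper obtains this restriction by supervisory controller synthesis in CIF 3 rather than by enumerating products.

```java
import java.util.Map;
import java.util.Set;

/** Toy attributed feature-model check for a coffee-machine family. */
final class CoffeeFamily {

    static boolean validProduct(Set<String> features, Map<String, Integer> cost) {
        boolean mandatory = features.contains("Machine") && features.contains("Water");
        // Cross-tree constraint: cappuccino requires a milk unit.
        boolean crossTree = !features.contains("Cappuccino") || features.contains("Milk");
        // Attribute constraint: total cost of selected features stays within budget.
        int total = features.stream().mapToInt(f -> cost.getOrDefault(f, 0)).sum();
        return mandatory && crossTree && total <= 100;
    }

    public static void main(String[] args) {
        System.out.println(validProduct(
            Set.of("Machine", "Water", "Cappuccino", "Milk"),
            Map.of("Machine", 50, "Water", 10, "Cappuccino", 20, "Milk", 15)));  // true
    }
}
```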

Maurice H. ter Beek, Michel A. Reniers, Erik P. de Vink
Partial Verification and Intermediate Results as a Solution to Combine Automatic and Interactive Verification Techniques

Many of the current verification approaches can be classified into automatic and interactive techniques, each having different strengths and weaknesses. Thus, one of the current open problems is to design solutions to combine the two approaches and accelerate technology transfer. We outline four existing techniques that might be able to contribute to combination solutions: (1) Conditional model checking is a technique that gives detailed information (in the form of a condition) about the verified state space, i.e., informs the user (or tools later in a tool chain) of the outcome. Also, it accepts as input detailed information (again as a condition) about what the conditional model checker has to do. (2) Correctness witnesses, stored in a machine-readable exchange format, contain (partial) invariants that can be used to prove the correctness of a system. For example, tools that usually expect invariants from the user can read the invariants from such correctness witnesses and ask the user only for the remaining invariants. (3) Abstraction-refinement-based approaches that use a dynamically adjustable precision (such as lazy CEGAR approaches) can be provided with invariants from the user or from other tools, e.g., from deductive methods. This way, the approach can succeed in constructing a proof even if it was not able to come up with the required invariant. (4) The technique of path invariants extracts (in a CEGAR method) a path program that represents an interesting part of the program for which an invariant is needed. Such a path program can be given to an expensive (or interactive) method for computing invariants that can then be fed back to a CEGAR method to continue verifying the large program. While the existing techniques originate from software verification, we believe that the new combination ideas are useful for verifying general systems.
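
As a toy illustration of item (2), the Java loop below records its invariant as a comment; this is exactly the piece of information a correctness witness would carry in a machine-readable format so that an automatic verifier need not rediscover it. The example is ours, not from the paper.

```java
/** The invariant below is the kind of fact an interactive tool or a user can
 *  contribute via a correctness witness; a re-checking verifier then only has
 *  to confirm it rather than search for it. */
final class WitnessExample {

    /** Requires n >= 0. */
    static int triangular(int n) {
        int sum = 0;
        int i = 0;
        // Witness invariant: 0 <= i <= n  &&  sum == i * (i + 1) / 2
        while (i < n) {
            i++;
            sum += i;
        }
        // With the invariant and loop exit (i == n), sum == n*(n+1)/2 follows directly.
        return sum;
    }
}
```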

Dirk Beyer

Privacy and Security Issues in Information Systems

Frontmatter
Security and Privacy of Protocols and Software with Formal Methods

The protection of users’ data in conformance with best practice and legislation is one of the main challenges in computer science. Very often, large-scale data leaks remind us that the state of the art in data privacy and anonymity is severely lacking. The complexity of modern systems makes it impossible for software architects to create secure software that correctly implements privacy policies without the help of automated tools. The academic community needs to invest more effort in the formal modeling of security and anonymity properties, providing a deeper understanding of the underlying concepts and challenges and allowing the creation of automated tools to help software architects and developers. This track provides numerous contributions to the formal modeling of security and anonymity properties and to the creation of tools to verify them on large-scale software projects.

Fabrizio Biondi, Axel Legay
A Model-Based Approach to Secure Multiparty Distributed Systems

In distributed systems with completely distributed interactions between parties with mutual distrust, it is hard to control the (illicit) flow of private information to unintended parties. Unlike existing methods dealing with the verification of low-level cryptographic protocols, we propose a novel model-based approach, based on model transformations, to build a secure-by-construction multiparty distributed system. First, starting from a component-based model of the system, the designer annotates different parts of it in order to define the security policy. Then the security policy is checked and, when valid, a secure distributed model consistent with the desired policy is automatically generated. To illustrate the approach, we present a framework that implements our method and use it to secure an online social network application.

Najah Ben Said, Takoua Abdellatif, Saddek Bensalem, Marius Bozga
Information Leakage Analysis of Complex C Code and Its Application to OpenSSL

The worldwide attention generated by the Heartbleed bug has demonstrated even to the general public the potentially devastating consequences of information leaks. While substantial academic work has been done in the past on information leaks, these works have so far not satisfactorily addressed the challenges of automated analysis of real-world complex C code. On the other hand, effective working solutions rely on ad-hoc principles that have little or no theoretical justification. The foremost contribution of this paper is to bridge this chasm between advanced theoretical work and the concrete practical needs of programmers developing real-world software. We present an analysis, based on clear security principles and verification tools, which is largely automatic and effective in detecting information leaks in complex C code running every day on millions of systems worldwide.
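
The toy example below (written in Java for consistency with the other sketches in this overview, although the paper targets C) shows the kind of secret-dependent observable behaviour such an analysis flags, and why leakage is naturally quantified in bits: returning the position of the first mismatch reveals more about the secret than a single accept/reject bit.

```java
/** Toy information-flow example: the first method's return value depends on the
 *  secret beyond the unavoidable accept/reject outcome; a quantitative analysis
 *  would measure that extra leakage in bits. */
final class LeakExample {

    /** Leaky: the caller learns where the guess first differs from the secret. */
    static int compareLeaky(byte[] secret, byte[] guess) {
        int n = Math.min(secret.length, guess.length);
        for (int i = 0; i < n; i++) {
            if (secret[i] != guess[i]) return i;  // observable output depends on secret prefix
        }
        return secret.length == guess.length ? -1 : n;
    }

    /** Tighter: only the single accept/reject bit is revealed by the result. */
    static boolean compareWholeArray(byte[] secret, byte[] guess) {
        if (secret.length != guess.length) return false;
        int diff = 0;
        for (int i = 0; i < secret.length; i++) {
            diff |= secret[i] ^ guess[i];  // no early exit, single-bit outcome
        }
        return diff == 0;
    }
}
```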

Pasquale Malacaria, Michael Tautschnig, Dino Distefano
Integrated Modeling Workflow for Security Assurance

Cyber-physical systems are generally composed of several software components executing on different processors, interconnected through entities that can be represented as buses. These complex systems collocate functions operating at different security levels, which can introduce unexpected interactions that affect system security. The security policy for these systems is realized through various complex physical or logical mechanisms. The security policy, as a stakeholder goal, is then refined into system requirements and implementation constraints that are used to guarantee security objectives. Unfortunately, verifying the correct decomposition and its enforcement in the system architecture is an overwhelming task. To overcome these issues, requirements must be clearly specified, traced through the system architecture, and automatically verified throughout the development process. In this report, we introduce a modeling framework for the design and validation of requirements from a security perspective. It is composed of a new language for requirements specification, an extension of the Architecture Analysis & Design Language for specifying security, and a set of theorems for checking the requirements against the architecture. The framework provides the capability to validate the requirements of several candidate architectures and to iterate on the models to cope with the impact of changes to requirements and architecture during development. This model-based approach helps software architects and developers detect requirements and architecture issues early in the development life cycle and avoid the propagation of their effects during integration.

Min-Young Nam, Julien Delange, Peter Feiler
A Privacy-Aware Conceptual Model for Handling Personal Data

Handling personal data adequately is one of the biggest challenges of our era. Consequently, laws and regulations, such as the European General Data Protection Regulation (GDPR), are being released that attempt to deal with this challenging issue early on. The core question motivating this work is how software developers can validate their technical design vis-à-vis the prescriptions of the privacy legislation. In this paper, we first outline the technical concepts related to privacy that need to be taken into consideration in a software design. Second, we extend a popular design notation in order to support the privacy concepts introduced in the first step. Third, we show how some of the prescriptions of privacy legislation and standards may be related to a technical design that employs our enriched notation, which facilitates reasoning about compliance.

Thibaud Antignac, Riccardo Scandariato, Gerardo Schneider
Guaranteeing Privacy-Observing Data Exchange

Privacy is a major concern in large parts of the world when exchanging information. Ideally, we would like to have fine-grained control over how information that we deem sensitive can be propagated and used. While privacy policy languages exist, it is not possible to control whether the entity that receives data lives up to its own policy specification. We present initial work on an approach that empowers data owners to specify their privacy preferences, and data consumers to specify their data needs. Using a static analysis of the two specifications, our approach then finds a communication scheme that complies with these preferences and needs. While applicable to online transactions, the same techniques can be used in the development of IT systems dealing with sensitive data. To the best of our knowledge, no existing privacy policy language supports negotiation of policies; they support only yes/no answers. We also discuss how the same approach can be used to identify a qualitative level of sharing, where data may be shared according to, e.g., the level of trust in another entity.
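
The Java sketch below shows only the core matching step under strong simplifications (flat item-to-purposes policies, invented names): a consumer’s declared needs are granted only if every requested item/purpose pair is covered by the owner’s preferences. The paper’s static analysis works on richer specifications and can also negotiate a qualitative level of sharing rather than returning a plain yes/no.

```java
import java.util.Map;
import java.util.Set;

/** Minimal matching of a data owner's privacy preferences against a data
 *  consumer's declared needs. */
final class PolicyMatch {

    static boolean compliant(Map<String, Set<String>> ownerAllows,
                             Map<String, Set<String>> consumerNeeds) {
        // Every requested (item, purpose) pair must be allowed by the owner.
        return consumerNeeds.entrySet().stream().allMatch(need ->
            ownerAllows.getOrDefault(need.getKey(), Set.of())
                       .containsAll(need.getValue()));
    }

    public static void main(String[] args) {
        Map<String, Set<String>> owner = Map.of("email", Set.of("billing"));
        Map<String, Set<String>> consumer = Map.of("email", Set.of("billing", "marketing"));
        System.out.println(compliant(owner, consumer));  // false: marketing not allowed
    }
}
```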

Christian W. Probst
Backmatter
Metadata
Title
Leveraging Applications of Formal Methods, Verification and Validation: Foundational Techniques
Edited by
Tiziana Margaria
Bernhard Steffen
Copyright Year
2016
Electronic ISBN
978-3-319-47166-2
Print ISBN
978-3-319-47165-5
DOI
https://doi.org/10.1007/978-3-319-47166-2