
About this Book

This book constitutes the refereed proceedings of the 21st International Symposium on Formal Methods, FM 2016, held in Limassol, Cyprus, in November 2016. The 38 full papers and 11 short papers presented together with one abstract of an invited talk and one invited presentation were carefully reviewed and selected from 162 submissions. The broad topics of interest for FM include: interdisciplinary formal methods; formal methods in practice; tools for formal methods; the role of formal methods in software and systems engineering; and theoretical foundations.

Table of Contents

Frontmatter

Erratum to: Simulink to UPPAAL Statistical Model Checker: Analyzing Automotive Industrial Systems

Predrag Filipovikj, Nesredin Mahmud, Raluca Marinescu, Cristina Seceleanu, Oscar Ljungkrantz, Henrik Lönn

Invited Presentations

Frontmatter

Industrial-Strength Model-Based Testing of Safety-Critical Systems

In this article we present an industrial-strength approach to automated model-based testing. This approach is applied by Verified Systems International GmbH in safety-critical verification and validation projects in the avionic, railway, and automotive domains. The SysML modelling formalism is used for creating test models. Associating SysML with a formal behavioural semantics allows for full automation of the whole workflow, as soon as the model, including SysML requirements-tracing information, has been elaborated. The presentation highlights how certain aspects of formal methods are key enablers for achieving the degree of automation that is needed for effectively testing today’s safety-critical systems with acceptable effort and the degree of comprehensiveness required by the applicable standards. It is also explained which requirements from industry and from certification authorities have to be considered when designing test automation tools fit for integration into the verification and validation workflow set up for complex system developments. From the collection of scientific challenges, the following questions are addressed. (1) What is the formal equivalent to traceable requirements and associated test cases? (2) How can requirements-based, property-based, and model-based testing be effectively automated? (3) Which test strategies provide guaranteed test strength, independent of the syntactic representation of the model?

Jan Peleska, Wen-ling Huang

Research Track

Frontmatter

Counter-Example Guided Program Verification

This paper presents a novel counter-example guided abstraction refinement algorithm for the automatic verification of concurrent programs. Our algorithm proceeds in several steps. It first constructs an abstraction of the original program by slicing away a given subset of variables. Then, it uses an external model checker as a backend tool to analyze the correctness of the abstract program. If the model checker returns that the abstract program is safe, then we conclude that the original one is also safe. If the abstract program is unsafe, we extract an “abstract” counter-example. In order to check whether the abstract counter-example can lead to a real counter-example of the original program, we add back to the abstract counter-example all the omitted variables (that have been sliced away) to obtain a new program. Then, we recursively call our algorithm on the newly obtained program. If the recursive call returns that the new program is unsafe, then we conclude that the original program is also unsafe, and our algorithm terminates. Otherwise, we refine the abstract program by removing the abstract counter-example from its set of possible runs. Finally, we repeat the procedure with the refined abstract program. We have implemented our algorithm and run it successfully on the concurrency benchmarks in SV-COMP15. Our experimental results show that our algorithm significantly improves the performance of the backend tool.

Parosh Aziz Abdulla, Mohamed Faouzi Atig, Bui Phi Diep
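
A minimal sketch of the recursive loop described in the abstract, in Python. The helper names (slice_away, model_check, add_back, exclude_run) are placeholders for the paper's machinery, not an actual API, and the full algorithm recurses with a further subset of sliced variables:

```python
def verify(program, hidden_vars, backend):
    """Counter-example guided verification by variable slicing (sketch)."""
    abstract = backend.slice_away(program, hidden_vars)   # over-approximation
    while True:
        verdict, abs_cex = backend.model_check(abstract)  # external backend tool
        if verdict == "safe":
            return "safe"            # abstraction safe => original program safe
        # add the sliced variables back along the abstract counter-example
        candidate = backend.add_back(program, abs_cex, hidden_vars)
        if verify(candidate, [], backend) == "unsafe":
            return "unsafe"          # the counter-example is real
        # spurious: remove this run from the abstraction and repeat
        abstract = backend.exclude_run(abstract, abs_cex)
```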

Tighter Reachability Criteria for Deadlock-Freedom Analysis

We combine a prior incomplete deadlock-freedom-checking approach with two new reachability techniques to create a more precise deadlock-freedom-checking framework for concurrent systems. The reachability techniques that we propose are based on the analysis of individual components of the system: we use static analysis to summarise the behaviour that might lead components to a given system state, and we analyse this summary to assess whether components can cooperate to reach that state. We implement this new framework in a tool called DeadlOx. This implementation encodes the proposed deadlock-freedom analysis as a satisfiability problem that is later checked by a SAT solver. We demonstrate by a series of practical experiments that this tool is more accurate than (and as efficient as) similar incomplete techniques for deadlock-freedom analysis.

Pedro Antonino, Thomas Gibson-Robinson, A. W. Roscoe

Compositional Parameter Synthesis

We address the problem of parameter synthesis for parametric timed systems (PTS). The motivation comes from industrial configuration problems for production lines. Our method consists in compositionally generating over-approximations for the individual components of the input systems, which are translated, together with global properties, to $$\exists\forall$$-SMT problems. Our translation forms the basis for optimised and robust parameter synthesis for slightly richer models than PTS.

Lacramioara Aştefănoaei, Saddek Bensalem, Marius Bozga, Chih-Hong Cheng, Harald Ruess
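
To make the $$\exists\forall$$ shape concrete, here is a toy exists-forall query in Z3's Python API. The parameter and property are invented for illustration; the paper's encodings are over timed-automata over-approximations:

```python
from z3 import Real, ForAll, Implies, And, Solver, sat

p = Real('p')   # design parameter to synthesize (existential)
t = Real('t')   # universally quantified variable (e.g. time)

s = Solver()
# exists p . forall t in [0, 10] . t <= p   -- a stand-in for a global property
s.add(ForAll(t, Implies(And(t >= 0, t <= 10), t <= p)))

if s.check() == sat:
    print(s.model())   # any p >= 10 satisfies the property
```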

Combining Mechanized Proofs and Model-Based Testing in the Formal Analysis of a Hypervisor

Virtualization engines play a critical role in many modern software products. In an effort to gain definitive confidence in critical components, our company has invested in the formal verification of the NOVA micro-hypervisor, following recent advances in similar academic and industrial operating-system verification projects. There are inherent difficulties in applying formal methods to low-level implementations, and even more so under the specific constraints arising in commercial software development. To deal with these, the chosen approach splits the verification effort by combining the definition of an abstract model of NOVA, the verification of fundamental security properties over this model, and testing the conformance of the model w.r.t. the NOVA implementation. This article reports on our experiences in applying formal methods to verify a hypervisor for commercial purposes. It describes the verification approach and the security properties under consideration, and reports the results obtained.

Hanno Becker, Juan Manuel Crespo, Jacek Galowicz, Ulrich Hensel, Yoichi Hirai, César Kunz, Keiko Nakata, Jorge Luis Sacchini, Hendrik Tews, Thomas Tuerk

A Model Checking Approach to Discrete Bifurcation Analysis

Bifurcation analysis is a central task in the analysis of parameterised high-dimensional dynamical systems that undergo transitions as parameters are changed. The classical numerical and analytical methods are typically limited to a small number of system parameters. In this paper we propose a novel approach to bifurcation analysis that is based on a suitable discrete abstraction of the system and employs model checking for discovering critical parameter values, referred to as bifurcation points, for which various kinds of behaviour (equilibrium, cycling) appear or disappear. To describe such behaviour patterns, called phase portraits, we use a hybrid version of CTL augmented with direction formulae. We demonstrate the method on a case study taken from systems biology.

Nikola Beneš, Luboš Brim, Martin Demko, Samuel Pastva, David Šafránek

State-Space Reduction of Non-deterministically Synchronizing Systems Applicable to Deadlock Detection in MPI

The paper is motivated by non-deterministic synchronizations in MPI (Message Passing Interface), where some send operations and collective operations may or may not synchronize; a correctly written MPI program must account for both options. Here we focus on deadlock detection in such systems and propose the following reduction of the explored state space. The system is first analyzed without forcing the respective synchronizations, applying standard partial-order reduction methods. Then a dedicated algorithm searches for potentially missed deadlocks caused by synchronization. In practical examples this approach leads to major reductions of the explored state space in comparison to encoding the synchronization options into the state-space search directly. The algorithm is presented as a stand-alone abstract framework that can also be applied to future versions of MPI as well as to other related problem domains.

Stanislav Böhm, Ondřej Meca, Petr Jančar

Formal Verification of Multi-Paxos for Distributed Consensus

This paper describes formal specification and verification of Lamport’s Multi-Paxos algorithm for distributed consensus. The specification is written in TLA+, Lamport’s Temporal Logic of Actions. The proof is written and checked using TLAPS, a proof system for TLA+. Building on Lamport, Merz, and Doligez’s specification and proof for Basic Paxos, we aim to facilitate the understanding of Multi-Paxos and its proof by minimizing the difference from those for Basic Paxos, and to demonstrate a general way of proving other variants of Paxos and other sophisticated distributed algorithms. We also discuss our general strategies for proving properties about sets and tuples that helped the proof check succeed in significantly reduced time.

Saksham Chand, Yanhong A. Liu, Scott D. Stoller

Validated Simulation-Based Verification of Delayed Differential Dynamics

Verification by simulation, based on covering the set of time-bounded trajectories of a dynamical system evolving from the initial state set by means of a finite sample of initial states plus a sensitivity argument, has recently attracted interest due to the availability of powerful simulators for rich classes of dynamical systems. System models addressed by such techniques involve ordinary differential equations (ODEs) and can readily be extended to delay differential equations (DDEs). In doing so, however, the lack of validated solvers for DDEs enforces the use of numeric approximations, such that the resulting verification procedures have to resort to (rather strong) assumptions on the numerical accuracy of the underlying simulators, which lack formal validation or proof. In this paper, we pursue a closer integration of the numeric solving and the sensitivity-related state-bloating algorithms underlying verification by simulation, together yielding a safe enclosure algorithm for DDEs suitable for use in automated formal verification. The key ingredient is an on-the-fly computation of piecewise linear, local error bounds by nonlinear optimization, with the error bounds uniformly covering sensitivity information concerning initial states as well as integration error.

Mingshuai Chen, Martin Fränzle, Yangjia Li, Peter N. Mosaad, Naijun Zhan
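
For readers unfamiliar with the model class: a (constant-delay) DDE differs from an ODE in that the derivative depends on a past state, so the initial condition is a whole function segment rather than a point. In standard notation:

```latex
\dot{x}(t) = f\bigl(x(t),\, x(t-\tau)\bigr) \quad \text{for } t \ge 0,
\qquad
x(t) = \varphi(t) \quad \text{for } t \in [-\tau, 0],
```

where $\tau > 0$ is the delay and $\varphi$ the initial segment.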

Towards Learning and Verifying Invariants of Cyber-Physical Systems by Code Mutation

Cyber-physical systems (CPS), which integrate algorithmic control with physical processes, often consist of physically distributed components communicating over a network. A malfunctioning or compromised component in such a CPS can lead to costly consequences, especially in the context of public infrastructure. In this short paper, we argue for the importance of constructing invariants (or models) of the physical behaviour exhibited by CPS, motivated by their applications to the control, monitoring, and attestation of components. To achieve this despite the inherent complexity of CPS, we propose a new technique for learning invariants that combines machine learning with ideas from mutation testing. We present a preliminary study on a water treatment system that suggests the efficacy of this approach, propose strategies for establishing confidence in the correctness of invariants, then summarise some research questions and the steps we are taking to investigate them.

Yuqi Chen, Christopher M. Poskitt, Jun Sun

From Electrical Switched Networks to Hybrid Automata

In this paper, we propose a novel symbolic approach to automatically synthesize a Hybrid Automaton (HA) from a switched electrical network. The input network consists of a set of physical components interconnected according to some reconfigurable network topology. The underlying model defines a local dynamics for each component in terms of a Differential-Algebraic Equation (DAE), and a set of network topologies by means of discrete switches. Each switch configuration induces a different topology, where the behavior of the system is given by hybrid differential-algebraic equations. Two relevant problems for these networks are validation and reformulation. The first consists of determining if the network admits an Ordinary Differential Equation (ODE) that describes its dynamics; the second consists of obtaining such an ODE from the initial DAE. This step is a key enabler for using existing formal verification tools that can cope with ODEs but not with DAEs. Since the number of network topologies is exponential in the number of switches, we first propose a technique based on Satisfiability Modulo Theories (SMT) that solves the validation problem symbolically, avoiding the explicit enumeration of the topologies. Then, we show an SMT-based algorithm that reformulates the network into a symbolic HA. The algorithm avoids explicitly enumerating the topologies by clustering them according to equivalent continuous dynamics. We implemented the approach with several optimizations and compared it with the explicit enumeration of configurations. The results demonstrate the scalability of our technique.

Alessandro Cimatti, Sergio Mover, Mirko Sessa

Danger Invariants

Static analysers search for overapproximating proofs of safety, commonly known as safety invariants. Conversely, static bug finders (e.g. Bounded Model Checking) give evidence for the failure of an assertion in the form of a counterexample trace. As opposed to safety invariants, the size of a counterexample depends on the depth of the bug, i.e., the length of the execution trace prior to the error state, which also determines the computational effort required to find it. We propose a way of expressing danger proofs that is independent of the depth of bugs. Essentially, such danger proofs constitute a compact representation of a counterexample trace, which we call a danger invariant. Danger invariants summarise sets of traces that are guaranteed to be able to reach an error state. Our conjecture is that such danger proofs will enable the design of bug-finding analyses whose computational effort is independent of the depth of bugs, and that can thus find deep bugs more efficiently. As an exemplar of an analysis that uses danger invariants, we design a bug-finding technique based on a synthesis engine. We implemented this technique and compute danger invariants for intricate programs taken from SV-COMP 2016.

Cristina David, Pascal Kesseli, Daniel Kroening, Matt Lewis
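
A toy illustration of the idea (our example, not one from the paper): the assertion below fails only after a million iterations, so any counterexample trace is huge, while a danger invariant summarising all failing runs stays small.

```python
i = 0
while i < 1000000:
    i += 1
assert i < 1000000   # fails, but only at trace depth ~10^6

# A danger invariant for the loop head could be  0 <= i <= 1000000:
# the initial state satisfies it, and every state satisfying it can
# still reach the violating state i == 1000000, so it certifies the
# bug without enumerating the million-step counterexample trace.
```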

Local Planning of Multiparty Interactions with Bounded Horizons

Dynamic scheduling of distributed real-time systems with multiparty interactions is acknowledged to be a very hard task. For such systems, multiple schedulers are used to coordinate the parallel activities of remotely running components. In order to ensure global consistency and satisfaction of timing constraints, these schedulers must cope with significant communication delays while, moreover, using only point-to-point message passing as the communication primitive on the platform. In this paper, we investigate a formal model for such systems as compositions of timed automata subject to multiparty interactions, and we propose a distributed implementation method aiming to overcome the communication-delay problem by planning interactions ahead of time. Moreover, we identify static conditions that allow planning decisions to be made locally by the different schedulers, and thus decrease the overall coordination overhead. The method has been implemented, and we report preliminary results on benchmarks.

Mahieddine Dellabani, Jacques Combaz, Marius Bozga, Saddek Bensalem

Finding Suitable Variability Abstractions for Family-Based Analysis

For program families (Software Product Lines), specially designed variability-aware static (dataflow) analyses allow analyzing all variants (products) of the family simultaneously, in a single run, without generating any of the variants explicitly. They are also known as lifted or family-based analyses. Variability-aware analyses may be too costly or even infeasible for families with a large number of variants. In order to make them computationally cheaper, we can apply variability abstractions, which aim to tame the combinatorial explosion of the number of variants (configurations) and reduce it to something more tractable. However, the number of possible abstractions is still intractably large to search naively, with most abstractions being too imprecise or too costly. In this work, we propose a technique to efficiently find suitable variability abstractions from a large family of abstractions for a variability-aware static analysis. The idea is to use a pre-analysis to estimate the impact of variability-specific parts of the program family on the analysis’s precision. Then we use the pre-analysis results to find out when and where the analysis should turn its variability-awareness on or off. We demonstrate the practicality of this approach on several Java benchmarks.

Aleksandar S. Dimovski, Claus Brabrand, Andrzej Wąsowski

Recovering High-Level Conditions from Binary Programs

The need to gain confidence in binary programs without access to their source code has pushed efforts forward to directly analyze executable programs. However, low-level programs lack high-level structures (such as types, a control-flow graph, etc.), preventing the straightforward application of source-code analysis techniques. In particular, conditional jumps rely on low-level flag predicates, whereas they often encode high-level “natural” conditions on program variables. Most static analyzers are unable to infer any interesting information from these low-level conditions, leading to serious precision loss compared with source-level analysis. In this paper, we propose template-based recovery, an automatic approach for retrieving high-level predicates from their low-level flag versions. The technique is sound, efficient, platform-independent, and achieves a very high recovery ratio. This method allows more precise analyses and helps to understand machine encodings of conditionals, rather than relying on error-prone human interpretation or (syntactic) pattern-based reasoning.

Adel Djoudi, Sébastien Bardin, Éric Goubault
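
As a concrete instance of the low-level/high-level gap (standard x86 semantics; our illustration, not the paper's templates): after `cmp a, b`, each conditional jump tests a flag predicate that encodes a natural comparison.

```python
# Flag predicates set by "cmp a, b" and the high-level conditions
# they encode (signed vs. unsigned comparisons use different flags).
HIGH_LEVEL_AFTER_CMP = {
    "je":  ("ZF",             "a == b"),
    "jne": ("not ZF",         "a != b"),
    "jl":  ("SF != OF",       "a < b   (signed)"),
    "jle": ("ZF or SF != OF", "a <= b  (signed)"),
    "jb":  ("CF",             "a < b   (unsigned)"),
    "jbe": ("CF or ZF",       "a <= b  (unsigned)"),
}
```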

Upper and Lower Amortized Cost Bounds of Programs Expressed as Cost Relations

Resource analysis aims at statically obtaining bounds on the resource consumption of programs in terms of their input parameters. A well-known approach to resource analysis is based on transforming the target program into a set of cost relations and then solving these into a closed-form bound. In this paper we develop a new analysis for computing upper and lower cost bounds of programs expressed as cost relations. The analysis is compositional: it computes the cost of each loop or function separately and composes the obtained expressions to obtain the total cost. Despite being modular, the analysis can obtain precise upper and lower bounds of programs with amortized cost. The key is to obtain bounds that depend on the values of the variables at the beginning and at the end of each program part. In addition, we use a novel cost representation called the cost structure. It reduces the inference of complex polynomial expressions to a set of linear problems that can be solved efficiently. We implemented our method and performed an extensive experimental evaluation that demonstrates its power.

Antonio Flores-Montoya
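
For intuition, a simple countdown loop such as `while (n > 0) n--` gives rise to a cost relation of the kind solved here (a textbook-style example, not taken from the paper), with closed-form bound $\max(0, n)$:

```latex
C(n) = 0            \quad \{\, n \le 0 \,\} \\
C(n) = 1 + C(n')    \quad \{\, n > 0,\; n' = n - 1 \,\}
```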

Exploring Model Quality for ACAS X

The next generation airborne collision avoidance system, ACAS X, aims to provide robustness through a probabilistic model that represents sources of uncertainty. From this model, dynamic programming produces a look-up table that is used to give advisories to the pilot in real time. The model is not present in the final system and is therefore not included in the standard certification processes. Rather, the model is checked indirectly, by ensuring that ACAS X performs as well as, or better than, the state-of-the-art, TCAS. We claim that to build confidence in such systems, it is important to target model quality directly. We investigate this issue of model quality as part of our research on informing certification standards for autonomy. Using ACAS X as our driving example, we study the relationship between the probabilistic model and the real world, in an attempt to characterize the quality of the model for the purpose of building ACAS X. This paper presents model conformance metrics, their application to ACAS X, and the results that we obtained from our study.

Dimitra Giannakopoulou, Dennis Guck, Johann Schumann

Learning Moore Machines from Input-Output Traces

The problem of learning automata from example traces (but without equivalence or membership queries) is fundamental in automata learning theory and practice. In this paper we study this problem for finite state machines with inputs and outputs, and in particular for Moore machines. We develop three algorithms for solving this problem: (1) the PTAP algorithm, which transforms a set of input-output traces into an incomplete Moore machine and then completes the machine with self-loops; (2) the PRPNI algorithm, which uses the well-known RPNI algorithm for automata learning to learn a product of automata encoding a Moore machine; and (3) the MooreMI algorithm, which directly learns a Moore machine using PTAP extended with state merging. We prove that MooreMI has the fundamental identification-in-the-limit property. We also compare the algorithms experimentally in terms of the size of the learned machine and several notions of accuracy introduced in this paper. Finally, we compare with OSTIA, an algorithm that learns a more general class of transducers, and find that OSTIA generally does not learn a Moore machine, even when fed with a characteristic sample.

Georgios Giantamidis, Stavros Tripakis
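
A minimal sketch of the PTAP step as we read it from the abstract (representation choices are ours): build the prefix tree of the input-output traces as an incomplete Moore machine, then complete it with self-loops.

```python
def ptap(traces, alphabet):
    """Prefix-tree construction for Moore-machine learning (sketch).
    Each trace is a list of (input, output) pairs, where the output
    labels the state reached after the input; state 0 is initial."""
    trans, out = {}, {0: None}   # transitions and Moore outputs
    fresh = 1
    for trace in traces:
        s = 0
        for inp, o in trace:
            if (s, inp) not in trans:
                trans[(s, inp)] = fresh
                out[fresh] = None
                fresh += 1
            s = trans[(s, inp)]
            out[s] = o           # consistent traces agree here
    # completion: every missing transition becomes a self-loop
    for s in list(out):
        for inp in alphabet:
            trans.setdefault((s, inp), s)
    return trans, out
```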

Modal Kleene Algebra Applied to Program Correctness

Modal Kleene algebras are relatives of dynamic logics that support program construction and verification by equational reasoning. We describe their application in implementing versatile program correctness components in interactive theorem provers such as Isabelle/HOL. Starting from a weakest precondition based component with a simple relational store model, we show how variants for Hoare logic, strongest postconditions and program refinement can be built in a principled way. Modularity of the approach is demonstrated by variants that capture program termination and recursion, memory models for programs with pointers, and program trace semantics.

Victor B. F. Gomes, Georg Struth

Mechanised Verification Patterns for Dafny

In Dafny, the program text is used both to specify and to implement programs in the same language [24]. A fully automated theorem prover then verifies that the implementation satisfies the specification. However, the prover often needs further guidance from the user, and another role of the language is to provide such necessary hints and guidance. In this paper, we present a set of verification patterns to support this process. In previous work, we developed a tactic language for Dafny, in which users can encode their verification patterns and re-apply them for several proof tasks [16]. We extend this language with new features, implement our patterns in this tactic language, and show, through experiments, the generality of the patterns and the applicability of the tactic language.

Gudmund Grov, Yuhui Lin, Vytautas Tumas

Formalising and Validating the Interface Description in the FMI Standard

The Functional Mock-up Interface (FMI) aims to support interoperability in an interdisciplinary formal methods setting by describing an interface between different formal models in a tool co-simulation setting. However, the FMI standard describes the requirements for the static limitations on the interfaces in an informal manner, using tables and textual descriptions. In this short paper we demonstrate how such static constraints can be formalised using the Vienna Development Method Specification Language, and how the formalisation can be examined and validated exhaustively. Afterwards we present how this can be transferred into code in order to develop a tool usable by anyone applying the FMI standard, enabling a more well-founded basis in such an interdisciplinary setting through a formal description of the FMI interface.

Miran Hasanagić, Peter W. V. Tran-Jørgensen, Kenneth Lausdahl, Peter Gorm Larsen

An Algebra of Synchronous Atomic Steps

This research started with an algebra for reasoning about rely/guarantee concurrency for a shared memory model. The approach taken led to a more abstract algebra of atomic steps, in which atomic steps synchronise (rather than interleave) when composed in parallel. The algebra of rely/guarantee concurrency then becomes an interpretation of the more abstract algebra. Many of the core properties needed for rely/guarantee reasoning can be shown to hold in the abstract algebra where their proofs are simpler and hence allow a higher degree of automation. Moreover, the realisation that the synchronisation mechanisms of standard process algebras, such as CSP and CCS/SCCS, can be interpreted in our abstract algebra gives evidence of its unifying power. The algebra has been encoded in Isabelle/HOL to provide a basis for tool support.

Ian J. Hayes, Robert J. Colvin, Larissa A. Meinicke, Kirsten Winter, Andrius Velykis

Error Invariants for Concurrent Traces

Error invariants are assertions that over-approximate the reachable program states at a given position in an error trace while only capturing states that will still lead to failure if execution of the trace is continued from that position. Such assertions reflect the effect of statements that are involved in the root cause of an error and its propagation, enabling slicing of statements that do not contribute to the error. Previous work on error invariants focused on sequential programs. We generalize error invariants to concurrent traces by augmenting them with additional information about hazards such as write-after-write events, which are often involved in race conditions and atomicity violations. By providing the option to include varying levels of details in error invariants—such as hazards and branching information—our approach allows the programmer to systematically analyze individual aspects of an error trace. We have implemented a hazard-sensitive slicing tool for concurrent traces based on error invariants and evaluated it on benchmarks covering a broad range of real-world concurrency bugs. Hazard-sensitive slicing significantly reduced the length of the considered traces and still maintained the root causes of the concurrency bugs.

Andreas Holzer, Daniel Schwartz-Narbonne, Mitra Tabaei Befrouei, Georg Weissenbacher, Thomas Wies

An Executable Formalisation of the SPARCv8 Instruction Set Architecture: A Case Study for the LEON3 Processor

The SPARCv8 instruction set architecture (ISA) has been used in various processors for workstations, embedded systems, and space missions. However, there are no publicly available formal models for the SPARCv8 ISA. In this work, we give the first formal model of the integer unit of the SPARCv8 ISA in Isabelle/HOL. We capture the operational semantics of the instructions using monadic definitions. Our model is detailed, covering many features specific to SPARC processors, such as delayed writes for control registers, windowed general registers, and more complex memory access. Our model is also general: we retain an abstract layer that allows it to be instantiated to support all SPARCv8-compliant processors. We extract executable code from our formalisation, giving the first systematically verified executable semantics for the SPARCv8 ISA. We have tested our model extensively against a LEON3 simulation board, covering both single-step executions and sequential execution of programs. We prove some important properties of our formal model, including a non-interference property for the LEON3 processor.

Zhe Hou, David Sanan, Alwen Tiu, Yang Liu, Koh Chuen Hoa

Hybrid Statistical Estimation of Mutual Information for Quantifying Information Flow

Analysis of a probabilistic system often requires learning the joint probability distribution of its random variables. Computing the exact distribution usually requires an exhaustive, precise analysis of all executions of the system. To avoid the high computational cost of such an exhaustive search, statistical analysis has been studied to efficiently obtain approximate estimates by analyzing only a small but representative subset of the system’s behavior. In this paper we propose a hybrid statistical estimation method that combines precise and statistical analyses to estimate mutual information and its confidence interval. We show how to combine analyses of different components of the system, performed with different precision, to obtain an estimate for the whole system. The new method performs weighted statistical analysis with different sample sizes over different components and dynamically finds their optimal sample sizes. Moreover, it can reduce sample sizes by using prior knowledge about systems and a new abstraction-then-sampling technique based on qualitative analysis. We show that the new method outperforms the state of the art in quantifying information leakage.

Yusuke Kawamoto, Fabrizio Biondi, Axel Legay
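
For reference, the quantity being estimated is the standard (Shannon) mutual information between a secret $X$ and an observable $Y$, by which information leakage is commonly quantified:

```latex
I(X;Y) \;=\; \sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}}
P(x,y)\, \log_2 \frac{P(x,y)}{P(x)\,P(y)} .
```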

A Generic Logic for Proving Linearizability

Linearizability is a commonly accepted notion of correctness for libraries of concurrent algorithms, and recent years have seen a number of proposals of program logics for proving it. Although these logics differ in technical details, they embody similar reasoning principles. To explicate these principles, we propose a logic for proving linearizability that is generic: it can be instantiated with different means of compositional reasoning about concurrency, such as separation logic or rely-guarantee. To this end, we generalise the Views framework for reasoning about concurrency to handle relations between programs, required for proving linearizability. We present sample instantiations of our generic logic and show that it is powerful enough to handle concurrent algorithms with challenging features, such as helping.

Artem Khyzha, Alexey Gotsman, Matthew Parkinson

Refactoring Refinement Structure of Event-B Machines

Refinement in formal specifications has received significant attention as a method to gradually construct a rigorous model. Although refactoring methods for formal specifications have been proposed, there are no methods for refactoring the refinement structures of formal specifications. In this paper, we describe a method to restructure refinements in specifications written in Event-B, a formal specification method with support for refinement. The core of our method is the decomposition of refinements. Namely, when an abstract Event-B machine A, a concrete machine C refining A, and a slicing strategy are provided, our method constructs a consistent intermediate machine B, which refines A and is refined by C. We show the effectiveness of our method through two case studies on representative usages: decomposition of large-scale refinements and extraction of reusable parts of specifications.

Tsutomu Kobayashi, Fuyuki Ishikawa, Shinichi Honiden

Towards Concolic Testing for Hybrid Systems

Hybrid systems exhibit both continuous and discrete behavior. Analyzing hybrid systems is known to be hard. Inspired by the idea of concolic testing (of programs), we investigate whether we can combine random sampling and symbolic execution in order to effectively verify hybrid systems. We identify a sufficient condition under which such a combination is more effective than random sampling. Furthermore, we analyze different strategies of combining random sampling and symbolic execution and propose an algorithm which allows us to dynamically switch between them so as to reduce the overall cost. Our method has been implemented as a web-based checker named HyChecker. HyChecker has been evaluated with benchmark hybrid systems and a water treatment system in order to test its effectiveness.

Pingfan Kong, Yi Li, Xiaohong Chen, Jun Sun, Meng Sun, Jingyi Wang

Explaining Relaxed Memory Models with Program Transformations

Weak memory models determine the behavior of concurrent programs. While they are often understood in terms of reorderings that the hardware or the compiler may perform, their formal definitions are typically given in a very different style—either axiomatic or operational. In this paper, we investigate to what extent weak behaviors of existing memory models can be fully explained in terms of reorderings and other program transformations. We prove that TSO is equivalent to a set of two local transformations over sequential consistency, but that non-multi-copy-atomic models (such as C11, Power and ARM) cannot be explained in terms of local transformations over sequential consistency. We then show that transformations over a basic non-multi-copy-atomic model account for the relaxed behaviors of (a large fragment of) Power, but that ARM’s relaxed behaviors cannot be explained in a similar way. Our positive results may be used to simplify correctness of compilation proofs from a high-level language to TSO or Power.

Ori Lahav, Viktor Vafeiadis
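
The classic store-buffering litmus test illustrates the kind of explanation the paper formalises for TSO (a standard example, reproduced here schematically):

```python
# Shared memory, initially: x = 0, y = 0
#
#   Thread 1:            Thread 2:
#     x = 1                y = 1
#     r1 = y               r2 = x
#
# Under sequential consistency the outcome r1 == 0 and r2 == 0 is
# impossible. Under TSO it is allowed, and it is explained by a local
# write-read reordering transformation: each thread's store and
# subsequent load access different locations, so they may be swapped,
# after which an ordinary SC execution yields r1 == r2 == 0.
```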

SpecCert: Specifying and Verifying Hardware-Based Security Enforcement

Over time, hardware designs have constantly grown in complexity, and modern platforms involve multiple interconnected hardware components. During the last decade, several vulnerability disclosures have proven that trust in hardware can be misplaced. In this article, we give a formal definition of Hardware-based Security Enforcement (HSE) mechanisms, a class of security enforcement mechanisms in which a software component relies on the underlying hardware platform to enforce a security policy. We then model a subset of the specifications of an x86-based hardware platform, and we prove the soundness of a realistic HSE mechanism within this model using Coq, a proof assistant system.

Thomas Letan, Pierre Chifflier, Guillaume Hiet, Pierre Néron, Benjamin Morin

Automated Verification of Timed Security Protocols with Clock Drift

Time is frequently used in security protocols to provide better security. For instance, critical credentials often have a limited lifetime, which improves security against brute-force attacks. However, it is challenging to use time correctly in protocol design, due to the existence of clock drift in practice. In this work, we develop a systematic method to formally specify, as well as automatically verify, timed security protocols with clock drift. We first extend the previously proposed timed applied $$\pi$$-calculus as a formal specification language for timed protocols with clock drift. Then, we define its formal semantics based on timed logic rules, which facilitates efficient verification against various security properties. Clock drift is encoded as parameters in the rules. The verification result shows the constraints associated with clock drift that are required for the security of the protocol, e.g., that the maximum drift should be less than some constant. We evaluate our method on multiple timed security protocols. We find a time-related security threat in the TESLA protocol, a complex time-related broadcast protocol for lossy channels, when the clocks used by different protocol participants do not share the same clock rate.

Li Li, Jun Sun, Jin Song Dong

Dealing with Incompleteness in Automata-Based Model Checking

A software specification is often the result of an iterative process that transforms an initial incomplete model through refinement decisions. A model is incomplete because the implementation of certain functionalities is postponed to a later development step or is delegated to third parties. An unspecified functionality may later be replaced by alternative solutions, which may be evaluated to analyze tradeoffs. Model checking has been proposed as a technique to verify that a model of the system under development is compliant with a formal specification of its requirements. However, most classical model checking approaches assume that a complete model of the system is given: they do not support incompleteness. A verification-driven design process would instead benefit from the ability to apply formal verification at any stage, hence also to incomplete models. After any change, it is desirable that only the portion affected by the change, called the replacement, is analyzed. To achieve this goal, this paper extends the classical automata-based model checking procedure to deal with incompleteness. The proposed model checking approach is able not only to evaluate whether a property definitely holds, possibly holds, or does not hold in an incomplete model but also, when the satisfaction of the specification depends on the incomplete parts, to compute the constraints that must be satisfied by their future replacements. Constraints are properties on the unspecified components that, if satisfied by the replacement, guarantee the satisfaction of the original specification in the refined model. Each constraint is verified in isolation on the corresponding replacement.

Claudio Menghi, Paola Spoletini, Carlo Ghezzi

Equivalence Checking of a Floating-Point Unit Against a High-Level C Model

Semiconductor companies have increasingly adopted a methodology that starts with a system-level design specification in C/C++/SystemC. This model is extensively simulated to ensure correct functionality and performance. Later, a Register Transfer Level (RTL) implementation is created in Verilog, either manually by a designer or automatically by a high-level synthesis tool. It is essential to check that the C and Verilog programs are consistent. In this paper, we present a two-step approach, embodied in two equivalence checking tools, VerifOx and hw-cbmc, to validate designs at the software and RTL levels, respectively. VerifOx is used for equivalence checking of an untimed software model in C against a high-level reference model in C. hw-cbmc verifies the equivalence of a Verilog RTL implementation against an untimed software model in C. To evaluate our tools, we applied them to a commercial floating-point arithmetic unit (FPU) from ARM and an open-source dual-path floating-point adder.

Rajdeep Mukherjee, Saurabh Joshi, Andreas Griesmayer, Daniel Kroening, Tom Melham

Battery-Aware Scheduling in Low Orbit: The GomX–3 Case

When working with space systems, the keyword is resources. For a satellite in orbit all resources are scarce, and the most critical resource of all is power. It is therefore crucial to have detailed knowledge of how much power is available for an energy-harvesting satellite in orbit at any point in time, especially when in eclipse, where it draws its power from onboard batteries. This paper addresses this problem by a two-step procedure to perform task scheduling for low-earth-orbit (LEO) satellites exploiting formal methods. It combines cost-optimal reachability analyses of priced timed automata networks with a realistic kinetic battery model capable of capturing capacity limits as well as stochastic fluctuations. The procedure is in use for the automatic and resource-optimal day-ahead scheduling of GomX–3, a power-hungry nanosatellite currently orbiting the earth. We explain how this approach has overcome existing problems, has led to improved designs, and has provided new insights.

Morten Bisgaard, David Gerhardt, Holger Hermanns, Jan Krčál, Gilles Nies, Marvin Stenger
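
For context, the deterministic core of the kinetic battery model (KiBaM) referenced above splits the charge into an available well $a$ and a bound well $b$ exchanging charge at rate $k$ (the standard formulation; the paper adds capacity limits and stochastic fluctuations on top):

```latex
\dot{a}(t) = -I(t) + k\left(\frac{b(t)}{1-c} - \frac{a(t)}{c}\right),
\qquad
\dot{b}(t) = -\,k\left(\frac{b(t)}{1-c} - \frac{a(t)}{c}\right),
```

where $I(t)$ is the load and $c \in (0,1)$ is the capacity fraction of the available well.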

Discounted Duration Calculus

To formally reason about the temporal quality of systems, discounting was introduced into CTL and LTL. However, these logics are discrete and cannot express duration properties. In this work we introduce discounting for a variant of Duration Calculus. We prove decidability of model checking for a useful fragment of discounted Duration Calculus formulas on timed automata under mild assumptions. Further, we provide an extensive example to show the usefulness of the fragment.

Heinrich Ody, Martin Fränzle, Michael R. Hansen

Sound and Complete Mutation-Based Program Repair

This work presents a novel approach for automatically repairing an erroneous program with respect to a given set of assertions. Programs are repaired using a predefined set of mutations. We refer to a bounded notion of correctness, even though, for a large enough bound, all returned programs are fully correct. To ensure no changes are made to the original program unless necessary, if a program can be repaired by applying a set of mutations Mut, then no superset of Mut is later considered. Programs are checked in increasing order of the number of mutations, and every minimal repaired program is returned as soon as it is found. We impose no assumptions on the number of erroneous locations in the program, yet we are able to guarantee soundness and completeness. That is, we assure that a program is returned iff it is minimal and bounded-correct. Searching the space of mutated programs is reduced to searching for unsatisfiable sets of constraints, which is performed efficiently using a sophisticated cooperation between SAT and SMT solvers. Similarities between mutated programs are exploited in a new way, by using both the SAT and the SMT solvers incrementally. We implemented a prototype of our algorithm, compared it with a state-of-the-art repair tool, and obtained very encouraging results.

Bat-Chen Rothenberg, Orna Grumberg

An Implementation of Deflate in Coq

The widely used compression format “Deflate” is defined in RFC 1951 and is based on prefix-free codes and backreferences. There are unclear points about the way these codes are specified, and several sources of confusion in the standard. We address this problem by giving a rigorous mathematical specification, which we formalized in Coq. We produced a verified implementation in Coq which achieves competitive performance on inputs of several megabytes. In this paper we present the several parts of our implementation: a fully verified implementation of canonical prefix-free codes, which can be used in other compression formats as well, and an elegant formalism for specifying sophisticated formats, which we used to implement both a compression and a decompression algorithm in Coq that we formally prove inverse to each other, the first time this has been achieved to our knowledge. Compatibility with other Deflate implementations can be shown empirically. We furthermore discuss some of the difficulties, specifically regarding memory and runtime requirements, and our approaches to overcoming them.

Christoph-Simon Senjak, Martin Hofmann
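
The canonical-code construction covered by the verified implementation is small enough to sketch; this follows the procedure given in RFC 1951, Section 3.2.2 (a plain Python rendering, not the authors' Coq code):

```python
def canonical_codes(lengths):
    """Assign canonical prefix-free codes from code lengths (RFC 1951).
    lengths[i] is the bit length of symbol i's code (0 = unused)."""
    max_len = max(lengths)
    # count the number of codes of each length
    bl_count = [0] * (max_len + 1)
    for l in lengths:
        if l:
            bl_count[l] += 1
    # smallest code value for each length
    code, next_code = 0, [0] * (max_len + 1)
    for bits in range(1, max_len + 1):
        code = (code + bl_count[bits - 1]) << 1
        next_code[bits] = code
    # assign codes to symbols in order
    codes = {}
    for sym, l in enumerate(lengths):
        if l:
            codes[sym] = format(next_code[l], '0{}b'.format(l))
            next_code[l] += 1
    return codes

# Example from RFC 1951: lengths (3,3,3,3,3,2,4,4) for symbols A..H
# yield A=010, B=011, C=100, D=101, E=110, F=00, G=1110, H=1111.
```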

Decoupling Abstractions of Non-linear Ordinary Differential Equations

We investigate decoupling abstractions, by which we seek to simulate (i.e. abstract) a given system of ordinary differential equations (ODEs) by another system that features completely independent (i.e. uncoupled) sub-systems, which can be considered as separate systems in their own right. Beyond a purely mathematical interest as a tool for the qualitative analysis of ODEs, decoupling can be applied to verification problems arising in the fields of control and hybrid systems. Existing verification technology often scales poorly with dimension. Thus, reducing a verification problem to a number of independent verification problems for systems of smaller dimension may enable one to prove properties that are otherwise seen as too difficult. We show an interesting correspondence between Darboux polynomials and decoupling simulating abstractions of systems of polynomial ODEs and give a constructive procedure for automatically computing the latter.

Andrew Sogokon, Khalil Ghorbal, Taylor T. Johnson
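
The correspondence uses the standard notion of a Darboux polynomial: $p$ is Darboux for the polynomial vector field $\dot{x} = f(x)$ when its Lie derivative is a polynomial multiple of itself,

```latex
\mathcal{L}_{f}\, p \;=\; \nabla p \cdot f \;=\; \alpha\, p
\quad \text{for some polynomial cofactor } \alpha,
```

so that the algebraic set $p = 0$ is invariant under the flow.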

Regression Verification for Unbalanced Recursive Functions

We address the problem of proving the equivalence of two recursive functions that have different base cases and/or are not in lock-step. None of the existing software equivalence checkers (like rêve, rvt, Symdiff) or general unbounded software model checkers (like Seahorn, HSFC, Automizer) can prove such equivalences. We show a proof rule for the case of different base cases, based on separating the proof into two parts: inputs which result in the base case in at least one of the two compared functions, and all the rest. We also show how unbalanced unrolling of the functions can solve the case in which the functions are not in lock-step. In itself this type of unrolling may again introduce the problem of different base cases, and we show a new proof rule for solving it. We implemented these rules in our regression-verification tool rvt. We conclude by comparing our approach to that of Felsing et al.’s counterexample-based refinement, which was recently implemented in their equivalence checker rêve.

Ofer Strichman, Maor Veitsman

Automated Mutual Explicit Induction Proof in Separation Logic

We present a sequent-based deductive system for automatically proving entailments in separation logic by using mathematical induction. Our technique, called mutual explicit induction proof, is an instance of Noetherian induction. Specifically, we propose a novel induction principle based on a well-founded relation over separation logic models, and follow explicit induction methods to implement this principle as inference rules, so that it can be easily integrated into a deductive system. We also support mutual induction, a natural feature of implicit induction, where the goal entailment and other entailments derived during the proof search can be used as hypotheses to prove each other. We have implemented a prototype prover and evaluated it on a benchmark of handcrafted entailments as well as on benchmarks from a separation logic competition.

Quang-Trung Ta, Ton Chanh Le, Siau-Cheng Khoo, Wei-Ngan Chin

Finite Model Finding Using the Logic of Equality with Uninterpreted Functions

The problem of finite model finding, i.e., finding a satisfying model for a set of first-order logic formulas within a finite scope, is an important step in many verification techniques. In MACE-style solvers, the problem is mapped directly to a SAT problem. We investigate an alternative solution that maps the problem to the logic of equality with uninterpreted functions (EUF), a decidable logic with many well-supported tools (e.g., SMT solvers). EUF reasoners take advantage of the typed functional structures found in the formulas to improve performance. The challenge is that EUF reasoning is not inherently finite-scope. We present an algorithm for mapping a finite model finding problem to an equisatisfiable EUF problem. We present results showing that our method has better overall performance than existing tools on a range of problems.

Amirhossein Vakili, Nancy A. Day
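
A toy version of the reduction in Z3's Python API (our simplification for scope 3 with a single unary function; the paper's actual encoding is more careful): name the domain elements, close every function's range over them, and instantiate quantifiers over the named elements.

```python
from z3 import DeclareSort, Const, Function, Or, Distinct, Solver, sat

# Scope 3: an uninterpreted sort with three named domain elements.
S = DeclareSort('S')
e = [Const('e%d' % i, S) for i in range(3)]
f = Function('f', S, S)

s = Solver()
s.add(Distinct(*e))                      # the three elements differ
# close the sort: every f-application lands on a named element
for x in e:
    s.add(Or(*[f(x) == y for y in e]))
# a sample first-order constraint, "forall x. f(x) != x",
# instantiated over the finite domain
for x in e:
    s.add(f(x) != x)

print(s.check())                          # sat: a size-3 model exists
```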

GPUexplore 2.0: Unleashing GPU Explicit-State Model Checking

In earlier work, we were the first to investigate the potential of using graphics processing units (GPUs) to speed up explicit-state model checking. Back then, the conclusion was clearly that this potential exists, having measured speed-ups of around 10 times, compared to state-of-the-art single-core model checking. In this paper, we present a new version of our GPU model checker, GPUexplore. Since publication of our earlier work, we have identified and implemented several approaches to improve the performance of the model checker considerably. These include enhanced lock-less hashing of the states and improved thread synchronizations. We discuss experimental results that show the impact of both the progress in hardware in the last few years and our proposed optimisations. The new version of GPUexplore running on state-of-the-art hardware can be more than 100 times faster than a sequential implementation for large models and is on average eight times faster than the previous version of the tool running on the same hardware.

Anton Wijs, Thomas Neele, Dragan Bošnački

Approximate Bisimulation and Discretization of Hybrid CSP

Hybrid Communicating Sequential Processes (HCSP) is a powerful formal modeling language for hybrid systems. It extends CSP with differential equations for modeling continuous evolution, and with interrupts for modeling the interaction between continuous and discrete dynamics. In this paper, we investigate the semantic foundation of HCSP from an operational point of view by proposing the notion of approximate bisimulation, which provides an appropriate criterion for characterizing the equivalence of HCSP processes with continuous and discrete behaviour. We give an algorithm to determine whether two HCSP processes are approximately bisimilar. Based on this, we propose an approach to discretizing HCSP: given an HCSP process A, we construct another HCSP process B which does not contain any continuous dynamics, such that A and B are approximately bisimilar with given precisions. This provides a rigorous way to transform a verified control model into a correct program model, which fills a gap in the design of embedded systems.

Gaogao Yan, Li Jiao, Yangjia Li, Shuling Wang, Naijun Zhan

A Linear Programming Relaxation Based Approach for Generating Barrier Certificates of Hybrid Systems

This paper presents a linear programming (LP) relaxation based approach for generating polynomial barrier certificates for the safety verification of semi-algebraic hybrid systems. The key idea is to introduce an LP relaxation that encodes the set of nonnegativity constraints derived from the conditions on the associated barrier certificates, and then resort to LP solvers to find solutions. The most important benefit of the LP-relaxation-based approach is that it possesses a much lower computational complexity and hence can be solved very efficiently, as demonstrated by a theoretical complexity analysis as well as by experiments on a set of examples gathered from the literature. As far as we know, it is the first method that enables LP-relaxation-based polynomial barrier certificate generation.

Zhengfeng Yang, Chao Huang, Xin Chen, Wang Lin, Zhiming Liu
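
One common formulation of the conditions such a barrier certificate $B$ must satisfy, for a system $\dot{x} = f(x)$ with initial set $\mathit{Init}$ and unsafe set $\mathit{Unsafe}$ (formulations vary; the paper's exact conditions may differ):

```latex
B(x) \le 0 \;\; \forall x \in \mathit{Init}, \qquad
B(x) > 0  \;\; \forall x \in \mathit{Unsafe}, \qquad
\nabla B(x) \cdot f(x) \le 0 \;\; \forall x : B(x) = 0,
```

so that no trajectory starting in $\mathit{Init}$ can cross the zero level set of $B$ into the unsafe region.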

Industry Track

Frontmatter

Model-Based Design of an Energy-System Embedded Controller Using Taste

Model-based design has become standard practice in the development of control systems. Many solutions provide simulation, code generation, and other functionalities to minimize the design time and optimize the resulting control-system implementation. In this paper, we report on the experience of using Taste as the design environment for the controller of an energy system comprising a parabolic dish collector and a Stirling engine. Besides the standard advantages of model-based design, an appealing feature of Taste is the possibility of specifying the design model in a formal language such as SDL. The complexity of the designed system stressed the tool’s performance and usability. Nevertheless, the functionalities provided by Taste were essential to manage such complexity.

Roberto Cavada, Alessandro Cimatti, Luigi Crema, Mattia Roccabruna, Stefano Tonetta

Simulink to UPPAAL Statistical Model Checker: Analyzing Automotive Industrial Systems

The advanced technology used for developing modern automotive systems increases their complexity, making correctness assurance very tedious. To enable analysis by simulation, but also to enhance understanding and communication, engineers use MATLAB/Simulink modeling during system development. In this paper, we provide further analysis means for industrial Simulink models by proposing a pattern-based, execution-order preserving transformation of Simulink blocks into the input language of the UPPAAL Statistical Model Checker, that is, timed (or hybrid) automata with stochastic semantics. The approach makes it possible to analyze complex Simulink models of automotive systems, and we report our experience with two vehicular systems, the Brake-by-Wire and the Adjustable Speed Limiter.

Predrag Filipovikj, Nesredin Mahmud, Raluca Marinescu, Cristina Seceleanu, Oscar Ljungkrantz, Henrik Lönn

Safety-Assured Formal Model-Driven Design of the Multifunction Vehicle Bus Controller

In this paper, we present a formal model-driven engineering approach to establishing a safety-assured implementation of a Multifunction Vehicle Bus Controller (MVBC), based on the generic reference models and requirements described in the International Electrotechnical Commission (IEC) standard IEC-61375. First, the generic models described in IEC-61375 are translated into a network of timed automata, and some safety requirements tested in IEC-61375 are formalized as timed computation tree logic (TCTL) formulas. With the help of Uppaal, we check and debug whether the timed automata satisfy the formulas or not. Within this step, several logic inconsistencies in the original standard are detected and corrected. Then, we apply the tool Times to generate C code from the verified model, which was later synthesized into a real MVBC chip. Finally, the runtime verification tool RMOR is applied to verify some safety requirements at the implementation level. We set up a real platform with the widely used MVBC D113 and verify the correctness and the scalability of the synthesized MVBC chip more comprehensively. The errors in the standard have been confirmed, and the resulting MVBC has been deployed in a real train communication network.

Yu Jiang, Han Liu, Houbing Song, Hui Kong, Ming Gu, Jiaguang Sun, Lui Sha

Taming Interrupts for Verifying Industrial Multifunction Vehicle Bus Controllers

Multifunction Vehicle Bus Controllers (MVBC) are safety-critical sub-systems in industrial train communication networks. As an interrupt-driven system, an MVBC is hard to verify in practice. The reasons are twofold. First, MVBC introduces the concurrency semantics of deferred interrupt handlers and communication via hardware registers, making existing formalisms infeasible. Second, verifying MVBC requires considering environmental features (i.e., interrupt ordering), which are hard to model and reason about. To overcome these limitations, we propose a novel framework for formal verification of MVBC. First, we formalize the concurrency semantics of MVBC and describe a sequentialization technique so that well-designed sequential analyses can be performed. Moreover, we introduce the happens-before interrupt graph to model interrupt dependencies and further eliminate false alarms. The framework scaled well on an industrial MVBC product from CRRC Inc. and found 3 severe software bugs, all confirmed by engineers.

Han Liu, Yu Jiang, Huafeng Zhang, Ming Gu, Jiaguang Sun

Rule-Based Incremental Verification Tools Applied to Railway Designs and Regulations

When designing railway infrastructure (tracks, signalling systems, etc.), railway engineers need to keep in mind numerous regulations for ensuring safety. Many of these regulations are simple, but demonstrating conformance with them often involves tedious manual work. We have worked on automating the verification of regulations against CAD designs, and have integrated a verification tool and methodology into the tool chain of railway engineers. Automatically generating a model from the railway designs and running the verification tool on it is a valuable step forward compared to manually reviewing the design for compliance and consistency. To seamlessly integrate the consistency checking into the CAD workflow of the design engineers, however, requires a fast, on-the-fly mechanism, similar to the real-time compilation done in standard programming tools. In consequence, in this paper we turn to incremental verification and investigate existing rule-based tools, looking at various aspects relevant for engineering railway designs. We discuss existing state-of-the-art methods for incremental verification in the setting of rule-based modelling. We survey and compare relevant tools (ca. 30) and discuss if and how they could be integrated into a railway design environment, such as CAD software. We examine and compare four promising tools: XSB Prolog, a standard tool in the Datalog community; RDFox from the semantic web community; Dyna from the AI community; and LogicBlox, a proprietary solution.

Bjørnar Luteberget, Christian Johansen, Claus Feyling, Martin Steffen

RIVER: A Binary Analysis Framework Using Symbolic Execution and Reversible x86 Instructions

We present a binary analysis framework based on symbolic execution with the distinguishing capability of executing stepwise forward and also backward through the execution tree. It was developed internally at Bitdefender and code-named RIVER. The framework provides components such as a taint engine, a dynamic symbolic execution engine, and an integration with Z3 for constraint solving.

Teodor Stoenescu, Alin Stefanescu, Sorina Predut, Florentin Ipate

Backmatter
