
About this Book

This book constitutes the refereed proceedings of the 20th International Symposium on Formal Methods, FM 2015, held in Oslo, Norway, in June 2015. The 30 full papers and 2 short papers presented were carefully reviewed and selected from 124 submissions. The papers cover a wide spectrum of topics in the use of, and research on, formal methods for software development.



Invited Presentations


Resource Analysis: From Sequential to Concurrent and Distributed Programs

Resource analysis aims at automatically inferring bounds on the worst/best-case cost of executing programs. Ideally, a resource analyzer should be parametric on the cost model, i.e., the type of cost that the user wants to infer (e.g., number of steps, amount of memory allocated, amount of data transmitted, etc.). The inferred upper bounds have important applications in the fields of program optimization, verification and certification. In this talk, we will review the basic techniques used in resource analysis of sequential programs and the new extensions needed to handle concurrent and distributed systems.
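What it means for an analyzer to be parametric on the cost model can be illustrated with a small sketch. The interpreter and cost models below are hypothetical examples, not the authors' analyzer:

```python
# Illustrative sketch (not the authors' analyzer): a tiny cost-instrumented
# program that is parametric on the cost model. The event names "step" and
# "alloc" and the byte size are made-up assumptions.

def run(n, cost_model):
    """Build a list of n cells, charging cost according to the model."""
    cost = 0
    data = []
    for _ in range(n):
        cost += cost_model("step")    # charge one loop iteration
        data.append(0)
        cost += cost_model("alloc")   # charge one allocated cell
    return cost

steps = lambda event: 1 if event == "step" else 0    # count loop steps only
memory = lambda event: 8 if event == "alloc" else 0  # count bytes allocated

# The same program yields different bounds under different cost models:
# under `steps` the cost is n, under `memory` it is 8n.
```

The point is that the analysis machinery stays the same; only the model mapping events to costs changes.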

Elvira Albert, Puri Arenas, Jesús Correas, Samir Genaim, Miguel Gómez-Zamalloa, Enrique Martin-Martin, Germán Puebla, Guillermo Román-Díez

AVACS: Automatic Verification and Analysis of Complex Systems Highlights and Lessons Learned

This talk presents highlights and lessons learned from the Transregional Collaborative Research Center AVACS, funded by the German Science Foundation under contract SFB-TR 14 from January 1, 2004 to December 31, 2015 with a total funding of about 30 million euros, involving between 18 and 22 principal investigators at the three AVACS sites Freiburg, Oldenburg and Saarbrücken. Through this funding, the German Science Foundation provided an excellent environment for foundational cross-site research in the highly relevant and challenging research area of Automatic Verification and Analysis of Complex Systems.

Werner Damm

Main Track


Automated Circular Assume-Guarantee Reasoning

Compositional verification techniques aim to decompose the verification of a large system into the more manageable verification of its components. In recent years, compositional techniques have gained significant successes following a breakthrough in the ability to automate assume-guarantee reasoning. However, automation is still restricted to simple acyclic assume-guarantee rules.

In this work, we focus on automating


assume-guarantee reasoning in which the verification of individual components mutually depends on each other. We use a sound and complete circular assume-guarantee rule and we describe how to automatically build the assumptions needed for using the rule. Our algorithm accumulates


constraints on the assumptions based on (spurious) counterexamples obtained from checking the premises of the rule, and uses a SAT solver to synthesize minimal assumptions that satisfy these constraints.

We implemented our approach and compared it with an established learning-based method that uses an acyclic rule. In all cases, the assumptions generated for the circular rule were significantly smaller, leading to smaller verification problems. Further, on larger examples, we obtained a significant speedup as well.

Karam Abd Elkader, Orna Grumberg, Corina S. Păsăreanu, Sharon Shoham

Towards Formal Verification of Orchestration Computations Using the ${\mathbb K}$ Framework

Orchestration provides a general model of concurrent computations. A minimal yet expressive theory of orchestration is provided by Orc, in which computations are modeled by site calls and their orchestrations through a few combinators. Using Orc, formal verification of correctness of orchestrations amounts to devising an executable formal semantics of Orc and leveraging existing tool support. Despite its simplicity and elegance, giving a formal semantics to Orc that captures precisely its intended behaviors is far from trivial, primarily due to the challenges posed by concurrency, timing and the distinction between internal and external actions. This paper presents a semantics-based approach for formally verifying Orc orchestrations using the ${\mathbb K}$ framework. Unlike previously developed operational semantics of Orc, the ${\mathbb K}$ semantics is not directly based on the interleaving semantics given by Orc's SOS specification. Instead, it is based on concurrent rewriting enabled by ${\mathbb K}$. It also utilizes various ${\mathbb K}$ facilities to arrive at a clean, minimal and elegant semantic specification. To demonstrate the usefulness of the proposed approach, we describe a specification for a simple robotics case study and provide initial formal verification results.

Musab A. AlTurki, Omar Alzuhaibi

Narrowing Operators on Template Abstract Domains

In the theory of abstract interpretation, a descending phase may be used to improve the precision of the analysis after a post-fixpoint has been reached. Termination is often guaranteed by using narrowing operators. This is especially true on numerical domains, since they are generally endowed with infinite descending chains, which may lead to a non-terminating descending phase in the absence of narrowing. We provide an abstract semantics which improves the analysis precision and show that, for a large class of numerical abstract domains over integer variables (such as intervals, octagons and template polyhedra), it is possible to avoid infinite descending chains and omit narrowing. Moreover, we propose a new family of narrowing operators for real variables which improves the analysis precision.
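The role of narrowing in the descending phase can be illustrated on the interval domain. The sketch below shows the classic interval narrowing operator, not the new operators proposed in the paper:

```python
# A minimal sketch of the *standard* interval narrowing operator (not the
# paper's new family of operators): it refines only infinite bounds, which
# guarantees that the descending phase terminates on the interval domain.

INF = float("inf")

def narrow(a, b):
    """a NARROW b: refine interval a with b, but only where a is unbounded,
    so every descending chain is finite."""
    lo = b[0] if a[0] == -INF else a[0]
    hi = b[1] if a[1] == INF else a[1]
    return (lo, hi)

# After widening, a loop counter may be over-approximated as [0, +inf];
# one descending step computes [0, 99], and narrowing keeps that bound:
post_widening = (0, INF)
descending_step = (0, 99)
print(narrow(post_widening, descending_step))  # (0, 99)
```

Because `narrow` never refines a finite bound, it can only fire finitely often per interval, which is exactly why it blocks infinite descending chains.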

Gianluca Amato, Simone Di Nardo Di Maio, Maria Chiara Meo, Francesca Scozzari

Detection of Design Flaws in the Android Permission Protocol Through Bounded Verification

The ever increasing expansion of mobile applications into nearly every aspect of modern life, from banking to healthcare systems, is making their security more important than ever. Modern smartphone operating systems (OS) rely substantially on the permission-based security model to enforce restrictions on the operations that each application can perform. In this paper, we perform an analysis of the permission protocol implemented in Android, a popular OS for smartphones. We propose a formal model of the Android permission protocol in Alloy, and describe a fully automatic analysis that identifies potential flaws in the protocol. A study of real-world Android applications corroborates our finding that the flaws in the Android permission protocol can have severe security implications, in some cases allowing the attacker to bypass the permission checks entirely.

Hamid Bagheri, Eunsuk Kang, Sam Malek, Daniel Jackson

Privacy by Design in Practice: Reasoning about Privacy Properties of Biometric System Architectures

The work presented in this paper is the result of a collaboration between academics, industry and lawyers to show the applicability of the privacy by design approach to biometric systems and the benefit of formal methods to this end. The choice of particular techniques and the role of the components (central server, secure module, terminal, smart card, etc.) in the architecture have a strong impact on the privacy guarantees provided by a biometric system. However, existing proposals were made on a case-by-case basis, which makes it difficult to compare them and to provide a rationale for the choice of specific options. In this paper, we show that a general framework for the definition of privacy architectures can be used to specify these options and to reason about them in a formal way.

Julien Bringer, Hervé Chabanne, Daniel Le Métayer, Roch Lescuyer

A Specification Language for Static and Runtime Verification of Data and Control Properties

Static verification techniques can verify properties across all executions of a program, but powerful judgements are hard to achieve automatically. In contrast, runtime verification enjoys full automation, but cannot judge future and alternative runs. In this paper we present a novel approach in which data-centric and control-oriented properties may be stated in a single formalism, amenable to both static and dynamic verification techniques. We develop and formalise a specification notation, ppDATE, extending the control-flow property language used in the runtime verification tool Larva with pre/post-conditions, and show how specifications written in this notation can be analysed both using the deductive theorem prover KeY and the runtime verification tool Larva. Verification is performed in two steps: KeY first partially proves the data-oriented part of the specification, simplifying the specification, which is then passed on to Larva to check at runtime for the remaining parts of the specification, including the control-centric aspects. We apply the approach to Mondex, an electronic purse application.

Wolfgang Ahrendt, Jesús Mauricio Chimento, Gordon J. Pace, Gerardo Schneider

Certificates for Parameterized Model Checking

This paper presents a technique for the certification of Cubicle, a model checker for proving safety properties of parameterized systems. To increase the confidence in its results, Cubicle now produces a proof object (or certificate) that, if proven valid, guarantees that the answer for this specific input is correct. The main challenges addressed in this paper are (1) the production of such certificates without degrading the performance of the model checker and (2) the construction of these proof objects so that they can be independently and efficiently verified by an SMT solver. Since the burden of ensuring correctness now rests on this external solver, a stronger guarantee is obtained by using multiple backend automatic provers for redundancy. Experiments show that our approach does not impact Cubicle's performance and that we were able to verify certificates for challenging parameterized problems. As a byproduct, these certificates allowed us to find subtle and critical implementation bugs in Cubicle.

Sylvain Conchon, Alain Mebsout, Fatiha Zaïdi

Safety, Liveness and Run-Time Refinement for Modular Process-Aware Information Systems with Dynamic Sub Processes

We study modularity, run-time adaptation and refinement under safety and liveness constraints in event-based process models with dynamic sub-process instantiation. The study is part of a larger programme to provide semantically well-founded technologies for modelling, implementation and verification of flexible, run-time adaptable process-aware information systems, moved into practice via the Dynamic Condition Response (DCR) Graphs notation co-developed with our industrial partner. Our key contributions are: (1) A formal theory of dynamic sub-process instantiation for declarative, event-based processes under safety and liveness constraints, given as the DCR* process language, equipped with a compositional operational semantics and conservatively extending the DCR Graphs notation; (2) an expressiveness analysis revealing that the DCR* process language is Turing-complete, while the fragment corresponding to DCR Graphs (without dynamic sub-process instantiation) characterises exactly the languages that are the union of a regular and an omega-regular language; (3) a formalisation of run-time refinement and adaptation by composition for DCR* processes and a proof that such refinement is undecidable in general; and finally (4) a decidable and practically useful sub-class of run-time refinements. Our results are illustrated by a running example inspired by a recent Electronic Case Management solution based on DCR Graphs and delivered by our industrial partner. An online prototype implementation of the DCR* language (including examples from the paper) and its visualisation as DCR Graphs can be found online.

Søren Debois, Thomas Hildebrandt, Tijs Slaats

Verifying Opacity of a Transactional Mutex Lock

Software transactional memory (STM) provides programmers with a high-level programming abstraction for synchronization of parallel processes, allowing blocks of code that execute in an interleaved manner to be treated as an atomic block. This atomicity property is captured by a correctness criterion called opacity. Opacity relates histories of a sequential atomic specification with those of STM implementations.

In this paper we prove opacity of a recently proposed STM implementation (a Transactional Mutex Lock) by Dalessandro et al. The proof is carried out within the interactive verifier KIV and proceeds via the construction of an intermediate level between the sequential specification and the implementation, leveraging existing proof techniques for linearizability.

John Derrick, Brijesh Dongol, Gerhard Schellhorn, Oleg Travkin, Heike Wehrheim

A Framework for Correctness Criteria on Weak Memory Models

The implementation of weak (or relaxed) memory models is standard practice in modern multiprocessor hardware. For efficiency, these memory models allow operations to take effect in shared memory in a different order from that in which they occur in a program. A number of correctness criteria have been proposed for concurrent objects operating on such memory models, each reflecting different constraints on the objects which can be proved correct. In this paper, we provide a framework in which correctness criteria are defined in terms of two components: the first defining the particular criterion (as it would be defined in the absence of a weak memory model), and the second defining the particular weak memory model. The framework facilitates the definition and comparison of correctness criteria, and encourages reuse of existing definitions. The latter enables properties of the criteria to be proved using existing proofs. We illustrate the framework via the definition of correctness criteria on the TSO (Total Store Order) weak memory model.
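The reordering behaviour that TSO permits can be illustrated with a small simulation. The sketch below is a hypothetical Python model of TSO store buffers, not the paper's framework; it reproduces the classic store-buffering litmus test:

```python
# Hypothetical sketch of the TSO store-buffering litmus test: each thread's
# write goes to a private FIFO buffer before reaching shared memory, so both
# threads can read the stale value 0 -- an outcome that is impossible under
# sequential consistency.

from collections import deque

def tso_run(fenced):
    mem = {"x": 0, "y": 0}
    buf = {0: deque(), 1: deque()}

    def write(tid, var, val):
        buf[tid].append((var, val))          # buffered, not yet globally visible

    def read(tid, var):
        for v, val in reversed(buf[tid]):    # a thread sees its own buffer first
            if v == var:
                return val
        return mem[var]

    def flush(tid):
        while buf[tid]:
            var, val = buf[tid].popleft()
            mem[var] = val

    write(0, "x", 1)
    write(1, "y", 1)
    if fenced:                               # models a fence after each store
        flush(0); flush(1)
    return read(0, "y"), read(1, "x")

# Without fences both loads can return 0: tso_run(False) == (0, 0).
# With fences the stores are visible first:  tso_run(True) == (1, 1).
```

A criterion defined "in the absence of a weak memory model" would only admit the fenced outcome; composing it with the TSO component admits the `(0, 0)` outcome as well.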

John Derrick, Graeme Smith

Semantics-Preserving Simplification of Real-World Firewall Rule Sets

The security provided by a firewall for a computer network almost completely depends on the rules it enforces. For over a decade, it has been a well-known and unsolved problem that the quality of many firewall rule sets is insufficient. Therefore, there are many tools to analyze them. However, we found that none of the available tools could handle typical, real-world iptables rulesets. This is due to the complex chain model used by iptables, but also to the vast amount of possible match conditions that occur in real-world firewalls, many of which are not understood by academic and open source tools.

In this paper, we provide algorithms to transform firewall rulesets. We reduce the execution model to a simple list model and use ternary logic to abstract over all unknown match conditions. These transformations enable existing tools to understand real-world firewall rules, which we demonstrate on four decently-sized rulesets. Using the Isabelle theorem prover, we formally show that all our algorithms preserve the firewall's filtering behavior.
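The ternary-logic abstraction can be sketched in a few lines. The code below is an illustration of the idea, not the paper's Isabelle formalization; the condition names are made up:

```python
# A small sketch of the ternary-logic idea: match conditions the analyzer
# does not understand evaluate to UNKNOWN, and a rule's overall match is the
# Kleene three-valued conjunction of its conditions. Condition names such as
# "src_ip_in_lan" and "ctstate_new" are hypothetical examples.

TRUE, FALSE, UNKNOWN = "T", "F", "?"

def t_and(a, b):
    """Kleene three-valued conjunction."""
    if FALSE in (a, b):
        return FALSE
    if UNKNOWN in (a, b):
        return UNKNOWN
    return TRUE

def eval_rule(conds, known):
    """Fold a rule's match conditions; unmodeled conditions stay UNKNOWN."""
    result = TRUE
    for c in conds:
        result = t_and(result, known.get(c, UNKNOWN))
    return result

# Source-IP matching is modeled; a connection-tracking extension is not:
known = {"src_ip_in_lan": TRUE}
print(eval_rule(["src_ip_in_lan"], known))                 # T
print(eval_rule(["src_ip_in_lan", "ctstate_new"], known))  # ?
```

A definite `F` on any condition decides the rule regardless of unknowns, which is what lets the abstraction stay sound while discarding conditions it cannot model.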

Cornelius Diekmann, Lars Hupel, Georg Carle

Parameter Synthesis Through Temporal Logic Specifications

Parameters are often used to tune mathematical models and capture nondeterminism and uncertainty in physical and engineering systems. This paper is concerned with parametric nonlinear dynamical systems and the problem of determining the parameter values that are consistent with some expected properties. In our previous works, we proposed a parameter synthesis algorithm limited to safety properties and demonstrated its applications for biological systems. Here we consider more general properties specified by a fragment of STL (Signal Temporal Logic), which allows us to deal with complex behavioral patterns that biological processes exhibit. We propose an algorithm for parameter synthesis w.r.t. a property specified using the considered logic. It exploits reachable set computations and forward refinements. We instantiate our algorithm in the case of polynomial dynamical systems exploiting Bernstein coefficients and we illustrate it on an epidemic model.

Thao Dang, Tommaso Dreossi, Carla Piazza

Trace-Length Independent Runtime Monitoring of Quantitative Policies in LTL

Linear temporal logic (LTL) has been widely used to specify runtime policies. Traditionally this use of LTL is to capture the qualitative aspects of the monitored systems, but recent developments in metric LTL and its extensions with aggregate operators allow some quantitative policies to be specified. Our interest in LTL-based policy languages is driven by applications in runtime Android malware detection, which requires the monitoring algorithm to be independent of the length of the system event traces so that its performance does not degrade as the traces grow. We propose a policy language based on a past-time variant of LTL, extended with an aggregate operator called the counting quantifier to specify a policy based on the number of times some sub-policies are satisfied in the past. We show that a broad class of policies, but not all policies, specified with our language can be monitored in a trace-length independent way without sacrificing completeness, and provide a concrete algorithm to do so. We implement and test our algorithm in an existing Android monitoring framework and show that our approach can effectively specify and enforce quantitative policies drawn from real-world Android malware studies.
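The trace-length independence requirement can be made concrete with a toy monitor. The sketch below is a hypothetical illustration of a counting policy, not the paper's algorithm or policy language; the event names are made up:

```python
# Illustrative sketch (not the paper's algorithm): monitoring the counting
# policy "flag once more than `threshold` past events satisfy phi". The
# monitor keeps a single counter, so its state -- and per-event work --
# does not grow with the length of the trace.

def make_monitor(phi, threshold):
    state = {"count": 0}
    def step(event):
        if phi(event):
            state["count"] += 1
        return state["count"] > threshold   # True means the policy is violated
    return step

# Hypothetical policy: at most 2 SMS-send events in the past.
monitor = make_monitor(lambda e: e == "send_sms", threshold=2)
verdicts = [monitor(e) for e in
            ["boot", "send_sms", "send_sms", "click", "send_sms"]]
print(verdicts)  # [False, False, False, False, True]
```

The interesting question the paper addresses is which policies of the full language admit such a constant-size summary of the past, since not all of them do.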

Xiaoning Du, Yang Liu, Alwen Tiu

Probabilistic Bisimulation for Realistic Schedulers

Weak distribution bisimilarity is an equivalence notion on probabilistic automata, originally proposed for Markov automata. It has gained some popularity as the coarsest behavioral equivalence enjoying valuable properties like preservation of trace distribution equivalence and compositionality. This holds in the classical context of arbitrary schedulers, but it has been argued that this class of schedulers is unrealistically powerful. This paper studies a strictly coarser notion of bisimilarity, which still enjoys these properties in the context of realistic subclasses of schedulers: Trace distribution equivalence is implied for partial information schedulers, and compositionality is preserved by distributed schedulers. The intersection of the two scheduler classes thus spans a coarser and still reasonable compositional theory of behavioral semantics.

Christian Eisentraut, Jens Chr. Godskesen, Holger Hermanns, Lei Song, Lijun Zhang

QPMC: A Model Checker for Quantum Programs and Protocols

We present QPMC (Quantum Program/Protocol Model Checker), an extension of the probabilistic model checker IscasMC to automatically verify quantum programs and quantum protocols. QPMC distinguishes itself from the previous quantum model checkers proposed in the literature in that it works for general quantum programs and protocols, not only those using Clifford operations. A command-line version of QPMC is available online.


Yuan Feng, Ernst Moritz Hahn, Andrea Turrini, Lijun Zhang

Automated Verification of RPC Stub Code

Formal verification has been successfully applied to provide strong correctness guarantees of software systems, but its application to large code bases remains an open challenge. The technique of component-based software development, traditionally employed for engineering benefit, also aids reasoning about such systems. While there exist compositional verification techniques that leverage the separation implied by a component system architecture, they implicitly rely on the component platform correctly implementing the isolation and composition semantics they assume. Any property proven using these techniques is vulnerable to being invalidated by a bug in the code of the platform itself. In this paper, we show how this assumption can be eliminated by automatically generating machine-checked proofs of the correctness of a component platform’s generated Remote Procedure Call (RPC) code. We demonstrate how these generated proofs can be composed with hand-written proofs to yield a system-level property with equivalent assurance to an entirely hand-written proof. This technique forms the basis of a scalable approach to formal verification of large software systems.

Matthew Fernandez, June Andronick, Gerwin Klein, Ihor Kuz

Property-Driven Fence Insertion Using Reorder Bounded Model Checking

Modern architectures provide weaker memory consistency guarantees than sequential consistency. These weaker guarantees allow programs to exhibit behaviours where the program statements appear to have executed out of program order. Fortunately, modern architectures provide memory barriers (fences) to enforce the program order between a pair of statements if needed. Due to the intricate semantics of weak memory models, the placement of fences is challenging even for experienced programmers. Too few fences lead to bugs, whereas overuse of fences results in performance degradation. This motivates automated placement of fences. Tools that restore sequential consistency in the program may insert more fences than necessary for the program to be correct. Therefore, we propose a property-driven technique that introduces reorder-bounded exploration to identify the smallest number of program locations for fence placement. We implemented our technique on top of CBMC; however, in principle, our technique is generic enough to be used with any model checker. Our experimental results show that our technique is faster and solves more instances of relevant benchmarks than earlier approaches.

Saurabh Joshi, Daniel Kroening

Verifying the Safety of a Flight-Critical System

This paper describes our work on demonstrating verification technologies on a flight-critical system of realistic functionality, size, and complexity. Our work targeted a commercial aircraft control system named Transport Class Model (TCM), and involved several stages: formalizing and disambiguating requirements in collaboration with domain experts; processing models for their use by formal verification tools; applying compositional techniques at the architectural and component level to scale verification. Performed in the context of a major NASA milestone, this study of formal verification in practice is one of the most challenging that our group has performed.

Guillaume Brat, David Bushnell, Misty Davies, Dimitra Giannakopoulou, Falk Howar, Temesghen Kahsai

Proving Safety with Trace Automata and Bounded Model Checking

Loop under-approximation enriches C programs with additional branches that represent the effect of a (limited) range of loop iterations. While this technique can speed up bug detection significantly, it introduces redundant execution traces which may complicate the verification of the program. This holds particularly true for tools based on Bounded Model Checking, which incorporate simplistic heuristics to determine whether all feasible iterations of a loop have been considered.

We present a technique that uses trace automata to eliminate redundant executions after performing loop acceleration. The method reduces the diameter of the program under analysis, which is in certain cases sufficient to allow a safety proof using Bounded Model Checking. Our transformation is precise: it does not introduce false positives, nor does it mask any errors. We have implemented the analysis as a source-to-source transformation, and present experimental results showing the applicability of the technique.

Daniel Kroening, Matt Lewis, Georg Weissenbacher

Verifying Parameterized Timed Security Protocols

Quantitative timing is often explicitly used in systems for better security; e.g., the credentials for automatic website logon often have a limited lifetime. Verifying timing-relevant security protocols in these systems is very challenging, as timing adds another dimension of complexity compared with untimed protocol verification. In our previous work, we proposed an approach to check the correctness of timed authentication in security protocols with fixed timing constraints. However, a more difficult question persists: given a particular protocol design, does the protocol have security flaws in its design, or can it be configured to be secure with proper parameter values? In this work, we answer this question by proposing a parameterized verification framework, where the quantitative parameters in the protocols can be intuitively specified as well as automatically analyzed. Given a security protocol, our verification algorithm either produces the secure constraints of the parameters, or constructs an attack that works for any parameter values. The correctness of our algorithm is formally proved. We implement our method in a tool called PTAuth and evaluate it with several security protocols. Using PTAuth, we have successfully found a previously unreported timing attack in Kerberos V.

Li Li, Jun Sun, Yang Liu, Jin Song Dong

Abstraction of Elementary Hybrid Systems by Variable Transformation

Elementary hybrid systems (EHSs) are those hybrid systems (HSs) containing elementary functions such as exp, ln, sin, cos, etc. EHSs are very common in practice, especially in safety-critical domains. Due to the non-polynomial expressions, which lead to undecidable arithmetic, verification of EHSs is very hard. Existing approaches based on partition of the state space or overapproximation of reachable sets suffer from state space explosion or inflation of numerical errors. In this paper, we propose a symbolic abstraction approach that reduces EHSs to polynomial hybrid systems (PHSs), by replacing all non-polynomial terms with newly introduced variables. Thus the verification of EHSs is reduced to that of PHSs, enabling us to apply all the well-established verification techniques and tools for PHSs to EHSs. In this way, it is possible to avoid the limitations of many existing methods. We illustrate the abstraction approach and its application in safety verification of EHSs by several real world examples.
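The variable-transformation idea can be sketched on a one-dimensional example. The system and the numerical check below are illustrative assumptions, not taken from the paper:

```python
# Illustrative sketch (hypothetical example, not from the paper): abstract
# the elementary system  dx/dt = sin(x)  by introducing u = sin(x) and
# v = cos(x). Since du/dt = cos(x)*dx/dt = v*u and dv/dt = -sin(x)*dx/dt
# = -u*u, the abstracted system (xa, u, v) is purely polynomial.
import math

def simulate(x0, steps=10000, h=1e-4):
    x = x0                                   # original elementary system
    xa, u, v = x0, math.sin(x0), math.cos(x0)  # polynomial abstraction
    for _ in range(steps):
        x += h * math.sin(x)                 # Euler step on dx/dt = sin(x)
        du, dv = v * u, -u * u               # polynomial right-hand sides
        xa += h * u
        u += h * du
        v += h * dv
    return x, xa

x, xa = simulate(1.0)
print(abs(x - xa) < 1e-3)  # the polynomial system tracks the original
```

Numerically the two trajectories coincide up to integration error, which is the soundness intuition behind replacing non-polynomial terms with fresh variables governed by polynomial dynamics.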

Jiang Liu, Naijun Zhan, Hengjun Zhao, Liang Zou

Using Real-Time Maude to Model Check Energy Consumption Behavior

Energy consumption is one of the primary non-functional concerns, especially for application programs running on systems with limited battery capacity. A model-based analysis of energy consumption is introduced at early stages of development. As rigorous formal models for this, a power consumption automaton and a variant of linear temporal logic are proposed. Detecting unexpected energy consumption is then reduced to a model checking problem, which is unfortunately undecidable in general. This paper introduces some restrictions on the logic formulas representing energy consumption properties so that an automatic analysis is possible with Real-Time Maude.

Shin Nakajima

Static Differential Program Analysis for Software-Defined Networks

Networks are increasingly controlled by software, and bad updates can bring down an entire network. Network operators therefore need tools to determine the impact of changes. To address this, we present static differential analysis of software-defined network (SDN) controller programs. Given two versions of a controller program, our tool, Chimp, builds atop Alloy to produce a set of concrete scenarios where the programs differ in their behavior. Chimp thus enables network developers to exploit the power of formal methods tools without having to be trained in formal logic or property elicitation. Furthermore, we show that there are many interesting properties that one can state about the changes themselves. Our evaluation shows that Chimp is fast, returning scenarios in under a second on several real applications.

Tim Nelson, Andrew D. Ferguson, Shriram Krishnamurthi

A Fully Verified Container Library

The comprehensive functionality and nontrivial design of realistic general-purpose container libraries pose challenges to formal verification that go beyond those of individual benchmark problems mainly targeted by the state of the art. We present our experience verifying the full functional correctness of EiffelBase2: a container library offering all the features customary in modern language frameworks, such as external iterators, and hash tables with generic mutable keys and load balancing. Verification uses the automated deductive verifier AutoProof, which we extended as part of the present work. Our results indicate that verification of a realistic container library (135 public methods, 8,400 LOC) is possible with moderate annotation overhead (1.4 lines of specification per LOC) and good performance (0.2 seconds per method on average).

Nadia Polikarpova, Julian Tschannen, Carlo A. Furia

Counterexamples for Expected Rewards

The computation of counterexamples for probabilistic systems has gained a lot of attention during the last few years. All of the proposed methods focus on the situation when the probabilities of certain events are too high. In this paper we investigate how counterexamples for properties concerning expected costs (or, equivalently, expected rewards) of events can be computed. We propose methods to extract a minimal subsystem which already leads to costs beyond the allowed bound. Besides these exact methods, we present heuristic approaches based on path search and on best-first search, which are applicable to very large systems when deriving a minimum subsystem becomes infeasible due to the system size. Experiments show that we can compute counterexamples for systems with millions of states.
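The quantity being bounded can be illustrated on a toy example. The chain and costs below are made-up, and the code computes only the expected cost itself, not the paper's counterexample extraction:

```python
# A minimal sketch of the property being checked: the expected accumulated
# cost until reaching a goal state of a Markov chain, computed by value
# iteration. The two-state chain and its costs are hypothetical examples.

def expected_cost(trans, cost, goal, iters=10000):
    """trans[s] = list of (probability, successor); the goal accrues no cost."""
    states = list(trans)
    v = {s: 0.0 for s in states}
    for _ in range(iters):
        v = {s: 0.0 if s == goal else
             cost[s] + sum(p * v[t] for p, t in trans[s])
             for s in states}
    return v

# State s0 pays 1 per step and reaches the goal with probability 1/2 each step,
# so the expected cost satisfies E = 1 + E/2, i.e. E = 2:
trans = {"s0": [(0.5, "goal"), (0.5, "s0")], "goal": []}
cost = {"s0": 1.0, "goal": 0.0}
print(expected_cost(trans, cost, "goal")["s0"])  # 2.0
```

A counterexample in the paper's sense would then be a small subsystem whose expected cost already exceeds a given bound, e.g. the self-loop on s0 alone for any bound below 2.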

Tim Quatmann, Nils Jansen, Christian Dehnert, Ralf Wimmer, Erika Ábrahám, Joost-Pieter Katoen, Bernd Becker

The Semantics of Cardinality-Based Feature Models via Formal Languages

Cardinality-based feature models provide the most expressive language among the existing feature modeling languages. We provide a reduction process, which allows us to transform a cardinality-based feature diagram to an appropriate regular expression. As for crosscutting constraints, we propose a formal language interpretation of them. In this way, we provide a formal language-based semantics for cardinality-based feature models. Accordingly, we describe a computational hierarchy of feature models, which guides us in how feature models can be constructively analyzed. We also characterize some existing analysis operations over feature models in terms of languages and discuss the corresponding decidability problems.
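The flavour of the reduction can be conveyed with a toy encoding. The sketch below maps features to letters and cardinalities to regex repetition; it is a drastic simplification of the paper's construction, and the car example is hypothetical:

```python
# Toy sketch of the reduction idea (a simplification, not the paper's full
# construction): encode each feature as a letter and a cardinality [m..n] as
# a regular-expression repetition, so a candidate product is valid iff the
# corresponding word is in the diagram's language.
import re

def cardinality_regex(feature_letter, low, high):
    return "%s{%d,%d}" % (feature_letter, low, high)

# Hypothetical diagram: a car with exactly 1 engine 'e' and 2..4 wheels 'w'.
diagram = "^" + cardinality_regex("e", 1, 1) + cardinality_regex("w", 2, 4) + "$"
print(bool(re.match(diagram, "eww")))   # True  (valid product)
print(bool(re.match(diagram, "e")))     # False (too few wheels)
```

Analysis operations then become language questions, e.g. "is the diagram void?" becomes emptiness of the language.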

Aliakbar Safilian, Tom Maibaum, Zinovy Diskin

Axiomatization of Typed First-Order Logic

This paper contributes to the theory of typed first-order logic. We present a sound and complete axiomatization for a basic typed logic, lifting restrictions imposed by previous results. As a second contribution, this paper provides complete axiomatizations for the type predicates and cast functions indispensable for reasoning about object-oriented programming languages.

Peter H. Schmitt, Mattias Ulbrich

Model-Based Problem Solving for University Timetable Validation and Improvement

Constraint satisfaction problems can be expressed very elegantly in state-based formal methods such as B. However, can such specifications be directly used for solving real-life problems? We try to answer this question in the present paper with regard to the university timetabling problem. We report on an ongoing project to build a formal model-based curriculum timetable validation tool, where we use a formal specification as the basis to validate timetables from a student's perspective and to support incremental modification of timetables. In this article we focus on expressing the problem domain, the formalization in B, and our approach to execute the formal model in a production system using ProB.

David Schneider, Michael Leuschel, Tobias Witt

Certified Reasoning with Infinity

We demonstrate how infinities improve the expressivity, power, readability, conciseness, and compositionality of a program logic. We prove that adding infinities to Presburger arithmetic enables these improvements without sacrificing decidability. We develop Omega++, a Coq-certified decision procedure for Presburger arithmetic with infinity and benchmark its performance. Both the program and proof of Omega++ are parameterized over user-selected semantics for the indeterminate terms (such as 0 * ∞).
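The parameterization over indeterminate terms can be illustrated concretely. The sketch below is an illustrative Python model, not Omega++ itself or its Coq development:

```python
# An illustrative sketch (not Omega++): multiplication over integers extended
# with infinities, parameterized -- as the abstract describes -- over a
# user-selected semantics for indeterminate terms such as 0 * inf.

INF = float("inf")

def ext_mul(a, b, zero_times_inf=0):
    """Multiply extended integers; the value of 0 * (+/-)inf is user-selected."""
    if 0 in (a, b) and INF in (abs(a), abs(b)):
        return zero_times_inf
    return a * b

print(ext_mul(3, INF))                       # inf
print(ext_mul(0, INF))                       # 0   (one chosen semantics)
print(ext_mul(0, INF, zero_times_inf=INF))   # inf (an alternative semantics)
```

Keeping the indeterminate-term semantics a parameter, rather than baking one in, is what lets both the decision procedure and its correctness proof be reused under different conventions.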

Asankhaya Sharma, Shengyi Wang, Andreea Costea, Aquinas Hobor, Wei-Ngan Chin
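The parameterization over indeterminate terms can be sketched in a few lines (a hypothetical illustration in Python, not Omega++'s Coq implementation): arithmetic over integers extended with +∞ is standard except for forms like 0 * ∞, whose value is supplied by the user.

```python
import math

INF = math.inf  # stand-in for the +infinity element of the extended domain

def ext_mul(a, b, zero_times_inf=0):
    """Multiplication on integers extended with +inf, where the semantics
    of the indeterminate form 0 * inf is user-selected, mirroring how
    Omega++ is parameterized over such choices. Negative infinity is
    omitted from this sketch."""
    if (a == 0 and b == INF) or (a == INF and b == 0):
        return zero_times_inf
    return a * b
```

Because comparisons and addition extend unambiguously (n + ∞ = ∞, n < ∞ for finite n), only the indeterminate forms need a chosen semantics, which is what keeps the extended theory decidable.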

Direct Formal Verification of Liveness Properties in Continuous and Hybrid Dynamical Systems

This paper is concerned with proof methods for the temporal property of eventuality (a type of liveness) in systems of polynomial ordinary differential equations (ODEs) evolving under constraints. This problem is of more general interest to hybrid system verification, where reasoning about temporal properties in the continuous fragment is often a bottleneck. Much of the difficulty in handling continuous systems stems from the fact that closed-form solutions to non-linear ODEs are rarely available. We present a general method for proving eventuality properties that works with the differential equations directly, without the need to compute their solutions. Our method is intuitively simple, yet much less conservative than previously reported approaches, making it highly amenable to use as a rule of inference in a formal proof calculus for hybrid systems.

Andrew Sogokon, Paul B. Jackson
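The idea of reasoning "with the differential equations directly" can be sketched as follows (a hypothetical toy in Python, not the paper's inference rule): if the derivative of a progress measure p along the vector field f is bounded below by some ε > 0 on the constraint set, then p eventually exceeds any threshold, and this can be certified without ever solving the ODE.

```python
def lie_derivative(grad_p, f, x):
    """dp/dt along the flow of dx/dt = f(x): the inner product
    <grad p(x), f(x)>. Needs only f, never a solution of the ODE."""
    return sum(g * v for g, v in zip(grad_p(x), f(x)))

# Example system: dx/dt = 1 + x^2, progress measure p(x) = x.
# Then dp/dt = 1 + x^2 >= 1 everywhere, so x eventually exceeds
# any bound -- an eventuality property certified from f directly.
f = lambda x: [1 + x[0] ** 2]
grad_p = lambda x: [1.0]

# Numeric stand-in for the symbolic positivity check: take the minimum
# of the Lie derivative over sampled points of the constraint set.
samples = [[x / 10.0] for x in range(-100, 101)]
eps = min(lie_derivative(grad_p, f, x) for x in samples)
```

A real proof calculus would establish the lower bound symbolically (e.g., by polynomial arithmetic) over the whole constraint set rather than on samples; the sampling here only illustrates the shape of the side condition.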

Rigorous Estimation of Floating-Point Round-off Errors with Symbolic Taylor Expansions

Rigorous estimation of maximum floating-point round-off errors is an important capability central to many formal verification tools. Unfortunately, available techniques for this task often provide overestimates. Also, there are no available rigorous approaches that handle transcendental functions. We have developed a new approach called Symbolic Taylor Expansions that avoids this difficulty, and implemented a new tool called FPTaylor embodying this approach. Key to our approach is the use of rigorous global optimization, instead of the more familiar interval arithmetic, affine arithmetic, and/or SMT solvers. In addition to providing far tighter upper bounds of round-off error in a vast majority of cases, FPTaylor also emits analysis certificates in the form of HOL Light proofs. We release FPTaylor along with our benchmarks for evaluation.

Alexey Solovyev, Charles Jacobsen, Zvonimir Rakamarić, Ganesh Gopalakrishnan
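The Taylor-style error model underlying such bounds can be illustrated on a tiny expression (a hypothetical first-order sketch in Python, not FPTaylor's certified analysis): each rounded operation returns (a op b)(1 + d) with |d| ≤ u, so to first order the total error of f = x*y + y is bounded by (|x*y| + |x*y + y|) * u.

```python
from fractions import Fraction

U = 2.0 ** -53  # unit round-off of IEEE 754 binary64

def bound_and_error(x, y):
    """First-order round-off bound for f = x*y + y, checked against the
    exact error computed in rational arithmetic. The intermediate terms
    |x*y| and |x*y + y| are the coefficients of the two rounding
    perturbations in the symbolic first-order expansion."""
    t = x * y   # first rounding: t = (x*y)(1 + d1), |d1| <= U
    f = t + y   # second rounding: f = (t+y)(1 + d2), |d2| <= U
    bound = (abs(x * y) + abs(x * y + y)) * U
    exact = Fraction(x) * Fraction(y) + Fraction(y)
    return bound, float(abs(Fraction(f) - exact))
```

A certified tool must additionally bound the second-order remainder (terms in d1*d2) and evaluate the coefficients rigorously; global optimization then maximizes the symbolic bound over the input domain instead of at a single point as here.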

Static Optimal Scheduling for Synchronous Data Flow Graphs with Model Checking

Synchronous data flow graphs (SDFGs) are widely used to model digital signal processing and streaming media applications. In this paper, we present exact methods for static optimal scheduling and mapping of SDFGs on a heterogeneous multiprocessor platform. The optimization criteria we consider are throughput and energy consumption, taking into account the combination of various constraints such as auto-concurrency and buffer sizes. We present a concise and flexible (priced) timed automata semantics of system models, which comprise an SDFG and a multiprocessor platform, and formulate the optimization goals as temporal logic formulas. The optimization and scheduling problems are then transformed to model checking problems, which are solved by UPPAAL (CORA). Thanks to the exhaustive exploration nature of model checking and the facilities of the tools, we obtain two Pareto-optimal schedules: one with an optimal throughput and a best energy consumption, and another with an optimal energy consumption and a best throughput. The approach is applied to two real applications with different parameters. The case studies show that our approach can deal with moderate models within reasonable execution time and reveal the impacts of different constraints on optimization goals.

Xue-Yang Zhu, Rongjie Yan, Yu-Lei Gu, Jian Zhang, Wenhui Zhang, Guangquan Zhang
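Any static schedule of a consistent SDFG must satisfy the balance equations: for each edge from actor a (producing p tokens per firing) to actor b (consuming c tokens per firing), r[a]*p = r[b]*c, where r is the repetition vector. A minimal sketch of computing r (a hypothetical helper in Python, not the paper's timed-automata encoding) shows the precondition every scheduler, including a model-checking-based one, relies on:

```python
from fractions import Fraction
from math import lcm

def repetition_vector(actors, edges):
    """Smallest positive integer firing counts balancing every edge.

    edges: list of (src, prod_rate, dst, cons_rate). Assumes the SDFG
    is connected and consistent (no check performed in this sketch)."""
    rate = {actors[0]: Fraction(1)}  # fix one actor's relative rate
    work = list(edges)
    while work:
        remaining = []
        for src, p, dst, c in work:
            if src in rate and dst not in rate:
                rate[dst] = rate[src] * p / c   # balance: r[src]*p = r[dst]*c
            elif dst in rate and src not in rate:
                rate[src] = rate[dst] * c / p
            elif src not in rate and dst not in rate:
                remaining.append((src, p, dst, c))
        work = remaining
    scale = lcm(*(r.denominator for r in rate.values()))
    return {a: int(rate[a] * scale) for a in actors}
```

One iteration of a static schedule fires each actor r[a] times; throughput and energy optimization then search over orderings and mappings of these firings.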

Industry Track


Eliminating Static Analysis False Positives Using Loop Abstraction and Bounded Model Checking

Sound static analyzers over-approximate the input program behaviour and thus imprecisely report many correct properties as potential errors (false warnings). Manual investigation of these warnings is cost-intensive and error-prone. To get an insight into the causes and explore the effectiveness of current solutions, we analyzed the code structure associated with warnings reported by sound, state-of-the-art static analyzers, Polyspace and TCS Embedded Code Analyzer, over six industrial embedded applications. We observed that most of the warnings were due to variables modified inside loops with large or unknown bounds.

While earlier techniques have suggested the use of program slicing, abstraction, and Iterative Context Extension (ICE) with Bounded Model Checking (BMC) to eliminate false warnings automatically, more recently an effective approach called loop abstraction for BMC (LABMC) has been proposed, aimed specifically at proving properties with BMC in the presence of loops with large or unknown bounds. We therefore experimentally evaluated a combination of program slicing, ICE and LABMC to enable practitioners to eliminate false warnings automatically. This combination successfully identified more than 70% of the static analysis warnings on the applications as false positives. We share the details of our approach and experimentation in this paper.

Bharti Chimdyalwar, Priyanka Darke, Anooj Chavda, Sagar Vaghani, Avriti Chauhan
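Why plain BMC fails on such loops, and how loop abstraction helps, can be sketched with a toy (a drastic, hypothetical simplification of LABMC in Python): a loop with a huge bound cannot be fully unrolled, but replacing the loop by arbitrary states satisfying a loop invariant makes the check bounded.

```python
import random

N = 10 ** 9  # loop bound far beyond any feasible BMC unrolling depth

def check_with_abstraction(samples=1000):
    """Check the assertion `acc <= 2 * N` after the (unexecutable-by-
    unrolling) loop `for i in range(N): acc += 2` -- without unrolling.
    The loop is abstracted by havocking (i, acc) to any state satisfying
    the invariant `acc == 2 * i and 0 <= i <= N`, then testing the
    assertion. A real BMC engine checks all such states symbolically;
    this sketch merely samples them."""
    random.seed(0)
    for _ in range(samples):
        i = random.randint(0, N)   # havocked loop counter
        acc = 2 * i                # constrained by the invariant
        if not acc <= 2 * N:       # the property under check
            return False
    return True
```

If the invariant is strong enough, every abstract state satisfies the assertion and the warning is discharged as a false positive; if not, the abstraction is refined, which is the role ICE plays in the combined pipeline.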

Autofunk: An Inference-Based Formal Model Generation Framework for Production Systems

In this paper, we present Autofunk, a fast and scalable framework designed at Michelin to automatically build formal models (Symbolic Transition Systems) from production messages gathered from production systems themselves. Our approach combines model-driven engineering with rule-based expert systems and human knowledge.

William Durand, Sébastien Salva
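The core inference step can be sketched in miniature (a hypothetical toy in Python, not Autofunk's actual construction): message traces observed on the shop floor are folded into a labeled transition system whose states are the observed message histories.

```python
from collections import defaultdict

def infer_lts(traces):
    """Infer a simple labeled transition system from message traces.

    traces: list of message-label sequences observed from production.
    States are named after the prefix of labels leading to them, so
    identical histories are merged -- a prefix-tree acceptor, the usual
    starting point of passive model inference."""
    transitions = defaultdict(set)
    for trace in traces:
        state = "s0"
        for label in trace:
            nxt = state + "/" + label
            transitions[state].add((label, nxt))
            state = nxt
    return dict(transitions)
```

Real frameworks then generalize this initial model, e.g., by merging states via domain rules, which is where the rule-based expert systems and human knowledge mentioned in the abstract come in.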

Software Development and Authentication for Arms Control Information Barriers

The UK-Norway initiative [1] is a joint project to investigate the technologies available for monitoring future arms control agreements. This paper describes one way in which formal methods can assist in the verification of software that is used for such a purpose.

Neil Evans

Analyzing the Restart Behavior of Industrial Control Applications

Critical infrastructure such as chemical plants, manufacturing facilities and tidal barrages is usually operated using specialized control devices. These devices are programmed in domain-specific programming languages for which static code analysis techniques are not yet widely used. This paper compares a sophisticated academic tool to a lightweight compliance-check approach with regard to the detection of programming errors that only occur after program restart. As this is a common problem in industrial control code, the paper proposes a way to improve the accuracy of analyses for this class of errors.

Stefan Hauck-Stattelmann, Sebastian Biallas, Bastian Schlich, Stefan Kowalewski, Raoul Jetley
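One restart-sensitive error pattern can be sketched as a lightweight compliance check (a hypothetical rule in Python, not the paper's actual analyses): a RETAIN-declared variable keeps its stale pre-restart value, so reading it before the program has re-validated (written) it after a restart is suspicious.

```python
def stale_retain_reads(statements, retained):
    """statements: list of ('read', var) / ('write', var) pairs in
    cyclic-program order; retained: set of RETAIN-declared variables.
    Flags RETAIN variables whose stale pre-restart value is read before
    the program writes (re-validates) them after a restart."""
    written, flagged = set(), []
    for op, var in statements:
        if op == "write":
            written.add(var)
        elif op == "read" and var in retained and var not in written:
            flagged.append(var)
    return flagged
```

A sophisticated analyzer would track such facts path-sensitively through the control program's first scan cycle; the point of the comparison in the paper is how much of this a lightweight syntactic check already catches.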

Case Study: Static Security Analysis of the Android Goldfish Kernel

In this work we present an industry-driven case study of applying static program analysis to the Android kernel. In particular, we investigate the ability of open source tools as represented by … and of commercial tools as represented by … to detect security vulnerabilities. In our case study, we explore static security checking along the dimensions of setup effort, run time, quality of results and usability for large code bases. We present the results we obtained from analyzing the Android Goldfish kernel module of around 740 kLoC of C/C++ code. Moreover, we highlight some lessons learned that might serve as guidance for future applications.

Tao Liu, Ralf Huuck

Practices for Formal Models as Documents: Evolution of VDM Application to “Mobile FeliCa” IC Chip Firmware

This paper reports on the application of VDM to the development of the third generation of firmware for the Mobile FeliCa IC chip. The practices of VDM were improved by incorporating the experience gained in the previous development. The primary focus was maintainability and understandability, as the VDM specification was used as the sole reference document for various development activities. The resulting improvements eliminated deficiencies caused by misunderstandings, while keeping costs similar to before.

Taro Kurita, Fuyuki Ishikawa, Keijiro Araki

Formal Virtual Modelling and Data Verification for Supervision Systems

This paper reports on the use of formal techniques to ensure, as far as possible, the safe decommissioning of a plant several decades after it was designed and built. A combination of supervised learning, formal modelling, model animation and model checking enabled the recovery of an almost lost specification and the design of a virtual supervision system that could be checked against recorded plant data.

Thierry Lecomte

Using Simulink Design Verifier for Automatic Generation of Requirements-Based Tests

In general, creating requirements-based tests that comply with standards is a time-consuming activity, especially in safety-critical systems, where standards can be very strict. In this paper we present a methodology for generating requirements-based tests using Simulink Design Verifier, by representing requirements as models. With this methodology we estimate a considerable reduction of the effort for creating requirements-based tests that satisfy the DO-178C standard.

Bruno Miranda, Henrique Masini, Rodrigo Reis

Formalizing the Concept Phase of Product Development

We discuss the use of formal techniques to improve the concept phase of product realisation. As an industrial application, a new concept of interventional X-ray systems has been formalized, using model checking techniques and the simulation of formal models.

Mathijs Schuts, Jozef Hooman

