
2014 | Book

NASA Formal Methods

6th International Symposium, NFM 2014, Houston, TX, USA, April 29 – May 1, 2014. Proceedings

Edited by: Julia M. Badger, Kristin Yvonne Rozier

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 6th International Symposium on NASA Formal Methods, NFM 2014, held in Houston, TX, USA, April 29 – May 1, 2014. The 20 revised regular papers presented together with 9 short papers were carefully reviewed and selected from 107 submissions. The topics include: model checking; theorem proving; static analysis; model-based development; runtime monitoring; formal approaches to fault tolerance; applications of formal methods to aerospace systems; formal analysis of cyber-physical systems, including hybrid and embedded systems; formal methods in systems engineering; modeling, requirements, and specifications; requirements generation, specification debugging, and formal validation of specifications; use of formal methods in safety cases; use of formal methods in human-machine interaction analysis; formal methods for parallel hardware implementations; use of formal methods in automated software engineering and testing; correct-by-design, design-for-verification, and property-based design techniques; techniques and algorithms for scaling formal methods, e.g., abstraction and symbolic methods, compositional techniques, and parallel and distributed techniques; and application of formal methods to emerging technologies.

Table of Contents

Frontmatter
DO-333 Certification Case Studies
Abstract
RTCA DO-333, Formal Methods Supplement to DO-178C and DO-278A, provides guidance for software developers wishing to use formal methods in the certification of airborne systems and air traffic management systems. This paper presents three case studies describing the use of different classes of formal methods to satisfy DO-178C certification objectives. The case studies examine different aspects of a common avionics example, a dual-channel Flight Guidance System (FGS), which is representative of the issues encountered in actual developments. The three case studies illustrate the use of theorem proving, model checking, and abstract interpretation. Each of these techniques has strengths and weaknesses and each could be applied to different life cycle data items and different objectives than those described here. Our purpose is to illustrate a reasonable application of each of these techniques to produce the evidence needed to satisfy certification objectives in a realistic avionics application. We hope that these case studies will be useful to industry and government personnel in understanding formal methods and the benefits they can provide.
Darren Cofer, Steven Miller
A Compositional Monitoring Framework for Hard Real-Time Systems
Abstract
Runtime monitoring of hard real-time embedded systems is a promising technique for ensuring that a running system respects its timing constraints, possibly in the presence of faults originating in the software and/or hardware. This is particularly important for real-time embedded systems made of several components that must combine different levels of criticality and different levels of correctness requirements. This paper introduces a compositional monitoring framework coupled with guarantees that include time isolation and the response time of a monitor for a predicted violation. The monitors that we propose are automatically generated by synthesizing formulas of a timed temporal logic, and their correctness is ensured by construction.
André de Matos Pedro, David Pereira, Luís Miguel Pinho, Jorge Sousa Pinto
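As a rough illustration of the kind of timing property such monitors check, the Python sketch below is a hand-written toy (an assumed example with a made-up 5 ms deadline, not one of the paper's synthesized monitors) that scans a timed trace for requests that are not acknowledged in time.

DEADLINE_MS = 5.0

def monitor(trace):
    # trace: list of (timestamp_ms, event); report the first expired request
    pending = []                               # unanswered request times
    for t, ev in trace:
        while pending and t - pending[0] > DEADLINE_MS:
            return ('violation', pending[0])   # deadline already missed
        if ev == 'req':
            pending.append(t)
        elif ev == 'ack' and pending:
            pending.pop(0)
    return None                                # no violation observed so far

print(monitor([(0, 'req'), (3, 'ack'), (4, 'req'), (12, 'req')]))  # ('violation', 4)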
Leadership Election: An Industrial SoS Application of Compositional Deadlock Verification
Abstract
In distributed computing, leadership election is used to designate, in a distributed manner, one node as the central controller (leader) of a network of nodes. The complexity of the algorithm arises because no node knows who the current leader is. After running the algorithm, however, a unique node in the network must be elected as the leader and recognized as such by the remaining nodes. In this paper, using CSP, we formalise the leadership election algorithm used by our industrial partner. Its verification is feasible only due to the use of a pattern-based strategy that allows the verification to be carried out in a fully local manner. The pattern used here is novel and a further contribution of the paper. A refinement relation together with predicate abstraction is used to describe pattern conformance. The mechanisation of the behavioural conformance check is carried out using FDR.
Pedro R. G. Antonino, Marcel Medeiros Oliveira, Augusto C. A. Sampaio, Klaus E. Kristensen, Jeremy W. Bryans
Verification of Certifying Computations through AutoCorres and Simpl
Abstract
Certifying algorithms compute not only an output, but also a witness that certifies the correctness of the output for a particular input. A checker program uses this certificate to ascertain the correctness of the output. Recent work used the verification tools VCC and Isabelle to verify checker implementations and their mathematical background theory. The checkers verified stem from the widely-used algorithms library LEDA and are written in C. The drawback of this approach is the use of two different tools. The advantage is that it could be carried out with reasonable effort in 2011. In this article, we evaluate the feasibility of performing the entire verification within Isabelle. For this purpose, we consider checkers written in the imperative languages C and Simpl. We re-verify the checker for connectedness of graphs and present a verification of the LEDA checker for non-planarity of graphs. For the checkers written in C, we translate from C to Isabelle using the AutoCorres tool set and then reason in Isabelle. For the checkers written in Simpl, Isabelle is the only tool needed. We compare the new approach with the previous approach and discuss advantages and disadvantages. We conclude that the new approach provides higher trust guarantees and it is particularly promising for checkers that require domain-specific reasoning.
Lars Noschinski, Christine Rizkallah, Kurt Mehlhorn
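To make the idea of a certifying computation concrete, here is a small Python sketch; it is an assumed illustration, not the verified LEDA/Simpl checkers discussed above. A connectedness test emits a spanning tree as its witness, and an independent checker accepts the answer only after validating that witness against the input graph.

def connected_with_witness(n, edges):
    # returns (True, spanning tree as a parent map) or (False, unreached vertex)
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    parent, stack = {0: 0}, [0]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                stack.append(w)
    missing = next((v for v in range(n) if v not in parent), None)
    return (missing is None), (parent if missing is None else missing)

def check_spanning_tree(n, edges, parent):
    # checker: every vertex reaches the root 0 along edges of the input graph
    edge_set = {frozenset(e) for e in edges}
    for v in range(n):
        seen, u = set(), v
        while u != 0:
            if u in seen or frozenset((u, parent[u])) not in edge_set:
                return False
            seen.add(u)
            u = parent[u]
    return True

ok, witness = connected_with_witness(4, [(0, 1), (1, 2), (2, 3)])
print(ok, ok and check_spanning_tree(4, [(0, 1), (1, 2), (2, 3)], witness))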
Distinguishing Sequences for Partially Specified FSMs
Abstract
Distinguishing Sequences (DSs) are used in many Finite State Machine (FSM) based test techniques. Although Partially Specified FSMs (PSFSMs) generalise FSMs, the computational complexity of constructing Adaptive and Preset DSs (ADSs/PDSs) for PSFSMs has not been addressed. This paper shows that it is possible to check the existence of an ADS in polynomial time but that the corresponding problem for PDSs is PSPACE-complete. We also report on the results of experiments with benchmarks and over 8 × 10^6 PSFSMs.
Robert M. Hierons, Uraz Cengiz Türker
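For intuition, the toy Python check below uses a hypothetical three-state, completely specified Mealy machine (an assumed example, not the paper's algorithm or benchmarks) and tests whether an input sequence is a preset distinguishing sequence, i.e., whether it draws pairwise distinct output sequences from every state.

# transitions[state][input] = (next_state, output)
transitions = {
    's0': {'a': ('s1', 0), 'b': ('s0', 1)},
    's1': {'a': ('s2', 0), 'b': ('s0', 0)},
    's2': {'a': ('s0', 1), 'b': ('s1', 0)},
}

def output_seq(state, inputs):
    outs = []
    for x in inputs:
        state, o = transitions[state][x]
        outs.append(o)
    return tuple(outs)

def is_preset_ds(inputs):
    responses = [output_seq(s, inputs) for s in transitions]
    return len(set(responses)) == len(transitions)

print(is_preset_ds(['a', 'a']))   # True: every state answers differently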
On Proving Recoverability of Smart Electrical Grids
Abstract
Smart electrical grids refer to networked systems for distributing and transporting electricity from producers to consumers, by dynamically configuring the network through remotely controlled (dis)connectors. The consumers of the grid typically have distinct priorities, e.g., a hospital and an airport have the highest priority and the street lighting has a lower priority. This means that when the electricity supply is compromised, e.g., during a storm, the highest-priority consumers should either not be affected or should be the first for whom electricity provision is recovered. In this paper, we propose a general formal model to study the provability of such a property. We have chosen Event-B as our formal framework due to its abstraction and refinement capabilities that support correct-by-construction stepwise development of models; Event-B is also tool-supported. Being able to prove various properties for such critical systems is fundamental nowadays, as our society is increasingly powered by dynamic digital solutions to traditional problems.
Seppo Horsmanheimo, Maryam Kamali, Mikko Kolehmainen, Mats Neovius, Luigia Petre, Mauno Rönkkö, Petter Sandvik
Providing Early Warnings of Specification Problems
Abstract
A formal software verification system relies upon a software engineer writing mathematically precise specifications of intended behavior. Humans often introduce defects into such specifications. Techniques and tools capable of warning about common defects can help them develop correct specifications by finding subtle issues that would permit unintended behavior. New specification-checking techniques and a tool that implements them, SpecChec, are described.
Dustin Hoffman, Aditi Tagore, Diego Zaccai, Bruce W. Weide
Mechanized, Compositional Verification of Low-Level Code
Abstract
For many safety-critical systems, termination properties are especially important in addition to functional correctness. Ideally, such properties are not only established for high-level representations of a system, but also for low-level representations.
In this paper, we therefore present a compositional semantics and a related proof calculus for possibly non-deterministic low-level languages. The calculus facilitates total correctness proofs about program representations given in a low-level language. We cope with the complexity inherent to such proofs by mechanizing the entire theory using the theorem prover Isabelle/HOL and exploiting the prover's mechanisms for constructing well-founded relations.
Björn Bartels, Nils Jähnig
Formally Verified Computation of Enclosures of Solutions of Ordinary Differential Equations
Abstract
Ordinary differential equations (ODEs) are ubiquitous when modeling continuous dynamics. Classical numerical methods compute approximations of the solution, however without any guarantees on the quality of the approximation. Nevertheless, methods have been developed that are supposed to compute enclosures of the solution.
In this paper, we demonstrate that enclosures of the solution can be verified with a high level of rigor: We implement a functional algorithm that computes enclosures of solutions of ODEs in the interactive theorem prover Isabelle/HOL, where we formally verify (and have mechanically checked) the safety of the enclosures against the existing theory of ODEs in Isabelle/HOL.
Our algorithm works with dyadic rational numbers with statically fixed precision and is based on the well-known Euler method. We abstract discretization and round-off errors in the domain of affine forms. Code can be extracted from the verified algorithm and experiments indicate that the extracted code exhibits reasonable efficiency.
Fabian Immler
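A very crude flavour of such enclosure computations, for intuition only: the Python sketch below performs interval-valued Euler steps for the assumed example x' = -x, padding every step with a hand-picked truncation-error bound. It uses plain intervals rather than the paper's verified affine forms, and the padding is an assumption, not a proved bound.

def f_interval(lo, hi):
    # interval extension of f(x) = -x: negate and swap the endpoints
    return -hi, -lo

def euler_enclosure(x0_lo, x0_hi, h=0.01, steps=100, remainder=1e-4):
    lo, hi = x0_lo, x0_hi
    for _ in range(steps):
        dlo, dhi = f_interval(lo, hi)
        lo = lo + h * dlo - remainder   # Euler step widened by the assumed
        hi = hi + h * dhi + remainder   # per-step error bound
    return lo, hi

print(euler_enclosure(0.9, 1.1))   # should enclose exp(-1) * [0.9, 1.1]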
On the Quantum Formalization of Coherent Light in HOL
Abstract
During the last decade, formal methods, in particular theorem proving, have proven to be effective analysis tools in different fields. Among them, quantum optics is a potential area for the application of theorem proving that can enhance the results of traditional analysis techniques, e.g., paper-and-pencil proofs and lab simulation. In this paper, we present the formal definition of coherent light, which is typically light produced by laser sources, using higher-order logic, and we show the effect of quantum operations on it. To this aim, we first present the formalization of the underlying mathematics, in particular finite/infinite summation over quantum states, and then prove important theorems, such as uniqueness and the effect of linear operators. Thereafter, the basic quantum states of light, called Fock states, are formalized, and many theorems are proved over such states, e.g., the effect of the quantum creation operator on Fock states. Finally, the fundamental notions of coherent light are formalized and their properties verified.
Mohamed Yousri Mahmoud, Sofiène Tahar
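For reference, the standard textbook expansion of a coherent state |α⟩ as an infinite sum over Fock states |n⟩, which is exactly the kind of summation the formalization above must handle, reads

\[ |\alpha\rangle \;=\; e^{-|\alpha|^2/2} \sum_{n=0}^{\infty} \frac{\alpha^n}{\sqrt{n!}}\, |n\rangle . \]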
Refinement Types for TLA+
Abstract
TLA+ is a specification language, mainly intended for concurrent and distributed systems. Its non-temporal fragment is based on a variant of (untyped) ZF set theory. Motivated by the integration of the TLA+ Proof System with SMT solvers or similar tools based on multi-sorted first-order logic, we define a type system for TLA+ and we prove its soundness. The system includes refinement types, which fit naturally in set theory. Combined with dependent function types, we obtain type annotations on top of an untyped specification language, getting the best of both the typed and untyped approaches. After implementing the type inference algorithm, we show that the resulting typing discipline improves the verification capabilities of the proof system.
Stephan Merz, Hernán Vanzetto
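For a rough feel of the annotations involved (a plausible set-theoretic rendering for illustration, not syntax quoted from the paper), a refinement type and a dependent function type might be written as

\[ x : \{\, n \in \mathit{Int} : n \geq 0 \,\}, \qquad f : (x : S) \rightarrow \{\, y \in T : P(x, y) \,\}. \]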
Using Lightweight Theorem Proving in an Asynchronous Systems Context
Abstract
As part of the development of a new real-time operating system, an asynchronous communication mechanism, for use between applications, has been implemented in a programming language with an advanced static type system. This mechanism is designed to provide desired properties of asynchronicity, coherency and freshness. We used the features of the type system, including linear and dependent types, to represent and partially prove that the implementation safely upheld coherency and freshness. We believe that the resulting program code forms a good example of how easily linear and dependent types can be applied in practice to prove useful properties of low-level concurrent systems programming, while leaving no traces of runtime overhead.
Matthew Danish, Hongwei Xi
JKelloy: A Proof Assistant for Relational Specifications of Java Programs
Abstract
Alloy is a relational specification language with a built-in transitive closure operator which makes it particularly suitable for writing concise specifications of linked data structures. Several tools support Alloy specifications for Java programs. However, they can only check the validity of those specifications with respect to a bounded domain, and thus, in general, cannot provide correctness proofs. This paper presents JKelloy, a tool for deductive verification of Java programs with Alloy specifications. It includes automatically-generated coupling axioms that bridge between specifications and Java states, and two sets of calculus rules that (1) generate verification conditions in relational logic and (2) simplify reasoning about them. All rules have been proved correct. To increase automation capabilities, proof strategies are introduced that control the application of those rules. Our experiments on linked lists and binary graphs show the feasibility of the approach.
Aboubakr Achraf El Ghazi, Mattias Ulbrich, Christoph Gladisch, Shmuel Tyszberowicz, Mana Taghdiri
Verifying Hybrid Systems Involving Transcendental Functions
Abstract
We explore uses of a link we have constructed between the KeYmaera hybrid systems theorem prover and the MetiTarski proof engine for problems involving special functions such as sin, cos, exp, etc. Transcendental functions arise in the specification of hybrid systems and often occur in the solutions of the differential equations that govern how the states of hybrid systems evolve over time. To date, formulas exchanged between KeYmaera and external tools have involved polynomials over the reals, but not transcendental functions, chiefly because of the lack of tools capable of proving such goals.
Paul Jackson, Andrew Sogokon, James Bridge, Lawrence Paulson
Verifying Nonpolynomial Hybrid Systems by Qualitative Abstraction and Automated Theorem Proving
Abstract
Few methods can automatically verify nonlinear hybrid systems that are modelled by nonpolynomial functions. Qualitative abstraction is a potential alternative to numerical reachability methods for formally verifying these systems. The QUANTUM abstracter is shown to be competitive at verifying several benchmark nonpolynomial hybrid systems.
William Denman
Combining PVSio with Stateflow
Abstract
An approach to integrating PVS executable specifications and Stateflow models is presented that uses web services to enable a seamless exchange of simulation events and data between PVS and Stateflow. Thus, it allows the wide range of applications developed in Stateflow to benefit from the rigor of PVS verification. The effectiveness of the approach is demonstrated on a medical device prototype, which consists of a user interface developed in PVS and a software controller implemented in Stateflow. Simulation of the prototype shows that the data produced is exchanged smoothly between PVSio and Stateflow.
Paolo Masci, Yi Zhang, Paul Jones, Patrick Oladimeji, Enrico D’Urso, Cinzia Bernardeschi, Paul Curzon, Harold Thimbleby
Qed. Computing What Remains to Be Proved
Abstract
We propose a framework for efficiently manipulating terms and formulæ in classical logic modulo theories. Qed was initially designed for the generation of proof obligations of a weakest-precondition engine for C programs inside the Frama-C framework, but it has been implemented as an independent library. Key features of Qed include on-the-fly strong normalization with respect to various theories and maximal sharing of terms in memory. Qed is also equipped with an extensible simplification engine. We illustrate the power of our framework by the implementation of non-trivial simplifications inside the Wp plug-in of Frama-C. These optimizations have been used to prove industrial, critical embedded software.
Loïc Correnson
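The "maximal sharing of terms in memory" mentioned above is essentially hash-consing. The Python sketch below is an assumed illustration of that idea only, not Qed's actual OCaml implementation: structurally equal terms are constructed once and can afterwards be compared by physical identity.

_table = {}

def mk(op, *args):
    # args are expected to be already hash-consed subterms or atomic values
    key = (op, args)
    if key not in _table:
        _table[key] = key          # canonical representative of this term
    return _table[key]

x  = mk('var', 'x')
t1 = mk('add', x, mk('const', 1))
t2 = mk('add', x, mk('const', 1))
print(t1 is t2)                    # True: equal terms share a single object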
Warps and Atomics: Beyond Barrier Synchronization in the Verification of GPU Kernels
Abstract
We describe the design and implementation of methods to support reasoning about data races in GPU kernels where constructs other than the standard barrier primitive are used for synchronization. At one extreme we consider kernels that exploit implicit, coarse-grained synchronization between threads in the same warp, a feature provided by many architectures. At the other extreme we consider kernels that reduce or avoid barrier synchronization through the use of atomic operations. We discuss design decisions associated with providing support for warps and atomics in GPUVerify, a formal verification tool for OpenCL and CUDA kernels. We evaluate the practical impact of these design decisions using a large set of benchmarks, showing that warps can be supported in a scalable manner, that a coarse abstraction suffices for efficient reasoning about most practical uses of atomic operations, and that a novel, refined abstraction captures an important design pattern where atomic operations are used to compute unique array indices. Our evaluation revealed two previously unknown bugs in publicly available benchmark suites.
Ethel Bardsley, Alastair F. Donaldson
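The "unique array indices" pattern mentioned above can be sketched as follows. This is a Python analogue for illustration only, with a lock standing in for a hardware atomic increment; it is neither CUDA/OpenCL code nor GPUVerify's abstraction. Each thread atomically claims a fresh counter value and therefore writes to its own slot of the output array, so the writes cannot race even without barrier synchronization.

import threading

counter = 0
counter_lock = threading.Lock()    # plays the role of an atomic increment
out = [None] * 8

def worker(tid):
    global counter
    with counter_lock:
        idx = counter
        counter += 1
    out[idx] = f"data from thread {tid}"   # unique idx, hence no data race

threads = [threading.Thread(target=worker, args=(t,)) for t in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(out)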
Testing-Based Compiler Validation for Synchronous Languages
Abstract
In this paper we present a novel lightweight approach to validate compilers for synchronous languages. Instead of verifying a compiler for all input programs or providing a fixed suite of regression tests, we extend the compiler to generate a test-suite with high behavioral coverage and geared towards discovery of faults for every compiled artifact. We have implemented and evaluated our approach using a compiler from Lustre to C.
Pierre-Loïc Garoche, Falk Howar, Temesghen Kahsai, Xavier Thirioux
Automated Testcase Generation for Numerical Support Functions in Embedded Systems
Abstract
We present a tool for the automatic generation of test stimuli for small numerical support functions, e.g., code for trigonometric functions, quaternions, filters, or table lookup. Our tool is based on KLEE to produce a set of test stimuli for full path coverage. We use a method of iterative deepening over abstractions to deal with floating-point values. During actual testing the stimuli exercise the code against a reference implementation. We illustrate our approach with results of experiments with low-level trigonometric functions, interpolation routines, and mathematical support functions from an open source UAS autopilot.
Johann Schumann, Stefan-Alexander Schneider
REFINER: Towards Formal Verification of Model Transformations
Abstract
We present the Refiner tool, which offers techniques to define behavioural transformations applicable to formal models of concurrent systems, to reason about semantics preservation and the preservation of safety and liveness properties under such transformations, and to apply them to models. Behavioural transformations make it possible to change the potential behaviour of systems. This is useful for model-driven development approaches, where systems are designed and created by first developing an abstract model and iteratively refining this model until it is concrete enough to automatically generate source code from it. Properties that hold for the initial model and should remain valid in later models can thus be maintained throughout development, avoiding the effort of verifying those properties over and over again. The tool integrates with the existing model checking toolsets mCRL2 and CADP, resulting in a complete model checking approach for model-driven system development.
Anton Wijs, Luc Engelen
Designing a Deadlock-Free Train Scheduler: A Model Checking Approach
Abstract
In this paper we present the approach used in the design of the scheduling kernel of an Automatic Train Supervision (ATS) system. A formal model of the railway layout and of the expected service has been used to identify all the possible critical sections of the railway layout in which a deadlock might occur. For each critical section, the prevention of the occurrence of deadlocks is achieved by constraining the set of trains allowed to occupy these sections at the same time. The identification of the critical sections and the verification of the correctness of the logic used by the ATS is carried out by exploiting a model checking verification framework locally developed at ISTI-CNR and based on the tool UMC.
Franco Mazzanti, Giorgio Oronzo Spagnolo, Alessio Ferrari
A Synthesized Algorithm for Interactive Consistency
Abstract
We revisit the interactive consistency problem introduced by Pease, Shostak and Lamport. We first show that their algorithm does not achieve interactive consistency if faults are transient, even if faults are non-malicious. We then present an algorithm that achieves interactive consistency in the presence of non-malicious, asymmetric and transient faults, but only under an additional guaranteed delayed ack assumption. We discovered our algorithm using an automated synthesis technique that is based on bounded model checking and QBF solving. Our synthesis technique is general and simple, and it is a promising approach for synthesizing distributed algorithms.
Adrià Gascón, Ashish Tiwari
Energy-Utility Quantiles
Abstract
The concept of quantiles is well-known in statistics, but its benefits for the formal quantitative analysis of probabilistic systems have been noticed only recently. To compute quantiles in Markov decision processes where the objective is a probability constraint for an until (i.e., constrained reachability) property with an upper reward bound, an iterative linear-programming (LP) approach has been proposed in a recent paper. We consider here a more general class of quantiles with probability or expectation objectives, which allows reasoning about the trade-off between costs in terms of energy and some utility measure. We show how the iterative LP approach can be adapted for these types of quantiles and propose another iterative approach that decomposes the LP to be solved into smaller ones. This algorithm has been implemented and evaluated in case studies for quantiles where the objective is a probability constraint for until properties with upper reward bounds.
Christel Baier, Marcus Daum, Clemens Dubslaff, Joachim Klein, Sascha Klüppelholz
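For orientation, one common shape of such a quantile (a plausible rendering, not a definition quoted from the paper) is the least reward bound r for which the maximal probability of reaching the goal with accumulated reward at most r meets a given threshold p:

\[ \mathrm{qu}(p) \;=\; \min \bigl\{\, r \in \mathbb{N} \;\bigm|\; \Pr^{\max}\bigl(\Diamond^{\leq r} \mathit{goal}\bigr) \geq p \,\bigr\}. \]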
Incremental Verification of Compiler Optimizations
Abstract
Optimizations are widely used along the lifecycle of software. However, proving the equivalence between original and optimized versions is difficult. In this paper, we propose a technique to incrementally verify different versions of a program with respect to a fixed property. We exploit a safety proof of a program given by a safe inductive invariant. For each optimization, such invariants are adapted to be a valid safety proof of the optimized program (if possible). The cost of the adaptation depends on the impact of the optimization and is often less than an entire re-verification of the optimized program. We have developed a preliminary implementation of our technique in the context of Software Model Checking. Our evaluation of the technique on different classes of industrial programs and standard LLVM optimizations confirms that the optimized programs can be re-verified efficiently.
Grigory Fedyukovich, Arie Gurfinkel, Natasha Sharygina
Memory Efficient Data Structures for Explicit Verification of Timed Systems
Abstract
Timed analysis of real-time systems can be performed using continuous (symbolic) or discrete (explicit) techniques. Explicit state-space exploration can be considerably faster for models with moderately small constants, however at the expense of high memory consumption. In the setting of timed-arc Petri nets, we explore new data structures for lowering the memory used: PTries for efficiently storing configurations and time darts for a semi-symbolic description of the state space. Both methods are implemented as part of the tool TAPAAL, and the experiments document at least one order of magnitude of memory savings while preserving comparable verification times.
Peter Gjøl Jensen, Kim Guldstrand Larsen, Jiří Srba, Mathias Grund Sørensen, Jakob Haar Taankvist
The Gradual Verifier
Abstract
Static verification traditionally produces yes/no answers. It either provides a proof that a piece of code meets a property, or a counterexample showing that the property can be violated. Hence, the progress of static verification is hard to measure. Unlike in testing, where coverage metrics can be used to track progress, static verification does not provide any intermediate result until the proof of correctness can be computed. This is particularly problematic because of the inevitable incompleteness of static verifiers.
To overcome this, we propose a gradual verification approach, GraVy. For a given piece of Java code, GraVy partitions the statements into those that are unreachable, or from which exceptional termination is impossible, inevitable, or possible. Further analysis can then focus on the latter case. That is, even though some statements may still terminate exceptionally, GraVy still computes a partial result. This allows us to measure the progress of static verification. We present an implementation of GraVy and evaluate it on several open source projects.
Stephan Arlt, Cindy Rubio-González, Philipp Rümmer, Martin Schäf, Natarajan Shankar
Synthesizing Predicates from Abstract Domain Losses
Abstract
Numeric abstract domains are key to many verification problems. Their ability to scale hinges on using convex approximations of the possible variable valuations. In some cases, this approximation is too coarse to verify certain verification conditions, namely those that require disjunctive invariants. A common approach to infer disjunctive invariants is to track a set of states. However, this easily leads to scalability problems. In this work, we propose to augment a numeric analysis with an abstract domain of predicates. Predicates are synthesized whenever an abstract domain loses precision due to convexity. The predicate domain is able to recover this loss at a later stage by re-applying the synthesized predicates to the numeric abstract domain. This symbiosis combines the ability of numeric domains to compactly summarize states with the ability of predicate abstraction to express disjunctive invariants and non-convex spaces. We further show how predicates can be used as a tool for communication between several numeric domains.
Bogdan Mihaila, Axel Simon
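A toy view of the precision loss being addressed, using plain intervals (an assumed example, not the paper's domains or synthesis rules): the convex join of two intervals swallows the gap between them, and a synthesized predicate can record the non-convex shape so it can be re-applied later.

def join(a, b):
    # interval join: the convex hull of two intervals (lo, hi)
    return (min(a[0], b[0]), max(a[1], b[1]))

left, right = (0, 2), (8, 10)
hull = join(left, right)           # (0, 10): the gap (2, 8) is now included
# a synthesized predicate remembering the non-convex shape that was lost
pred = lambda x: left[0] <= x <= left[1] or right[0] <= x <= right[1]
print(hull, pred(5))               # (0, 10) False -- 5 was never reachable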
Formal Verification of kLIBC with the WP Frama-C Plug-in
Abstract
This paper presents our results in the formal verification of kLIBC, a minimalistic C library, using the Frama-C/WP tool. We report how we were able to completely verify a significant number of functions from <string.h> and <stdio.h>. We discuss the difficulties encountered and describe in detail a problem in the implementation of common <string.h> functions, for which we suggest alternative implementations. Our work shows that it is already viable today to verify low-level C code with heavy usage of pointers. Although the properties proved tend to be shallower as the code becomes more low-level, it is our view that this is an important direction towards real-world software verification, which cannot be attained by focusing on deep properties of cleaner code written specifically to be verified.
Nuno Carvalho, Cristiano da Silva Sousa, Jorge Sousa Pinto, Aaron Tomb
Backmatter
Metadata
Title
NASA Formal Methods
Edited by
Julia M. Badger
Kristin Yvonne Rozier
Copyright Year
2014
Publisher
Springer International Publishing
Electronic ISBN
978-3-319-06200-6
Print ISBN
978-3-319-06199-3
DOI
https://doi.org/10.1007/978-3-319-06200-6
