
2017 | Book

NASA Formal Methods

9th International Symposium, NFM 2017, Moffett Field, CA, USA, May 16-18, 2017, Proceedings


About this Book

This book constitutes the proceedings of the 9th International Symposium on NASA Formal Methods, NFM 2017, held in Moffett Field, CA, USA, in May 2017.
The 23 full and 8 short papers presented in this volume were carefully reviewed and selected from 77 submissions. The papers focus on formal techniques and other approaches for software assurance, their theory, current capabilities and limitations, as well as their potential application to aerospace, robotics, and other NASA-relevant safety-critical systems during all stages of the software life-cycle.

Table of Contents

Frontmatter
An Automata-Theoretic Approach to Modeling Systems and Specifications over Infinite Data

Data-parameterized systems model finite state systems over an infinite data domain. VLTL is an extension of LTL that uses variables in order to specify properties of computations over infinite data, and as such VLTL is suitable for specifying properties of data-parameterized systems. We present Alternating Variable Büchi Word Automata (AVBWs), a new model of automata over infinite alphabets, capable of modeling a significant fragment of VLTL. While alternating and non-deterministic Büchi automata over finite alphabets have the same expressive power, we show that this is not the case for infinite data domains, as we prove that AVBWs are strictly stronger than the previously defined Non-deterministic Variable Büchi Word Automata (NVBWs). However, while the emptiness problem is easy for NVBWs, it is undecidable for AVBWs. We present an algorithm for translating AVBWs to NVBWs in cases where such a translation is possible. Additionally, we characterize the structure of AVBWs that can be translated to NVBWs with our algorithm, and identify fragments of VLTL for which a direct NVBW construction exists. Since the emptiness problem is crucial in the automata-theoretic approach to model checking, our results give rise to a model-checking algorithm for a rich fragment of VLTL and systems over infinite data domains.

Hadar Frenkel, Orna Grumberg, Sarai Sheinvald
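
As an illustration (our example, not taken from the paper), a typical VLTL property over an infinite data domain uses a variable to bind data values across temporal operators, e.g. "every sent message is eventually received with the same data value":

    \forall x.\; G\,(\mathit{send}(x) \rightarrow F\,\mathit{receive}(x))

Here send and receive are hypothetical parameterized propositions, and x ranges over the infinite data domain.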
Learning from Faults: Mutation Testing in Active Automata Learning

System verification is often hindered by the absence of formal models. Peled et al. proposed black-box checking as a solution to this problem. This technique applies active automata learning to infer models of systems with unknown internal structure. This kind of learning relies on conformance testing to determine whether a learned model actually represents the considered system. Since conformance testing may require the execution of a large number of tests, it is considered the main bottleneck in automata learning. In this paper, we describe a randomised conformance testing approach which we extend with fault-based test selection. To show its effectiveness we apply the approach in learning experiments and compare its performance to a well-established testing technique, the partial W-method. This evaluation demonstrates that our approach significantly reduces the cost of learning – in one experiment by a factor of more than twenty.

Bernhard K. Aichernig, Martin Tappler
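
As a rough illustration of fault-based test selection (a minimal sketch under our own simplifications, not the paper's algorithm): mutate the learned hypothesis automaton by redirecting single transitions, then keep only those random test words that distinguish ("kill") at least one mutant from the hypothesis.

    import random

    def run(dfa, word):
        # dfa = (initial state, transition dict {(state, symbol): state}, accepting set)
        init, trans, accepting = dfa
        state = init
        for sym in word:
            state = trans[(state, sym)]
        return state in accepting

    def mutants(dfa, states):
        # one mutant per redirected transition (a crude fault model)
        init, trans, accepting = dfa
        for (s, a), t in trans.items():
            for t2 in states:
                if t2 != t:
                    m = dict(trans)
                    m[(s, a)] = t2
                    yield (init, m, accepting)

    def select_tests(dfa, states, alphabet, n_random=200, max_len=8):
        alive = list(mutants(dfa, states))
        selected = []
        for _ in range(n_random):
            w = [random.choice(alphabet) for _ in range(random.randint(1, max_len))]
            killed = [m for m in alive if run(m, w) != run(dfa, w)]
            if killed:  # keep only tests that expose some injected fault
                selected.append(w)
                alive = [m for m in alive if m not in killed]
        return selected

    # toy hypothesis: words over {a, b} ending in 'a'
    hyp = (0, {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 0}, {1})
    tests = select_tests(hyp, states=[0, 1], alphabet=['a', 'b'])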
Parametric Model Checking Timed Automata Under Non-Zenoness Assumption

Real-time systems often involve hard timing constraints and concurrency, and are notoriously hard to design or verify. Given a model of a real-time system and a property, parametric model checking aims at synthesizing timing valuations such that the model satisfies the property. However, the counterexample returned by such a procedure may be Zeno (an infinite number of discrete actions occurring in a finite time), which is unrealistic. We show here that synthesizing parameter valuations such that at least one counterexample run is non-Zeno is undecidable for parametric timed automata (PTAs). Still, we propose a semi-algorithm based on a transformation of PTAs into Clock Upper Bound PTAs; it derives all such valuations whenever it terminates, and some of them otherwise.

Étienne André, Hoang Gia Nguyen, Laure Petrucci, Jun Sun
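
For intuition (our example, not the paper's): a Zeno run is one whose successive delays form a convergent series, e.g. delays of 1/2, 1/4, 1/8, ... allow infinitely many discrete actions while the total elapsed time stays bounded, since

    \sum_{i \geq 1} 2^{-i} = 1

No real system can perform infinitely many actions within one time unit, which is why such counterexamples must be excluded.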
Multi-timed Bisimulation for Distributed Timed Automata

Timed bisimulation is an important technique for reasoning about behavioral equivalence between different components of a complex real-time system. Verifying timed bisimulation is a difficult and challenging problem because the state explosion caused by both functional and timing constraints must be taken into account. Timed bisimulation was shown to be decidable for Timed Automata (TA). Distributed TA and TA with Independent Clocks (icTA) were introduced to model distributed real-time systems; they are a variant of TA with local clocks that may not run at the same rate. In this paper, we first extend the theory of Timed Labeled Transition Systems to Multi-Timed Labeled Transition Systems, and relate them through an extension of timed bisimulation to multi-timed bisimulation. We prove the decidability of multi-timed bisimulation and present an EXPTIME algorithm for deciding whether two icTA are multi-timed bisimilar, based on an extension of the standard refinement algorithm.

James Ortiz, Moussa Amrani, Pierre-Yves Schobbens
Auto-Active Proof of Red-Black Trees in SPARK

Formal program verification can guarantee that a program is free from broad classes of errors (such as reads of uninitialized data and run-time errors) and that it complies with its specification. Tools such as SPARK make it cost effective to target the former in an industrial context, but the latter is much less common in industry, owing to the cost of specifying the behavior of programs and, even more so, the cost of proving such specifications. In SPARK, we have chosen to rely on auto-active verification to provide cost-effective formal verification of functional properties. This technique consists in providing annotations in the source code that automatic provers use to complete the proof. To demonstrate the potential of this approach, we have formally specified a library of red-black trees in SPARK and proved its functional correctness using auto-active verification. To the best of our knowledge, this is the most complex use of auto-active verification so far.

Claire Dross, Yannick Moy
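
SPARK annotations are written as Ada 2012 contracts (preconditions, postconditions, loop invariants); the sketch below only mimics the auto-active style with ordinary Python assertions, purely to illustrate how annotations carry the proof:

    def insert_sorted(xs, v):
        # Pre: xs is sorted in ascending order
        assert all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))
        out, i = [], 0
        while i < len(xs) and xs[i] <= v:
            out.append(xs[i])
            i += 1
            # Loop invariant: out is the processed prefix, and all of it is <= v
            assert out == xs[:i] and all(x <= v for x in out)
        result = out + [v] + xs[i:]
        # Post: result is sorted and is a permutation of xs plus v
        assert all(result[j] <= result[j + 1] for j in range(len(result) - 1))
        assert sorted(result) == sorted(xs + [v])
        return result

In SPARK, such annotations would be discharged statically by automatic provers rather than checked at runtime.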
Analysing Security Protocols Using Refinement in iUML-B

We propose a general approach based on abstraction and refinement for constructing and analysing security protocols using formal specification and verification. We use class diagrams to specify conceptual system entities and their relationships. We use state-machines to model the protocol execution involving the entities’ interactions. Features of our approach include specifying security principles as invariants of some abstract model of the overall system. The specification is then refined to introduce implementable mechanisms for the protocol. A gluing invariant specifies why the protocol achieves the security principle. Security breaches arise as violations of the gluing invariant. We make use of both theorem proving and model checking techniques to analyse our formal model, in particular, to explore the source and consequence of the security attack. To demonstrate the use of our approach we explore the mechanism of a security attack in a network protocol.

Colin Snook, Thai Son Hoang, Michael Butler
On Learning Sparse Boolean Formulae for Explaining AI Decisions

In this paper, we consider the problem of learning Boolean formulae from examples obtained by actively querying an oracle that can label these examples as either positive or negative. This problem has received attention in both the machine learning and formal methods communities, and it has been shown to have exponential worst-case complexity in the general case as well as for many restrictions. In this paper, we focus on learning sparse Boolean formulae which depend on only a small (but unknown) subset of the overall vocabulary of atomic propositions. We propose an efficient algorithm to learn these sparse Boolean formulae with a given confidence. This assumption of sparsity is motivated by the problem of mining explanations for decisions made by artificially intelligent (AI) algorithms, where the explanation of individual decisions may depend on a small but unknown subset of all the inputs to the algorithm. We demonstrate the use of our algorithm in automatically generating explanations of these decisions. These explanations will make intelligent systems more understandable and accountable to human users, facilitate easier audits, and provide diagnostic information in the case of failure. The proposed approach treats the AI algorithm as a black-box oracle; hence, it is broadly applicable and agnostic to the specific AI algorithm. We illustrate the practical effectiveness of our approach on a diverse set of case studies.

Susmit Jha, Vasumathi Raman, Alessandro Pinto, Tuhin Sahai, Michael Francis
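
A minimal sketch of the sparsity idea (our simplification, not the paper's algorithm): if a Boolean function over n inputs depends on only a few of them, the relevant variables can be found by flipping single bits of random assignments and watching the oracle's answer change; the formula can then be read off a truth table over just those variables.

    import itertools, random

    def relevant_vars(oracle, n, trials=200, rng=random.Random(0)):
        relevant = set()
        for _ in range(trials):
            x = [rng.randint(0, 1) for _ in range(n)]
            y = oracle(x)
            for i in range(n):
                x[i] ^= 1                 # flip one bit
                if oracle(x) != y:        # output changed => variable i matters
                    relevant.add(i)
                x[i] ^= 1                 # flip it back
        return sorted(relevant)

    def learn_table(oracle, n, rel):
        # truth table over the relevant variables, all others held at 0
        table = {}
        for bits in itertools.product([0, 1], repeat=len(rel)):
            x = [0] * n
            for i, b in zip(rel, bits):
                x[i] = b
            table[bits] = oracle(x)
        return table

    # example: a "sparse" function of 3 out of 20 inputs
    f = lambda x: (x[3] and x[7]) or x[15]
    rel = relevant_vars(f, 20)            # -> [3, 7, 15] with high probability
    table = learn_table(f, 20, rel)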
Event-Based Runtime Verification of Temporal Properties Using Time Basic Petri Nets

We introduce a formal framework to provide an efficient event-based monitoring technique, and we describe its current implementation as the MahaRAJA software tool. The framework enables the quantitative runtime verification of temporal properties extracted from occurring events on Java programs. The monitor continuously evaluates the conformance of the concrete implementation with respect to its formal specification, given in terms of Time Basic Petri nets, a particular timed extension of Petri nets. The system under test is instrumented using simple Java annotations on methods to link the implementation to its formal model. This allows a separation between implementation and specification that can be used for other purposes such as formal verification, simulation, and model-based testing. The tool has been successfully used to monitor and test a number of benchmark case studies at runtime. Experiments show that our approach introduces bounded overhead and effectively reduces the runtime involvement of the monitor, using negligible auxiliary memory. A comparison with a number of state-of-the-art runtime verification tools is also presented.

Matteo Camilli, Angelo Gargantini, Patrizia Scandurra, Carlo Bellettini
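
The tool links Java methods to the Petri-net model via annotations. As a loose analogue in another language (illustration only, not the MahaRAJA API), a Python decorator can report each method call to a monitor that checks a timing constraint between two events:

    import time, functools

    class Monitor:
        def __init__(self):
            self.last = {}
        def event(self, name, deadline_from=None, within=None):
            now = time.monotonic()
            self.last[name] = now
            if deadline_from is not None:
                start = self.last.get(deadline_from)
                assert start is not None and now - start <= within, \
                    f"timing violation: {deadline_from} -> {name}"

    monitor = Monitor()

    def observed(name, **constraint):
        def wrap(f):
            @functools.wraps(f)
            def g(*args, **kwargs):
                out = f(*args, **kwargs)
                monitor.event(name, **constraint)   # report event after the call
                return out
            return g
        return wrap

    @observed("request")
    def send_request(): ...

    @observed("response", deadline_from="request", within=0.5)
    def handle_response(): ...   # must occur within 0.5 s of the request

    send_request()
    handle_response()            # raises if the 0.5 s deadline was missed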
Model-Counting Approaches for Nonlinear Numerical Constraints

Model counting is of central importance in quantitative reasoning about systems. Examples include computing the probability that a system successfully accomplishes its task without errors, and measuring, in terms of Shannon entropy, the number of bits a system leaks to an adversary. Most previous work in these areas demonstrated its analysis on programs with linear constraints, in which case model counting takes polynomial time. Model counting for nonlinear constraints is notoriously hard, and thus programs with nonlinear constraints are not well studied. This paper surveys state-of-the-art techniques and tools for model counting of SMT constraints in the bitvector theory, since this theory is decidable and can express nonlinear constraints that arise from the analysis of computer programs. We integrate these techniques within the Symbolic Pathfinder platform and evaluate them on difficult nonlinear constraints generated from the analysis of cryptographic functions.

Mateus Borges, Quoc-Sang Phan, Antonio Filieri, Corina S. Păsăreanu
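
As a baseline for intuition (our sketch, not one of the surveyed counters), an exact count can be obtained by enumerating satisfying assignments with blocking clauses via z3's Python API; this is exponential in the number of models, which is precisely why dedicated counting techniques matter:

    from z3 import BitVec, Or, Solver, sat

    x, y = BitVec('x', 8), BitVec('y', 8)
    s = Solver()
    s.add(x * x + y * y == 25)            # nonlinear; arithmetic wraps modulo 2**8

    count = 0
    while s.check() == sat:
        m = s.model()
        count += 1
        s.add(Or(x != m[x], y != m[y]))   # block the model just found
    print("models:", count)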
Input Space Partitioning to Enable Massively Parallel Proof

Real-world applications often include large, empirically defined discrete-valued functions. When proving properties about these applications, the proof naturally breaks into one case per entry in the first function reached, and again into one case per entry in the next function, and continues splitting. This splitting yields a combinatorial explosion of proof cases that challenges traditional proof approaches. While each proof case represents a mathematical path from inputs to outputs through these functions, the full set of cases is not available up front, preventing a straightforward application of parallelism. Here we describe an approach that slices the input space, creating a partition based on pre-computed mathematical paths such that each slice has only a small number of proof cases. These slices are amenable to massively parallel proof. We evaluate this approach using an example model of an adaptive cruise control, where proofs are conducted in a highly parallel PVS environment.

Ashlie B. Hocking, M. Anthony Aiello, John C. Knight, Nikos Aréchiga
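
The pattern itself is easy to sketch (names and the toy obligation below are ours, not the paper's; the paper discharges each slice with PVS): pre-compute slices of the input space so that each carries only a few proof cases, then discharge the slices on independent workers.

    from concurrent.futures import ProcessPoolExecutor

    def controller(v):
        # hypothetical stand-in for an empirically defined, discrete-valued function
        return max(-10, min(10, v // 3))

    def prove_slice(bounds):
        # stand-in proof obligation: the property holds on every input in the slice
        lo, hi = bounds
        return all(abs(controller(v)) <= 10 for v in range(lo, hi))

    slices = [(lo, lo + 1000) for lo in range(0, 100_000, 1000)]

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            assert all(pool.map(prove_slice, slices))  # each slice proved independently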
Compositional Model Checking of Interlocking Systems for Lines with Multiple Stations

In the railway domain, safety is guaranteed by an interlocking system, which translates operational decisions into commands leading to field operations. Such a system is safety critical and demands thorough formal verification during its development process. Within this context, our work has focused on extending a compositional model checking approach to formally verify interlocking system models for lines with multiple stations. The idea of the approach is to decompose a model of the interlocking system by applying cuts at the network modelling level. The paper introduces an alternative cut (the linear cut) to a previously proposed cut (the border cut). Equipped with the linear cut, the model checking approach is then applied to the verification of an interlocking system controlling a real-world multiple-station line.

Hugo Daniel Macedo, Alessandro Fantechi, Anne E. Haxthausen
Modular Model-Checking of a Byzantine Fault-Tolerant Protocol

With proof techniques like IC3 and k-induction, model-checking scales further than ever before. Still, fault-tolerant distributed systems are particularly challenging to model-check given their large state spaces and non-determinism. The typical approach to controlling complexity is to construct ad-hoc abstractions of faults, message-passing, and behaviors. However, these abstractions come at the price of divorcing the model from its implementation and making refactoring difficult. In this work, we present a model for fault-tolerant distributed system verification that combines ideas from the literature including calendar automata, symbolic fault injection, and abstract transition systems, and then use it to model-check various implementations of the Hybrid Oral Messages algorithm that differ in the fault model, timing model, and local node behavior. We show that despite being implementation-level models, the verifications are scalable and modular, insofar as isolated changes to an implementation require isolated changes to the model and proofs. This work is carried out in the SAL model-checker.

Benjamin F. Jones, Lee Pike
Improved Learning for Stochastic Timed Models by State-Merging Algorithms

The construction of faithful system models for quantitative analysis, e.g., performance evaluation, is challenging due to the inherent complexity of systems and their unknown operating conditions. To overcome such difficulties, we are interested in the automated construction of system models by learning from actual execution traces. We focus on the timing aspects of systems, which are assumed to be of a stochastic nature. In this context, we study a state-merging procedure for learning stochastic timed models, and we propose several enhancements at the level of the learned model structure and the underlying algorithms. The results obtained on different examples show a significant improvement in the timing accuracy of the learned models.

Braham Lotfi Mediouni, Ayoub Nouri, Marius Bozga, Saddek Bensalem
Verifying Safety and Persistence Properties of Hybrid Systems Using Flowpipes and Continuous Invariants

We propose a method for verifying persistence of nonlinear hybrid systems. Given some system and an initial set of states, the method can guarantee that system trajectories always eventually evolve into some specified target subset of the states of one of the discrete modes of the system, and always remain within this target region. The method also computes a time bound within which the target region is always reached. The approach combines flowpipe computation with deductive reasoning about invariants and is more general than either technique alone. We illustrate the method with a case study showing that potentially destructive stick-slip oscillations of an oil-well drill eventually die away for a certain choice of drill control parameters. The case study demonstrates how using flowpipes alone or reasoning about invariants alone can be insufficient. It also shows the richness of systems that the method can handle: the case study features a mode with non-polynomial (nonlinear) ODEs, and we manage to prove the persistence property with the aid of an automatic prover specifically designed for handling transcendental functions.

Andrew Sogokon, Paul B. Jackson, Taylor T. Johnson
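
In temporal-logic notation (our gloss, with Target standing for the specified target region), persistence is an eventually-always property:

    \Diamond\,\Box\,(x \in \mathit{Target})

together with a computed bound T such that Target is first reached within time T.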
A Relational Shape Abstract Domain

Static analyses aim at inferring semantic properties of programs. While many analyses compute an over-approximation of reachable states, some analyses compute a description of the input-output relations of programs. In the case of numeric programs, several analyses have been proposed that utilize relational numerical abstract domains to describe such relations. On the other hand, designing abstractions for relations over memory states that take shapes into account is challenging. In this paper, we propose a set of novel logical connectives to describe such relations, which are inspired by separation logic. This logic can express that certain memory areas are unchanged, freshly allocated, or freed, or that only part of the memory was modified. Using these connectives, we build an abstract domain and design a static analysis that over-approximates relations over memory states containing inductive structures. We implement this analysis and report on the analysis of a basic library of list-manipulating functions.

Hugo Illous, Matthieu Lemerre, Xavier Rival
Floating-Point Format Inference in Mixed-Precision

We address the problem of determining the minimal precision required for the inputs and intermediary results of a program containing floating-point computations, in order to ensure a desired accuracy on the outputs. The first novelty of our approach is the combination of forward and backward static analyses, performed by abstract interpretation. The backward analysis computes the minimal precision needed for the inputs and intermediary values in order to reach the accuracy on the results specified by the user. The second novelty is that our analysis is expressed as a set of constraints made of first-order predicates and affine integer relations only, even if the analyzed programs contain non-linear computations. These constraints can be easily checked by an SMT solver. The information collected by our analysis may help to optimize the formats used to represent the values stored in the floating-point variables of programs. Experimental results are presented.

Matthieu Martel
A Verification Technique for Deterministic Parallel Programs

A commonly used approach to develop parallel programs is to augment a sequential program with compiler directives that indicate which program blocks may potentially be executed in parallel. This paper develops a verification technique to prove correctness of compiler directives combined with functional correctness of the program. We propose syntax and semantics for a simple core language, capturing the main forms of deterministic parallel programs. This language distinguishes three kinds of basic blocks: parallel, vectorized and sequential blocks, which can be composed using three different composition operators: sequential, parallel and fusion composition. We show that it is sufficient to have contracts for the basic blocks to prove correctness of the compiler directives, and moreover that functional correctness of the sequential program implies correctness of the parallelized program. We formally prove correctness of our approach. In addition, we define a widely-used subset of OpenMP that can be encoded into our core language, thus effectively enabling the verification of OpenMP compiler directives, and we discuss automated tool support for this verification process.

Saeed Darabi, Stefan C. C. Blom, Marieke Huisman
Systematic Predicate Abstraction Using Variable Roles

Heuristics for discovering predicates for abstraction are an essential part of software model checkers. Picking the right predicates affects the runtime of a model checker, or determines whether a model checker is able to solve a verification task at all. In this paper we present a method to systematically specify heuristics for generating program-specific abstractions. The heuristics can be used to generate initial abstractions, and to guide abstraction refinement through templates provided for Craig interpolation. We describe the heuristics using variable roles, which allow us to pick domain-specific predicates according to the program under analysis. Variable roles identify typical variable usage patterns and can be computed using lightweight static analysis, for instance with the help of off-the-shelf logic programming engines. We implemented a prototype tool which extracts initial predicates and templates for C programs and passes them to the Eldarica model checker in the form of source code annotations. For evaluation, we defined a set of heuristics motivated by Eldarica's previous built-in heuristics and typical verification benchmarks from the literature and SV-COMP. We evaluate our approach on a set of more than 500 programs, and observe an overall increase in the number of solved tasks by 11.2%, as well as significant speedups on certain benchmark families.

Yulia Demyanova, Philipp Rümmer, Florian Zuleger
specgen: A Tool for Modeling Statecharts in CSP

We present specgen, a tool for translating statecharts to the Communicating Sequential Processes language (CSP), where they may be explored and verified using FDR, the CSP model checker. We build on earlier algorithms for translating statecharts to CSP by supporting additional features, simplifying the generated models, and implementing a practical tool for statecharts built in Enterprise Architect, a commercially available modeling environment. We demonstrate the tool on a standard example.

Brandon Shapiro, Chris Casinghino
HyPro: A C++ Library of State Set Representations for Hybrid Systems Reachability Analysis

In this tool paper we introduce HyPro, our free and open-source C++ programming library, which offers implementations for the most prominent state set representations used by flowpipe-construction-based reachability analysis techniques for hybrid systems.

Stefan Schupp, Erika Ábrahám, Ibtissem Ben Makhlouf, Stefan Kowalewski
Asm2C++: A Tool for Code Generation from Abstract State Machines to Arduino

This paper presents Asm2C++, a tool that automatically generates executable C++ code for Arduino from a formal specification given as Abstract State Machines (ASMs). The code generation process follows the model-driven engineering approach, where the code is obtained from a formal abstract model by applying certain transformation rules. The translation process is highly configurable in order to correctly integrate with the underlying hardware. The advantage of the Asm2C++ tool is that it is part of the Asmeta framework, which makes it possible to analyze, verify, and validate the correctness of a formal model.

Silvia Bonfanti, Marco Carissoni, Angelo Gargantini, Atif Mashkoor
SPEN: A Solver for Separation Logic

Spen is a solver for a fragment of separation logic (SL) with inductively defined predicates covering both (nested) list structures and various kinds of trees, possibly extended with data. The main functionalities of Spen are deciding the satisfiability of a formula and the validity of an entailment between two formulas, which are essential for the verification of heap-manipulating programs. The solver also provides models for satisfiable formulas and diagnoses for invalid entailments. Spen combines several concepts in a modular way, such as boolean abstractions of SL formulas, SAT and SMT solving, and tree automata membership testing. The solver has been successfully applied to a rather large benchmark of various problems originating from program verification tools.

Constantin Enea, Ondřej Lengál, Mihaela Sighireanu, Tomáš Vojnar
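
A standard example of the kind of entailment such solvers decide (illustrative, with ls denoting an acyclic list segment and * the separating conjunction): two disjoint list segments compose into one,

    \mathit{ls}(x, y) * \mathit{ls}(y, \mathit{nil}) \models \mathit{ls}(x, \mathit{nil})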
From Hazard Analysis to Hazard Mitigation Planning: The Automated Driving Case

Vehicle safety depends on (a) the range of identified hazards and (b) the operational situations for which mitigations of these hazards acceptably decrease risk. Moreover, with an increasing degree of autonomy, risk ownership is likely to increase for vendors towards regulatory certification. Hence, highly automated vehicles have to be equipped with verified controllers capable of reliably identifying and mitigating hazards in all possible operational situations. To this end, available methods for the design and verification of automated vehicle controllers have to be supported by models for hazard analysis and mitigation. In this paper, we describe (1) a framework for the analysis and design of planners (i.e., high-level controllers) capable of run-time hazard identification and mitigation, (2) an incremental algorithm for constructing planning models from hazard analysis, and (3) an exemplary application to the design of a fail-operational controller based on a given control system architecture. Our approach equips the safety engineer with concepts and steps to (2a) elaborate scenarios of endangerment and (2b) design operational strategies for mitigating such scenarios.

Mario Gleirscher, Stefan Kugele
Event-B at Work: Some Lessons Learnt from an Application to a Robot Anti-collision Function

The technical and academic aspects of the Event-B method, and abstract descriptions of its application in industrial contexts, are the subjects of numerous publications. In this paper, we describe the experience of development engineers unfamiliar with Event-B getting to grips with this method. We describe in detail how we used the formalism, the refinement method, and its supporting toolset to develop the simple anti-collision function embedded in a small rolling robot. We show how the model has been developed from a set of high-level requirements and refined down to the software specification. For each phase of the development, we explain how we used the method, expose the encountered difficulties, and draw some practical lessons from this experiment.

Arnaud Dieumegard, Ning Ge, Eric Jenn
Reasoning About Safety-Critical Information Flow Between Pilot and Computer

This paper develops a dynamic logic for reasoning about safety-critical information flow among humans and computers. The logic advances previous efforts to develop logics of agent knowledge, which make assumptions that are too strong for realistic human agents. We introduce Dynamic Agent Safety Logic (DASL), based on Dynamic Epistemic Logic (DEL), with extensions to account for safe actions, belief, and the logical relationships among knowledge, belief, and safe action. With this logic we can infer which safety-critical information a pilot is missing when executing an unsafe action. We apply the logic to the Air France 447 incident as a case study and provide a mechanization of the case study in the Coq proof assistant.

Seth Ahrenbach
Compositional Falsification of Cyber-Physical Systems with Machine Learning Components

Cyber-physical systems (CPS), such as automotive systems, are starting to include sophisticated machine learning (ML) components. Their correctness, therefore, depends on properties of the inner ML modules. While learning algorithms aim to generalize from examples, they are only as good as the examples provided, and recent efforts have shown that they can produce inconsistent output under small adversarial perturbations. This raises the question: can the output from learning components lead to a failure of the entire CPS? In this work, we address this question by formulating it as a problem of falsifying signal temporal logic (STL) specifications for CPS with ML components. We propose a compositional falsification framework where a temporal logic falsifier and a machine learning analyzer cooperate with the aim of finding falsifying executions of the considered model. The efficacy of the proposed technique is shown on an automatic emergency braking system model with a perception component based on deep neural networks.

Tommaso Dreossi, Alexandre Donzé, Sanjit A. Seshia
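
For concreteness (our illustration; the symbols are hypothetical), a falsifier for an emergency braking system might search for executions violating an STL safety requirement such as "within the time horizon T, the distance to the lead vehicle always exceeds a safe threshold":

    \Box_{[0,T]}\,\big(\mathit{dist} > d_{\mathit{safe}}\big)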
Verifying a Class of Certifying Distributed Programs

A certifying program produces, in addition to each output, a witness that certifies the output's correctness. An accompanying checker program checks whether the computed witness is correct. Such a checker is usually simpler than the original program, and its verification is often feasible where verification of the original program is too costly. By verifying the checker and by giving a machine-checked proof that the witness certifies the output's correctness, we get formal instance correctness, i.e., a machine-checked proof that a particular input-output pair is correct. This verification method has been demonstrated on sequential programs. In contrast, we are concerned with the correctness of distributed programs, which behave fundamentally differently. In this paper, we present a verification method to obtain formal instance correctness for one class of certifying distributed programs. Moreover, we demonstrate our method on the leader election problem using the theorem prover Coq.

Kim Völlinger, Samira Akili
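
The certifying-program idea is easy to see on a toy sequential example (ours; the paper targets distributed programs such as leader election): the program emits a witness with each output, and a much simpler checker validates the pair.

    def max_with_witness(xs):
        # certifying variant of max: return the output plus a witness (an index)
        i = max(range(len(xs)), key=xs.__getitem__)
        return xs[i], i

    def checker(xs, out, i):
        # far simpler than the program itself, hence cheaper to verify formally
        return 0 <= i < len(xs) and xs[i] == out and all(v <= out for v in xs)

    xs = [3, 17, 5]
    out, w = max_with_witness(xs)
    assert checker(xs, out, w)   # instance correctness for this input-output pair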
Compact Proof Witnesses

Proof witnesses are proof artifacts showing correctness of programs with respect to safety properties. The recent past has seen a rising interest in witnesses as (a) proofs in a proof-carrying-code context, (b) certificates for the correct functioning of verification tools, or simply (c) exchange formats for (partial) verification results. As witnesses in all these scenarios need to be stored and processed, they are required to be as small as possible. However, software verification tools – the prime suppliers of witnesses – do not necessarily construct small witnesses. In this paper, we present a formal account of proof witnesses. We introduce the concept of weakenings, reducing the complexity of proof witnesses while preserving the ability to witness safety. We develop a weakening technique for a specific class of program analyses, and prove it to be sound. Finally, we experimentally demonstrate that our weakening technique indeed achieves a size reduction of proof witnesses.

Marie-Christine Jakobs, Heike Wehrheim
Qualification of a Model Checker for Avionics Software Verification

Formal methods tools have been shown to be effective at finding defects in safety-critical systems, including avionics systems in commercial aircraft. The publication of DO-178C and the accompanying formal methods supplement DO-333 provide guidance for aircraft manufacturers and equipment suppliers who wish to obtain certification credit for the use of formal methods for software development and verification. However, there are still a number of issues that must be addressed before formal methods tools can be injected into the design process for avionics systems. DO-178C requires that a tool used to meet certification objectives be qualified to demonstrate that its output can be trusted. The qualification of formal methods tools is a relatively new concept presenting unique challenges for both formal methods researchers and software developers in the aerospace industry. This paper presents the results of a recent project studying the qualification of formal methods tools. We have identified potential obstacles to their qualification and proposed mitigation strategies. We have conducted two case studies based on different qualification approaches for an open source formal verification tool, the Kind 2 model checker. The first case study produced a qualification package for Kind 2. The second demonstrates the feasibility of independently verifying the output of Kind 2 through the generation of proof certificates and verifying these certificates with a qualified proof checker, in lieu of qualifying the model checker itself.

Lucas Wagner, Alain Mebsout, Cesare Tinelli, Darren Cofer, Konrad Slind
SpeAR v2.0: Formalized Past LTL Specification and Analysis of Requirements

This paper describes current progress on SpeAR, a novel tool for capturing and analyzing requirements in a domain specific language designed to read like natural language. Using SpeAR, systems engineers capture requirements, environmental assumptions, and critical system properties using the formal semantics of Past LTL. SpeAR analyzes requirements for logical consistency and uses model checking to prove that assumptions and requirements entail stated properties. These analyses build confidence in the correctness of the formally captured requirements.

Aaron W. Fifarek, Lucas G. Wagner, Jonathan A. Hoffman, Benjamin D. Rodes, M. Anthony Aiello, Jennifer A. Davis
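
An illustrative Past LTL requirement of the kind SpeAR captures (our example, with hypothetical propositions), using the "once" operator O: an alarm may only be raised if a fault has occurred at some earlier point,

    \Box\,(\mathit{alarm} \rightarrow \mathbf{O}\,\mathit{fault})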
Just Formal Enough? Automated Analysis of EARS Requirements

EARS is a technique used by Rolls-Royce and many other organizations around the world to capture requirements in natural language in a precise manner. In this paper we describe the EARS-CTRL tool for writing and analyzing EARS requirements for controllers. We provide two levels of analysis of requirements written in EARS-CTRL: firstly, our editor uses projectional editing and typing (based on a glossary of controller terms) to ensure, as far as possible, well-formedness of the requirements by construction; secondly, we use a controller synthesis tool to check whether a set of EARS-CTRL requirements is realizable as an actual controller. In the positive case, the tool synthesizes and displays the controller as a synchronous dataflow diagram. This information can be used to examine the specified behavior and to iteratively correct, improve or complete a set of EARS-CTRL requirements.

Levi Lúcio, Salman Rahman, Chih-Hong Cheng, Alistair Mavin
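
For readers unfamiliar with EARS, its published templates restrict each requirement to a handful of sentence patterns, for example:

    Ubiquitous:         The <system> shall <response>.
    Event-driven:       When <trigger>, the <system> shall <response>.
    State-driven:       While <state>, the <system> shall <response>.
    Unwanted behaviour: If <trigger>, then the <system> shall <response>.

An event-driven controller requirement (our illustrative wording) could read: "When the stop button is pressed, the controller shall switch the motor off within 500 ms."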
Backmatter
Metadata
Title
NASA Formal Methods
Edited by
Clark Barrett
Misty Davies
Temesghen Kahsai
Copyright Year
2017
Electronic ISBN
978-3-319-57288-8
Print ISBN
978-3-319-57287-1
DOI
https://doi.org/10.1007/978-3-319-57288-8
