
2018 | Book

NASA Formal Methods

10th International Symposium, NFM 2018, Newport News, VA, USA, April 17-19, 2018, Proceedings


About this book

This book constitutes the proceedings of the 10th International Symposium on NASA Formal Methods, NFM 2018, held in Newport News, VA, USA, in April 2018.
The 24 full and 7 short papers presented in this volume were carefully reviewed and selected from 92 submissions. The papers focus on formal techniques and other approaches for software assurance, their theory, current capabilities and limitations, as well as their potential application to aerospace, robotics, and other NASA-relevant safety-critical systems during all stages of the software life-cycle.

Table of Contents

Frontmatter
Incremental Construction of Realizable Choreographies

This paper proposes a correct-by-construction method to build realizable choreographies described using conversation protocols (CPs). We define a new language consisting of a set of operators for the incremental construction of CPs. We suggest an asynchronous model described with the Event-B method and its refinement strategy, ensuring the scalability of our approach.

Sarah Benyagoub, Meriem Ouederni, Yamine Aït-Ameur, Atif Mashkoor
Formal Assurance for Cooperative Intelligent Autonomous Agents

Developing trust in intelligent agents requires understanding the full capabilities of the agent, including the boundaries beyond which the agent is not designed to operate. This paper focuses on applying formal verification methods to identify these boundary conditions in order to ensure the proper design for the effective operation of the human-agent team. The approach involves creating an executable specification of the human-machine interaction in a cognitive architecture, which incorporates the expression of learning behavior. The model is then translated into a formal language, where verification and validation activities can occur in an automated fashion. We illustrate our approach through the design of an intelligent copilot that teams with a human in a takeoff operation, while a contingency scenario involving an engine-out is potentially executed. The formal verification and counterexample generation enables increased confidence in the designed procedures and behavior of the intelligent copilot system.

Siddhartha Bhattacharyya, Thomas C. Eskridge, Natasha A. Neogi, Marco Carvalho, Milton Stafford
Ghosts for Lists: A Critical Module of Contiki Verified in Frama-C

Internet of Things (IoT) applications are becoming increasingly critical and require rigorous formal verification. In this paper we target Contiki, a widely used open-source OS for IoT, and present a verification case study of one of its most critical modules: that of linked lists. Its API and list representation differ from the classical linked list implementations, and are particularly challenging for deductive verification. The proposed verification technique relies on a parallel view of a list through a companion ghost array. This approach makes it possible to perform most proofs automatically using the Frama-C/WP tool, with only a small number of auxiliary lemmas proved interactively in the Coq proof assistant. We present an elegant segment-based reasoning over the companion array developed for the proof. Finally, we validate the proposed specification by using it to prove a few functions that manipulate lists.

Allan Blanchard, Nikolai Kosmatov, Frédéric Loulergue
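The paper itself targets Contiki's C lists with ACSL annotations checked by Frama-C/WP; as a language-neutral illustration of the companion-array idea, the following Java sketch (all names hypothetical) builds the array view of a linked list and states the linking invariant over that view, which is what enables segment-based reasoning instead of inductive pointer predicates.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical Java analogue of the "companion ghost array" view of a list.
    final class Node {
        int value;
        Node next;
        Node(int value) { this.value = value; }
    }

    final class GhostListView {
        // Build the companion array: the nodes of the list in link order.
        static List<Node> companionArray(Node head) {
            List<Node> view = new ArrayList<>();
            for (Node n = head; n != null; n = n.next) {
                view.add(n);
            }
            return view;
        }

        // The linking invariant expressed over the array view: each element
        // points to the next one, and the last element terminates the list.
        static boolean wellLinked(List<Node> view) {
            for (int i = 0; i + 1 < view.size(); i++) {
                if (view.get(i).next != view.get(i + 1)) return false;
            }
            return view.isEmpty() || view.get(view.size() - 1).next == null;
        }
    }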
An Executable Formal Framework for Safety-Critical Human Multitasking

When a person is concurrently interacting with different systems, the amount of cognitive resources required (cognitive load) could be too high and might prevent some tasks from being completed. When such human multitasking involves safety-critical tasks, for example in an airplane, a spacecraft, or a car, failure to devote sufficient attention to the different tasks could have serious consequences. To study this problem, we define an executable formal model of human attention and multitasking in Real-Time Maude. It includes a description of the human working memory and the cognitive processes involved in the interaction with a device. Our framework enables us to analyze human multitasking through simulation, reachability analysis, and LTL and timed CTL model checking, and we show how a number of prototypical multitasking problems can be analyzed in Real-Time Maude. We illustrate our modeling and analysis framework by studying the interaction with a GPS navigation system while driving, and apply model checking to show that in some cases the cognitive load of the navigation system could cause the driver to keep the focus away from driving for too long.

Giovanna Broccia, Paolo Milazzo, Peter Csaba Ölveczky
Simpler Specifications and Easier Proofs of Distributed Algorithms Using History Variables

This paper studies specifications and proofs of distributed algorithms when only message history variables are used, using Basic Paxos and Multi-Paxos for distributed consensus as precise case studies. We show that not using and maintaining other state variables yields simpler specifications that are more declarative and easier to understand. It also allows easier proofs to be developed by requiring fewer invariants and facilitating proof derivations. Furthermore, the proofs are mechanically checked more efficiently. We show that specifications in TLA+ and proofs in the TLA+ Proof System (TLAPS) are reduced by 25% and 27%, respectively, for Basic Paxos, and 46% (from about 100 lines to about 50 lines) and 48% (from about 1000 lines to about 500 lines), respectively, for Multi-Paxos. Overall we need 54% fewer manually written invariants and our proofs have 46% fewer obligations. Our proof for Basic Paxos takes 26% less time than Lamport et al.'s for TLAPS to check, and our proofs for Multi-Paxos are checked by TLAPS within 1.5 min, whereas prior proofs for Multi-Paxos fail to be checked in the new version of TLAPS.

Saksham Chand, Yanhong A. Liu
Don’t Miss the End: Preventing Unsafe End-of-File Comparisons

Reading from an InputStream or Reader in Java either returns the read byte/character or -1 if the end-of-file (EOF) has been reached. To support the additional -1 as return value, the read methods return an int. For correct usage, the return value should be compared to -1 before being converted to byte or char. If the conversion was performed before the comparison, it can cause a read-until-EOF loop to either exit prematurely or be stuck in an infinite loop. The SEI CERT Oracle Coding Standard for Java rule FIO08-J "Distinguish between characters or bytes read from a stream and -1" describes this issue in detail. This paper presents a type system that prevents unsafe EOF value comparisons statically and is implemented for Java using the Checker Framework. In an evaluation of 35 projects (9 million LOC) it detected 3 defects in production software, 8 bad coding practices, and no false positives. The overall annotation effort is very low. Overrides for the read methods needed to be annotated, requiring a total of 44 annotations. Additionally, 3 annotations for fields and method parameters needed to be added. To the best of our knowledge this is the first open source tool to prevent this security issue.

Charles Zhuo Chen, Werner Dietl
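A minimal Java sketch of the coding pattern behind rule FIO08-J, contrasting the unsafe conversion-before-comparison loop with the safe one (method names are hypothetical):

    import java.io.IOException;
    import java.io.InputStream;

    final class EofExamples {
        // Unsafe: the int result is narrowed to byte before the -1 check, so the
        // loop exits early when the stream contains the byte value 0xFF (which
        // narrows to -1); with a char conversion it would never exit instead.
        static void unsafeCopy(InputStream in) throws IOException {
            byte data = (byte) in.read();
            while (data != -1) {
                // ... use data ...
                data = (byte) in.read();
            }
        }

        // Safe: compare the int return value against -1 first, convert afterwards.
        static void safeCopy(InputStream in) throws IOException {
            int result;
            while ((result = in.read()) != -1) {
                byte data = (byte) result;  // conversion only after the EOF check
                // ... use data ...
            }
        }
    }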
An Efficient Rewriting Framework for Trace Coverage of Symmetric Systems

Verification coverage is an important metric in any hardware verification effort. Coverage models are proposed as a set of events the hardware may exhibit, intended to be possible under a test scenario. At the system level, these events each correspond to a visited state or taken transition in a transition system that represents the underlying hardware. A more sophisticated approach is to check that tests exercise specific sequences of events, corresponding to traces through the transition system. However, such trace-based coverage models are inherently expensive to consider in practice, as the number of traces is exponential in trace length. We present a novel framework that combines conservative abstraction with rewriting to construct a concise trace-based coverage model of a class of parameterized symmetric systems. First, we leverage both symmetry and rewriting to construct abstractions that can be tailored through user-defined rewriting. Then, under this abstraction, a coverage model for a larger system can be generated from traces for a smaller system. This coverage model is of tractable size, is tractable to generate, and can be used to identify coverage holes in large systems. Our experiments on the cache coherence protocol implementation from the multi-billion-transistor IBM POWER™ Processor demonstrate the viability and effectiveness of this approach.

Flavio M. De Paula, Arvind Haran, Brad Bingham
Verification of Fault-Tolerant Protocols with Sally

Sally is a model checker for infinite-state systems that implements several verification algorithms, including a variant of IC3/PDR called Property-Directed K-induction. We present an application of Sally to automated verification of fault-tolerant distributed algorithms.

Bruno Dutertre, Dejan Jovanović, Jorge A. Navas
Output Range Analysis for Deep Feedforward Neural Networks

Given a neural network (NN) and a set of possible inputs to the network described by polyhedral constraints, we aim to compute a safe over-approximation of the set of possible output values. This operation is a fundamental primitive enabling the formal analysis of neural networks that are extensively used in a variety of machine learning tasks such as perception and control of autonomous systems. Increasingly, they are deployed in high-assurance applications, leading to a compelling use case for formal verification approaches. In this paper, we present an efficient range estimation algorithm that iterates between an expensive global combinatorial search using mixed-integer linear programming problems, and a relatively inexpensive local optimization that repeatedly seeks a local optimum of the function represented by the NN. We implement our approach and compare it with Reluplex, a recently proposed solver for deep neural networks. We demonstrate applications of our approach to computing flowpipes for neural network-based feedback controllers. We show that the use of local search in conjunction with mixed-integer linear programming solvers effectively reduces the combinatorial search over possible combinations of active neurons in the network by pruning away suboptimal nodes.

Souradeep Dutta, Susmit Jha, Sriram Sankaranarayanan, Ashish Tiwari
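The Java sketch below is not the paper's MILP-plus-local-search algorithm; it only illustrates the simpler baseline of propagating input intervals through a dense ReLU layer to obtain a (typically coarser) over-approximation of the output range. The weights and bounds are made up for illustration.

    // Deliberately simple interval propagation through one dense ReLU layer:
    // out_i = relu(sum_j w[i][j] * x_j + b[i]), with x_j in [lo_j, hi_j].
    final class IntervalRange {
        static double[][] propagateLayer(double[][] w, double[] b,
                                         double[] lo, double[] hi, boolean relu) {
            int m = w.length;
            double[] outLo = new double[m], outHi = new double[m];
            for (int i = 0; i < m; i++) {
                double l = b[i], h = b[i];
                for (int j = 0; j < lo.length; j++) {
                    double wij = w[i][j];
                    // pick the input bound that minimizes/maximizes the contribution
                    l += wij >= 0 ? wij * lo[j] : wij * hi[j];
                    h += wij >= 0 ? wij * hi[j] : wij * lo[j];
                }
                outLo[i] = relu ? Math.max(0, l) : l;
                outHi[i] = relu ? Math.max(0, h) : h;
            }
            return new double[][] { outLo, outHi };
        }

        public static void main(String[] args) {
            double[][] w1 = {{1.0, -2.0}, {0.5, 1.5}};   // hypothetical weights
            double[] b1 = {0.0, -1.0};
            double[][] bounds = propagateLayer(w1, b1,
                    new double[]{-1, -1}, new double[]{1, 1}, true);
            System.out.println("y0 in [" + bounds[0][0] + ", " + bounds[1][0] + "]");
            System.out.println("y1 in [" + bounds[0][1] + ", " + bounds[1][1] + "]");
        }
    }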
Formal Dynamic Fault Trees Analysis Using an Integration of Theorem Proving and Model Checking

Dynamic fault trees (DFTs) have emerged as an important tool for capturing the dynamic behavior of system failure. These DFTs are analyzed qualitatively and quantitatively using stochastic or algebraic methods. Model checking has been proposed to conduct the failure analysis of systems using DFTs. However, it has not been used for DFT qualitative analysis. Moreover, its analysis time grows exponentially with the number of states, and its reduction algorithms are usually not formally verified. To overcome these limitations, we propose a methodology to perform the formal analysis of DFTs using an integration of theorem proving and model checking. We formalize the DFT gates in higher-order logic and formally verify many algebraic simplification properties using theorem proving. Based on this, we prove the equivalence between raw DFTs and their reduced forms to enable the formal qualitative analysis (determining the cut sets and sequences) of DFTs with theorem proving. We then use model checking to perform the quantitative analysis (computing probabilities of failure) of the formally verified reduced DFT. We applied our methodology to five benchmarks, and the results show that the formally verified reduced DFT was analyzed using model checking with up to six times fewer states and up to 133,000 times faster.

Yassmeen Elderhalli, Osman Hasan, Waqar Ahmad, Sofiène Tahar
Twenty Percent and a Few Days – Optimising a Bitcoin Majority Attack

Bitcoin is a distributed online payment system that organises transactions into blocks. The size of blocks is limited to 1 megabyte, which also limits the number of transactions per second that can be confirmed. This year several attempts have been made to create a fork or a split that removes this restriction. One such alternative is Bitcoin Unlimited (BTU). Proponents of BTU have suggested using a type of majority attack to force other Bitcoin miners to adopt BTU. In this paper we model this attack in Uppaal, and analyse how long it will take for an attack to succeed, depending on the share the attacker has of the total network, and the so-called confirmation depth. The analysis shows that with a share of 20% an attack will be successful within a few days. This paper also looks at the effect of increasing the confirmation depth as a countermeasure.

Ansgar Fehnker, Kaylash Chaudhary
An Even Better Approach – Improving the B.A.T.M.A.N. Protocol Through Formal Modelling and Analysis

This paper considers a network routing protocol known as Better Approach to Mobile Adhoc Networks (B.A.T.M.A.N.). The protocol has two aims: first, discovery of all bidirectional links, and second, identification of the best-next-hop to the other nodes. A key mechanism of the protocol is to flood the network at regular intervals with so-called originator messages. In previous work we formalised the B.A.T.M.A.N. protocol in Uppaal and found several ambiguities and inconsistencies [2]. More importantly, explicit choices in the RFC had, unfortunately, a negative impact on route discovery. This previous work compared a literal model based on the RFC with an incremental improvement. This paper goes one step further and proposes an alternative that departs from the RFC. We compare the performance using simulations in Uppaal, for static as well as dynamic topologies. The analysis shows that the proposed alternative reduces the number of suboptimal routes significantly, and recovers better from routing errors that are introduced by mobility.

Ansgar Fehnker, Kaylash Chaudhary, Vinay Mehta
Towards a Formal Safety Framework for Trajectories

Trajectory generation is at the heart of an autonomous Unmanned Aerial Vehicle (UAV). Given a navigation context, the UAV has to conceive a trajectory that fulfills its mission goal while avoiding collisions with obstacles and surrounding traffic. This intended trajectory is an idealization of the various actual physical trajectories that the UAV may perform during flight. The validation of actual physical trajectories with respect to their intended counterparts is challenging due to the interaction over time of several uncontrolled factors such as the environmental conditions, measurement errors and the cyber-physical components of the UAV. For this reason, the most common validation technique for trajectory generators is flight simulation, which is not exhaustive and thus cannot prove the actual absence of collisions. This paper presents a preliminary formal framework to reason about the safety of UAV trajectories with respect to static-obstacle collision avoidance, taking into account the uncertainties derived from uncontrolled factors. The proposed framework was formally verified in a mechanical theorem prover. Its application as an oracle for black-box testing validation of trajectory generators is also proposed. Such a testing strategy would allow the safety evaluation of trajectories while removing the need for simulation procedures, thus reducing the cost of the validation process.

Marco A. Feliú, Mariano M. Moscato
Static Value Analysis of Python Programs by Abstract Interpretation

We propose a static analysis by abstract interpretation for a significant subset of Python to infer variable values, run-time errors, and uncaught exceptions. Python is a high-level language with dynamic typing, a class-based object system, complex control structures such as generators, and a large library of builtin objects. This makes static reasoning on Python programs challenging. The control flow is highly dependent on the type of values, which we thus infer accurately. As Python lacks a formal specification, we first present a concrete collecting semantics of reachable program states. We then propose a non-relational flow-sensitive type and value analysis based on simple abstract domains for each type, and handle non-local control such as exceptions through continuations. We show how to infer relational numeric invariants by leveraging the type information we gather. Finally, we propose a relational abstraction of generators to count the number of available elements and prove that no StopIteration exception is raised. Our prototype implementation is still under heavy development; it does not support some Python features, such as recursion or the compile builtin, and it handles only a small part of the builtin objects and standard library. Nevertheless, we are able to present preliminary experimental results on analyzing actual, if small, Python code from a benchmarking application and a regression test suite.

Aymeric Fromherz, Abdelraouf Ouadjaout, Antoine Miné
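As a hedged illustration of the "simple abstract domains for each type" mentioned above (written in Java rather than over Python semantics), a minimal integer-interval domain with a join for control-flow merges and one abstract transformer might look like this:

    // Tiny non-relational value domain: integer intervals with join and '+'.
    // The paper's analyzer for Python is of course far richer than this.
    final class Interval {
        final long lo, hi;                  // invariant: lo <= hi
        Interval(long lo, long hi) { this.lo = lo; this.hi = hi; }

        Interval join(Interval other) {     // abstraction of a control-flow merge
            return new Interval(Math.min(lo, other.lo), Math.max(hi, other.hi));
        }

        Interval add(Interval other) {      // abstract transformer for '+'
            return new Interval(lo + other.lo, hi + other.hi);
        }

        @Override public String toString() { return "[" + lo + ", " + hi + "]"; }

        public static void main(String[] args) {
            Interval thenBranch = new Interval(0, 10);
            Interval elseBranch = new Interval(-5, 3);
            System.out.println(thenBranch.join(elseBranch));  // [-5, 10]
            System.out.println(thenBranch.add(elseBranch));   // [-5, 13]
        }
    }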
Model-Based Testing for General Stochastic Time

Many systems are inherently stochastic: they interact with unpredictable environments or use randomised algorithms. Then classical model-based testing is insufficient: it only covers functional correctness. In this paper, we present a new model-based testing framework that additionally covers the stochastic aspects in hard and soft real-time systems. Using the theory of stochastic automata for specifications, test cases and a formal notion of conformance, it provides clean mechanisms to represent underspecification, randomisation, and stochastic timing. Supporting arbitrary continuous and discrete probability distributions, the framework generalises previous work based on purely Markovian models. We cleanly define its theoretical foundations, and then outline a practical algorithm for statistical conformance testing based on the Kolmogorov-Smirnov test. We exemplify the framework’s capabilities and tradeoffs by testing timing aspects of the Bluetooth device discovery protocol.

Marcus Gerhold, Arnd Hartmanns, Mariëlle Stoelinga
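A small Java sketch (not taken from the framework; the exponential CDF and sample values are placeholders) of the one-sample Kolmogorov-Smirnov statistic that such statistical conformance testing is built around:

    import java.util.Arrays;
    import java.util.function.DoubleUnaryOperator;

    // The KS statistic: the maximal distance between the empirical CDF of
    // observed delays and the CDF required by the specification.
    final class KsSketch {
        static double ksStatistic(double[] samples, DoubleUnaryOperator cdf) {
            double[] xs = samples.clone();
            Arrays.sort(xs);
            int n = xs.length;
            double d = 0.0;
            for (int i = 0; i < n; i++) {
                double f = cdf.applyAsDouble(xs[i]);
                d = Math.max(d, Math.max(f - i / (double) n, (i + 1) / (double) n - f));
            }
            return d;
        }

        public static void main(String[] args) {
            double[] observedDelays = {0.4, 1.1, 0.2, 2.5, 0.9, 1.7}; // hypothetical
            double rate = 1.0;                                        // hypothetical
            double d = ksStatistic(observedDelays, x -> 1 - Math.exp(-rate * x));
            System.out.println("KS distance = " + d);
            // A conformance verdict would compare d against a critical value
            // derived from the chosen significance level and sample size.
        }
    }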
Strategy Synthesis for Autonomous Agents Using PRISM

We present probabilistic models for autonomous agent search and retrieve missions derived from Simulink models for an Unmanned Aerial Vehicle (UAV) and show how probabilistic model checking and the probabilistic model checker PRISM can be used for optimal controller generation. We introduce a sequence of scenarios relevant to UAVs and other autonomous agents such as underwater and ground vehicles. For each scenario we demonstrate how it can be modelled using the PRISM language, give model checking statistics and present the synthesised optimal controllers. We conclude with a discussion of the limitations when using probabilistic model checking and PRISM in this context and what steps can be taken to overcome them. In addition, we consider how the controllers can be returned to the UAV and adapted for use on larger search areas.

Ruben Giaquinta, Ruth Hoffmann, Murray Ireland, Alice Miller, Gethin Norman
The Use of Automated Theory Formation in Support of Hazard Analysis

Model checking and simulation are powerful techniques for developing and verifying the design of reactive systems. Here we propose the use of a complementary technique – automated theory formation. In particular, we report on an experiment in which we used a general purpose automated theory formation tool, HR, to explore properties of a model written in Promela. Our use of HR is constrained by meta-knowledge about the model that is relevant to hazard analysis. Moreover, we argue that such meta-knowledge will enable us to explore how safety properties could be violated.

Andrew Ireland, Maria Teresa Llano, Simon Colton
Distributed Model Checking Using ProB

Model checking larger specifications can take a lot of time, from several minutes up to weeks. Naturally, this renders the development of a correct specification very cumbersome. If the model offers enough non-determinism, however, we can distribute the workload onto multiple computers in order to reduce the runtime. In this paper, we present distb, a distributed version of ProB's model checker. Furthermore, we show possible speed-ups for real-life formal models on both a single workstation and a high-performance cluster.

Philipp Körner, Jens Bendisposto
Optimal Storage of Combinatorial State Spaces

Efficiently deciding reachability for model checking problems requires storing the entire state space. We provide an information theoretical lower bound for these storage requirements and demonstrate how it can be reached using a binary tree in combination with a compact hash table. Experiments confirm that the lower bound is reached in practice in a majority of cases, confirming the combinatorial nature of state spaces.

Alfons Laarman
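A hedged Java sketch of the core idea behind the binary tree storage: split each state vector into two halves, intern each half once, and represent the full state by a pair of indices. The actual data structure applies this recursively and backs it with a compact hash table; the sketch below shows only one level.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    final class TreeStore {
        private final Map<List<Integer>, Integer> halves = new HashMap<>();
        private final Map<Long, Integer> roots = new HashMap<>();

        // Intern one half of a state vector and return its small index.
        private int intern(int[] half) {
            List<Integer> key = new ArrayList<>();
            for (int v : half) key.add(v);
            Integer id = halves.get(key);
            if (id == null) {
                id = halves.size();
                halves.put(key, id);
            }
            return id;
        }

        // Returns true if the state was new; equal halves are stored only once.
        boolean insert(int[] state) {
            int mid = state.length / 2;
            int left = intern(Arrays.copyOfRange(state, 0, mid));
            int right = intern(Arrays.copyOfRange(state, mid, state.length));
            long key = ((long) left << 32) | (right & 0xffffffffL);
            return roots.putIfAbsent(key, roots.size()) == null;
        }

        public static void main(String[] args) {
            TreeStore store = new TreeStore();
            System.out.println(store.insert(new int[]{1, 2, 3, 4})); // true
            System.out.println(store.insert(new int[]{1, 2, 3, 4})); // false (seen)
            System.out.println(store.insert(new int[]{1, 2, 9, 9})); // true, reuses
                                                                     // the {1,2} half
        }
    }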
Stubborn Transaction Reduction

The exponential explosion of parallel interleavings remains a fundamental challenge to model checking of concurrent programs. Both partial-order reduction (POR) and transaction reduction (TR) decrease the number of interleavings in a concurrent system. Unlike POR, transactions also reduce the number of intermediate states. Modern POR techniques, on the other hand, offer more dynamic ways of identifying commutative behavior, a crucial task for obtaining good reductions. We show that transaction reduction can use the same dynamic commutativity as found in stubborn set POR. We also compare reductions obtained by POR and TR, demonstrating with several examples that these techniques complement each other. With an implementation of the dynamic transactions in the model checker LTSmin, we compare its effectiveness with the original static TR and two POR approaches. Several inputs, including realistic case studies, demonstrate that the new dynamic TR can surpass POR in practice.

Alfons Laarman
Certified Foata Normalization for Generalized Traces

Mazurkiewicz traces are a well-known model of concurrency with a notion of equivalence for interleaving executions. Interleaving executions of a concurrent system are represented as strings over an alphabet equipped with an independence relation, and two strings are taken to be equivalent if they can be transformed into each other by repeatedly commuting independent consecutive letters. Analyzing all behaviors of the system can be reduced to analyzing one canonical representative from each equivalence class; normal forms such as the Foata normal form can be used for this purpose. In some applications, it is useful to have commutability of two adjacent letters in a string depend on their left context. We develop Foata normal forms and normalization for Sassone et al.’s context-dependent generalization of traces, formalize this development in the dependently typed programming language Agda and show generalized Foata normalization in action on an example from relaxed shared-memory concurrency (local reads in TSO).

Hendrik Maarand, Tarmo Uustalu
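For ordinary (context-independent) Mazurkiewicz traces, the Foata normal form can be computed greedily: each letter is placed in the earliest step after the last letter it depends on, and every step is kept sorted. The Java sketch below, with a hard-coded independence relation chosen purely for illustration, shows this baseline; the paper develops and certifies the analogous construction for context-dependent independence in Agda.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import java.util.Set;

    final class Foata {
        // independent contains two-letter strings "ab" meaning a and b commute.
        static List<List<Character>> normalForm(String word, Set<String> independent) {
            List<List<Character>> steps = new ArrayList<>();
            for (char a : word.toCharArray()) {
                int level = 0;                        // earliest admissible step
                for (int i = 0; i < steps.size(); i++) {
                    for (char b : steps.get(i)) {
                        boolean dependent = !independent.contains("" + a + b)
                                         && !independent.contains("" + b + a);
                        if (dependent) level = i + 1; // must come strictly after b
                    }
                }
                if (level == steps.size()) steps.add(new ArrayList<>());
                steps.get(level).add(a);
                Collections.sort(steps.get(level));   // canonical order inside a step
            }
            return steps;
        }

        public static void main(String[] args) {
            Set<String> indep = Set.of("ab");         // a, b independent; c depends on both
            System.out.println(normalForm("bac", indep)); // [[a, b], [c]]
            System.out.println(normalForm("abc", indep)); // [[a, b], [c]]  (same trace)
        }
    }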
On the Timed Analysis of Big-Data Applications

Apache Spark is one of the best-known frameworks for executing big-data batch applications over a cluster of (virtual) machines. Defining the cluster (i.e., the number of machines and CPUs) to attain guarantees on the execution times (deadlines) of the application is indeed a trade-off between the cost of the infrastructure and the time needed to execute the application. Sizing the computational resources, in order to prevent cost overruns, can benefit from the use of formal models as a means to capture the execution time of applications. Our model of Spark applications, based on the CLTLoc logic, is defined by considering the directed acyclic graph around which Spark programs are organized, the number of available CPUs, the number of tasks elaborated by the application, and the average execution times of tasks. If the outcome of the analysis is positive, then the execution is feasible, that is, it can be completed within a given time span. The analysis tool has been implemented on top of the Zot formal verification tool. A preliminary evaluation shows that our model is sufficiently accurate: the formal analysis identifies execution times that are close (the error is less than 10%) to those obtained by actually running the applications.

Francesco Marconi, Giovanni Quattrocchi, Luciano Baresi, Marcello M. Bersani, Matteo Rossi
Tuning Permissiveness of Active Safety Monitors for Autonomous Systems

Robots and autonomous systems have become part of our everyday life, so guaranteeing their safety is crucial. Among the possible ways to do so, monitoring is widely used, but few methods exist to systematically generate the safety rules such monitors implement. In particular, because these systems operate with little human intervention, it is necessary to build safety monitors that do not excessively constrain the system's ability to perform its tasks. In this paper we propose a method that takes the system's desired tasks into account when specifying monitor strategies, and we apply it to a case study. We show that it allows more strategies to be found and facilitates reasoning about the trade-off between safety and availability.

Lola Masson, Jérémie Guiochet, Hélène Waeselynck, Kalou Cabrera, Sofia Cassel, Martin Törngren
Sound Black-Box Checking in the LearnLib

In Black-Box Checking (BBC), incremental hypotheses of a system are learned in the form of finite automata. LTL formulae are verified on these automata, or their counterexamples are validated on the actual system. We extend LearnLib's system-under-learning API for sound BBC by means of state equivalence, in contrast to the original proposal, which assumes an upper bound on the number of states in the system. We show how LearnLib's new BBC algorithms can be used in practice, as well as how one can experiment with different model checkers and BBC algorithms. Using the RERS 2017 challenge, we provide experimental results on the performance of all of LearnLib's active learning algorithms when applied in a BBC setting; the performance of learning algorithms was previously unknown for this setting. We show that the novel incremental algorithms TTT and ADT perform best.

Jeroen Meijer, Jaco van de Pol
Model-Checking Task Parallel Programs for Data-Race

Data-race detection is the problem of determining if a concurrent program has a data-race in some execution and input; it has long been studied and often solved. The research in this paper reprises the problem in the context of task parallel programs with the intent to prove, via model checking, the absence of data-race on any feasible schedule for a given input. Many of the correctness properties afforded by task parallel programming models such as OpenMP, Cilk, X10, Chapel, Habanero, etc. rely on data-race freedom. Model checking for data-race, presented here, is in contrast to recent work using run-time monitoring, log analysis, or static analysis, which are complete or sound but never both. The model checking algorithm builds a happens-before relation from the program execution and uses that relation to detect data-race, similarly to many solutions that reason over a single observed execution. Unlike those solutions, model checking generates additional program schedules sufficient to prove data-race freedom over all schedules on the given input. The approach is evaluated in a Java implementation of Habanero using the Java Pathfinder model checker. The results, when compared to existing data-race detectors in Java Pathfinder, show a significant reduction in the time required for proving data-race freedom.

Radha Nakade, Eric Mercer, Peter Aldous, Jay McCarthy
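A hedged Java sketch of the happens-before check at the heart of such detectors: two accesses to the same location race if at least one is a write and their vector clocks are incomparable. The event encoding below is hypothetical, and unlike the paper's model checker it examines only a single recorded schedule and only the most recent access per field.

    import java.util.HashMap;
    import java.util.Map;

    final class RaceSketch {
        record Access(int[] clock, boolean write) {}

        // a happens-before b iff a's vector clock is pointwise <= b's.
        static boolean happensBefore(int[] a, int[] b) {
            for (int i = 0; i < a.length; i++) if (a[i] > b[i]) return false;
            return true;
        }

        // Check the new access against the last recorded access to the field.
        static boolean racy(Map<String, Access> last, String field,
                            int[] clock, boolean write) {
            Access prev = last.get(field);
            boolean race = prev != null
                    && (prev.write() || write)
                    && !happensBefore(prev.clock(), clock)
                    && !happensBefore(clock, prev.clock());
            last.put(field, new Access(clock.clone(), write));
            return race;
        }

        public static void main(String[] args) {
            Map<String, Access> last = new HashMap<>();
            // Task 0 writes x, then task 1 reads x after a join (clocks ordered).
            System.out.println(racy(last, "x", new int[]{1, 0, 0}, true));  // false
            System.out.println(racy(last, "x", new int[]{1, 1, 0}, false)); // false
            // Task 2's write is unordered with the read above: a data-race.
            System.out.println(racy(last, "x", new int[]{0, 0, 1}, true));  // true
        }
    }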
Consistency of Property Specification Patterns with Boolean and Constrained Numerical Signals

Property Specification Patterns (PSPs) have been proposed to solve recurring specification needs, to ease the formalization of requirements, and enable automated verification thereof. In this paper, we extend PSPs by considering Boolean as well as atomic numerical assertions. This extension enables us to reason about functional requirements which would not be captured by basic PSPs. We contribute an encoding from constrained PSPs to LTL formulae, and we show experimental results demonstrating that our approach scales on requirements of realistic size generated using a probabilistic model. Finally, we show that our extension enables us to prove (in)consistency of requirements about an embedded controller for a robotic manipulator.

Massimo Narizzano, Luca Pulina, Armando Tacchella, Simone Vuotto
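As a minimal worked example of the kind of constrained pattern the extension targets (the signal names are hypothetical), the globally scoped response pattern "it is always the case that if P holds, then eventually S holds", instantiated with the numerical assertion P ≡ (temperature > 90) and the Boolean assertion S ≡ valve_open, has the standard LTL encoding

    □((temperature > 90) → ◇ valve_open)

i.e., G((temperature > 90) -> F valve_open) in textual form; consistency of a set of such requirements can then be checked as satisfiability of their conjunction.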
Automatic Generation of DO-178 Test Procedures

The work presented in this paper takes place in the context of the testing activities of safety-critical Air Management Systems for civilian and military aircraft. The applicative software of such systems is developed following DO-178 guidelines, using a model-based approach built on the SCADE modeling language. In the current V&V process, Test Cases (TCs) specify test conditions and expected outcomes on internal data-flows of the SCADE model. TCs are then implemented in the form of concrete Test Procedures (TPs) that are run against the executable object code and can thus only drive the main inputs of the program. TP implementation is a complex task, today performed manually. This paper proposes an approach to assist the generation of TPs, based on a purpose-built domain-specific language for test case specification, from which synchronous observers are generated and composed with the SCADE model of the applicative software. TPs are then obtained by using a model checker to refute the observer output, yielding, after some post-processing, a trace of main input values extended with expected-outcome checks.

César Ochoa Escudero, Rémi Delmas, Thomas Bochot, Matthieu David, Virginie Wiels
Using Test Ranges to Improve Symbolic Execution

Symbolic execution is a powerful systematic technique for checking programs, which has received a lot of research attention during the last decade. In practice however, the technique remains hard to scale. This paper introduces SynergiSE, a novel approach to improve symbolic execution by tackling a key bottleneck to its wider adoption: costly and incomplete constraint solving. To mitigate the cost, SynergiSE introduces a succinct encoding of constraint solving results, thereby enabling symbolic execution to be distributed among different workers while sharing and re-using constraint solving results among them without having to communicate databases of constraint solving results. To mitigate the incompleteness, SynergiSE introduces an integration of complementary approaches for testing, e.g., search-based test generation, with symbolic execution, thereby enabling symbolic execution and other techniques to apply in tandem. Experimental results using a suite of Java programs show that SynergiSE presents a promising approach for improving symbolic execution.

Rui Qiu, Sarfraz Khurshid, Corina S. Păsăreanu, Junye Wen, Guowei Yang
Symbolic Execution and Reachability Analysis Using Rewriting Modulo SMT for Spatial Concurrent Constraint Systems with Extrusion

The usual high degree of assurance in safety-critical systems is being challenged by a new incarnation of distributed systems exposed to the presence of hierarchical computation (e.g., virtualization resources such as container and virtual machine technology). This paper addresses the issue of symbolically specifying and verifying properties of distributed hierarchical systems using rewriting modulo SMT, a symbolic approach for rewriting logic that seamlessly combines rewriting modulo theories, SMT solving, and model checking. It presents a rewrite theory R implementing a symbolic executable semantics of an algebraic model of spatially constrained concurrent processes with extrusion. The underlying constraint system in R is materialized with the help of SMT-solving technology, where the constraints are quantifier-free formulas interpreted over the Booleans and integers, and information entailment is queried via semantic inference. Symbolic rewrite steps with →R capture all possible traces from ground instances of the source state to ground instances of the target state. This approach, as illustrated with some examples in the paper, is well suited for specifying and proving (or disproving) existential reachability properties of distributed hierarchical systems, such as fault-tolerance, consistency, and privacy.

Miguel Romero, Camilo Rocha
Experience Report: Application of Falsification Methods on the UxAS System

In this report, we present our experiences in applying falsification methods over the Unmanned Systems Autonomy Services (UxAS) system. UxAS is a collection of software modules that enables complex mission planning for multiple vehicles. To test the system, we utilized the tool S-TaLiRo to generate mission scenarios for both UxAS and the underlying vehicle simulators, with the goal of finding behaviors which do not meet system specifications.

Cumhur Erkan Tuncali, Bardh Hoxha, Guohui Ding, Georgios Fainekos, Sriram Sankaranarayanan
MoDeS3: Model-Based Demonstrator for Smart and Safe Cyber-Physical Systems

We present MoDeS3, a complex research demonstrator illustrating the combined use of model-driven development, formal verification, safety engineering and IoT technologies for smart and safe cyber-physical systems. MoDeS3 represents a smart transportation system-of-systems composed of a model railway and a crane that can automatically load and unload cargo from trains, where both subsystems need to fulfill functional and safety requirements. The demonstrator is built following model-based software engineering principles, while system-level safety is ensured by the combined use of design-time and runtime verification and validation techniques.

András Vörös, Márton Búr, István Ráth, Ákos Horváth, Zoltán Micskei, László Balogh, Bálint Hegyi, Benedek Horváth, Zsolt Mázló, Dániel Varró
Backmatter
Metadata
Title
NASA Formal Methods
Edited by
Dr. Aaron Dutle
César Muñoz
Anthony Narkawicz
Copyright Year
2018
Electronic ISBN
978-3-319-77935-5
Print ISBN
978-3-319-77934-8
DOI
https://doi.org/10.1007/978-3-319-77935-5
