
2014 | Book

FM 2014: Formal Methods

19th International Symposium, Singapore, May 12-16, 2014. Proceedings

Edited by: Cliff Jones, Pekka Pihlajasaari, Jun Sun

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 19th International Symposium on Formal Methods, FM 2014, held in Singapore, May 2014. The 45 papers presented together with 3 invited talks were carefully reviewed and selected from 150 submissions. The focus of the papers is on the following topics: Interdisciplinary Formal Methods, Practical Applications of Formal Methods in Industrial and Research Settings, Experimental Validation of Tools and Methods, as well as Construction and Evolution of Formal Methods Tools.

Table of contents

Frontmatter
Validity Checking of Putback Transformations in Bidirectional Programming

A bidirectional transformation consists of pairs of transformations: a forward transformation get produces a target view from a source, while a putback transformation put puts back modifications on the view to the source, satisfying sensible roundtrip properties. Existing bidirectional approaches are get-based in that one writes (an artifact resembling) a forward transformation and a corresponding backward transformation can be automatically derived. However, the unavoidable ambiguity that stems from the underspecification of put often leads to unpredictable bidirectional behavior, making it hard to solve nontrivial practical synchronization problems with existing bidirectional transformation approaches. Theoretically, this ambiguity problem could be solved by writing put directly and deriving get, but differently from programming with get it is easy to write invalid put functions. An open challenge is how to check whether the definition of a putback transformation is valid, while guaranteeing that the corresponding unique get exists. In this paper, we propose, as far as we are aware, the first safe language for supporting putback-based bidirectional programming. The key to our approach is a simple but powerful language for describing primitive putback transformations. We show that validity of putback transformations in this language is decidable and can be automatically checked. A particularly elegant and strong aspect of our design is that we can simply reuse and apply standard results for treeless functions and tree transducers in the specification of our checking algorithms.
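As a quick illustration of the roundtrip properties mentioned above, here is a minimal sketch in Python (not the paper's language) of a lens over a hypothetical (name, email) record whose view is the name; GetPut and PutGet are the standard laws relating put and get.

```python
def put(source, view):
    """Putback: write the edited name back into the source, keep the email."""
    _name, email = source
    return (view, email)

def get(source):
    """Forward: extract the view. In a putback-based language, get would be
    derived automatically from put rather than written by hand."""
    name, _email = source
    return name

source = ("alice", "alice@example.org")

assert put(source, get(source)) == source    # GetPut: putting back an unmodified view changes nothing
assert get(put(source, "bob")) == "bob"      # PutGet: modifications to the view survive a round trip
```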

Zhenjiang Hu, Hugo Pacheco, Sebastian Fischer
Proof Engineering Considered Essential

In this talk, I will give an overview of the various formal verification projects around the evolving seL4 microkernel, and discuss our experience in large-scale proof engineering and maintenance.

In particular, the presentation will draw a picture of what these verifications mean and how they fit together into a whole. Among these are a number of firsts: the first code-level functional correctness proof of a general-purpose OS kernel, the first non-interference proof for such a kernel at the code-level, the first binary-level functional verification of systems code of this complexity, and the first sound worst-case execution-time profile for a protected-mode operating system kernel.

Taken together, these projects produced proof artefacts on the order of 400,000 lines of Isabelle/HOL proof scripts. This order of magnitude brings engineering aspects to proofs that we so far mostly associate with software and code. In the second part of the talk, I will report on our experience in proof engineering methods and tools, and pose a number of research questions that we think will be important to solve for the wider scale practical application of such formal methods in industry.

Gerwin Klein
Engineering UToPiA
Formal Semantics for CML

We describe the semantic domains for the Compass Modelling Language (CML), using Hoare & He’s Unifying Theories of Programming (UTP). CML has been designed to specify, design, compose, simulate, verify, test, and validate industrial systems of systems. CML is a semantically heterogeneous language, with state-rich imperative constructs based on VDM, communication and concurrency based on CSP, object orientation with object references, and discrete time based on Timed CSP. A key objective is to be semantically open, allowing further paradigms to be added, such as process mobility, continuous physical models, and stochastic processes. Our semantics deals separately with each paradigm, composing them with Galois connections, leading to a natural contract language for all constructs in all paradigms. The result is a compositional formal definition of a complex language, with the individual parts being available for reuse in other language definitions. The work backs our claim that use of UTP scales up to industrial-strength languages: Unifying Theories of Programming in Action (UToPiA).

Jim Woodcock
40 Years of Formal Methods
Some Obstacles and Some Possibilities?

In this “40 years of formal methods” essay we shall first delineate, Sect. 1, what we mean by method, formal method, computer science, computing science, software engineering, and model-oriented and algebraic methods. Based on this, we shall characterize a spectrum from specification-oriented methods to analysis-oriented methods. Then, Sect. 2, we shall provide a “survey”: which are the ‘prerequisite works’ that have enabled formal methods, Sect. 2.1, and which are, to us, the, by now, classical ‘formal methods’, Sect. 2.2. We then ask ourselves the question: have formal methods for software development, in the sense of this paper, been successful? Our answer is, regretfully, no! We motivate this answer, in Sect. 3.2, by discussing eight obstacles or hindrances to the proper integration of formal methods in university research and education as well as in industry practice. This “looking back” is complemented, in Sect. 3.4, by a “looking forward” at some promising developments, besides the alleviation of the (eighth or more) hindrances!

Dines Bjørner, Klaus Havelund
A Refinement Based Strategy for Local Deadlock Analysis of Networks of CSP Processes

Based on a characterisation of process networks in the CSP process algebra, we formalise a set of behavioural restrictions used for local deadlock analysis. We also formalise two patterns, originally proposed by Roscoe, which avoid deadlocks in cyclic networks by performing only local analyses on components of the network; our formalisation systematises the behavioural and structural constraints imposed by the patterns. A distinguishing feature of our approach is the use of refinement expressions for capturing notions of pattern conformance, which can be mechanically checked by CSP tools like FDR. Moreover, three examples are introduced to demonstrate the effectiveness of our strategy, including a performance comparison between FDR's default deadlock assertion and the verification, also using FDR, of the local behavioural constraints induced by our approach.

Pedro Antonino, Augusto Sampaio, Jim Woodcock
Algebraic Principles for Rely-Guarantee Style Concurrency Verification Tools

We provide simple equational principles for deriving rely-guarantee-style inference rules and refinement laws based on idempotent semirings. We link the algebraic layer with concrete models of programs based on languages and execution traces. We have implemented the approach in Isabelle/HOL as a lightweight concurrency verification tool that supports reasoning about the control and data flow of concurrent programs with shared variables at different levels of abstraction. This is illustrated on a simple verification example.
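For readers unfamiliar with the algebraic setting, the standard axioms of an idempotent semiring $(S, +, \cdot, 0, 1)$ are shown below; these are the classical base laws, not the paper's specific extensions.

$$
\begin{aligned}
& (x + y) + z = x + (y + z), \quad x + y = y + x, \quad x + 0 = x, \quad x + x = x,\\
& (x \cdot y) \cdot z = x \cdot (y \cdot z), \quad 1 \cdot x = x \cdot 1 = x, \quad 0 \cdot x = x \cdot 0 = 0,\\
& x \cdot (y + z) = x \cdot y + x \cdot z, \qquad (x + y) \cdot z = x \cdot z + y \cdot z,
\end{aligned}
$$

with the induced order $x \le y \iff x + y = y$. Reading $+$ as nondeterministic choice, $\cdot$ as sequential composition, and $\le$ as refinement gives the usual link to programs.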

Alasdair Armstrong, Victor B. F. Gomes, Georg Struth
Definition, Semantics, and Analysis of Multirate Synchronous AADL

Many cyber-physical systems are hierarchical distributed control systems whose components operate with different rates, and that should behave in a virtually synchronous way. Designing such systems is hard due to asynchrony, skews of the local clocks, and network delays; furthermore, their model checking is typically unfeasible due to state space explosion. Multirate PALS reduces the problem of designing and verifying virtually synchronous multirate systems to the much simpler tasks of specifying and verifying their underlying synchronous design. To make the Multirate PALS design and verification methodology available within an industrial modeling environment, we define in this paper the modeling language Multirate Synchronous AADL, which can be used to specify multirate synchronous designs using the AADL modeling standard. We then define the formal semantics of Multirate Synchronous AADL in Real-Time Maude, and integrate Real-Time Maude verification into the OSATE tool environment for AADL. Finally, we show how an algorithm for smoothly turning an airplane can be modeled and analyzed using Multirate Synchronous AADL.

Kyungmin Bae, Peter Csaba Ölveczky, José Meseguer
TrustFound: Towards a Formal Foundation for Model Checking Trusted Computing Platforms

Trusted computing relies on formally verified trusted computing platforms to achieve high security assurance. In practice, however, new platforms are often proposed without a comprehensive formal evaluation and explicitly defined underlying assumptions. In this work, we propose TrustFound, a formal foundation and framework for model checking trusted computing platforms. TrustFound includes a logic for formally modeling platforms, a model of trusted computing techniques, and a broad spectrum of threat models. It can be used to check platforms for security properties (e.g., confidentiality and attestability) and to uncover the implicit assumptions that must be satisfied to guarantee those properties. In our experiments, TrustFound is used to encode and model check two trusted platforms. It has identified a total of six implicit assumptions and two severe previously unknown logic flaws in them.

Guangdong Bai, Jianan Hao, Jianliang Wu, Yang Liu, Zhenkai Liang, Andrew Martin
The VerCors Tool for Verification of Concurrent Programs

The VerCors tool implements thread-modular static verification of concurrent programs, annotated with functional properties and heap access permissions. The tool supports both generic multithreaded and vector-based programming models. In particular, it can verify multithreaded programs written in Java, specified with JML extended with separation logic. It can also verify parallelizable programs written in a toy language that supports the characteristic features of OpenCL. The tool verifies programs by first encoding the specified program into a much simpler programming language and then applying the Chalice verifier to the simplified program. In this paper we discuss both the implementation of the tool and the features of its specification language.

Stefan Blom, Marieke Huisman
Knowledge-Based Automated Repair of Authentication Protocols

In this paper, we introduce a technique for repairing bugs in authentication protocols automatically. Although such bugs can be identified through sophisticated testing or verification methods, the state of the art falls short in fixing bugs in security protocols in an automated fashion. Our method takes as input a protocol and a logical property that the protocol does not satisfy and generates as output another protocol that satisfies the property. We require that the generated protocol must refine the original protocol in cases where the bug is not observed; i.e., repairing a protocol should not change the existing healthy behavior of the protocol. We use epistemic logic to specify and reason about authentication properties of protocols. We demonstrate the application of our method by repairing the three-step Needham-Schroeder protocol. To our knowledge, this is the first application of epistemic logic to the automated repair of security protocols.

Borzoo Bonakdarpour, Reza Hajisheykhi, Sandeep S. Kulkarni
A Simplified Z Semantics for Presentation Interaction Models

Creating formal models of interactive systems requires that we understand not just the functionality of the system, but also the interface and interaction possibilities. The benefit of fully modelling these systems is that we can ensure behavioural properties of all aspects of the system are correct and prove correctness properties of the whole system. In the case of safety-critical interactive systems this is important, as errors of interactive behaviour can be just as devastating as functional errors. In previous work we have developed models which enable us to perform these tasks, notably presentation models and presentation interaction models (PIMs), and have shown that by using the μCharts language to describe PIMs we can use its underlying Z semantics to produce specifications of both functionality and interface/interaction. In this paper we revisit the Z semantics of PIMs and propose an alternative (and simpler) semantics, along with explanations of why this is more useful and appropriate for particular modelling situations.

Judy Bowen, Steve Reeves
Log Analysis for Data Protection Accountability

Accountability is increasingly recognised as a cornerstone of data protection, notably in European regulation, but the term is frequently used in a vague sense. For accountability to bring tangible benefits, the expected properties of personal data handling logs (used as “accounts”) and the assumptions regarding the logging process must be defined with accuracy. In this paper, we provide a formal framework for accountability and show the correctness of the log analysis with respect to abstract traces used to specify privacy policies. We also show that compliance with respect to data protection policies can be checked based on logs free of personal data, and describe the integration of our formal framework in a global accountability process.

Denis Butin, Daniel Le Métayer
Automatic Compositional Synthesis of Distributed Systems

Given the recent advances in synthesizing finite-state controllers from temporal logic specifications, the natural next goal is to synthesize more complex systems that consist of multiple distributed processes. The synthesis of distributed systems is, however, a hard and, in many cases, undecidable problem. In this paper, we investigate the synthesis problem for specifications that admit dominant strategies, i.e., strategies that perform at least as well as the best alternative strategy, although they do not necessarily win the game. We show that for such specifications, distributed systems can be synthesized compositionally, considering one process at a time. The compositional approach has dramatically better complexity and is uniformly applicable to all system architectures.

Werner Damm, Bernd Finkbeiner
Automated Real Proving in PVS via MetiTarski

This paper reports the development of a proof strategy that integrates the MetiTarski theorem prover as a trusted external decision procedure into the PVS theorem prover. The strategy automatically discharges PVS sequents containing real-valued formulas, including transcendental and special functions, by translating the sequents into first-order formulas and submitting them to MetiTarski. The new strategy is considerably faster and more powerful than other strategies for nonlinear arithmetic available to PVS.

William Denman, César Muñoz
Quiescent Consistency: Defining and Verifying Relaxed Linearizability

Concurrent data structures like stacks, sets or queues need to be highly optimized to provide large degrees of parallelism with reduced contention. Linearizability, a key consistency condition for concurrent objects, sometimes limits the potential for optimization. Hence algorithm designers have started to build concurrent data structures that are not linearizable but only satisfy relaxed consistency requirements.

In this paper, we study quiescent consistency as proposed by Shavit and Herlihy, which is one such relaxed condition. More precisely, we give the first formal definition of quiescent consistency, investigate its relationship with linearizability, and provide a proof technique for it based on (coupled) simulations. We demonstrate our proof technique by verifying quiescent consistency of a (non-linearizable) FIFO queue built using a diffraction tree.

John Derrick, Brijesh Dongol, Gerhard Schellhorn, Bogdan Tofan, Oleg Travkin, Heike Wehrheim
Temporal Precedence Checking for Switched Models and Its Application to a Parallel Landing Protocol

This paper presents an algorithm for checking temporal precedence properties of nonlinear switched systems. This class of properties subsumes bounded safety and captures requirements about visiting a sequence of predicates within given time intervals. The algorithm handles nonlinear predicates that arise from dynamics-based predictions used in alerting protocols for state-of-the-art transportation systems. It is sound and complete for nonlinear switched systems that robustly satisfy the given property. The algorithm is implemented in the Compare Execute Check Engine (C2E2) using validated simulations. As a case study, a simplified model of an alerting system for closely spaced parallel runways is considered. The proposed approach is applied to this model to check safety properties of the alerting logic for different operating conditions such as initial velocities, bank angles, aircraft longitudinal separation, and runway separation.

Parasara Sridhar Duggirala, Le Wang, Sayan Mitra, Mahesh Viswanathan, César Muñoz
Contracts in Practice

Contracts are a form of lightweight formal specification embedded in the program text. Being executable parts of the code, they encourage programmers to devote proper attention to specifications, and help maintain consistency between specification and implementation as the program evolves. The present study investigates how contracts are used in the practice of software development. Based on an extensive empirical analysis of 21 contract-equipped Eiffel, C#, and Java projects totaling more than 260 million lines of code over 7700 revisions, it explores, among other questions: 1) which kinds of contract elements (preconditions, postconditions, class invariants) are used more often; 2) how contracts evolve over time; 3) the relationship between implementation changes and contract changes; and 4) the role of inheritance in the process. It has found, among other results, that: the percentage of program elements that include contracts is above 33% for most projects and tends to be stable over time; there is no strong preference for a certain type of contract element; contracts are quite stable compared to implementations; and inheritance does not significantly affect qualitative trends of contract usage.
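To make the three kinds of contract elements concrete, here is a minimal sketch using plain Python assertions; the projects in the study use the contract facilities of Eiffel, C#, and Java instead, and the class below is purely illustrative.

```python
class BankAccount:
    """Toy example showing a precondition, a postcondition and a class invariant."""

    def __init__(self, balance=0):
        self.balance = balance
        self._invariant()

    def _invariant(self):
        # class invariant: the balance is never negative
        assert self.balance >= 0

    def withdraw(self, amount):
        # precondition: the requested amount is positive and available
        assert 0 < amount <= self.balance
        old = self.balance
        self.balance -= amount
        # postcondition: the balance decreased by exactly the requested amount
        assert self.balance == old - amount
        self._invariant()

acct = BankAccount(100)
acct.withdraw(30)
assert acct.balance == 70
```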

H. -Christian Estler, Carlo A. Furia, Martin Nordio, Marco Piccioni, Bertrand Meyer
When Equivalence and Bisimulation Join Forces in Probabilistic Automata

Probabilistic automata were introduced by Rabin in 1963 as language acceptors. Two automata are equivalent if and only if they accept each word with the same probability. On the other side, in the process algebra community, probabilistic automata were re-proposed by Segala in 1995; they are more general than Rabin’s automata. Bisimulations have been proposed for Segala’s automata to characterize the equivalence between them. So far the two notions of equivalence and their characteristics have been studied mostly independently. In this paper, we consider Segala’s automata, and propose a novel notion of distribution-based bisimulation by joining the existing equivalence and bisimilarities. Our bisimulation bridges the two closely related concepts in the community, and provides a uniform way of studying their characteristics. We demonstrate the utility of our definition by studying distribution-based bisimulation metrics, which give rise to a robust notion of equivalence for Rabin’s automata.

Yuan Feng, Lijun Zhang
Precise Predictive Analysis for Discovering Communication Deadlocks in MPI Programs

The Message Passing Interface (MPI) is the standard API for high-performance and scientific computing. Communication deadlocks are a frequent problem in MPI programs, and this paper addresses the problem of discovering such deadlocks. We begin by showing that if an MPI program is single-path, the problem of discovering communication deadlocks is NP-complete. We then present a novel propositional encoding scheme which captures the existence of communication deadlocks. The encoding is based on modelling executions with partial orders, and implemented in a tool called MOPPER. The tool executes an MPI program, collects the trace, builds a formula from the trace using the propositional encoding scheme, and checks its satisfiability. Finally, we present experimental results that quantify the benefit of the approach in comparison to a dynamic analyser and demonstrate that it offers a scalable solution.

Vojtěch Forejt, Daniel Kroening, Ganesh Narayanaswamy, Subodh Sharma
Proof Patterns for Formal Methods

Design patterns represent a highly successful technique in software engineering, giving a reusable ‘best practice’ solution to commonly occurring problems in software design. Taking inspiration from this approach, this paper introduces proof patterns, which aim to provide a common vocabulary for solving formal methods proof obligations by capturing and describing solutions to common patterns of proof.

Leo Freitas, Iain Whiteside
Efficient Runtime Monitoring with Metric Temporal Logic: A Case Study in the Android Operating System

We present the design and implementation of a security policy specification language based on metric linear-time temporal logic (MTL). MTL features temporal operators that are indexed by time intervals, allowing one to specify timing-dependent security policies. The design of the language is driven by the problem of runtime monitoring of applications on mobile devices. Our main case study is the privilege escalation attack in the Android operating system, where an app gains access, through indirect control flow, to resources or functionality that are not explicitly granted to it by the user. To capture these attacks, we extend MTL with recursive definitions, which are used to express call chains between apps. We then show how the metric operators of MTL, in combination with recursive definitions, can be used to specify policies that detect privilege escalation under various fine-grained constraints. We present a new algorithm, extending that for linear-time temporal logic, for monitoring safety policies written in our specification language. The monitor does not need to store the entire history of events generated by the apps, something that is crucial for practical implementations. We modified the Android OS kernel to allow us to insert our generated monitors modularly. We have tested the modified OS on an actual device, and show that it is effective in detecting policy violations.
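As a generic illustration of interval-indexed operators (not a policy from the paper), a bounded-response requirement in MTL can be written as

$$\square\big(\,p \;\rightarrow\; \lozenge_{[0,c]}\, q\,\big),$$

read as: whenever $p$ holds, $q$ must hold within $c$ time units; the atomic propositions $p$, $q$ and the bound $c$ are placeholders.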

Hendra Gunadi, Alwen Tiu
iscasMc: A Web-Based Probabilistic Model Checker

We introduce the web-based model checker iscasMc for probabilistic systems (see http://iscasmc.ios.ac.cn/IscasMC). This Java application offers an easy-to-use web interface for the evaluation of Markov chains and decision processes against PCTL and PCTL* specifications. Compared to PRISM or MRMC, iscasMc is particularly efficient in evaluating the probabilities of LTL properties.
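As generic examples of the kinds of properties such probabilistic model checkers evaluate (the atomic propositions here are hypothetical), a PCTL query might read $\mathrm{P}_{\ge 0.99}[\,\lozenge\, \mathit{goal}\,]$ ("the goal state is eventually reached with probability at least 0.99"), while an LTL property under a probability operator might read $\mathrm{P}_{\ge 0.95}[\,\square\lozenge\, \mathit{served}\,]$ ("with probability at least 0.95, requests are served infinitely often").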

Ernst Moritz Hahn, Yi Li, Sven Schewe, Andrea Turrini, Lijun Zhang
Invariants, Well-Founded Statements and Real-Time Program Algebra

Program algebras based on Kleene algebra abstract the essential properties of programming languages in the form of algebraic laws. The proof of a refinement law may be expressed in terms of the algebraic properties of programs required for the law to hold, rather than directly in terms of the semantics of a language. This has the advantage that the law is then valid for any programming language that satisfies the axioms of the algebra.

In this paper we explore the notion of well-founded statements and their relationship to well-founded relations and iterations. The laws about well-founded statements and relations are combined with invariants to derive a simpler proof of a while-loop introduction law. The algebra is then applied to a real-time programming language. The main difference is that tests within conditions and loops take time to evaluate and during that time the values of program inputs may change. This requires new definitions for conditionals and while loops but the proofs of the introduction laws for these constructs can still make use of the more basic algebraic properties of iterations.
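For orientation, the iteration operator in a Kleene-algebra-based program algebra is governed by the standard unfold and induction axioms (the classical laws, not the paper's real-time extensions):

$$1 + x\cdot x^{*} \;\le\; x^{*}, \qquad 1 + x^{*}\cdot x \;\le\; x^{*},$$
$$z + x\cdot y \le y \;\Rightarrow\; x^{*}\cdot z \le y, \qquad z + y\cdot x \le y \;\Rightarrow\; z\cdot x^{*} \le y.$$

Loop-introduction laws such as the while-loop law discussed above are derived from these together with assumptions about invariants and well-foundedness.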

Ian J. Hayes, Larissa Meinicke
Checking Liveness Properties of Presburger Counter Systems Using Reachability Analysis

Counter systems are a well-known and powerful modeling notation for specifying infinite-state systems. In this paper we target the problem of checking liveness properties in counter systems. We propose two semi-decision techniques towards this, both of which return a formula that encodes the set of reachable states of the system that satisfy a given liveness property. A novel aspect of our techniques is that they use reachability analysis techniques, which are well studied in the literature, as black boxes, and are hence able to compute precise answers on a much wider class of systems than previous approaches for the same problem. Secondly, they compute their results by iterative expansion or contraction, and hence permit an approximate solution to be obtained at any point. We state the formal properties of our techniques, and also provide experimental results using standard benchmarks to show the usefulness of our approaches. Finally, we sketch an extension of our liveness checking approach to check general CTL properties.

K. Vasanta Lakshmi, Aravind Acharya, Raghavan Komondoor
A Symbolic Algorithm for the Analysis of Robust Timed Automata

We propose an algorithm for the analysis of robustness of timed automata, that is, the correctness of a model in the presence of small drifts of the clocks. The algorithm is an extension of the region-based algorithm of Puri and uses the idea of stable zones as introduced by Daws and Kordy. Similarly to the assumptions made by Puri, we restrict our analysis to the class of timed automata with closed guards, progress cycles, and bounded clocks. We have implemented the algorithm and applied it to several benchmark specifications. The algorithm is a depth-first search based on on-the-fly reachability using zones.

Piotr Kordy, Rom Langerak, Sjouke Mauw, Jan Willem Polderman
Revisiting Compatibility of Input-Output Modal Transition Systems

Modern software systems are typically built of components that communicate through their external interfaces. The external behavior of a component can be effectively described using finite state automata-based formalisms. Such component models can then be used for varied analyses. For example, interface automata, which model the behavior of components in terms of component states and transitions between them, can be used to check whether the resulting system is compatible. By contrast, partial-behavior modeling formalisms, such as modal transition systems, can be used to capture and then verify properties of sets of prospective component implementations that satisfy an incomplete requirements specification. In this paper, we study how pairwise compatibility should be defined for partial-behavior models. To this end, we describe the limitations of the existing compatibility definitions, propose a set of novel compatibility notions for modal interface automata, and propose efficient, correct, and complete compatibility checking procedures.

Ivo Krka, Nicolás D’Ippolito, Nenad Medvidović, Sebastián Uchitel
Co-induction Simply
Automatic Co-inductive Proofs in a Program Verifier

This paper shows that an SMT-based program verifier can support reasoning about co-induction—handling infinite data structures, lazy function calls, and user-defined properties defined as greatest fix-points, as well as letting users write co-inductive proofs. Moreover, the support can be packaged to provide a simple user experience. The paper describes the features for co-induction in the language and verifier Dafny, defines their translation into input for a first-order SMT solver, and reports on some encouraging initial experience.
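As a loose, language-agnostic analogy (Dafny's actual mechanism is co-datatypes, co-predicates and prefix predicates inside an SMT-based verifier), co-inductive data can be pictured as lazily generated infinite streams, with a greatest-fixpoint property approximated by its finite prefixes:

```python
from itertools import count, islice

def up(n):
    """The infinite stream n, n+1, n+2, ... (a co-inductively defined object)."""
    return count(n)

def prefix_holds(pred, stream, k):
    """k-th finite approximation of 'pred holds for every element';
    the co-inductive property is the limit over all k."""
    return all(pred(x) for x in islice(stream, k))

# every element of up(1) is positive, as far as any finite prefix can tell
assert prefix_holds(lambda x: x > 0, up(1), 1000)
```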

K. Rustan M. Leino, Michał Moskal
Management of Time Requirements in Component-Based Systems

In component-based systems, a number of existing software components are combined in order to achieve business goals. Some such goals may include system-level (global) timing requirements (GTR). It is essential to refine a GTR into a set of component-level (local) timing requirements (LTRs) so that if a set of candidate components collectively meets them, then the corresponding GTR is also satisfied. Existing techniques for computing LTRs produce monolithic representations that have dependencies over multiple components. Such representations do not allow for effective component selection and repair. In this paper, we propose an approach for building under-approximated LTRs (uLTR) consisting of independent constraints over components. We then show how uLTR can be used to improve the design, monitoring and repair of component-based systems under time requirements. We also report on the implementation of this approach and its evaluation using real-world case studies in Web service composition. The results demonstrate its practical value and advantages over existing techniques.
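A toy illustration of the idea (with hypothetical numbers, not from the paper): suppose the GTR for a two-component composition is an end-to-end bound $t_A + t_B \le 10$. A monolithic LTR keeps this coupled constraint, so selecting or repairing component $A$ still requires knowing $t_B$. An under-approximated, independent uLTR might instead be $t_A \le 4 \,\wedge\, t_B \le 6$: any pair of components meeting the local bounds also meets the GTR, although some globally acceptable combinations (e.g. $t_A = 7$, $t_B = 2$) are excluded, which is the price of independence.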

Yi Li, Tian Huat Tan, Marsha Chechik
Compositional Synthesis of Concurrent Systems through Causal Model Checking and Learning

Formal verification such as model checking can be used to verify whether a system model satisfies a given specification. However, if model checking shows that the system model violates the specification, the designer has to manually refine the system model. To automate this refinement process, we propose a learning-based synthesis framework that can automatically eliminate all counterexamples from a system model based on causality semantics such that the synthesized model satisfies a given safety specification. Further, the framework for synthesis is not only automatic, but is also an iterative compositional process based on the L* algorithm, i.e., the global state space of the system is never generated in the synthesis process. We also prove the correctness and termination of the synthesis framework.

Shang-Wei Lin, Pao-Ann Hsiung
Formal Verification of Operational Transformation

Operational Transformation (OT) is a technology to provide consistency maintenance and concurrency control in real-time collaborative editing systems. The correctness of OT is critical due to its foundational role in supporting a wide range of real-world applications. In this work, we formally model OT-based collaborative editing systems and establish their correctness, w.r.t. convergence and intention preservation, using a set of well-defined transformation conditions and properties. We then use model checking to verify the transformation properties for basic data and operational models. To the best of our knowledge, this is the first work to conduct a complete verification of OT including control algorithms and transformation functions. Our evaluation confirmed the correctness of existing OT systems and transformation functions, with important discoveries.
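To make the flavour of transformation functions and convergence concrete, here is a classic textbook-style sketch (character insertions only, not the paper's data and operation models) checking that two replicas converge when each applies its local operation followed by the transformed remote operation:

```python
def apply_insert(doc, op):
    pos, ch, _site = op
    return doc[:pos] + ch + doc[pos:]

def transform(op, against):
    """Shift op's position if 'against' inserts at an earlier position
    (ties broken by site id, a common convention)."""
    pos, ch, site = op
    apos, _ach, asite = against
    if apos < pos or (apos == pos and asite < site):
        pos += 1
    return (pos, ch, site)

doc = "abc"
o1 = (1, "X", 1)   # site 1 inserts "X" at index 1
o2 = (2, "Y", 2)   # site 2 concurrently inserts "Y" at index 2

left  = apply_insert(apply_insert(doc, o1), transform(o2, o1))
right = apply_insert(apply_insert(doc, o2), transform(o1, o2))
assert left == right == "aXbYc"   # both replicas converge
```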

Yang Liu, Yi Xu, Shao Jie Zhang, Chengzheng Sun
Verification of a Transactional Memory Manager under Hardware Failures and Restarts

We present our formal verification of the persistent memory manager in IBM’s 4765 secure coprocessor. Its task is to achieve a transactional semantics of memory updates in the face of restarts and hardware failures and to provide resilience against the latter. The inclusion of hardware failures is novel in this area and incurs a significant jump in system complexity. We tackle the resulting verification challenge by a combination of a monad-based model, an abstraction that reduces the system’s non-determinism, and stepwise refinement. We propose novel proof rules for handling repeated restarts and nested metadata transactions. Our entire development is formalized in Isabelle/HOL.

Ognjen Marić, Christoph Sprenger
SCJ: Memory-Safety Checking without Annotations

The development of Safety-Critical Java (SCJ) has introduced a novel programming paradigm designed specifically to make Java applicable to safety-critical systems. Unlike in a Java program, memory management is an important concern under the control of the programmer in SCJ. It is, therefore, not possible to apply tools and techniques for Java programs to SCJ. We describe a new technique that uses an abstract language and inference rules to guarantee memory safety. Our approach does not require user-added annotations and automatically checks programs at the source-code level, although it can give false negatives.

Chris Marriott, Ana Cavalcanti
Refactoring, Refinement, and Reasoning
A Logical Characterization for Hybrid Systems

Refactoring of code is a common device in software engineering. As cyber-physical systems (CPS) become ever more complex, similar engineering practices become more common in CPS development. Proper, safe development of CPS designs is accompanied by a proof of correctness. Since the inherent complexities of CPS practically mandate iterative development, frequent changes of models are standard practice, but require reverification of the resulting models after every change.

To overcome this issue, we develop proof-aware refactorings for CPS. That is, we study model transformations on CPS and show how they correspond to relations on correctness proofs. As the main technical device, we show how the impact of model transformations on correctness can be characterized by different notions of refinement in differential dynamic logic. Furthermore, we demonstrate the application of refinements on a series of safety-preserving and liveness-preserving refactorings. For some of these we can give strong results by proving on a meta-level that they are correct. Where this is impossible, we construct proof obligations for showing that the refactoring respects the refinement relation.

Stefan Mitsch, Jan-David Quesel, André Platzer
Object Propositions

The presence of aliasing makes modular verification of object-oriented code difficult. If multiple clients depend on the properties of an object, one client may break a property that others depend on.

We have developed a modular verification approach based on the novel abstraction of object propositions, which combine predicates and information about object aliasing. In our methodology, even if shared data is modified, we know that an object invariant specified by a client holds. Our permission system allows verification using a mixture of linear and nonlinear reasoning. We thus offer an alternative to separation logic verification approaches. Object propositions can be more modular in some cases than separation logic because they can more effectively hide the exact aliasing relationships within a module. We validate the practicality of our approach by verifying an instance of the composite pattern. We implement our methodology in the intermediate verification language Boogie (of Microsoft Research), for the composite pattern example.

Ligia Nistor, Jonathan Aldrich, Stephanie Balzer, Hannes Mehnert
Flexible Invariants through Semantic Collaboration

Modular reasoning about class invariants is challenging in the presence of collaborating objects that need to maintain global consistency. This paper presents semantic collaboration: a novel methodology to specify and reason about class invariants of sequential object-oriented programs, which models dependencies between collaborating objects by semantic means. Combined with a simple ownership mechanism and useful default schemes, semantic collaboration achieves the flexibility necessary to reason about complicated inter-object dependencies, but requires limited annotation burden when applied to standard specification patterns. The methodology is implemented in AutoProof, our program verifier for the Eiffel programming language (but it is applicable to any language supporting some form of representation invariants). An evaluation on several challenge problems proposed in the literature demonstrates that it can handle a variety of idiomatic collaboration patterns, and is more widely applicable than the existing invariant methodologies.

Nadia Polikarpova, Julian Tschannen, Carlo A. Furia, Bertrand Meyer
Efficient Tight Field Bounds Computation Based on Shape Predicates

Tight field bounds contribute to verifying the correctness of object-oriented programs in bounded scenarios, by restricting the values that fields can take during automated analysis to feasible cases only. Tight field bounds are computed from formal class specifications. Their computation is costly, and existing approaches use a cluster of computers to obtain the bounds from declarative (JML) formal specifications.

In this article we address the question of whether the language in which class specifications are expressed may affect the efficiency with which tight field bounds can be computed. We introduce a novel technique that generates tight field bounds from data structure descriptions provided in terms of shape predicates, expressed using separation logic. Our technique enables us to compute tight field bounds faster, in wall-clock time, on a single workstation than the alternative approaches that use a cluster. Although the computed tight bounds differ in the canonical ordering in which data structure nodes are labeled, our computed tight field bounds are also effective. We incorporate the field bounds computed with our technique into a state-of-the-art SAT-based analysis tool, and show that, for various case studies, our field bounds allow us to handle scopes in bounded exhaustive analysis comparable to those corresponding to bounds computed with previous techniques.

Pablo Ponzio, Nicolás Rosner, Nazareno Aguirre, Marcelo Frias
A Graph-Based Transformation Reduction to Reach UPPAAL States Faster

On-line model checking is a recent technique to overcome limitations of model checking when accurate system models are not available. At certain times during on-line model checking it is necessary to adjust the current model state to the real-world state, and to do so in an efficient way. This paper presents a general, graph-based transformation reduction and applies it to reduce the length of transformation sequences needed to reach particular states in the model checker UPPAAL. Our evaluation shows that, generally, upper bounds on the length of those sequences exist independently of the elapsed time in the system. It follows that our proposed method is capable of fulfilling the real-time requirements imposed by on-line model checking.

Jonas Rinast, Sibylle Schupp, Dieter Gollmann
Computing Quadratic Invariants with Min- and Max-Policy Iterations: A Practical Comparison

Policy iterations have been known in static analysis for about a decade. Despite the impressive results they provide, achieving a precise fixpoint without the widening/narrowing mechanisms of abstract interpretation, their use is not yet widespread. Furthermore, there are basically two dual approaches, min-policies and max-policies, but they have not yet been practically compared.

Multiple issues could explain their relatively low adoption in the research community: implementation of the theory is not obvious; initialization is rarely addressed; integration with other abstractions or fixpoint engines is rarely mentioned; etc. This paper tries to present a policy iteration primer, summarizing the approaches from the practical side and focusing on their implementation and use.

We implemented both of them for a specific setting: the computation of quadratic templates, which appear useful for analyzing controllers such as those found in civil aircraft or UAVs.

Pierre Roux, Pierre-Loïc Garoche
Efficient Self-composition for Weakest Precondition Calculi

This paper contributes to the deductive verification of language-based secure information flow. A popular approach in this area is self-composition in combination with off-the-shelf software verification systems to check for secure information flow. This approach is appealing, because (1) it is highly precise and (2) existing sophisticated software verification systems can be harnessed. On the other hand, self-composition is commonly considered to be inefficient.

We show how the efficiency of self-composition style reasoning can be increased. It is sufficient to consider programs only once, if the used verification technique is based on a weakest precondition calculus with an explicit heap model. Additionally, we show that in many cases the number of final symbolic states to be considered can be reduced considerably. Finally, we propose a comprehensive solution of the technical problem of applying software contracts within the self-composition approach. So far this problem had only been solved partially.
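A minimal sketch of the self-composition idea itself (not the paper's weakest-precondition calculus): noninterference of a program's public output from its secret input is reduced to a property of the program composed with a renamed copy of itself, here checked exhaustively over a tiny domain instead of by deductive verification.

```python
def program(low, high):
    # hypothetical program under analysis; 'high' is the secret input
    return low * 2           # the public result does not depend on 'high'

def self_composed(low, high1, high2):
    # run the program and its renamed copy on equal low inputs
    out1 = program(low, high1)
    out2 = program(low, high2)
    # secure information flow: equal low inputs imply equal low outputs
    assert out1 == out2

for low in range(-3, 4):
    for h1 in range(-3, 4):
        for h2 in range(-3, 4):
            self_composed(low, h1, h2)
```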

Christoph Scheben, Peter H. Schmitt
Towards a Formal Analysis of Information Leakage for Signature Attacks in Preferential Elections

Electronic voting is rich with paradoxes. How can a voter verify that his own vote has been correctly counted, but at the same time be prevented from revealing his vote to a third party? Not only is there no generally recognised solution to those problems, it is not generally agreed how to specify precisely what the problems are, and what exact threats they pose. Such a situation is ripe for the application of Formal Methods.

In this paper we explore so-called signature attacks, where an apparently secure system can nevertheless be manipulated to reveal a voter’s choice in unexpected and subtle ways. We describe two examples in detail, and from that make proposals about where formal techniques might apply.

Roland Wen, Annabelle McIver, Carroll Morgan
Analyzing Clinical Practice Guidelines Using a Decidable Metric Interval-Based Temporal Logic

A Clinical Practice Guideline defines best practices to be followed by clinicians to manage a particular disease. Checking the quality of such guidelines is a very important issue, e.g., designers of the guidelines should ensure their consistency. A formal modelling approach is an appropriate choice due to the complexity of these guidelines. In this paper, we develop a metric interval-based temporal logic, which is suitable for such modelling and then propose a method for checking the satisfiability of such guidelines, to assure their consistency. As a case study, we use the logic to model a real-life guideline, the Active Tuberculosis Diagnosis guideline.

Morteza Yousef Sanati, Wendy MacCaull, Thomas S. E. Maibaum
A Modular Theory of Object Orientation in Higher-Order UTP

Hoare and He’s Unifying Theories of Programming (UTP) is a framework that facilitates the integration of relational theories. None of the UTP theories of object orientation, however, supports recursion, dynamic binding, and compositional method definitions all at the same time. In addition, most of them are defined for a fixed language and do not lend themselves easily for integration with other UTP theories. Here, we present a novel theory of object orientation in the UTP that supports all of the aforementioned features while permitting its integration with other UTP theories. Our new theory also provides novel insights into how higher-order programming can be used to reason about object-oriented programs in a compositional manner. We exemplify its use by creating an object-oriented variant of a refinement language for real-time systems.

Frank Zeyda, Thiago Santos, Ana Cavalcanti, Augusto Sampaio
Formalizing and Verifying a Modern Build Language

CloudMake is a software utility that automatically builds executable programs and libraries from source code, a modern Make utility. Its design gives rise to a number of possible optimizations, like cached builds, and the executables to be built are described using a functional programming language. This paper formally and mechanically verifies the correctness of central CloudMake algorithms.

The paper defines the CloudMake language using an operational semantics, but with a twist: the central operation exec is defined axiomatically, making it pluggable so that it can be replaced by calls to compilers, linkers, and other tools. The formalization and proofs of the central CloudMake algorithms are done entirely in Dafny, the proof engine of which is an SMT-based program verifier.

Maria Christakis, K. Rustan M. Leino, Wolfram Schulte
The Wireless Fire Alarm System: Ensuring Conformance to Industrial Standards through Formal Verification

The design of distributed, safety-critical real-time systems is challenging due to their high complexity, the potentially large number of components, and complicated requirements and environment assumptions. Our case study shows that despite those challenges, the automated formal verification of such systems is not only possible, but practicable even in the context of small to medium-sized enterprises. We considered a wireless fire alarm system and uncovered severe design flaws. For an improved design, we provided dependable verification results which in particular ensure that conformance tests for a relevant regulation standard will be passed. In general we observe that if system tests are specified by generalized test procedures, then verifying that a system will pass any test following these test procedures is a cost-efficient approach to improve product quality based on formal methods.

Sergio Feo-Arenis, Bernd Westphal, Daniel Dietsch, Marco Muñiz, Ahmad Siyar Andisha
Formally Verifying Graphics FPU
An Intel® Experience

Verification of a Floating Point Unit (FPU) has always been a challenging task and its completeness is always a question. Formal verification (FV) guarantees 100% coverage and is usually the sign-off methodology for FPU verification. At Intel®, Symbolic Trajectory Evaluation (STE) FV has been used for over two decades to verify CPU FPUs. With the ever-increasing workload share between the core CPU and the Graphics Processing Unit (GPU) and the augmented set of data standards that the GPU has to comply with, the complexity of the graphics FPU is exploding. This has made the use of FV imperative to avoid any bug escapes. STE, which has proved to be the state-of-the-art methodology for CPU FPU verification, was leveraged in verifying the Intel® Graphics FPU. There were many roadblocks along the way because of the extra flexibility provided in graphics FPU instructions. This paper presents our experience in formally verifying the graphics FPU.

Aarti Gupta, M. V. Achutha KiranKumar, Rajnish Ghughal
MDP-Based Reliability Analysis of an Ambient Assisted Living System

The ageing of the population places a heavy burden on all industrialised societies. Smart systems equipped with ambient intelligence technologies, also known as Ambient Assisted Living (AAL) systems, are in great demand to improve elderly people's independent living and alleviate the pressure on caregivers and family members. In practice, these systems are expected to meet a certain reliability requirement in order to guarantee their usefulness. However, this is challenging because AAL systems involve complex behaviours, dynamic environments and unreliable communications. In this work, we report our experience in analysing the reliability of a smart healthcare system named AMUPADH for elderly people with dementia, which is deployed in a Singapore-based nursing home. Using a Markov Decision Process (MDP) as the reliability model, we perform reliability analysis in three aspects. Firstly, we judge the AAL system design by calculating the overall system reliability based on the reliability value of each component. Secondly, to achieve the required system reliability, we perform reliability distribution to calculate the reliability requirement for each component. Lastly, sensitivity analysis is applied to find which component affects the system reliability most significantly. Our evaluation shows that the overall reliability of reminders being sent correctly in the AMUPADH system is below 40%, and that improving the reliability of the Wi-Fi network would be more effective at improving overall reliability than improving other components.
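For intuition only (hypothetical component values, and a naive series composition rather than the MDP model actually used in the paper), the two analyses of composing component reliabilities and checking which component matters most can be sketched as:

```python
components = {"sensor": 0.95, "wifi": 0.80, "reminder_app": 0.90}

def system_reliability(comps):
    r = 1.0
    for value in comps.values():
        r *= value            # series composition: every component must work
    return r

base = system_reliability(components)
print(f"baseline reliability: {base:.3f}")

# one-at-a-time sensitivity: improve each component by 5% and compare
for name, value in components.items():
    improved = dict(components, **{name: min(1.0, value + 0.05)})
    print(f"improving {name}: +{system_reliability(improved) - base:.3f}")
```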

Yan Liu, Lin Gui, Yang Liu
Diagnosing Industrial Business Processes: Early Experiences

Modern-day enterprises rely on streamlined business processes for their smooth operation. However, a lot of these processes contain errors, many of which are control-flow related, e.g., deadlock and lack of synchronization. This can hinder downstream analyses like correct simulation, code generation, etc. For real-life process models another kind of error is also quite common: syntactic errors which arise due to poor modeling practices. Detecting errors and identifying where they occur are equally important for correct modeling of business processes. We consider industrial business processes modeled in the Business Process Modeling Notation (BPMN) and use graph-theoretic techniques and Petri-net-based analyses to detect syntactic and control-flow-related errors, respectively. Subsequently, based on this, we diagnose different types of errors. We are further able to discover how error frequencies change with error depth and how they correlate with the size of the subprocesses and swim-lane interactions in the models. Such diagnostic details are vital for business process designers to detect, identify and rectify errors in their models.

Suman Roy, A. S. M. Sajeev, Srivibha Sripathy
Formal Verification of Lunar Rover Control Software Using UPPAAL

This paper reports our formal verification of the Chinese Lunar Rover control software, an embedded real-time multitasking software system running over a home-made real-time operating system (RTOS). The main purpose of the verification is to validate whether the system satisfies a time-related functional property. We modeled the RTOS, application tasks and physical environment as timed automata and analyzed the system using the statistical model checking (SMC) engine of UPPAAL. The verification results showed that our model was able to track down undesired behavior in the multitasking system. Moreover, as the modeling framework we designed is general and extensible, it can serve as a reference method for verifying other real-time multitasking systems.

Lijun Shan, Yuying Wang, Ning Fu, Xingshe Zhou, Lei Zhao, Lijng Wan, Lei Qiao, Jianxin Chen
Formal Verification of a Descent Guidance Control Program of a Lunar Lander

We report on our recent experience in applying formal methods to the verification of a descent guidance control program of a lunar lander. The powered descent process of the lander gives a specific hybrid system (HS), i.e. a sampled-data control system composed of the physical plant and the embedded control program. Due to its high complexity, verification of such a system is very hard. In the paper, we show how this problem can be solved by several different techniques including simulation, bounded model checking (BMC) and theorem proving, using the tools Simulink/Stateflow, iSAT-ODE and Flow*, and HHL Prover, respectively. In particular, for the theorem-proving approach to work, we study the invariant generation problem for HSs with general elementary functions. As a preliminary attempt, we perform verification by focusing on one of the 6 phases, i.e. the slow descent phase, of the powered descent process. Through such verification, trustworthiness of the lunar lander’s control program is enhanced.

Hengjun Zhao, Mengfei Yang, Naijun Zhan, Bin Gu, Liang Zou, Yao Chen
Backmatter
Metadata
Title
FM 2014: Formal Methods
Edited by
Cliff Jones
Pekka Pihlajasaari
Jun Sun
Copyright year
2014
Publisher
Springer International Publishing
Electronic ISBN
978-3-319-06410-9
Print ISBN
978-3-319-06409-3
DOI
https://doi.org/10.1007/978-3-319-06410-9