
2024 | Book

Integrated Formal Methods

18th International Conference, IFM 2023, Leiden, The Netherlands, November 13–15, 2023, Proceedings


About this book

This volume, LNCS 14300, constitutes the refereed proceedings of the 18th International Conference on Integrated Formal Methods, IFM 2023, held in Leiden, The Netherlands, in November 2023.

The 16 full papers presented together with 2 short papers were carefully reviewed and selected from 51 submissions. The conference focuses on all aspects of the design of integrated techniques, including language design, verification and validation, automated tool support, and the use of such techniques in software engineering practice.

Table of Contents

Frontmatter
Correction to: Integrated Formal Methods
Paula Herber, Anton Wijs

Invited Presentations

Frontmatter
SMT: Something You Must Try
Abstract
SMT (Satisfiability Modulo Theories) solving is a technology for the fully automated solving of logical formulas. Due to their impressive efficiency, SMT solvers are nowadays frequently used in a wide variety of applications. These tools are general purpose, and as off-the-shelf solvers their usage is truly integrated. A typical application (i) encodes a real-world problem as a logical formula, (ii) checks this formula for satisfiability with the help of an SMT solver, and, in case of satisfiability, (iii) decodes its solution back to a solution of the original real-world problem.
In this extended abstract we give some insights into the working mechanisms of SMT solving, discuss a few areas of application, and present a novel application from the domain of simulation.
Erika Ábrahám, József Kovács, Anne Remke
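
To make the encode/check/decode workflow above concrete, here is a minimal sketch using the Z3 Python bindings (our choice of solver and toy problem; the abstract does not prescribe either). A small scheduling problem is encoded as integer constraints, checked for satisfiability, and the resulting model is decoded back into a schedule.

from z3 import Int, Or, Solver, sat

# (i) Encode a toy real-world problem: two jobs of length 2 on one machine,
#     both finished by time 5, and not overlapping.
start_a, start_b = Int('start_a'), Int('start_b')
solver = Solver()
solver.add(start_a >= 0, start_b >= 0)
solver.add(start_a + 2 <= 5, start_b + 2 <= 5)
solver.add(Or(start_a + 2 <= start_b, start_b + 2 <= start_a))

# (ii) Check the formula for satisfiability with the SMT solver.
if solver.check() == sat:
    # (iii) Decode the logical model back into a solution of the original problem.
    model = solver.model()
    print("job A starts at", model[start_a], "and job B starts at", model[start_b])
else:
    print("no feasible schedule exists")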

Analysis and Verification

Frontmatter
Automated Sensitivity Analysis for Probabilistic Loops
Abstract
We present an exact approach to analyze and quantify the sensitivity of higher moments of probabilistic loops with symbolic parameters, polynomial arithmetic and potentially uncountable state spaces. Our approach integrates methods from symbolic computation, probability theory, and static analysis in order to automatically capture sensitivity information about probabilistic loops. Sensitivity information allows us to formally establish how value distributions of probabilistic loop variables influence the functional behavior of loops, which is particularly helpful when choosing values of loop variables in order to ensure efficient/expected computations. Our work uses algebraic techniques to model higher moments of loop variables via linear recurrence equations and introduces the notion of sensitivity recurrences. We show that sensitivity recurrences precisely model loop sensitivities, even in cases where the moments of loop variables do not satisfy a system of linear recurrences. As such, we enlarge the class of probabilistic loops for which sensitivity analysis was so far feasible. We demonstrate the success of our approach by analyzing the sensitivities of probabilistic loops.
Marcel Moosbrugger, Julian Müllner, Laura Kovács
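
As a toy illustration of moment and sensitivity recurrences (our own example, not taken from the paper), consider a loop whose body executes x := x + b with b drawn from a Bernoulli(p) distribution, independently of x. The first two moments then satisfy the linear recurrences

\[
\mathbb{E}[x_{n+1}] = \mathbb{E}[x_n] + p, \qquad
\mathbb{E}[x_{n+1}^2] = \mathbb{E}[x_n^2] + 2p\,\mathbb{E}[x_n] + p,
\]

and differentiating with respect to the symbolic parameter p yields the corresponding sensitivity recurrences

\[
\partial_p\,\mathbb{E}[x_{n+1}] = \partial_p\,\mathbb{E}[x_n] + 1, \qquad
\partial_p\,\mathbb{E}[x_{n+1}^2] = \partial_p\,\mathbb{E}[x_n^2] + 2\,\mathbb{E}[x_n] + 2p\,\partial_p\,\mathbb{E}[x_n] + 1.
\]

If the initial value x_0 does not depend on p, the first recurrence gives \(\partial_p\,\mathbb{E}[x_n] = n\), quantifying how strongly the expected value after n iterations reacts to the parameter p.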
diffDP: Using Data Dependencies and Properties in Difference Verification with Conditions
Abstract
To deal with frequent software changes, e.g., as caused by agile software development processes, software verification tools must be incremental. While many existing incremental verification approaches are tailored to and coupled with a specific verifier, Beyer et al. recently introduced a verifier-agnostic approach. The underlying idea of their approach is to (1) detect an overapproximation of those paths in the modified software that may cause regression bugs (i.e., property violations that do not exist in the original, unmodified software), (2) encode the detected paths in a condition, and (3) apply conditional model checking to restrict the verifier's exploration to the detected paths. So far, only a simple, syntax-based difference detector exists. In this paper, we propose a more sophisticated difference detector, diffDP, which, among other things, takes data dependencies and correctness properties (more concretely, (un)reachability properties) into account to determine which changes may cause regression bugs, and where. Our extensive evaluation confirms that, for all considered conditional model checkers, our proposed difference detector improves the effectiveness and efficiency of difference verification with conditions on several tasks.
Marie-Christine Jakobs, Tim Pollandt
CHC Model Validation with Proof Guarantees
Abstract
Formal verification tooling increasingly relies on logic solvers as automated reasoning engines. A point of commonality among these solvers is the high complexity of their codebases, which makes bug occurrence disturbingly frequent. Tool competitions have showcased many examples of state-of-the-art solvers disagreeing on the satisfiability of logic formulas, be they solvers for Boolean satisfiability (SAT), satisfiability modulo theories (SMT), or constrained Horn clauses (CHC). The validation of solvers' results is thus of paramount importance, in order to increase the confidence not only in the solvers themselves, but also in the tooling which they underpin. Among the formalisms commonly used by modern verification tools, CHC is one that has seen, at the same time, extensive practical usage and very little effort in result validation. As one of the initial steps in addressing this issue, we propose and evaluate a two-layered validation approach for witnesses of CHC satisfiability. Our approach relies, first, on a proof-producing SMT solver to validate a CHC model via a series of SMT queries, and, second, on a proof checker to validate the SMT solver's results. We developed a modular evaluation framework and assessed the approach's viability via large-scale experimentation, comparing three CHC solvers, five SMT solvers, and four proof checkers. Our results indicate that the approach is feasible, with potential to be incorporated into CHC-based tooling, and also confirm the need for validation, with nine bugs being found in the tools used.
Rodrigo Otoni, Martin Blicha, Patrick Eugster, Natasha Sharygina
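
The first layer of the approach, validating a candidate CHC model through SMT queries, can be sketched roughly as follows with the Z3 Python bindings and a hypothetical three-clause system of our own; the actual tool chain additionally uses proof-producing solvers and independent proof checkers.

from z3 import Int, And, Implies, Not, BoolVal, Solver, unsat

x = Int('x')

def inv(v):
    # Candidate model for the uninterpreted predicate Inv of the CHC system.
    return v <= 5

clauses = [
    Implies(x == 0, inv(x)),                      # fact clause
    Implies(And(inv(x), x < 5), inv(x + 1)),      # inductive clause
    Implies(And(inv(x), x > 5), BoolVal(False)),  # query clause
]

# The model is valid iff every clause becomes a valid formula after substitution,
# i.e., the negation of each substituted clause is unsatisfiable.
for i, clause in enumerate(clauses):
    solver = Solver()
    solver.add(Not(clause))
    print("clause", i, ":", "holds" if solver.check() == unsat else "model rejected")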
Verify This: Memcached—A Practical Long-Term Challenge for the Integration of Formal Methods
Abstract
Challenging benchmarks are a major driver for sharpening our tools and theories. This paper introduces the second VerifyThis long-term challenge: the specification and verification of a remote key-value cache, inspired by, and acting as a compatible drop-in replacement for, the memcached software package, which is widely used in industry. We identify open gaps in the formal specification and verification of systems. The goal of the challenge is therefore to foster collaboration in order to advance the capabilities of current methods and to verify a realistic and industrially relevant software application. This challenge has it all: high-level modeling of communication protocols, intricate algorithms and data structures, code-level verification, safety and liveness properties, as well as security challenges. It emphasizes the opportunity and need to integrate approaches and tools across the entire spectrum of formal methods.
Gidon Ernst, Alexander Weigl

Deductive Verification

Frontmatter
Towards Formal Verification of a TPM Software Stack
Abstract
The Trusted Platform Module (TPM) is a cryptoprocessor designed to protect the integrity and security of modern computers. Communications with the TPM go through the TPM Software Stack (TSS), a popular implementation of which is the open-source library tpm2-tss. Vulnerabilities in its code could allow attackers to recover sensitive information and take control of the system. This paper describes a case study on the formal verification of tpm2-tss using the Frama-C verification platform. Heavily based on linked lists and complex data structures, the library code is highly challenging for the verification tool. We present several issues and limitations we faced, illustrate them with examples, and present solutions that allowed us to verify functional properties and the absence of runtime errors for a representative subset of functions. We describe verification results and the tool improvements necessary to achieve a full formal verification of the target code.
Yani Ziani, Nikolai Kosmatov, Frédéric Loulergue, Daniel Gracia Pérez, Téo Bernier
Reasoning About Exceptional Behavior at the Level of Java Bytecode
Abstract
A program's exceptional behavior can substantially complicate its control flow and, hence, accurate reasoning about the program's correctness. On the other hand, formally verifying realistic programs is likely to involve exceptions, a ubiquitous feature of modern programming languages.
In this paper, we present a novel approach to verify the exceptional behavior of Java programs, which extends our previous work on ByteBack. ByteBack works on a program’s bytecode, while providing means to specify the intended behavior at the source-code level; this approach sets ByteBack apart from most state-of-the-art verifiers that target source code. To explicitly model a program’s exceptional behavior in a way that is amenable to formal reasoning, we introduce Vimp: a high-level bytecode representation that extends the Soot framework’s Grimp with verification-oriented features, thus serving as an intermediate layer between bytecode and the Boogie intermediate verification language. Working on bytecode through this intermediate layer brings flexibility and adaptability to new language versions and variants: as our experiments demonstrate, ByteBack can verify programs involving exceptional behavior in all versions of Java, as well as in Scala and Kotlin (two other popular JVM languages).
Marco Paganoni, Carlo A. Furia
Analysis and Formal Specification of OpenJDK’s BitSet
Abstract
This paper uses a combination of formal specification and testing to analyse OpenJDK's BitSet class. This class represents a vector of bits that grows as required. During our analysis, we uncovered a number of bugs. We propose and compare various solutions, supported by our formal specification. While a full mechanical verification of the BitSet class is not yet possible due to limited support for bitwise operations in the KeY theorem prover, we show the initial steps taken to formally verify the challenging get(int,int) method and discuss some required extensions to the theorem prover.
Andy S. Tatman, Hans-Dieter A. Hiep, Stijn de Gouw
Joining Forces! Reusing Contracts for Deductive Verifiers Through Automatic Translation
Abstract
Deductive verifiers can be used to prove the correctness of programs by specifying the program's intended behaviour using annotations such as pre- and postconditions. Unfortunately, most verifiers use their own unique specification language for those contract-based annotations. While many of these languages have similar concepts and syntax, there are numerous semantic differences and subtleties that make it very difficult to reuse specifications between verifiers. Yet reusing specifications could help overcome one of the bottlenecks of deductive verification, namely writing specifications. Therefore, we present the Specification Translator, a tool to automatically translate annotations for deductive verifiers. It currently supports Java programs annotated for OpenJML, Krakatoa and VerCors. Using the Specification Translator, we show that we can reuse 81% of the annotations, which would otherwise need to be translated manually. Moreover, it makes it possible to reuse tools such as Daikon that generate annotations only in the syntax of one specific tool.
Lukas Armborst, Sophie Lathouwers, Marieke Huisman

Hardware and Memory Verification

Frontmatter
Lifting the Reasoning Level in Generic Weak Memory Verification
Abstract
Weak memory models specify the semantics of concurrent programs on multi-core architectures. Reasoning techniques for weak memory models are often specialized to one fixed model, and verification results are hence not transferable to other memory models. A recent proposal of a generic verification technique, based on axioms on program behaviour expressed via weakest preconditions, aims at overcoming this specialization to dedicated models. Due to the use of weakest preconditions, however, reasoning takes place at a very low level, requiring the application of numerous axioms to derive program properties, even for a single statement.
In this paper, we lift reasoning in this generic verification approach to a more abstract level. Based on a view-based assertion language, we provide a number of novel proof rules for directly reasoning on the level of program constructs. We prove soundness of our proof rules and exemplify them on the write-to-read causality (WRC) litmus test. A comparison to the axiom-based low-level proof reveals a significant reduction in the number of required proof steps.
Lara Bargmann, Heike Wehrheim
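
For readers who have not encountered it, the write-to-read causality (WRC) litmus test mentioned above has the following standard shape (our rendering, with shared locations x and y initially 0):

\[
\begin{array}{l|l|l}
\text{Thread 1} & \text{Thread 2} & \text{Thread 3} \\\hline
x := 1 & r_1 := x & r_2 := y \\
       & y := 1   & r_3 := x
\end{array}
\]

The outcome in question is \(r_1 = 1 \wedge r_2 = 1 \wedge r_3 = 0\): it is forbidden under sequential consistency, but sufficiently weak memory models may allow it unless adequate synchronisation is added.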
Automatic Formal Verification of RISC-V Pipelined Microprocessors with Fault Tolerance by Spatial Redundancy at a High Level of Abstraction
Abstract
Presented are abstraction techniques for efficient modeling of pipelined microprocessors of the RISC-V architecture, including designs with fault tolerance by spatial redundancy. This is done at a high level of abstraction, allowing us to formally verify such processors extremely efficiently by using the property of Positive Equality. To the best of our knowledge, this is the first work on formal verification of RISC-V pipelined processors at a high level of abstraction, and the first work on formal verification of pipelined microprocessors with fault tolerance by spatial redundancy.
Miroslav N. Velev
Refinement and Separation: Modular Verification of Wandering Trees
Abstract
Flash memory does not allow in-place updates like conventional hard disks. Therefore all file systems must maintain an index that maps identifiers for files and directories to the address of their most recently written version. For efficiency, the index is typically implemented as a Wandering Search Tree. However, the verification of Wandering Trees is challenging since it has to deal with multiple aspects at once: the algorithmic complexity of search trees, trees in RAM that are partially loaded from snapshots on flash, where only modified parts are incrementally saved, and the efficient representation of trees as pointer structures. This paper proposes a modular solution that allows verifying each aspect separately. The solution has been mechanized in the theorem prover KIV.
Gerhard Schellhorn, Stefan Bodenmüller, Wolfgang Reif
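
The "wandering" behaviour at the heart of the paper, out-of-place updates that force every node on the path from a modified leaf up to the root to be rewritten at a fresh flash address, can be pictured with a deliberately simplified sketch of our own (unrelated to the verified KIV development):

flash = []                               # append-only log of pages; no in-place updates

def write_page(node):
    flash.append(node)
    return len(flash) - 1                # address of the freshly written page

def update(root_addr, key, value):
    """Return the address of a new root in which key maps to value."""
    node = dict(flash[root_addr])        # copy-on-write: pages are never modified in place
    if node["kind"] == "leaf":
        node["entries"] = {**node["entries"], key: value}
    else:
        child = 0 if key < node["split"] else 1
        children = list(node["children"])
        children[child] = update(children[child], key, value)
        node["children"] = children
    return write_page(node)              # the whole path to the root "wanders" to new pages

# Usage: a two-leaf tree; updating key 1 rewrites the left leaf and the root.
left = write_page({"kind": "leaf", "entries": {1: "a"}})
right = write_page({"kind": "leaf", "entries": {9: "b"}})
root = write_page({"kind": "inner", "split": 5, "children": [left, right]})
new_root = update(root, 1, "a2")
print(new_root, flash[new_root])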

Verification and Learning

Frontmatter
Performance Fuzzing with Reinforcement-Learning and Well-Defined Constraints for the B Method
Abstract
The B method is a formal method supported by a variety of tools. Those tools, like any complex piece of software, may suffer from performance issues and vulnerabilities, especially for potentially undiscovered, pathological cases. To find such cases and assess their performance impact within a single tool, we apply the performance fuzzing algorithm BanditFuzz to the constraint-solving backends of the ProB model checker. BanditFuzz utilises two multi-armed bandits to generate and mutate benchmark inputs for the ProB backends in a targeted manner. We describe how we adapted BanditFuzz to the B method, how it differs from the original implementation for the SMT-LIB standard, and how we ensure well-definedness of the randomly generated benchmarks. Our experiments successfully uncovered performance issues in specific backends and even in external tooling, providing valuable insights into areas that require improvement.
Jannik Dunkelau, Michael Leuschel
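
The targeted generation and mutation described above rest on classic multi-armed bandits. A rough, hypothetical sketch of the mutation side as an epsilon-greedy loop follows (our simplification, not the BanditFuzz implementation; mutations and measure_runtime are assumed interfaces):

import random

def bandit_fuzz_loop(mutations, measure_runtime, benchmark, rounds=100, epsilon=0.2):
    # Arms are mutation operators; the reward is the measured solving time of the
    # tool under test on the mutated benchmark (slower solving = higher reward).
    estimates = {m: 0.0 for m in mutations}
    counts = {m: 0 for m in mutations}
    worst_input, worst_time = benchmark, measure_runtime(benchmark)
    for _ in range(rounds):
        if random.random() < epsilon:
            arm = random.choice(list(mutations))     # explore
        else:
            arm = max(estimates, key=estimates.get)  # exploit the best arm so far
        candidate = arm(worst_input)
        reward = measure_runtime(candidate)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        if reward > worst_time:
            worst_input, worst_time = candidate, reward
    return worst_input, worst_time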
Reinforcement Learning Under Partial Observability Guided by Learned Environment Models
Abstract
Reinforcement learning and planning under partial observability are notoriously difficult. In this setting, decision-making agents need to perform a sequence of actions with incomplete information about the underlying state of the system. As such, methods that can act in the presence of incomplete state information are of special interest to the machine learning, planning, and control communities. In the scope of this paper, we consider environments that behave like a partially observable Markov decision process (POMDP) with known discrete actions, while assuming no knowledge about the POMDP's structure or transition probabilities.
We propose an approach for reinforcement learning (RL) in such partially observable environments. Our approach combines Q-learning with IoAlergia, an automata learning method that can learn Markov decision processes (MDPs). By learning MDP models of the environment from the experiences of the RL agent, we enable RL in partially observable domains without explicit, additional memory to track previous interactions for dealing with ambiguities stemming from partial observability. We instead provide the RL agent with additional observations in the form of abstract environment states. By simulating new experiences on a learned model we extend the agent’s internal state representation, which in turn enables better decision-making in the presence of partial observability. In our evaluation we report on the validity of our approach and its promising performance in comparison to six state-of-the-art deep RL techniques with recurrent neural networks and fixed memory.
Edi Muškardin, Martin Tappler, Bernhard K. Aichernig, Ingo Pill
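
The central idea, feeding the RL agent an abstract state tracked on a learned environment model instead of equipping it with explicit memory, can be sketched roughly as follows. The sketch assumes a tabular Q-learning agent together with hypothetical env and learned_model interfaces; the authors' approach learns the model with IoAlergia and is considerably more elaborate.

import random
from collections import defaultdict

def q_learning_with_learned_model(env, learned_model, actions, episodes=1000,
                                  alpha=0.1, gamma=0.99, epsilon=0.1):
    # The agent's state is the pair (current observation, abstract model state),
    # so ambiguities caused by partial observability are resolved without extra memory.
    q = defaultdict(float)
    for _ in range(episodes):
        obs = env.reset()
        abstract = learned_model.reset()            # abstract state of the learned MDP
        done = False
        while not done:
            state = (obs, abstract)
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: q[(state, a)])
            next_obs, reward, done = env.step(action)
            abstract = learned_model.step(action, next_obs)
            next_state = (next_obs, abstract)
            best_next = max(q[(next_state, a)] for a in actions)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            obs = next_obs
        # Periodic re-learning of the model from the collected traces is omitted here.
    return q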

Temporal Logics

Frontmatter
Mission-Time LTL (MLTL) Formula Validation via Regular Expressions
Abstract
Mission-time Linear Temporal Logic (MLTL) represents the most practical fragment of Metric Temporal Logic; MLTL resembles the popular logic Linear Temporal Logic (LTL) with finite closed-interval integer bounds on the temporal operators. Increasingly, many tools reason over MLTL specifications, yet these tools are useful only when system designers can validate the input specifications. We design an automated characterization of the structure of the computations that satisfy a given MLTL formula using regular expressions. We prove soundness and completeness of our structure. We also give an algorithm for automated MLTL formula validation and analyze its complexity both theoretically and experimentally. Additionally, we generate a test suite using control flow diagrams to robustly test our implementation and release an open-source tool with a user-friendly graphical interface. The results of our contributions are improvements to existing algorithms for MLTL analysis, and they are applicable to many other tools for automated, efficient MLTL formula validation. Our updated tool may be found at https://temporallogic.org/research/WEST.
Jenna Elwing, Laura Gamboa-Guzman, Jeremy Sorkin, Chiara Travesset, Zili Wang, Kristin Yvonne Rozier
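
As a toy illustration of the regular-expression view (ours, not the construction of the paper or the WEST tool): over a single proposition p, a computation of length three can be written as a bit string whose i-th character is p's truth value at time i; G[0,2] p and F[0,2] p then correspond to simple regular expressions over these strings.

import re

globally_0_2_p = re.compile(r"^111$")          # G[0,2] p: p holds at time steps 0, 1 and 2
finally_0_2_p = re.compile(r"^[01]*1[01]*$")   # F[0,2] p: p holds at some step in 0..2

for trace in ["111", "110", "000"]:
    print(trace,
          bool(globally_0_2_p.match(trace)),
          bool(finally_0_2_p.match(trace)))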
Symbolic Model Checking of Relative Safety LTL Properties
Abstract
A well-known classification in formal methods distinguishes between safety and liveness properties. A generalization of these concepts defines safety and liveness relative to another property, usually considered an assumption on the traces of the language. Safety properties have the advantage that their model checking problem can be reduced to simple reachability. However, the generalization of such a reduction to the case of relative safety has not yet been investigated.
In this paper, we study the problem of model checking relative safety properties in the context of Linear-time Temporal Logic. We show that the problem can be reduced to reachability when the system model is free of livelocks related to the assumptions. More generally, we provide an algorithm that removes such livelocks from the system model, driven by counterexamples to the relative safety property. We compare the approach with other reduction-to-safety algorithms on several benchmarks.
Alberto Bombardelli, Alessandro Cimatti, Stefano Tonetta, Marco Zamboni
Extending PlusCal for Modeling Distributed Algorithms
Abstract
PlusCal is a language for describing algorithms at a high level of abstraction. The PlusCal translator generates a TLA+ specification that can be verified using the TLA+ model checkers or proof assistant. We describe Distributed PlusCal, an extension of PlusCal that is intended to facilitate the description of distributed algorithms. Distributed PlusCal adds two orthogonal concepts to PlusCal: (i) processes can consist of several threads that share process-local variables, and (ii) Distributed PlusCal provides communication channels with associated primitives for sending and receiving messages. The existing PlusCal translator has been extended to support these concepts, and we report on initial experience with the use of Distributed PlusCal.
Horatiu Cirstea, Stephan Merz

Autonomous Systems

Frontmatter
Formal Modelling and Analysis of a Self-Adaptive Robotic System
Abstract
Self-adaptation is a crucial feature of autonomous systems that must cope with uncertainties in, e.g., their environment and their internal state. Self-adaptive systems are often modelled as two-layered systems with a managed subsystem handling the domain concerns and a managing subsystem implementing the adaptation logic. We consider a case study of a self-adaptive robotic system; more concretely, an autonomous underwater vehicle (AUV) used for pipeline inspection. In this paper, we model and analyse it with the feature-aware probabilistic model checker ProFeat. The functionalities of the AUV are modelled in a feature model, capturing the AUV’s variability. This allows us to model the managed subsystem of the AUV as a family of systems, where each family member corresponds to a valid feature configuration of the AUV. The managing subsystem of the AUV is modelled as a control layer capable of dynamically switching between such valid feature configurations, depending both on environmental and internal conditions. We use this model to analyse probabilistic reward and safety properties for the AUV.
Juliane Päßler, Maurice H. ter Beek, Ferruccio Damiani, Silvia Lizeth Tapia Tarifa, Einar Broch Johnsen
CAN-verify: A Verification Tool for BDI Agents
Abstract
CAN-verify is an automated tool that aids the development, verification, and analysis of BDI agents written in the Conceptual Agent Notation (Can) language. It does not require users to be familiar with verification techniques. CAN-verify supports syntactic error checking, interpretation of programs (running agents), and exhaustive exploration of all possible executions (agent verification and analysis) to check both generic agent requirements, such as whether a task can be achieved successfully, and user-defined requirements, such as whether a certain belief eventually holds. Simple examples of Unmanned Aerial Vehicles (UAVs) and autonomous patrol robots illustrate the tool in action.
Mengwei Xu, Thibault Rivoalen, Blair Archibald, Michele Sevegnani

PhD Symposium Presentations

Frontmatter
Scalable and Precise Refinement Types for Imperative Languages
Abstract
In formal verification, there is a dichotomy between tools which are scalable and easy to use but imprecise, like most pluggable type systems, and tools which are expressive and precise but poorly scalable and difficult to use, like deductive verification. Our previous research on creating a formal link between refinement types and deductive verification allows programmers to use a scalable type system for most of the program, while automatically generating specifications for a deductive verifier for the difficult-to-prove parts. However, this is currently limited to immutable objects. In this work, we therefore present an approach which combines uniqueness and refinement type systems in an imperative language. Unlike existing similar approaches, which restrict refinements such that they cannot contain any reference-typed program variables, our approach allows more general refinements, because even if a type constraint is not provable in the type system itself, it can be translated and proven by a deductive verifier.
Florian Lanzinger, Joshua Bachmeier, Mattias Ulbrich, Werner Dietl
Shuffling Posets on Trajectories
Abstract
Choreographies describe possible sequences of interactions among a set of agents. We aim to join two lines of research on choreographies: the use of the shuffle on trajectories operator to design more expressive choreographic languages, and the use of models featuring partial orders to compactly represent concurrency between agents. Specifically, in this paper, we explore the application of the shuffle on trajectories operator to individual posets, and we give a characterisation of those shuffles of posets which again yield an individual poset.
Luc Edixhoven
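
A minimal sketch of the shuffle on trajectories for plain words, which the paper lifts to posets (our rendering; we use the convention that a 0 in the trajectory consumes a letter of the first word and a 1 a letter of the second):

def shuffle_on_trajectory(u, v, trajectory):
    # Interleave u and v exactly as dictated by the trajectory over {0, 1}.
    result, i, j = [], 0, 0
    for step in trajectory:
        if step == "0":
            result.append(u[i])
            i += 1
        else:
            result.append(v[j])
            j += 1
    assert i == len(u) and j == len(v), "the trajectory must consume both words exactly"
    return "".join(result)

print(shuffle_on_trajectory("ab", "cd", "0101"))   # acbd
print(shuffle_on_trajectory("ab", "cd", "0011"))   # abcd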
A Framework for Verifying the Collision Freeness of Collaborative Robots (Work in Progress)
Abstract
Collision avoidance is a major problem when robotic devices are being deployed to perform complex collaborative tasks. We present a vision for a framework that makes it convenient to program collaborative robots and to verify that their behaviour is collision-free. It consists of a domain-specific language that is shallowly embedded in the ROS (Robot Operating System) framework and a translation into a programming language that is deeply embedded in the Isabelle/HOL theorem prover.
Artur Graczyk, Marialena Hadjikosti, Andrei Popescu
Backmatter
Metadata
Title
Integrated Formal Methods
Editors
Paula Herber
Anton Wijs
Copyright Year
2024
Electronic ISBN
978-3-031-47705-8
Print ISBN
978-3-031-47704-1
DOI
https://doi.org/10.1007/978-3-031-47705-8
