Formal Methods
26th International Symposium, FM 2024, Milan, Italy, September 9–13, 2024, Proceedings, Part I
- Open Access
- 2025
- Book
- Editors
- André Platzer
- Kristin Yvonne Rozier
- Matteo Pradella
- Matteo Rossi
- Book Series
- Lecture Notes in Computer Science
- Publisher
- Springer Nature Switzerland
About this book
The open access two-volume set LNCS 14933 and 14934 constitutes the refereed proceedings of the 26th International Symposium on Formal Methods, FM 2024, which took place in Milan, Italy, in September 2024.
The 51 full and 4 short papers included in these proceedings were carefully reviewed and selected from 219 submissions. They also include 2 invited talks in full paper length and 10 tutorial papers. The contributions were organized in topical sections as follows:
Part I: Invited papers; fundamentals of formal verification; foundations; learn and repair; programming languages; logic and automata.
Part II: Tools and case studies; embedded systems track; industry day track; tutorial papers.
Table of Contents
- Invited Papers
- Frontmatter
- Adversarial Robustness Certification for Bayesian Neural Networks
- Open Access
The chapter discusses the vulnerability of neural networks to adversarial attacks and the necessity of developing methods to mitigate these vulnerabilities. It introduces Bayesian Neural Networks (BNNs) as a promising approach for probabilistically principled evaluation of model uncertainty. The main focus is on the development of a probabilistic verification framework to quantify the adversarial robustness of BNNs. The framework addresses two key properties: probabilistic robustness, which measures the probability that a network sampled from the posterior distribution is robust, and decision robustness, which evaluates the robustness of the optimal decision made by a BNN. The chapter presents algorithms for computing lower and upper bounds for both properties, using techniques such as Interval Bound Propagation (IBP) and Linear Bound Propagation (LBP). Empirical validation is provided through case studies in airborne collision avoidance, medical image recognition, and autonomous driving. The framework is shown to be effective in certifying BNNs with multiple hidden layers and hundreds of neurons per layer, and in providing non-trivial certificates of adversarial robustness and predictive uncertainty properties. The chapter also highlights the challenges and limitations of the Bayesian approach, particularly the need to bound and partition at the weight space level. Overall, the work provides a comprehensive and practical technique for the verification of probabilistic and decision robustness in Bayesian Neural Networks, with potential applications in safety-critical scenarios.
This summary of the content was generated with the help of AI.
Abstract: We study the problem of certifying the robustness of Bayesian neural networks (BNNs) to adversarial input perturbations. Specifically, we define two notions of robustness for BNNs in an adversarial setting: probabilistic robustness and decision robustness. The former deals with the probabilistic behaviour of the network, that is, it ensures robustness across different stochastic realisations of the network, while the latter provides guarantees for the overall (output) decision of the BNN. Although these robustness properties cannot be computed analytically, we present a unified computational framework for efficiently and formally bounding them. Our approach is based on weight interval sampling, integration and bound propagation techniques, and can be applied to BNNs with a large number of parameters independently of the (approximate) inference method employed to train the BNN. We evaluate the effectiveness of our method on tasks including airborne collision avoidance, medical imaging and autonomous driving, demonstrating that it can compute non-trivial guarantees on medium size images (i.e., over 16 thousand input parameters).
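The interval bound propagation (IBP) technique mentioned above can be illustrated with a minimal sketch. This is not the chapter's implementation: the pure-Python encoding, the layer format, and the tiny example network are illustrative assumptions, and real IBP for BNNs additionally propagates intervals over the weights.

```python
def ibp_affine(W, b, lo, hi):
    """Interval bounds for y = W x + b when each x_i ranges over [lo_i, hi_i].

    A positive weight maps the lower input bound to the lower output bound;
    a negative weight swaps them.
    """
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = h = bias
        for w, xl, xh in zip(row, lo, hi):
            if w >= 0:
                l, h = l + w * xl, h + w * xh
            else:
                l, h = l + w * xh, h + w * xl
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi


def ibp_network(layers, lo, hi):
    """Propagate an input box through affine layers with ReLU in between."""
    for i, (W, b) in enumerate(layers):
        lo, hi = ibp_affine(W, b, lo, hi)
        if i < len(layers) - 1:  # ReLU only on hidden layers
            lo = [max(v, 0.0) for v in lo]
            hi = [max(v, 0.0) for v in hi]
    return lo, hi
```

If the certified bounds on the relevant logit differences stay positive over the whole input box, the network is robust on that box; the chapter's framework lifts this style of reasoning to intervals over BNN weight space.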
- Getting Chip Card Payments Right
- Open Access
The chapter 'Getting Chip Card Payments Right' delves into the EMV standard for smart card payments, focusing on the new C8 kernel. EMV, named after Europay, Mastercard, and Visa, is the dominant protocol for in-person payments worldwide. The C8 kernel introduces advanced security features such as modern cryptographic algorithms and privacy protection mechanisms. The chapter explores the use of the Tamarin prover to analyze the security of C8, identifying both secure and insecure configurations. Notable findings include vulnerabilities in configurations lacking certain authentication methods and the effectiveness of relay resistance protocols. The analysis underscores the importance of adhering to EMVCo's requirements for secure implementation. The chapter concludes by emphasizing the need for rigorous formal modeling and proof construction in ensuring the security of complex payment protocols.
Abstract: EMV is the international protocol standard for smart card payments and is used in billions of payment cards worldwide. Despite the standard’s advertised security, various issues have been previously uncovered, deriving from logical flaws that are hard to spot in EMV’s lengthy and complex specification. We have formalized various models of EMV in Tamarin, a symbolic model checker for cryptographic protocols. Tamarin was extremely effective in finding critical flaws, both known and new, and in many cases exploitable on actual cards. We report on these past problems as well as follow-up work where we verified the latest, improved version of the protocol, the EMV kernel C8. This work puts C8’s correctness on a firm, formal basis, and clarifies which guarantees hold for C8 and under which assumptions. Overall our work supports the thesis that cryptographic protocol model checkers like Tamarin have an essential role to play in improving the security of real-world payment protocols and that they are up to this challenge.
- Fundamentals of Formal Verification
- Frontmatter
- A Local Search Algorithm for MaxSMT(LIA)
- Open Access
The chapter discusses the MaxSMT(LIA) problem, which is an optimization version of the satisfiability problem with linear integer arithmetic constraints. It introduces PairLS, a local search algorithm that employs a novel pairwise operator to enrich the search space and a compensation-based picking heuristic to determine effective operations. The algorithm is evaluated on extensive benchmarks, demonstrating competitive performance against state-of-the-art solvers. The chapter also highlights the extensibility of the pairwise operator, showing its potential for enhancing local search algorithms in related domains.
Abstract: MaxSAT modulo theories (MaxSMT) is an important generalization of Satisfiability modulo theories (SMT) with various applications. In this paper, we focus on MaxSMT with the background theory of Linear Integer Arithmetic, denoted as MaxSMT(LIA). We design the first local search algorithm for MaxSMT(LIA) called PairLS, based on the following novel ideas. A novel operator called pairwise operator is proposed for integer variables. It extends the original local search operator by simultaneously operating on two variables, enriching the search space. Moreover, a compensation-based picking heuristic is proposed to determine and distinguish the pairwise operations. Experiments are conducted to evaluate our algorithm on massive benchmarks. The results show that our solver is competitive with state-of-the-art MaxSMT solvers. Furthermore, we also apply the pairwise operation to enhance the local search algorithm of SMT, which shows its extensibility.
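The flavour of single-variable versus pairwise moves can be conveyed with a toy hill-climbing loop. This is a sketch under stated assumptions, not the PairLS algorithm: the soft-constraint encoding, the random move schedule, and the revert-on-worsening policy are all made up for illustration.

```python
import random


def violated_weight(assign, soft):
    """Total weight of soft constraints violated under `assign`."""
    return sum(w for w, pred in soft if not pred(assign))


def local_search(variables, soft, steps=2000, seed=0):
    """Toy hill climbing over integer assignments.

    With probability 1/2 shift one variable by +-1 (the classic operator);
    otherwise shift a pair of variables in opposite directions, a crude
    stand-in for a pairwise operator that enlarges the reachable moves.
    """
    rng = random.Random(seed)
    assign = {v: 0 for v in variables}
    best, best_cost = dict(assign), violated_weight(assign, soft)
    for _ in range(steps):
        if rng.random() < 0.5:                      # single-variable move
            v = rng.choice(variables)
            moves = [(v, rng.choice((-1, 1)))]
        else:                                       # pairwise move
            v, u = rng.sample(variables, 2)
            d = rng.choice((-1, 1))
            moves = [(v, d), (u, -d)]
        for var, d in moves:
            assign[var] += d
        cost = violated_weight(assign, soft)
        if cost <= best_cost:
            best_cost, best = cost, dict(assign)
        else:                                       # revert worsening move
            for var, d in moves:
                assign[var] -= d
    return best, best_cost
```

The real algorithm replaces the random schedule with a compensation-based heuristic for picking which (pairwise) operation to apply.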
- Integrating Loop Acceleration Into Bounded Model Checking
- Open Access
The chapter explores the integration of loop acceleration into Bounded Model Checking (BMC) to enhance the detection of deep counterexamples. It introduces a new technique called Accelerated Bounded Model Checking (ABMC) that leverages acceleration to compute shortcuts in transition formulas, significantly improving BMC’s ability to find long error traces. The approach is particularly useful for systems with deeply nested loops, where traditional BMC methods struggle. The chapter also discusses the use of blocking clauses to prevent redundant work and to prove safety in cases where BMC would otherwise fail. The method is evaluated through experiments, demonstrating its effectiveness in finding long counterexamples and proving safety in various scenarios. The chapter concludes by highlighting the potential for future work, including support for additional theories and non-linear constraints.
Abstract: Bounded Model Checking (BMC) is a powerful technique for proving unsafety. However, finding deep counterexamples that require a large bound is challenging for BMC. On the other hand, acceleration techniques compute “shortcuts” that “compress” many execution steps into a single one. In this paper, we tightly integrate acceleration techniques into SMT-based bounded model checking. By adding suitable “shortcuts” on the fly, our approach can quickly detect deep counterexamples. Moreover, using so-called blocking clauses, our approach can prove safety of examples where BMC diverges. An empirical comparison with other state-of-the-art techniques shows that our approach is highly competitive for proving unsafety, and orthogonal to existing techniques for proving safety.
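The effect of adding accelerated "shortcut" transitions can be seen on an explicit-state toy system. This is only an illustration under assumptions of my own (a concrete counter system and a fixed +10 shortcut); the paper's ABMC works symbolically on SMT transition formulas, not on enumerated states.

```python
def bmc(init, trans, bad, max_bound):
    """Explicit-state bounded model checking: unroll the transition
    relation breadth-first and return the first bound at which a bad
    state appears, or None if none is found within max_bound steps."""
    frontier = {init}
    for bound in range(max_bound + 1):
        if any(bad(s) for s in frontier):
            return bound
        frontier = {t for s in frontier for t in trans(s)}
    return None


def step(s):
    """One concrete transition: increment the counter."""
    return {s + 1}


def accelerated(s):
    """Same system plus a shortcut that compresses ten increments."""
    return {s + 1, s + 10}
```

Against the error condition `x == 50`, plain unrolling needs bound 50, while the accelerated relation exposes a compressed counterexample at bound 5, which is the point of computing shortcuts on the fly.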
- Nonlinear Craig Interpolant Generation Over Unbounded Domains by Separating Semialgebraic Sets
- Open Access
The chapter delves into the synthesis of Craig interpolants for nonlinear polynomial formulas over unbounded domains, a crucial aspect of formal verification and automated theorem proving. It introduces a novel approach utilizing homogenization techniques to revive the Archimedean condition, enabling the separation of semialgebraic sets and the generation of interpolants. The authors provide SOS programming procedures for finding polynomial and semialgebraic interpolants, demonstrating the effectiveness and efficiency of their method through concrete examples. This work extends the existing research by removing restrictions on formula forms and variable domains, offering a more general and practical solution for interpolant generation in verification tasks.
Abstract: Interpolation-based techniques have become popular in recent years, as they can improve the scalability of existing verification techniques due to their inherent modularity and local reasoning capabilities. Synthesizing Craig interpolants is the cornerstone of these techniques. In this paper, we investigate nonlinear Craig interpolant synthesis for two polynomial formulas of the general form, essentially corresponding to the underlying mathematical problem to separate two disjoint semialgebraic sets. By combining the homogenization approach with existing techniques, we prove the existence of a novel class of non-polynomial interpolants called semialgebraic interpolants. These semialgebraic interpolants subsume polynomial interpolants as a special case. To the best of our knowledge, this is the first existence result of this kind. Furthermore, we provide complete sum-of-squares characterizations for both polynomial and semialgebraic interpolants, which can be efficiently solved as semidefinite programs. Examples are provided to demonstrate the effectiveness and efficiency of our approach.
- Practical Approximate Quantifier Elimination for Non-linear Real Arithmetic
- Open Access
The chapter focuses on the challenging problem of quantifier elimination (QE) in non-linear real arithmetic (NRA), a crucial area in computer algebra and real algebraic geometry. It introduces a practical algorithm that approximates NRA constraints using Boolean combinations of real interval constraints, enabling efficient approximate QE over NRA. The approach leverages Handelman's Theorem and dynamic adaptive gridding to reduce the approximation problem to multiple linear programming instances, discharged by a state-of-the-art LP solver. The algorithm provides strong guarantees of approximation and allows users to trade off precision for performance. The chapter also demonstrates the practical effectiveness of the algorithm through experiments with POQER, a tool implementing the algorithm, showing significant improvements over existing tools in solving QE problems and enabling the use of approximate QE in combinations of theories including NRA. Additionally, the chapter highlights the potential of the algorithm in solving quantified SMT problems over mixed theories, such as NRA+ADT, which are beyond the reach of modern SMT solvers.
Abstract: Quantifier Elimination (QE) concerns finding a quantifier-free formula that is semantically equivalent to a quantified formula in a given logic. For the theory of non-linear arithmetic over reals (NRA), QE is known to be computationally challenging. In this paper, we show how QE over NRA can be solved approximately and efficiently in practice using a Boolean combination of constraints in the linear arithmetic over reals (LRA). Our approach works by approximating the solution space of a set of NRA constraints when all real variables are bounded. It combines adaptive dynamic gridding with application of Handelman’s Theorem to obtain the approximation efficiently via a sequence of linear programs (LP). We provide rigorous approximation guarantees, and also proofs of soundness and completeness (under mild assumptions) of our algorithm. Interestingly, our work allows us to bootstrap on earlier work (viz. [38]) and solve quantified SMT problems over a combination of NRA and other theories, that are beyond the reach of state-of-the-art solvers. We have implemented our approach in a preprocessor for Z3 called POQER. Our experiments show that POQER+Z3EG outperforms state-of-the-art SMT solvers on non-trivial problems, adapted from a suite of benchmarks.
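The adaptive-gridding idea can be sketched by substituting plain interval arithmetic for the paper's Handelman-based LP certificates. That substitution, the disc constraint x² + y² ≤ 1, and the quadtree splitting policy are all illustrative assumptions; only the split-until-certified structure mirrors the described approach.

```python
def square_bounds(lo, hi):
    """Interval bounds of t^2 for t in [lo, hi]."""
    lo2, hi2 = lo * lo, hi * hi
    return (0.0 if lo <= 0.0 <= hi else min(lo2, hi2)), max(lo2, hi2)


def f_bounds(box):
    """Interval bounds of f(x, y) = x^2 + y^2 - 1 over a box."""
    xlo, xhi, ylo, yhi = box
    xl, xu = square_bounds(xlo, xhi)
    yl, yu = square_bounds(ylo, yhi)
    return xl + yl - 1.0, xu + yu - 1.0


def grid_approx(box, depth):
    """Adaptively split `box`: return (boxes certified inside f <= 0,
    boxes still undecided at the depth limit); certified-outside boxes
    are dropped. Each kept box is a quantifier-free interval constraint."""
    flo, fhi = f_bounds(box)
    if fhi <= 0.0:
        return [box], []          # certificate: box entirely inside
    if flo > 0.0:
        return [], []             # certificate: box entirely outside
    if depth == 0:
        return [], [box]          # precision limit reached
    xlo, xhi, ylo, yhi = box
    xm, ym = (xlo + xhi) / 2.0, (ylo + yhi) / 2.0
    inside, undecided = [], []
    for sub in ((xlo, xm, ylo, ym), (xm, xhi, ylo, ym),
                (xlo, xm, ym, yhi), (xm, xhi, ym, yhi)):
        i, u = grid_approx(sub, depth - 1)
        inside += i
        undecided += u
    return inside, undecided


def area(boxes):
    return sum((xh - xl) * (yh - yl) for xl, xh, yl, yh in boxes)
```

The union of the inside boxes is a sound under-approximation of the constraint region, and inside plus undecided is a sound over-approximation; tightening the depth trades performance for precision, which is the knob the chapter's algorithm exposes.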
- A Divide-and-Conquer Approach to Variable Elimination in Linear Real Arithmetic
- Open Access
The chapter introduces a divide-and-conquer approach to variable elimination in linear real arithmetic (LRA), a crucial problem in various applications such as static program analysis, scheduling, and neural network verification. The method, derived from the FMplex method, focuses on polyhedron projection, which involves projecting a convex polyhedron onto a subspace. The authors present a new algorithm that returns the result as a set of linear constraints, demonstrating its correctness, complexity, and improvements over existing methods. The chapter also discusses interesting relations to other methods like virtual term substitution and provides an experimental evaluation showing the superior performance of the new approach compared to established tools. Additionally, it highlights potential future improvements and the parallelizability of the method.
Abstract: We introduce a novel variable elimination method for conjunctions of linear real arithmetic constraints. In prior work, we derived a variant of the Fourier-Motzkin elimination, which uses case splitting to reduce the procedure’s complexity from doubly to singly exponential. This variant, which we call FMplex, was originally developed for satisfiability checking, and it essentially performs a depth-first search in a tree of sub-problems. It can be adapted straightforwardly for the task of quantifier elimination, but it returns disjunctions of conjunctions, even though the solution space can always be defined by a single conjunction. Our main contribution is to show how to efficiently extract an equivalent conjunction from the search tree. Besides the theoretical foundations, we explain how the procedure relates to other methods for quantifier elimination and polyhedron projection. An experimental evaluation demonstrates that our implementation is competitive with established tools.
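For context, the textbook Fourier-Motzkin step that FMplex refines can be sketched directly. This is the classic doubly-exponential procedure, not FMplex itself, and the `(coeffs, bound)` constraint encoding is an assumption made for the example.

```python
from fractions import Fraction


def fm_eliminate(constraints, j):
    """Eliminate variable j from a conjunction of constraints a . x <= c.

    Each constraint is (coeffs, bound), meaning sum_i coeffs[i]*x_i <= bound.
    Classic Fourier-Motzkin: normalise x_j's coefficient to +1 (upper
    bounds) or -1 (lower bounds), then pair every lower bound with every
    upper bound so that x_j cancels. The output can grow quadratically,
    which is what motivates case-splitting refinements such as FMplex.
    """
    upper, lower, rest = [], [], []
    for coeffs, c in constraints:
        coeffs = [Fraction(v) for v in coeffs]
        c = Fraction(c)
        a = coeffs[j]
        if a > 0:    # encodes x_j <= (c - other terms) / a
            upper.append(([v / a for v in coeffs], c / a))
        elif a < 0:  # encodes x_j >= (other terms - c) / (-a)
            lower.append(([v / -a for v in coeffs], c / -a))
        else:
            rest.append((coeffs, c))
    for lo_c, lo_b in lower:
        for up_c, up_b in upper:
            combined = [l + u for l, u in zip(lo_c, up_c)]
            combined[j] = Fraction(0)  # coefficients -1 and +1 cancel
            rest.append((combined, lo_b + up_b))
    return rest
```

For example, eliminating x from {x <= y, x >= 1, y <= 3} yields {y <= 3, y >= 1}, the projection of the polyhedron onto y.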
- Foundations
- Frontmatter
- Free Facts: An Alternative to Inefficient Axioms in Dafny
- Open Access
The chapter discusses the challenges of proof brittleness in software verification using Dafny, a programming language with an automated verifier. It introduces 'free facts' as an alternative to universally quantified axioms, which can lead to instability in verification results. Free facts are automatically generated properties based on concrete instances of program code, reducing the complexity and brittleness of proof obligations. The chapter provides a detailed implementation and evaluation of free facts in Dafny, demonstrating their potential to improve verification time and resource efficiency. It also compares free facts with traditional axioms, highlighting their advantages in specific use cases. The evaluation is based on large-scale subject systems, showcasing the practical applicability of free facts in reducing proof brittleness and enhancing the overall verification process.
Abstract: Formal software verification relies on properties of functions and built-in operators. Unless these properties are handled directly by decision procedures, an automated verifier includes them in verification conditions by supplying them as universally quantified axioms or theorems. The use of quantifiers sometimes leads to bad performance, especially if automation causes the quantifiers to be instantiated many times. This paper proposes free facts as an alternative to some axioms. A free fact is a pre-instantiated axiom that is generated alongside the formulas in a verification condition that can benefit from the facts. Replacing an axiom with free facts thus reduces the number of quantifiers in verification conditions. Free facts are statically triggered by syntactic occurrences of certain patterns in the proof terms. This is less powerful than the dynamically triggered patterns used during proof construction. However, the paper shows that free facts perform well in practice.
- Understanding Synthesized Reactive Systems Through Invariants
- Open Access
The chapter delves into the challenges of scalability and trust in reactive synthesis, where systems are automatically generated from specifications. It proposes a method to identify and explain invariants in synthesized systems, focusing on generalized reactivity(1) specifications. The approach involves computing Boolean functions that characterize input/output combinations to avoid, enhancing the understandability of system behavior. The chapter also discusses the application of this method in two case studies, demonstrating its effectiveness in specification debugging and synthesizing understandable invariants. Additionally, it highlights the potential of this approach to improve the scalability and trustworthiness of reactive synthesis in practical applications.
Abstract: In many applications for which reactive synthesis is attractive, computed implementations need to have understandable behavior. While some existing synthesis approaches compute finite-state machines with a structure that supports their understandability, such approaches do not scale to specifications that can only be realized with a large number of states. Furthermore, asking the engineer to understand the internal structure of the implementation is unnecessary when only the behavior of the implementation is to be understood. In this paper, we present an approach to computing understandable safety invariants that every implementation satisfying a generalized reactivity(1) specification needs to fulfill. Together with the safety part of the specification, the invariants completely define which transitions between input and output proposition valuations any correct implementation can take. We apply the approach in two case studies and demonstrate that the computed invariants highlight the strategic decisions that implementations for the given specification need to make, which not only helps the system designer with understanding what the specification entails, but also supports specification debugging.
- Combining Classical and Probabilistic Independence Reasoning to Verify the Security of Oblivious Algorithms
- Open Access
The chapter introduces a program logic designed to formally verify the security of probabilistic oblivious algorithms, which aim to protect secrets by hiding their access patterns. It combines classical and probabilistic independence reasoning to tackle complex semantics and invariants that previous methods could not handle. The logic is applied to several non-trivial case studies, demonstrating its effectiveness in verifying the security of real-world oblivious algorithms. The chapter highlights the unique challenges posed by probabilistic oblivious algorithms and the innovative approach of the proposed logic in addressing them.
Abstract: We consider the problem of how to verify the security of probabilistic oblivious algorithms formally and systematically. Unfortunately, prior program logics fail to support a number of complexities that feature in the semantics and invariants needed to verify the security of many practical probabilistic oblivious algorithms. We propose an approach based on reasoning over perfectly oblivious approximations, using a program logic that combines both classical Hoare logic reasoning and probabilistic independence reasoning to support all the needed features. We formalise and prove our new logic sound in Isabelle/HOL and apply our approach to formally verify the security of several challenging case studies beyond the reach of prior methods for proving obliviousness.
- Efficient Formally Verified Maximal End Component Decomposition for MDPs
- Open Access
The chapter introduces a formally verified Maximal End Component (MEC) decomposition algorithm for Markov Decision Processes (MDPs), crucial for probabilistic model checking. It outlines the importance of MECs in ensuring the correctness of model checking algorithms and the challenges in achieving formal verification. The authors propose a bottom-up approach to integrate verified components into an existing model checker, mcsta, demonstrating the practicality of their method. The algorithm is refined using the Isabelle Refinement Framework and integrated into mcsta, showcasing its efficiency and reliability. The chapter also includes experimental evaluations comparing the performance of the verified implementation with existing tools, highlighting its competitive edge and potential for future advancements in formal verification.
Abstract: Identifying a Markov decision process’s maximal end components is a prerequisite for applying sound probabilistic model checking algorithms. In this paper, we present the first mechanized correctness proof of a maximal end component decomposition algorithm, which is an important algorithm in model checking, using the Isabelle/HOL theorem prover. We iteratively refine the high-level algorithm and proof into an imperative LLVM bytecode implementation that we integrate into the Modest Toolset ’s existing mcsta model checker. We bring the benefits of interactive theorem proving into practice by reducing the trusted code base of a popular probabilistic model checker and we experimentally show that our new verified maximal end component decomposition in mcsta performs on par with the tool’s previous unverified implementation.
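The standard fixpoint that such algorithms mechanize can be sketched in a few lines. This is an unverified illustration, not the Isabelle/LLVM development: the encoding of an MDP via per-state action supports and the naive SCC routine are assumptions made for the example.

```python
def reachable(src, adj):
    """States reachable from src in the graph adj (iterative DFS)."""
    seen, stack = {src}, [src]
    while stack:
        u = stack.pop()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen


def sccs(nodes, adj):
    """Strongly connected components via mutual reachability (fine for
    tiny graphs; a serious implementation would use Tarjan's algorithm)."""
    reach = {u: reachable(u, adj) for u in nodes}
    comps, assigned = [], set()
    for u in nodes:
        if u not in assigned:
            comp = {v for v in reach[u] if u in reach[v]}
            comps.append(comp)
            assigned |= comp
    return comps


def mec_decomposition(succ):
    """Maximal end components of an MDP, where succ[s][a] is the support
    (set of possible successors) of action a in state s. Fixpoint: drop
    actions whose support leaves the candidate set, drop states left with
    no action, and re-split the remainder into SCCs until stable."""
    candidates, mecs = [frozenset(succ)], []
    while candidates:
        T = candidates.pop()
        if not T:
            continue
        allowed = {s: {a for a in succ[s] if succ[s][a] <= T} for s in T}
        live = frozenset(s for s in T if allowed[s])
        if live != T:
            candidates.append(live)
            continue
        adj = {s: set().union(*(succ[s][a] for a in allowed[s])) for s in T}
        comps = sccs(T, adj)
        if len(comps) == 1:
            mecs.append(T)
        else:
            candidates.extend(frozenset(c) for c in comps)
    return mecs
```

Every candidate pushed back is strictly smaller, so the loop terminates; it is exactly this kind of informal termination and correctness argument that the chapter replaces with a mechanized proof.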
- Introducing SWIRL: An Intermediate Representation Language for Scientific Workflows
- Open Access
The chapter 'Introducing SWIRL: An Intermediate Representation Language for Scientific Workflows' addresses the limitations of current Workflow Management Systems (WMSs) in handling modern scientific applications. It introduces SWIRL, a low-level language designed for distributed workflow execution plans, enabling interoperability and automatic optimisation of data communications. SWIRL models workflows using send/receive communication primitives and provides a formal method for encoding workflow instances into distributed execution plans. The chapter also includes a set of rewriting rules for optimising data transfers and discusses the implementation of a SWIRL compiler toolchain. The SWIRL language aims to improve the efficiency and effectiveness of scientific workflows, making it a promising solution for large-scale, distributed scientific computations.
Abstract: In the ever-evolving landscape of scientific computing, properly supporting the modularity and complexity of modern scientific applications requires new approaches to workflow execution, like seamless interoperability between different workflow systems, distributed-by-design workflow models, and automatic optimisation of data movements. In order to address this need, this article introduces SWIRL, an intermediate representation language for scientific workflows. In contrast with other product-agnostic workflow languages, SWIRL is not designed for human interaction but to serve as a low-level compilation target for distributed workflow execution plans. The main advantages of SWIRL semantics are low-level primitives based on the send/receive programming model and a formal framework ensuring the consistency of the semantics and the specification of translating workflow models represented by Directed Acyclic Graphs (DAGs) into SWIRL workflow descriptions. Additionally, SWIRL offers rewriting rules designed to optimise execution traces, accompanied by corresponding equivalence proofs. An open-source SWIRL compiler toolchain has been developed using the ANTLR Python3 bindings.
- Fast Attack Graph Defense Localization via Bisimulation
- Open Access
The chapter 'Fast Attack Graph Defense Localization via Bisimulation' explores the use of bisimulation to analyze and optimize analytical attack graphs (AAGs). AAGs represent potential cyber attack paths within a network, but their size can be prohibitively large for practical analysis. The authors introduce the concept of an AAG-fold, a compact representation of an AAG that preserves essential topological properties. By applying bisimulation, the AAG-fold allows for faster and more efficient computation of defense cores, which are minimal sets of rules whose removal prevents attacks. The chapter also discusses the implementation and evaluation of these methods, demonstrating their practical applicability in real-world scenarios. The innovative use of bisimulation sets this work apart from previous approaches, offering a scalable and efficient solution for cybersecurity defense.
Abstract: System administrators, network engineers, and IT managers can learn much about the vulnerabilities of an organization’s cyber system by constructing and analyzing analytical attack graphs (AAGs). An AAG consists of logical rule nodes, fact nodes, and derived fact nodes. It provides a graph-based representation that describes ways by which an attacker can achieve progress towards a desired goal, a.k.a. a crown jewel. Given an AAG, different types of analyses can be performed to identify attacks on a target goal, measure the vulnerability of the network, and gain insights on how to make it more secure. However, as the size of the AAGs representing real-world systems may be very large, existing analyses are slow or practically impossible. In this paper, we introduce and show how to compute an AAG’s defense core: a locally minimal subset of the AAG’s rules whose removal will prevent an attacker from reaching a crown jewel. Most importantly, in order to scale-up the performance of the detection of a defense core, we introduce a novel application of the well-known notion of bisimulation to AAGs. Our experiments show that the use of bisimulation results in significantly smaller graphs and in faster detection of defense cores, making them practical.
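The bisimulation machinery behind this kind of graph compaction can be illustrated with a generic partition-refinement sketch. Plain labelled graphs stand in for attack graphs here; this is not the paper's AAG-fold construction, and the naive refinement loop is an assumption made for clarity.

```python
def bisimulation_quotient(states, labels, edges):
    """Coarsest partition in which equivalent states carry the same label
    and can step into exactly the same set of blocks. Naive refinement:
    re-split every block by successor signature until nothing changes
    (efficient implementations use Paige-Tarjan style splitting)."""
    # Initial partition: group states by their label.
    blocks = {}
    for s in states:
        blocks.setdefault(labels[s], set()).add(s)
    partition = list(blocks.values())
    changed = True
    while changed:
        changed = False
        block_of = {s: i for i, b in enumerate(partition) for s in b}
        refined = []
        for b in partition:
            groups = {}
            for s in b:
                # Signature: which blocks this state's successors land in.
                sig = frozenset(block_of[t] for t in edges.get(s, ()))
                groups.setdefault(sig, set()).add(s)
            if len(groups) > 1:
                changed = True
            refined.extend(groups.values())
        partition = refined
    return {frozenset(b) for b in partition}
```

Collapsing each block to a single node yields a quotient graph that preserves the reachability structure the analysis cares about, which is why running an expensive computation such as defense-core detection on the folded graph can be much faster.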
- Title
- Formal Methods
- Editors
- André Platzer
- Kristin Yvonne Rozier
- Matteo Pradella
- Matteo Rossi
- Copyright Year
- 2025
- Publisher
- Springer Nature Switzerland
- Electronic ISBN
- 978-3-031-71162-6
- Print ISBN
- 978-3-031-71161-9
- DOI
- https://doi.org/10.1007/978-3-031-71162-6