
2023 | Book

NASA Formal Methods

15th International Symposium, NFM 2023, Houston, TX, USA, May 16–18, 2023, Proceedings


About this book

This book constitutes the proceedings of the 15th International Symposium on NASA Formal Methods, NFM 2023, held in Houston, Texas, USA, during May 16–18, 2023.

The 26 full and 3 short papers presented in this volume were carefully reviewed and selected from 75 submissions.

The papers deal with advances in formal methods, formal methods techniques, and the application of formal methods in practice.

Table of Contents

Frontmatter
Verification of LSTM Neural Networks with Non-linear Activation Functions
Abstract
Recurrent neural networks are increasingly employed in safety-critical applications, such as control in cyber-physical systems, and therefore their verification is crucial for guaranteeing reliability and correctness. We present a novel approach for verifying the dynamic behavior of long short-term memory networks (LSTMs), a popular type of recurrent neural network (RNN). Our approach employs the satisfiability modulo theories (SMT) solver iSAT, which solves complex Boolean combinations of linear and non-linear constraint formulas (including transcendental functions), and is therefore able to verify safety properties of these networks.
Farzaneh Moradkhani, Connor Fibich, Martin Fränzle
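For reference, the standard LSTM cell equations below show the mix of affine maps and transcendental activations (sigmoid and tanh) that the SMT solver has to reason about; these are the textbook gate equations, not necessarily the exact architecture verified in the paper.
\[
\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i), & f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f),\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o), & \tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c),\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, & h_t &= o_t \odot \tanh(c_t).
\end{aligned}
\]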
Open- and Closed-Loop Neural Network Verification Using Polynomial Zonotopes
Abstract
We present a novel approach to efficiently compute tight non-convex enclosures of the image through neural networks with ReLU, sigmoid, or hyperbolic tangent activation functions. In particular, we abstract the input-output relation of each neuron by a polynomial approximation, which is evaluated in a set-based manner using polynomial zonotopes. While our approach can also be beneficial for open-loop neural network verification, our main application is reachability analysis of neural network controlled systems, where polynomial zonotopes are able to capture the non-convexity caused by the neural network as well as the system dynamics. This results in superior performance compared to other methods, as we demonstrate on various benchmarks.
Niklas Kochdumper, Christian Schilling, Matthias Althoff, Stanley Bak
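For orientation, a polynomial zonotope (in its simple form, omitting independent generators) is the parameterized set
\[
\mathcal{PZ} = \Big\{\, c + \sum_{i=1}^{h} \Big(\textstyle\prod_{k=1}^{p} \alpha_k^{E_{k,i}}\Big)\, G_{(\cdot,i)} \;\Big|\; \alpha_1, \ldots, \alpha_p \in [-1,1] \,\Big\},
\]
with center \(c \in \mathbb{R}^n\), generator matrix \(G \in \mathbb{R}^{n \times h}\), and exponent matrix \(E \in \mathbb{N}_0^{p \times h}\). This is the standard definition from the reachability literature, shown here only to make the abstract's central object concrete.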
Verifying Attention Robustness of Deep Neural Networks Against Semantic Perturbations
Abstract
It is known that deep neural networks (DNNs) classify an input image by paying particular attention to certain specific pixels; a graphical representation of the magnitude of attention to each pixel is called a saliency-map. Saliency-maps are used to check the validity of the classification decision basis, e.g., it is not a valid basis for classification if a DNN pays more attention to the background rather than the subject of an image. Semantic perturbations can significantly change the saliency-map. In this work, we propose the first verification method for attention robustness, i.e., the local robustness of the changes in the saliency-map against combinations of semantic perturbations. Specifically, our method determines the range of the perturbation parameters (e.g., the brightness change) that maintains the difference between the actual saliency-map change and the expected saliency-map change below a given threshold value. Our method is based on activation region traversals, focusing on the outermost robust boundary for scalability on larger DNNs. We empirically evaluate the effectiveness and performance of our method on DNNs trained on popular image classification datasets.
Satoshi Munakata, Caterina Urban, Haruki Yokoyama, Koji Yamamoto, Kazuki Munakata
Formalizing Piecewise Affine Activation Functions of Neural Networks in Coq
Abstract
Verification of neural networks relies on activation functions being piecewise affine (pwa)—enabling an encoding of the verification problem for theorem provers. In this paper, we present the first formalization of pwa activation functions for an interactive theorem prover tailored to verifying neural networks within Coq using the library Coquelicot for real analysis. As a proof-of-concept, we construct the popular pwa activation function ReLU. We integrate our formalization into a Coq model of neural networks, and devise a verified transformation from a neural network \(\mathcal{N}\) to a pwa function representing \(\mathcal{N}\) by composing pwa functions that we construct for each layer. This representation enables encodings for proof automation, e.g. Coq's tactic lra – a decision procedure for linear real arithmetic. Further, our formalization paves the way for integrating Coq in frameworks of neural network verification as a fallback prover when automated proving fails.
Andrei Aleksandrov, Kim Völlinger
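As a loose, language-shifted illustration of the idea (the paper's development is in Coq with Coquelicot, not Python), the sketch below represents a one-dimensional piecewise affine function as a list of interval pieces with affine maps and instantiates ReLU; the representation and names are our own.

    from dataclasses import dataclass
    from typing import List
    import math

    @dataclass
    class Piece:
        lo: float         # interval on which this piece applies: [lo, hi]
        hi: float
        slope: float      # affine map x -> slope * x + intercept
        intercept: float

    @dataclass
    class PwaFunction:
        pieces: List[Piece]   # assumed to cover the domain, overlapping only at boundaries

        def __call__(self, x: float) -> float:
            for p in self.pieces:
                if p.lo <= x <= p.hi:
                    return p.slope * x + p.intercept
            raise ValueError("x is not covered by any piece")

    # ReLU as a pwa function with two pieces
    relu = PwaFunction([
        Piece(-math.inf, 0.0, 0.0, 0.0),   # x <= 0: constantly 0
        Piece(0.0, math.inf, 1.0, 0.0),    # x >= 0: identity
    ])

    assert relu(-3.0) == 0.0 and relu(2.5) == 2.5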
Verifying an Aircraft Collision Avoidance Neural Network with Marabou
Abstract
In this case study, we have explored the use of a neural network model checker to analyze the safety characteristics of a neural network trained using reinforcement learning to compute collision avoidance flight plans for aircraft. We analyzed specific aircraft encounter geometries (e.g., head-on, overtake) and also examined the robustness of the neural network. We verified the minimum horizontal separation property by identifying conditions under which the neural network can potentially cause a transition from a safe state to an unsafe state. We show how the property verification problem is mathematically transformed and encoded as linear constraints that can be analyzed by the Marabou model checker.
Cong Liu, Darren Cofer, Denis Osipychev
Strategy Synthesis in Markov Decision Processes Under Limited Sampling Access
Abstract
A central task in control theory, artificial intelligence, and formal methods is to synthesize reward-maximizing strategies for agents that operate in partially unknown environments. In environments modeled by gray-box Markov decision processes (MDPs), the impact of the agents' actions is known in terms of successor states but not the stochastics involved. In this paper, we devise a strategy synthesis algorithm for gray-box MDPs via reinforcement learning that utilizes interval MDPs as an internal model. To cope with limited sampling access in reinforcement learning, we incorporate two novel concepts into our algorithm, focusing on rapid and successful learning rather than on stochastic guarantees and optimality: lower confidence bound exploration reinforces variants of already learned practical strategies, and action scoping reduces the learning action space to promising actions. We illustrate the benefits of our algorithm by means of a prototypical implementation applied to examples from the AI and formal methods communities.
Christel Baier, Clemens Dubslaff, Patrick Wienhöft, Stefan J. Kiebel
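The bandit-style toy below is only meant to illustrate, in a deliberately simplified one-state setting, the two ingredients named in the abstract: lower-confidence-bound exploration and action scoping. It is not the paper's algorithm, and all names and constants are placeholders.

    import math, random

    def lcb_with_scoping(sample_reward, n_actions=4, budget=200, confidence=2.0):
        """Select actions by their lower confidence bound and prune ('scope out')
        actions whose upper bound drops below the best lower bound."""
        counts = [1] * n_actions
        means = [sample_reward(a) for a in range(n_actions)]  # one initial sample each
        active = set(range(n_actions))

        def radius(b):
            return confidence * math.sqrt(math.log(sum(counts)) / counts[b])

        for _ in range(budget):
            # reinforce already promising actions: pick the best lower confidence bound
            a = max(active, key=lambda b: means[b] - radius(b))
            r = sample_reward(a)
            counts[a] += 1
            means[a] += (r - means[a]) / counts[a]
            # action scoping: keep only actions still compatible with being optimal
            best_lower = max(means[b] - radius(b) for b in active)
            active = {b for b in active if means[b] + radius(b) >= best_lower}
        return max(active, key=lambda b: means[b])

    # toy usage: the third action has the highest expected reward
    best = lcb_with_scoping(lambda a: random.gauss([0.1, 0.3, 0.9, 0.5][a], 0.2))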
Learning Symbolic Timed Models from Concrete Timed Data
Abstract
We present a technique for learning explainable timed automata from passive observations of a black-box function, such as an artificial intelligence system. Our method accepts a single, long, timed word with mixed input and output actions and learns a Mealy machine with one timer. The primary advantage of our approach is that it constructs a symbolic observation tree from a concrete timed word. This symbolic tree is then transformed into a human-comprehensible automaton. We provide a prototype implementation and evaluate it by learning the controllers of two systems: a brick-sorter conveyor belt trained with reinforcement learning and a smart traffic light controller derived from real-world data. We compare different model generators using our symbolic observation tree as their input and achieve the best results using k-tails. In our experiments, we learn smaller and simpler automata than existing passive timed learners while maintaining accuracy.
Simon Dierl, Falk Maria Howar, Sean Kauffman, Martin Kristjansen, Kim Guldstrand Larsen, Florian Lorber, Malte Mauritz
Reward Shaping from Hybrid Systems Models in Reinforcement Learning
Abstract
Reinforcement learning is increasingly often used as a learning technique to implement control tasks in autonomous systems. To meet stringent safety requirements, formal methods for learning-enabled systems, such as closed-loop neural network verification, shielding, falsification, and online reachability analysis, analyze learned controllers for safety violations. Besides filtering unsafe actions during training, these approaches view verification and training largely as separate tasks. We propose an approach based on logically constrained reinforcement learning to couple formal methods and reinforcement learning more tightly by generating safety-oriented aspects of reward functions from verified hybrid systems models. We demonstrate the approach on a standard reinforcement learning environment for longitudinal vehicle control.
Marian Qian, Stefan Mitsch
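The toy shaping term below illustrates the general idea of deriving a safety-oriented reward component from a verified model, here a classic braking-distance condition for longitudinal vehicle control. The condition, constants, and names are our illustration, not the ones generated by the paper's approach.

    def shaped_reward(base_reward, gap, v_ego, v_lead, max_brake=6.0, penalty=10.0):
        """Penalize states violating a braking-distance safety condition:
        the ego vehicle must be able to stop within the current gap even if
        the lead vehicle brakes fully (all quantities in SI units)."""
        stop_ego = v_ego ** 2 / (2.0 * max_brake)    # ego stopping distance
        stop_lead = v_lead ** 2 / (2.0 * max_brake)  # lead stopping distance
        safe = gap + stop_lead - stop_ego > 0.0
        return base_reward if safe else base_reward - penalty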
Conservative Safety Monitors of Stochastic Dynamical Systems
Abstract
Generating accurate runtime safety estimates for autonomous systems is vital to ensuring their continued proliferation. However, exhaustive reasoning about future behaviors is generally too complex to do at runtime. To provide scalable and formal safety estimates, we propose a method for leveraging design-time model checking results at runtime. Specifically, we model the system as a probabilistic automaton (PA) and compute bounded-time reachability probabilities over the states of the PA at design time. At runtime, we combine distributions of state estimates with the model checking results to produce a bounded time safety estimate. We argue that our approach produces well-calibrated safety probabilities, assuming the estimated state distributions are well-calibrated. We evaluate our approach on simulated water tanks.
Matthew Cleaveland, Oleg Sokolsky, Insup Lee, Ivan Ruchkin
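A minimal numeric sketch of the combination step described above: weight the design-time, per-state bounded-time safety probabilities by the runtime state-estimate distribution. Names and numbers are illustrative only.

    def runtime_safety_estimate(state_distribution, safe_probability):
        """state_distribution: state -> estimated probability of currently being in that state.
        safe_probability: state -> model-checked probability of remaining safe for the
        next T steps when starting from that state (computed at design time)."""
        return sum(p * safe_probability[s] for s, p in state_distribution.items())

    # toy example with three discretized water-tank levels
    estimate = runtime_safety_estimate(
        {"low": 0.2, "nominal": 0.7, "high": 0.1},
        {"low": 0.90, "nominal": 0.999, "high": 0.75},
    )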
Code-Level Formal Verification of Ellipsoidal Invariant Sets for Linear Parameter-Varying Systems
Abstract
This paper focuses on the formal verification of invariant properties of C code that describes the dynamics of a discrete-time linear parameter-varying system with affine parameter dependence. The C code is annotated using ACSL, and Frama-C's WP plugin is used to transform the annotations and code into proof objectives. The invariant properties are then formally verified in both the real and float models using the polynomial inequalities plugin of the theorem prover Alt-Ergo. The challenges of verifying the invariant properties in the float model are addressed by utilizing bounds on numerical errors and incorporating them into the real model.
Elias Khalife, Pierre-Loic Garoche, Mazen Farhood
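For context, the prototypical condition for an ellipsoid \(\{x : x^\top P x \le 1\}\) with \(P \succ 0\) to be invariant for a discrete-time LPV system \(x_{k+1} = A(\rho_k)\,x_k\) (disturbances omitted) is
\[
A(\rho)^\top P\, A(\rho) \preceq P \quad \text{for all admissible } \rho ,
\]
which guarantees \(x_k^\top P x_k \le 1 \Rightarrow x_{k+1}^\top P x_{k+1} \le 1\). The paper verifies invariance properties of this kind at the level of the C implementation rather than deriving them.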
Reasoning with Metric Temporal Logic and Resettable Skewed Clocks
Abstract
The formal verification of distributed real-time systems is challenging due to the interaction between the local computation and the communication between the different nodes of a network. The overall correctness of the system relies on local properties and on the timing of data exchange between components. This requires taking into account the drift of local clocks and their synchronization. The reference time of local properties may be given by skewed and resettable (thus non-monotonic) clocks.
In this paper, we consider automated reasoning over MTLSK, a variant of MTL over Resettable Skewed Clocks. We focus on metric operators with lower and upper parametric bounds. We provide an encoding into temporal logic modulo the theory of reals and we solve satisfiability with SMT-based model checking techniques. We implemented and evaluated the approach on typical properties of real-time systems.
Alberto Bombardelli, Stefano Tonetta
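As a small illustration of metric operators with lower and upper bounds, a typical bounded-response property has the shape
\[
\mathbf{G}\bigl(\mathit{request} \rightarrow \mathbf{F}_{[l,u]}\, \mathit{grant}\bigr),
\]
where \(l\) and \(u\) are the parametric bounds; in the setting above, the clock measuring such intervals may additionally be skewed and resettable. This is generic MTL notation, not necessarily the concrete syntax of MTLSK.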
Centralized Multi-agent Synthesis with Spatial Constraints via Mixed-Integer Quadratic Programming
Abstract
We address the challenge of centralized multi-agent motion planning for tasks described in Signal Temporal Logic (STL) which require both adherence to spatial constraints and simultaneous execution of team behaviors. Existing methods to satisfy STL specifications including spatial constraints use decentralized planning approaches. These decentralized methods are unable to enforce temporal constraints jointly across agents and therefore cannot require multiple agents to complete simultaneous team behaviors. We present a mixed-integer quadratic program (MIQP) encoding of the search for multi-agent trajectories that satisfy team STL specifications in a gridworld environment. We experimentally evaluate the solve time of the centralized MIQP encoding against a centralized mixed-integer linear program (MILP) encoding in scenarios with different types of spatial constraints. Numerical results show that the MIQP encoding is better suited to problems with inter-agent spatial constraints, such as collision avoidance constraints, while the MILP encoding better suits constraints between agents and static objects in the environment. Our findings provide valuable design recommendations for implementing either approach according to the type of spatial constraints that must be supported.
Alexandra Forsey-Smerek, Ho Chit Siu, Kevin Leahy
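For background, the standard mixed-integer encoding of an STL predicate \(\mu(x_t) \ge 0\) introduces a binary variable \(z_t^{\mu}\) linked to the predicate by big-M constraints; encodings of this general kind are the usual starting point for STL planning programs (the quadratic terms that make the encoding above an MIQP are not shown). With \(M\) an upper bound on \(|\mu(x_t)|\):
\[
\mu(x_t) \ge -M\,(1 - z_t^{\mu}), \qquad \mu(x_t) \le M\, z_t^{\mu}, \qquad z_t^{\mu} \in \{0,1\},
\]
so \(z_t^{\mu} = 1\) forces \(\mu(x_t) \ge 0\) and \(z_t^{\mu} = 0\) forces \(\mu(x_t) \le 0\).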
A Framework for Policy Based Negotiation
Abstract
Semantic remote attestation is a process for gathering and appraising evidence to determine the state of a remote system. Remote attestation occurs at the request of an appraiser or relying party and proceeds with a target system executing an attestation protocol that invokes attestation services in a specified order to generate and bundle evidence. The appraiser may then reason about the evidence to establish trust in the target’s state. Attestation Protocol Negotiation is the process of establishing a mutually agreed upon protocol that satisfies the appraiser’s desire for comprehensive information and the target’s desire for constrained disclosure. Here we explore formalization of negotiation focusing on a definition of system specifications through manifests, protocol sufficiency and soundness, policy representation, and negotiation structure. By using our formal models to represent and verify negotiation’s properties we can statically determine that a provably sound, sufficient, and executable protocol is produced.
Anna Fritz, Perry Alexander
Rewrite-Based Decomposition of Signal Temporal Logic Specifications
Abstract
Multi-agent coordination under Signal Temporal Logic (STL) specifications is an exciting approach for accomplishing complex temporal missions with safety requirements. Despite significant progress, these approaches still suffer from scalability limitations. Decomposition into subspecifications and corresponding subteams of agents provides a way to reduce computation and leverage modern parallel computing architectures. In this paper, we propose a rewrite-based approach for jointly decomposing an STL specification and team of agents. We provide a set of formula transformations that facilitate decomposition. Furthermore, we cast those transformations as a rewriting system and prove that it is convergent. Next, we develop an algorithm for efficiently exploring and ranking rewritten formulae as decomposition candidates, and show how to decompose the best candidate. Finally, we compare to previous work on decomposing specifications for multi-agent planning problems, and provide computing and energy grid case studies.
Kevin Leahy, Makai Mann, Cristian-Ioan Vasile
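One example of the kind of validity-preserving transformation that enables decomposition is the distribution of a bounded "always" over conjunction,
\[
\mathbf{G}_{[a,b]}(\varphi \wedge \psi) \;\equiv\; \mathbf{G}_{[a,b]}\,\varphi \;\wedge\; \mathbf{G}_{[a,b]}\,\psi ,
\]
after which the two conjuncts can be assigned to different subteams. The paper defines its own set of rewrite rules; this equivalence is shown only to convey the flavour.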
Quantitative Verification and Strategy Synthesis for BDI Agents
Abstract
Belief-Desire-Intention (BDI) agents feature probabilistic outcomes, e.g. the chance an agent tries but fails to open a door, and non-deterministic choices: what plan/intention to execute next? We want to reason about agents under both probabilities and non-determinism to determine, for example, probabilities of mission success and the strategies used to maximise this. We define a Markov Decision Process describing the semantics of the Conceptual Agent Notation (Can) agent language that supports non-deterministic event, plan, and intention selection, as well as probabilistic action outcomes. The model is derived through an encoding to Milner’s Bigraphs and executed using the BigraphER tool. We show, using probabilistic model checkers PRISM and Storm, how to reason about agents including: probabilistic and reward-based properties, strategy synthesis, and multi-objective analysis. This analysis provides verification and optimisation of BDI agent design and implementation.
Blair Archibald, Muffy Calder, Michele Sevegnani, Mengwei Xu
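A typical quantitative query of the kind checked with PRISM or Storm asks for the maximal probability, over all resolutions of the non-determinism, of eventually reaching a goal, e.g.
\[
\mathrm{P}_{\max = ?}\,\bigl[\, \mathbf{F}\ \mathit{mission\_success} \,\bigr],
\]
together with synthesis of a strategy attaining that maximum. The atomic proposition name is illustrative, not taken from the paper.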
Multi-objective Task Assignment and Multiagent Planning with Hybrid GPU-CPU Acceleration
Abstract
Allocation and planning with a collection of tasks and a group of agents is an important problem in multiagent systems. One commonly faced bottleneck is scalability, as in general the multiagent model increases exponentially in size with the number of agents. We consider the combination of random task assignment and multiagent planning under multiple-objective constraints, and show that this problem can be decentralised to individual agent-task models. We present an algorithm of point-oriented Pareto computation, which checks whether a point corresponding to given cost and probability thresholds for our formal problem is feasible or not. If the given point is infeasible, our algorithm finds a Pareto-optimal point which is closest to the given point. We provide the first multi-objective model checking framework that simultaneously uses GPU and multi-core acceleration. Our framework manages CPU and GPU devices as a load balancing problem for parallel computation. Our experiments demonstrate that parallelisation achieves significant run time speed-up over sequential computation.
Thomas Robinson, Guoxin Su
Reasoning over Test Specifications Using Assume-Guarantee Contracts
Abstract
We establish a framework to reason about test campaigns described formally. First, we introduce the notion of a test structure—an object that carries i) the formal specifications of the system under test, and ii) the test objective, which is specified by a test engineer. We build on test structures to define test campaigns and specifications for the tester. Secondly, we use the algebra of assume-guarantee contracts to reason about constructing tester specifications, comparing test structures and test campaigns, and combining and splitting test structures. Using the composition operator, we characterize the conditions on the constituent tester specifications and test objectives for feasibly combining test structures. We illustrate the different applications of the quotient operator to split the test objective, the system into subsystems, or both. Finally, we illustrate test executions corresponding to the combined and split test structures in a discrete autonomous driving example and an aircraft formation-flying example. We anticipate that reasoning over test specifications would aid in generating optimal test campaigns.
Apurva Badithela, Josefine B. Graebener, Inigo Incer, Richard M. Murray
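For reference, the composition of two assume-guarantee contracts \(\mathcal{C}_i = (A_i, G_i)\) in the standard contract algebra (with contracts in saturated form) is
\[
\mathcal{C}_1 \otimes \mathcal{C}_2 \;=\; \bigl( (A_1 \wedge A_2) \vee \neg (G_1 \wedge G_2),\; G_1 \wedge G_2 \bigr),
\]
and it is this algebra, together with the quotient, that the framework above uses to combine and split test structures.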
From the Standards to Silicon: Formally Proved Memory Controllers
Abstract
Recent research in both academia and industry has successfully used deductive verification to design hardware and prove its correctness. While tools and languages to write formally proved hardware have been proposed, applications and use cases are often overlooked. In this work, we focus on Dynamic Random Access Memory (DRAM) controllers and the DRAM itself – which has its expected temporal and functional behaviours described in the standards written by the Joint Electron Device Engineering Council (JEDEC). Concretely, we connect an existing Coq DRAM controller framework – which can be used to write DRAM scheduling algorithms that comply with a variety of correctness criteria – to a back-end system that generates proved logically equivalent hardware. This makes it possible to simultaneously enjoy the trustworthiness provided by the Coq framework and use the generated synthesizable hardware in real systems. We validate the approach by using the generated code as a plug-in replacement in an existing DDR4 controller implementation, which includes a host interface (AXI), a physical layer (PHY) from Xilinx, and a model of a Micron MT40A1G8WE-075E:D memory part. We simulate and synthesise the full system.
Felipe Lisboa Malaquias, Mihail Asavoae, Florian Brandner
Formalising Liveness Properties in Event-B with the Reflexive EB4EB Framework
Abstract
The correct-by-construction state-based Event-B formal method lacks the ability to express liveness properties using temporal logic. To address this challenge, two approaches can be envisioned. First, embed Event-B models in another formal method that supports the verification of liveness properties; this method is cumbersome and error-prone, and the verification result is not guaranteed to carry over to the source model. Second, extend Event-B to support the expression of and reasoning about liveness properties, and more generally temporal properties. Following the second approach, in [20], J.-R. Abrial and T. S. Hoang proposed an axiomatisation of linear temporal logic (LTL) for Event-B with a set of proof obligations (POs) allowing these properties to be verified. These POs are mathematically formalised, but are neither implemented nor generated automatically. In this paper, using the reflexive EB4EB framework [37, 38], which allows manipulation of the core concepts of Event-B, we propose to formalise and operationalise the automatic generation of proof obligations associated with liveness properties expressed in LTL. Furthermore, relying on trace-based semantics, we demonstrate the soundness of this formalisation, and provide a set of intermediate and generic theorems to increase the rate of proof automation for these properties. Finally, a case study is proposed to demonstrate the use of the defined operators for expressing and proving liveness properties.
P. Rivière, N. K. Singh, Y. Aït-Ameur, G. Dupont
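As a small example of the liveness properties in question, a "leads-to" property in LTL is written
\[
\mathbf{G}\,(P \rightarrow \mathbf{F}\, Q),
\]
stating that whenever \(P\) holds, \(Q\) eventually holds afterwards. The formula is generic LTL notation, not the operator syntax defined in the paper.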
Formalized High Level Synthesis with Applications to Cryptographic Hardware
Abstract
Verification of hardware-based cryptographic accelerators connects a low-level RTL implementation to the abstract algorithm itself; generally, the more optimized for performance an accelerator is, the more challenging its verification. This paper introduces a verification methodology, model validation, that uses a formalized high-level synthesis language (FHLS) as an intermediary between algorithm specification and hardware implementation. The foundation of our approach to model validation is a mechanized denotational semantics for the ReWire HLS language. Model validation proves the faithfulness of FHLS models to the RTL implementation and we summarize a model validation case study for a suite of pipelined Barrett multipliers.
William Harrison, Ian Blumenfeld, Eric Bond, Chris Hathhorn, Paul Li, May Torrence, Jared Ziegler
From Natural Language Requirements to the Verification of Programmable Logic Controllers: Integrating FRET into PLCverif
Abstract
PLCverif is an actively developed project at CERN, enabling the formal verification of Programmable Logic Controller (PLC) programs in critical systems. In this paper, we present our work on improving the formal requirements specification experience in PLCverif through the use of natural language. To this end, we integrate NASA's FRET, a formal requirement elicitation and authoring tool, into PLCverif. FRET is used to specify formal requirements in structured natural language, which are automatically translated into temporal logic formulae. FRET's output is then directly used by PLCverif for verification purposes. We discuss practical challenges that PLCverif users face when authoring requirements and the FRET features that help alleviate these problems. We present the new requirement formalization workflow and report our experience using it on two critical CERN case studies.
Zsófia Ádám, Ignacio D. Lopez-Miguel, Anastasia Mavridou, Thomas Pressburger, Marcin Bęś, Enrique Blanco Viñuela, Andreas Katis, Jean-Charles Tournier, Khanh V. Trinh, Borja Fernández Adiego
Automata-Based Software Model Checking of Hyperproperties
Abstract
We develop model checking algorithms for Temporal Stream Logic (TSL) and Hyper Temporal Stream Logic (HyperTSL) modulo theories. TSL extends Linear Temporal Logic (LTL) with memory cells, functions and predicates, making it a convenient and expressive logic to reason over software and other systems with infinite data domains. HyperTSL further extends TSL to the specification of hyperproperties – properties that relate multiple system executions. As such, HyperTSL can express information flow policies like noninterference in software systems. We augment HyperTSL with theories, resulting in HyperTSL(T), and build on methods from LTL software verification to obtain model checking algorithms for TSL and HyperTSL(T). This results in a sound but necessarily incomplete algorithm for specifications contained in the \(\forall^*\exists^*\) fragment of HyperTSL(T). Our approach constitutes the first software model checking algorithm for temporal hyperproperties with quantifier alternations that does not rely on a finite-state abstraction.
Bernd Finkbeiner, Hadar Frenkel, Jana Hofmann, Janine Lohse
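As an example of the kind of information-flow policy mentioned above, noninterference over two trace variables \(\pi\) and \(\pi'\) can be written schematically as
\[
\forall \pi.\, \forall \pi'.\;\; \mathbf{G}\,\bigl(\mathit{in}_{\mathrm{low}}^{\pi} = \mathit{in}_{\mathrm{low}}^{\pi'}\bigr) \;\rightarrow\; \mathbf{G}\,\bigl(\mathit{out}_{\mathrm{low}}^{\pi} = \mathit{out}_{\mathrm{low}}^{\pi'}\bigr),
\]
i.e. executions that agree on low-security inputs must agree on low-security outputs. This lies in the \(\forall^*\) (hence \(\forall^*\exists^*\)) fragment; the notation is schematic rather than concrete HyperTSL(T) syntax.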
Condition Synthesis Realizability via Constrained Horn Clauses
Abstract
Condition synthesis takes a program in which some of the conditions in conditional branches are missing, together with a specification, and automatically infers conditions to fill in the holes such that the program meets the specification.
In this paper, we propose CoSyn, an algorithm for determining the realizability of a condition synthesis problem, with an emphasis on proving unrealizability efficiently. We use the novel concept of a doomed initial state, which is an initial state that can reach an error state along every run of the program. For a doomed initial state \(\sigma\), there is no way to make the program safe by forcing \(\sigma\) (via conditions) to follow one computation or another. CoSyn checks for the existence of a doomed initial state via a reduction to Constrained Horn Clauses (CHC).
We implemented CoSyn in SeaHorn using Spacer as the CHC solver and evaluated it on multiple examples. Our evaluation shows that CoSyn outperforms the state-of-the-art syntax-guided tool Cvc5 in proving both realizability and unrealizability. We also show that joining the forces of CoSyn and Cvc5 outperforms Cvc5 alone, solving more instances faster.
Bat-Chen Rothenberg, Orna Grumberg, Yakir Vizel, Eytan Singher
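For background, the classical CHC encoding of a safety verification problem has the schematic shape
\[
\mathit{Init}(x) \rightarrow \mathit{Inv}(x), \qquad
\mathit{Inv}(x) \wedge \mathit{Tr}(x, x') \rightarrow \mathit{Inv}(x'), \qquad
\mathit{Inv}(x) \wedge \mathit{Bad}(x) \rightarrow \mathit{false},
\]
where a solution for the unknown predicate \(\mathit{Inv}\) is a safe inductive invariant. The doomed-initial-state check of CoSyn uses a more involved reduction; the clauses above are shown only to recall what a CHC system looks like.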
A Toolkit for Automated Testing of Dafny
Abstract
Dafny is a verification-ready programming language that is executed via compilation to C# and other mainstream languages. We introduce a toolkit for automated testing of Dafny programs, consisting of DUnit (unit testing framework), DMock (mocking framework), and DTest (automated test generation). The main component of the toolkit, DTest, repurposes the Dafny verifier to automatically generate DUnit test cases that achieve desired coverage. It supports verification-specific language features, such as pre- and postconditions, and leverages them for mocking with DMock. We evaluate the new toolkit in two ways. First, we use two open-source Dafny projects to demonstrate that DTest can generate unit tests with branch coverage that is comparable to the expectations developers set for manually written tests. Second, we show that a greedy approach to test generation often produces a number of tests close to the theoretical minimum for the given coverage criterion.
Aleksandr Fedchin, Tyler Dean, Jeffrey S. Foster, Eric Mercer, Zvonimir Rakamarić, Giles Reger, Neha Rungta, Robin Salkeld, Lucas Wagner, Cassidy Waldrip
Verified ALL(*) Parsing with Semantic Actions and Dynamic Input Validation
Abstract
Formally verified parsers are powerful tools for preventing the kinds of errors that result from ad hoc parsing and validation of program input. However, verified parsers are often based on formalisms that are not expressive enough to capture the full definition of valid input for a given application. Specifications of many real-world data formats include both a syntactic component and one or more non-context-free semantic properties that a well-formed instance of the format must exhibit. A parser for context-free grammars (CFGs) cannot determine on its own whether an input is valid according to such a specification; it must be supplemented with additional validation checks.
In this work, we present CoStar++, a verified parser interpreter with semantic features that make it highly expressive in terms of both the language specifications it accepts and its output type. CoStar++ provides support for semantic predicates, enabling the user to write semantically rich grammars that include non-context-free properties. The interpreter also supports semantic actions that convert sequential inputs to structured outputs in a principled way. CoStar++ is implemented and verified with the Coq Proof Assistant, and it is based on the ALL(*) parsing algorithm. For all CFGs without left recursion, the interpreter is provably sound, complete, and terminating with respect to a semantic specification that takes predicates and actions into account. CoStar++ runs in linear time on benchmarks for four real-world data formats, three of which have non-context-free specifications.
Sam Lasser, Chris Casinghino, Derek Egolf, Kathleen Fisher, Cody Roux
Subtropical Satisfiability for SMT Solving
Abstract
A wide range of problems from aerospace engineering and other application areas can be encoded logically and solved using satisfiability modulo theories (SMT) tools, which themselves use dedicated decision procedures for the underlying theories.
Subtropical satisfiability is such a decision procedure for the theory of real arithmetic. Though incomplete, it is a very efficient algorithm with high potential for SMT solving; yet, it has so far been used only rarely in this context. In this paper we elaborate on possibilities for the efficient usage of subtropical satisfiability in SMT solving.
Jasper Nalbach, Erika Ábrahám
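The core observation behind subtropical satisfiability, shown here only as a rough sketch of the method's flavour, is that for a polynomial \(p = \sum_{e} c_e\, x^{e}\) with at least one positive coefficient \(c_{e^*} > 0\), substituting \(x_i = t^{n_i}\) (for \(t > 0\)) gives
\[
p\bigl(t^{n_1}, \ldots, t^{n_d}\bigr) \;=\; c_{e^*}\, t^{\langle n, e^* \rangle} \;+\; \sum_{e \neq e^*} c_e\, t^{\langle n, e \rangle} \;\longrightarrow\; +\infty \quad (t \to \infty)
\]
whenever the direction \(n\) makes \(\langle n, e^* \rangle\) strictly larger than every other \(\langle n, e \rangle\); finding such a direction reduces to linear programming and yields a satisfying point for \(p > 0\) at some finite \(t\).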
A Linear Weight Transfer Rule for Local Search
Abstract
The Divide and Distribute Fixed Weights algorithm (ddfw) is a dynamic local search SAT-solving algorithm that transfers weight from satisfied to falsified clauses in local minima. ddfw is remarkably effective on several hard combinatorial instances. Yet, despite its success, it has received little study since its debut in 2005. In this paper, we propose three modifications to the base algorithm: a linear weight transfer method that moves a dynamic amount of weight between clauses in local minima, an adjustment to how satisfied clauses are chosen in local minima to give weight, and a weighted-random method of selecting variables to flip. We implemented our modifications to ddfw on top of the solver yalsat. Our experiments show that our modifications boost the performance compared to the original ddfw algorithm on multiple benchmarks, including those from the past three years of SAT competitions. Moreover, our improved solver exclusively solves hard combinatorial instances that refute a conjecture on the lower bound of two Van der Waerden numbers set forth by Ahmed et al. (2014), and it performs well on a hard graph-coloring instance that has been open for over three decades.
Md Solimul Chowdhury, Cayden R. Codel, Marijn J. H. Heule
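The sketch below shows a weight transfer of the general kind described above: in a local minimum, each falsified clause takes an amount of weight that is linear in the giving satisfied clause's current weight. The selection heuristic and coefficients are placeholders rather than the tuned values from the paper.

    import random

    def transfer_weights(weights, falsified, satisfied, a=0.1, c=1.0, w_init=8.0):
        """weights: clause index -> current weight. Each falsified clause receives
        a * w + c weight from one satisfied clause of current weight w."""
        for f in falsified:
            # prefer satisfied clauses that have accumulated more than the initial weight
            heavy = [s for s in satisfied if weights[s] > w_init]
            giver = random.choice(heavy) if heavy else random.choice(satisfied)
            amount = min(a * weights[giver] + c, weights[giver])  # never drive a weight negative
            weights[giver] -= amount
            weights[f] += amount
        return weights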
Adiar 1.1
Zero-Suppressed Decision Diagrams in External Memory
Abstract
We outline how support for Zero-suppressed Decision Diagrams (ZDDs) has been achieved for the external memory BDD package Adiar. This allows one to use ZDDs to solve various problems even when their size exceeds the machine's limit of internal memory.
Steffan Christ Sølvsten, Jaco van de Pol
Satisfiability of Non-linear Transcendental Arithmetic as a Certificate Search Problem
Abstract
For typical first-order logical theories, satisfying assignments have a straightforward finite representation that can directly serve as a certificate that a given assignment satisfies the given formula. For non-linear real arithmetic with transcendental functions, however, no general finite representation of satisfying assignments is available. Hence, in this paper, we introduce a different form of satisfiability certificate for this theory, formulate the satisfiability verification problem as the problem of searching for such a certificate, and show how to perform this search in a systematic fashion. This does not only ease the independent verification of results, but also allows the systematic design of new, efficient search techniques. Computational experiments document that the resulting method is able to prove satisfiability of a substantially higher number of benchmark problems than existing methods.
Enrico Lipparini, Stefan Ratschan
Backmatter
Metadata
Title
NASA Formal Methods
Editors
Kristin Yvonne Rozier
Swarat Chaudhuri
Copyright Year
2023
Electronic ISBN
978-3-031-33170-1
Print ISBN
978-3-031-33169-5
DOI
https://doi.org/10.1007/978-3-031-33170-1
