2021 | Book

Verification, Model Checking, and Abstract Interpretation

22nd International Conference, VMCAI 2021, Copenhagen, Denmark, January 17–19, 2021, Proceedings

About this book

This book constitutes the proceedings of the 22nd International Conference on Verification, Model Checking, and Abstract Interpretation, VMCAI 2021, which was held virtually during January 17-19, 2021. The conference was planned to take place in Copenhagen, Denmark, but changed to an online event due to the COVID-19 pandemic.
The 23 papers presented in this volume were carefully reviewed and selected from 48 submissions. VMCAI provides a forum for researchers working on verification, model checking, and abstract interpretation and facilitates interaction, cross-fertilization, and advancement of hybrid methods that combine these and related areas. The papers presented in this volume were organized in the following topical sections: hyperproperties and infinite-state systems; concurrent and distributed systems; abstract interpretation and model checking; synthesis and repair; applications; and decision procedures.

Table of Contents

Frontmatter

Invited Papers

Frontmatter
Model Checking Algorithms for Hyperproperties (Invited Paper)
Abstract
Hyperproperties generalize trace properties by expressing relations between multiple computations. Hyperproperties include policies from information-flow security, like observational determinism or noninterference, and many other system properties including promptness and knowledge. In this paper, we give an overview of the model checking problem for temporal hyperlogics. Our starting point is the model checking algorithm for HyperLTL, a reduction to Büchi automata emptiness. This basic construction can be extended with propositional quantification, resulting in an algorithm for HyperQPTL. It can also be extended with branching time, resulting in an algorithm for HyperCTL\(^*\). However, it is not possible to have both extensions at the same time: the model checking problem of HyperQCTL\(^*\) is undecidable. An attractive compromise is offered by MPL[E], i.e., monadic path logic extended with the equal-level predicate. The expressiveness of MPL[E] falls strictly between that of HyperCTL\(^*\) and HyperQCTL\(^*\). MPL[E] subsumes both HyperCTL\(^*\) and HyperKCTL\(^*\), the extension of HyperCTL\(^*\) with the knowledge operator. We show that the model checking problem for MPL[E] is still decidable.
Bernd Finkbeiner
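For a concrete flavour of the hyperproperties discussed in this abstract, a simplified noninterference-style policy can be written in HyperLTL as \(\forall \pi.\, \forall \pi'.\; \Box(i_{\pi} = i_{\pi'}) \rightarrow \Box(o_{\pi} = o_{\pi'})\): any two traces that always agree on the inputs \(i\) must always agree on the outputs \(o\). The formula is an illustrative sketch and is not taken from the paper.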
Algebra-Based Synthesis of Loops and Their Invariants (Invited Paper)
Abstract
Provably correct software is one of the key challenges in our software-driven society. While formal verification establishes the correctness of a given program, the result of program synthesis is a program which is correct by construction. In this paper we overview some of our results for both of these scenarios when analysing programs with loops. The class of loops we consider can be modelled by a system of linear recurrence equations with constant coefficients, called C-finite recurrences. We first describe an algorithmic approach for synthesising all polynomial equality invariants of such non-deterministic numeric single-path loops. By reverse engineering invariant synthesis, we then describe an automated method for synthesising program loops satisfying a given set of polynomial loop invariants. Our results have applications towards proving partial correctness of programs, compiler optimisation and generating number sequences from algebraic relations.
Andreas Humenberger, Laura Kovács
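For intuition about loops modelled by C-finite recurrences, the following minimal sketch (not the paper's algorithm; loop and invariant chosen for illustration) shows a single-path loop whose simultaneous update is a linear recurrence with constant coefficients, together with a polynomial equality invariant it maintains.

# Minimal sketch (not the paper's algorithm): the simultaneous update
# (x, y) := (x + 1, y + x + 1) is a C-finite (linear, constant-coefficient)
# recurrence; starting from x = y = 0 it maintains the polynomial equality
# invariant 2*y - x*x - x == 0, checked here over a number of iterations.
def check_invariant(iterations: int = 100) -> bool:
    x, y = 0, 0
    for _ in range(iterations):
        assert 2 * y - x * x - x == 0, (x, y)
        x, y = x + 1, y + x + 1   # right-hand side uses the old values of x, y
    return True

print(check_invariant())   # True: the invariant holds at every loop head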
Generative Program Analysis and Beyond: The Power of Domain-Specific Languages (Invited Paper)
Abstract
In this paper we position Linear Time Temporal Logic (LTL), structural operational semantics (SOS), and a graphical generalization of BNF as central DSLs for program analysis and verification tasks in order to illustrate the impact of language on the mindset: (1) Specifying program analyses in LTL changes the classical algorithmic ‘HOW’ thinking into a property-oriented ‘WHAT’ thinking that allows one to logically combine analysis goals and eases proofs. (2) Playing with the original store component in SOS configurations allows one to elegantly realize variants of abstract program interpretations, and to align different aspects, e.g., the symbolic values of variables and path conditions. (3) Specializing languages by refining their BNF-like meta models has the power to lift certain verification tasks from the program to the programming language level. We will illustrate the advantages of the change of mindset imposed by these three DSLs, as well as the fact that these advantages come at a low price due to available adequate generator technology.
Bernhard Steffen, Alnis Murtovi

Hyperproperties and Infinite-State Systems

Frontmatter
Compositional Model Checking for Multi-properties
Abstract
Hyperproperties lift conventional trace properties in a way that describes how a system behaves in its entirety, and not just based on its individual traces. We generalize this notion to multi-properties, which describe the behavior of not just a single system, but of a set of systems, which we call a multi-model. We demonstrate the usefulness of our setting with practical examples. We show that model-checking multi-properties is equivalent to model-checking hyperproperties. However, our framework has the immediate advantage of being compositional. We introduce sound and complete compositional proof rules for model-checking multi-properties, based on over- and under-approximations of the systems in the multi-model. We then describe methods of computing such approximations. The first is abstraction-refinement based, in which a coarse initial abstraction is continuously refined using counterexamples, until a suitable approximation is found. The second, tailored for models with finite traces, finds suitable approximations via the \(L^*\) learning algorithm. Our methods can produce much smaller models than the original ones, and can therefore be used for accelerating model-checking for both multi-properties and hyperproperties.
Ohad Goudsmid, Orna Grumberg, Sarai Sheinvald
Decomposing Data Structure Commutativity Proofs with \(mn\)-Differencing
Abstract
Commutativity of data structure methods is of ongoing interest in contexts such as parallelizing compilers, transactional memory, speculative execution and software scalability. Despite this interest, we lack effective theories and techniques to aid commutativity verification.
In this paper, we introduce a novel decomposition to improve the task of verifying method-pair commutativity conditions from data structure implementations. The key enabling insight, called \(mn\)-differencing, defines the precision necessary for an abstraction to be fine-grained enough so that commutativity of method implementations in the abstract domain entails commutativity in the concrete domain, yet can be less precise than what is needed for full functional correctness. We incorporate this decomposition into a proof rule, as well as an automata-theoretic reduction for commutativity verification. Finally, we discuss our simple proof-of-concept implementation and experimental results showing that \(mn\)-differencing leads to more scalable commutativity verification of some simple examples.
Eric Koskinen, Kshitij Bansal
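The following toy example (a sketch under simplified assumptions, not the paper's \(mn\)-differencing construction) illustrates what a method-pair commutativity condition looks like: for a set, add(x) and contains(y) commute exactly when x differs from y or x is already present.

# Toy example (not the paper's mn-differencing construction): decide when
# the method pair add(x) / contains(y) commutes on a set, by comparing the
# final states and return values of both execution orders.
from itertools import product

def add(s, x):
    return s | {x}, None          # (new state, return value)

def contains(s, y):
    return s, y in s

def commutes(s, x, y):
    s1, _ = add(s, x)
    _, r_after = contains(s1, y)          # add(x) then contains(y)
    _, r_before = contains(s, y)
    s2, _ = add(s, x)                     # contains(y) then add(x)
    return s1 == s2 and r_after == r_before

states = [frozenset(), frozenset({0}), frozenset({0, 1})]
for s, x, y in product(states, [0, 1], [0, 1]):
    # Expected commutativity condition: x != y, or x is already in the set.
    assert commutes(s, x, y) == (x != y or x in s)
print("commutativity condition confirmed on all small cases")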
Proving the Existence of Fair Paths in Infinite-State Systems
Abstract
In finite-state systems, true existential properties admit witnesses in the form of lasso-shaped fair paths. When dealing with the infinite-state case (e.g. software non-termination, model checking of hybrid automata), this is no longer true. In this paper, we propose a compositional approach for proving the existence of fair paths in infinite-state systems. First, we describe a formal approach to prove the existence of a non-empty under-approximation of the original system that only contains fair paths. Second, we define an automated procedure that, given a set of hints (in the form of basic components), searches for a suitable composition proving the existence of a fair path. We experimentally evaluate the approach on examples taken from both software and hybrid systems, showing its wide applicability and expressiveness.
Alessandro Cimatti, Alberto Griggio, Enrico Magnago
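For the finite-state case mentioned at the start of this abstract, a fair path witness is a lasso: a stem from the initial state to a cycle visiting a fair state. The sketch below (illustrative only, not the paper's procedure for infinite-state systems) finds such a witness by plain graph search.

# Illustrative finite-state witness search (not the paper's infinite-state
# procedure): a fair lasso exists iff some fair state is reachable from the
# initial state and lies on a cycle, i.e. is reachable from its own successors.
def reachable(start_states, succ):
    seen, stack = set(start_states), list(start_states)
    while stack:
        s = stack.pop()
        for t in succ.get(s, []):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def has_fair_lasso(init, succ, fair):
    from_init = reachable([init], succ)
    return any(f in from_init and f in reachable(succ.get(f, []), succ)
               for f in fair)

# Example: 0 -> 1 -> 2 -> 1 with fair state 2; the fair path is 0, (1 2)^omega.
print(has_fair_lasso(0, {0: [1], 1: [2], 2: [1]}, {2}))   # True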
A Self-certifying Compilation Framework for WebAssembly
Abstract
A self-certifying compiler is designed to generate a correctness proof for each optimization performed during compilation. The generated proofs are checked automatically by an independent proof validator. The outcome is formally verified compilation, achieved without formally verifying the compiler. This paper describes the design and implementation of a self-certifying compilation framework for WebAssembly, a new intermediate language supported by all major browsers.
Kedar S. Namjoshi, Anton Xue
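The following sketch conveys the self-certification idea in miniature (hypothetical interface and names, not the framework's actual design): an optimization emits a proof obligation alongside the rewritten code, and an independent validator checks the obligation before the result is accepted.

# Rough sketch of the self-certification idea (hypothetical interface, not
# the framework's actual design): an optimization returns the rewritten term
# together with a proof obligation, and an independent validator checks the
# obligation; only then is the compilation step accepted.
def strength_reduce(expr):
    # Peephole optimization: ("mul", "x", 2)  ==>  ("shl", "x", 1).
    if expr == ("mul", "x", 2):
        optimized = ("shl", "x", 1)
        obligation = ("equiv", expr, optimized)   # claim: equal for all x
        return optimized, obligation
    return expr, None

def evaluate(expr, x, width=8):
    op, _, k = expr
    value = x * k if op == "mul" else x << k
    return value % (1 << width)

def validate(obligation, width=8):
    # Independent validator: here a brute-force check over all 8-bit values
    # (a real validator would discharge the obligation with a prover).
    if obligation is None:
        return True
    _, lhs, rhs = obligation
    return all(evaluate(lhs, x, width) == evaluate(rhs, x, width)
               for x in range(1 << width))

optimized, proof = strength_reduce(("mul", "x", 2))
assert validate(proof)
print("optimization certified:", optimized)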

Concurrent and Distributed Systems

Frontmatter
Concurrent Correctness in Vector Space
Abstract
Correctness verification of a concurrent history is challenging and has been proven to be an NP-complete problem. That correctness verification cannot be solved in polynomial time is a consequence of the way correctness is defined. Traditional correctness conditions require a concurrent history to be equivalent to a legal sequential history. The worst-case number of legal sequential histories for a concurrent history is \(O(n!)\) with respect to the n methods invoked. Existing correctness verification tools improve the time complexity by either reducing the set of possible legal sequential histories or improving the efficiency of generating the possible legal sequential histories. Further improvements to the time complexity of correctness verification can be achieved by changing the way correctness of concurrent programs is defined. In this paper, we present the first methodology to recast the correctness conditions in the literature to be defined in vector space. The concurrent histories are represented as a set of method call vectors, and correctness is defined as properties over the set of vectors. The challenge with defining correctness in vector space is accounting for method call ordering and data structure semantics. We solve this challenge by incorporating a priority assignment scheme into the values of the method call vectors. Using our new definitions of concurrent correctness, we design a dynamic analysis tool that checks the vector space correctness of concurrent data structures in \(O(n^2)\) time with respect to n method calls, a significant improvement over the \(O(n!)\) time required to analyze legal sequential histories. We showcase our dynamic analysis tool by using it to check the vector space correctness of a variety of queues, stacks, and hashmaps.
Christina Peterson, Victor Cook, Damian Dechev
Verification of Concurrent Programs Using Petri Net Unfoldings
Abstract
Given a verification problem for a concurrent program (with a fixed number of threads) over infinite data domains, we can construct a model checking problem for an abstraction of the concurrent program through a Petri net (a problem which can be solved using McMillan’s unfolding technique). We present a method of abstraction refinement which translates Floyd/Hoare-style proofs for sample traces into additional synchronization constraints for the Petri net.
Daniel Dietsch, Matthias Heizmann, Dominik Klumpp, Mehdi Naouar, Andreas Podelski, Claus Schätzle
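For readers unfamiliar with the abstraction target, the sketch below shows a minimal Petri net with its firing rule (a generic illustration, not the unfolding technique or the refinement procedure described in the abstract).

# Minimal Petri net sketch (the abstraction target, not the unfolding or
# refinement technique): places carry tokens, and a transition fires when
# every place in its preset holds enough tokens, consuming and producing
# tokens according to its preset and postset.
from collections import Counter

def enabled(marking, pre):
    return all(marking[p] >= n for p, n in pre.items())

def fire(marking, pre, post):
    new_marking = Counter(marking)
    for p, n in pre.items():
        new_marking[p] -= n
    for p, n in post.items():
        new_marking[p] += n
    return new_marking

# Two threads competing for a single lock place.
marking = Counter({"t1_idle": 1, "t2_idle": 1, "lock": 1})
acquire1 = ({"t1_idle": 1, "lock": 1}, {"t1_crit": 1})
acquire2 = ({"t2_idle": 1, "lock": 1}, {"t2_crit": 1})

marking = fire(marking, *acquire1)        # thread 1 enters its critical section
print(enabled(marking, acquire2[0]))      # False: thread 2 is blocked on the lock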
Eliminating Message Counters in Synchronous Threshold Automata
Abstract
In previous work, we introduced synchronous threshold automata for the verification of synchronous fault-tolerant distributed algorithms, and presented a verification method based on bounded model checking. Modeling a distributed algorithm by a threshold automaton requires correctly dealing with the semantics of sending and receiving messages under the fault assumption. So far, this step was done manually and required human ingenuity. Motivated by similar results for asynchronous threshold automata, in this paper we show that one can start from a faithful model of the distributed algorithm that includes the sending and receiving of messages, and then automatically obtain a threshold automaton by applying quantifier elimination on the receive message counters. In this way, we obtain a fully automated verification pipeline. We present an experimental evaluation, discovering a bug in our previous manual encoding. Interestingly, while quantifier elimination in general produces larger threshold automata than the manual encoding, the verification times are comparable and even faster in several cases, allowing us to verify benchmarks that could not be handled before.
Ilina Stoilkovska, Igor Konnov, Josef Widder, Florian Zuleger
A Reduction Theorem for Randomized Distributed Algorithms Under Weak Adversaries
Abstract
Weak adversaries are a way to model the uncertainty due to asynchrony in randomized distributed algorithms. They are a standard notion in correctness proofs for distributed algorithms, and express the property that the adversary (scheduler), which has to decide which messages to deliver to which process, has no means of inferring the outcome of random choices or the content of the messages. In this paper, we introduce a model for randomized distributed algorithms that allows us to formalize the notion of weak adversaries. It applies to randomized distributed algorithms that proceed in rounds and are tolerant to process failures. For this wide class of algorithms, we prove that for verification purposes, the class of weak adversaries can be restricted to simple ones, so-called round-rigid adversaries, that keep the processes tightly synchronized. As a verification method for round-rigid adversaries has recently been introduced, our new reduction theorem paves the way to the parameterized verification of randomized distributed algorithms under the more realistic weak adversaries.
Nathalie Bertrand, Marijana Lazić, Josef Widder

Abstract Interpretation and Model Checking

Frontmatter
Runtime Abstract Interpretation for Numerical Accuracy and Robustness
Abstract
Verification of numerical accuracy properties in modern software remains an important and challenging task. One of its difficulties is related to unstable tests, where the execution can take different branches for real and floating-point numbers. This paper presents a new verification technique for numerical properties, named Runtime Abstract Interpretation (RAI), that, given an annotated source code, embeds into it an abstract analyzer in order to analyze the program behavior at runtime. RAI is a hybrid technique combining abstract interpretation and runtime verification: it aims to be as sound as the former while benefiting from the concrete run, as in the latter, to gain greater precision when necessary. It solves the problem of unstable tests by surrounding an unstable test with two carefully defined program points, forming a so-called split-merge section, for which it separately analyzes different executions and merges the computed domains at the end of the section. Our implementation of this technique in a toolchain called FLDBox relies on two basic tools: FLDCompiler, which performs a source-to-source transformation of the given program and defines the split-merge sections, and an instrumentation library FLDLib, which provides the necessary primitives to explore relevant (partial) executions of each section and propagate accuracy properties. Initial experiments show that the proposed technique can efficiently and soundly analyze numerical accuracy for industrial programs on thin numerical scenarios.
Franck Védrine, Maxime Jacquemin, Nikolai Kosmatov, Julien Signoles
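The unstable-test problem addressed by RAI can be seen in a few lines (a generic illustration, not FLDBox output): the same comparison takes one branch over the reals and another in double-precision floating point.

# Generic illustration of an unstable test (not produced by FLDBox): over
# the reals 0.1 + 0.2 <= 0.3 holds, but in double precision the rounded sum
# exceeds 0.3, so the floating-point execution takes the other branch.
from fractions import Fraction

x_float = 0.1 + 0.2                               # rounded binary arithmetic
x_real = Fraction(1, 10) + Fraction(2, 10)        # exact rational arithmetic

branch_float = "then" if x_float <= 0.3 else "else"
branch_real = "then" if x_real <= Fraction(3, 10) else "else"

print(branch_float, branch_real)                  # prints: else then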
Twinning Automata and Regular Expressions for String Static Analysis
Abstract
In this paper we formalize \(\textsc {Tarsis}\), a new abstract domain based on abstract interpretation theory that approximates string values through finite state automata. The main novelty of \(\textsc {Tarsis}\) is that it works over an alphabet of strings instead of single characters. On the one hand, such an approach requires a more complex and refined definition of the widening operator and of the abstract semantics of string operators. On the other hand, it is in a position to obtain strictly more precise results than state-of-the-art approaches. We implemented a prototype of \(\textsc {Tarsis}\), and we applied it to case studies taken from some of the most popular Java libraries manipulating string values. The experimental results confirm that \(\textsc {Tarsis}\) obtains strictly more precise results than existing analyses.
Luca Negrini, Vincenzo Arceri, Pietro Ferrara, Agostino Cortesi
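A small sketch of the key idea behind Tarsis (hypothetical encoding, not its implementation): the automaton's transition labels are whole strings rather than single characters.

# Hypothetical encoding (not the Tarsis implementation): a finite automaton
# whose transition labels are whole strings, abstracting e.g. every value
# built as "user=" followed by either "alice" or "bob".
transitions = {
    (0, "user="): 1,
    (1, "alice"): 2,
    (1, "bob"): 2,
}
initial, accepting = 0, {2}

def accepts(tokens):
    state = initial
    for token in tokens:               # each step consumes a whole string
        if (state, token) not in transitions:
            return False
        state = transitions[(state, token)]
    return state in accepting

assert accepts(["user=", "alice"]) and accepts(["user=", "bob"])
assert not accepts(["user=", "eve"])
print("the automaton abstracts both concatenations it was built from")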
Unbounded Procedure Summaries from Bounded Environments
Abstract
Modular approaches to verifying interprocedural programs involve learning summaries for individual procedures rather than verifying a monolithic program. Modern approaches based on the use of Satisfiability Modulo Theories (SMT) solvers have made much progress in this direction. However, it is still challenging to handle mutual recursion and to derive adequate procedure summaries using scalable methods. We propose a novel modular verification algorithm that addresses these challenges by learning lemmas about the relationships among procedure summaries and by using bounded environments in SMT queries. We have implemented our algorithm in a tool called Clover and report on a detailed evaluation that shows that it outperforms existing automated tools on benchmark programs with mutual recursion while being competitive on standard benchmarks.
Lauren Pick, Grigory Fedyukovich, Aarti Gupta
Syntax-Guided Synthesis for Lemma Generation in Hardware Model Checking
Abstract
In this work we propose to use Syntax-Guided Synthesis (SyGuS) for lemma generation in a word-level IC3/PDR framework for bit-vector problems. Hardware model checking is moving from bit-level to word-level problems, and it is expected that model checkers can benefit when such high-level information is available. However, for bit-vectors, it is challenging to find a good word-level interpolation strategy for lemma generation, which hinders the use of word-level IC3/PDR algorithms.
Our SyGuS-based procedure, SyGuS-\(\mathcal {A}\textsf {PDR}\), is tightly integrated with an existing word-level IC3/PDR framework \(\mathcal {A}\textsf {PDR}\). It includes a predefined grammar template and term production rules for generating candidate lemmas, and does not rely on any extra human inputs. Our experiments on benchmarks from the hardware model checking competition show that SyGuS-\(\mathcal {A}\textsf {PDR}\) can outperform state-of-the-art Constrained Horn Clause (CHC) solvers, including those that implement bit-level IC3/PDR. We also show that SyGuS-\(\mathcal {A}\textsf {PDR}\) and these CHC solvers can solve many instances faster than other leading word-level hardware model checkers that are not CHC-based. As a by-product of our work, we provide a translator Btor2CHC that enables the use of CHC solvers for general hardware model checking problems, and contribute representative bit-vector benchmarks to the CHC-solver community.
Hongce Zhang, Aarti Gupta, Sharad Malik

Synthesis and Repair

Frontmatter
Approximate Bit Dependency Analysis to Identify Program Synthesis Problems as Infeasible
Abstract
Bit-vector-based program synthesis is an important building block of state-of-the-art techniques in computer programming. Some of these techniques not only rely on a synthesizer’s ability to return an appropriate program if one exists but also require the synthesizer to detect that there is no such program at all in the entire search space (i.e., the problem is infeasible), which is a computationally demanding task.
In this paper, we propose an approach to quickly identify some synthesis problems as infeasible. We observe that a specification function encodes dependencies between input and output bits that a correct program must satisfy. To exploit this fact, we present approximate analyses of essential bits and use them in two novel algorithms to check if a synthesis problem is infeasible. Our experiments show that adding our technique to applications of bit vector synthesis can save up to 33% of their time.
Marius Kamp, Michael Philippsen
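The input/output bit dependencies mentioned in this abstract can be approximated empirically, as in the sketch below (a simplified illustration, not the paper's analyses): flip one input bit at a time and record which output bits of the specification function can change.

# Simplified sketch (not the paper's approximate analyses): estimate which
# output bits of a specification function depend on which input bits by
# flipping one input bit at a time over random samples.
import random

def bit_dependencies(spec, width, samples=1000):
    deps = [[False] * width for _ in range(width)]     # deps[out_bit][in_bit]
    for _ in range(samples):
        x = random.getrandbits(width)
        fx = spec(x)
        for i in range(width):
            diff = fx ^ spec(x ^ (1 << i))             # flip input bit i
            for o in range(width):
                if diff & (1 << o):
                    deps[o][i] = True
    return deps

spec = lambda x: (x ^ (x >> 1)) & 0xFF                 # an 8-bit example spec
deps = bit_dependencies(spec, 8)
# If an output bit depends on input bits that no program in the search space
# can combine, the synthesis problem can be reported infeasible early.
print(deps[0])   # output bit 0 of this spec depends on input bits 0 and 1 only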
Automated Repair of Heap-Manipulating Programs Using Deductive Synthesis
Abstract
We propose a novel method for automatically repairing buggy heap-manipulating programs using constraint solving and deductive synthesis. Given an input program \({\texttt {C}}\) and its formal specification in the form of a Hoare triple: \({\{{\mathcal {P}}\}}~{\texttt {C}}~{\{{\mathcal {Q}}\}}\), we use a separation-logic-based verifier to verify if program \({\texttt {C}}\) is correct w.r.t. its specifications. If program \({\texttt {C}}\) is found buggy, we then repair it in the following steps. First, we rely on the verification results to collect a list of suspicious statements of the buggy program. For each suspicious statement, we temporarily replace it with a template patch representing the desired statements. The template patch is also formally specified using a pair of unknown pre- and postconditions. Next, we use the verifier to analyze the temporarily patched program to collect constraints related to the pre- and postcondition of the template patch. Then, these constraints are solved by our constraint solving technique to discover the suitable specifications of the template patch. Subsequently, these specifications can be used to synthesize program statements of the template patch, consequently creating a candidate program. Finally, if the candidate program is validated, it is returned as the repaired program. We demonstrate the effectiveness of our approach by evaluating our implementation and a state-of-the-art approach on a benchmark of 231 buggy programs. The experimental results show that our tool successfully repairs 223 buggy programs and considerably outperforms the compared tool.
Thanh-Toan Nguyen, Quang-Trung Ta, Ilya Sergey, Wei-Ngan Chin
GPURepair: Automated Repair of GPU Kernels
Abstract
This paper presents a tool for repairing data race and barrier divergence errors in GPU kernels written in CUDA or OpenCL. Our novel extension to prior work can also remove barriers that are deemed unnecessary for correctness. We implement these ideas in our tool called GPURepair, which uses GPUVerify as the verification oracle for GPU kernels. We also extend GPUVerify to support CUDA Cooperative Groups, allowing GPURepair to perform inter-block synchronization for CUDA kernels. To the best of our knowledge, GPURepair is the only tool that can propose a fix for intra-block data races and barrier divergence errors for both CUDA and OpenCL kernels and the only tool that fixes inter-block data races for CUDA kernels. We perform extensive experiments on about 750 kernels and provide a comparison with prior work. We demonstrate the superiority of GPURepair through its capability to fix more kernels and its unique ability to remove redundant barriers and handle inter-block data races.
Saurabh Joshi, Gautam Muduganti

Applications

Frontmatter
A Synchronous Effects Logic for Temporal Verification of Pure Esterel
Abstract
Esterel is an imperative synchronous language that has found success in many safety-critical applications. Its precise semantics makes it natural for programming and reasoning. Existing techniques tackle only one of its main challenges at a time: correctness checking or temporal verification. To resolve both issues simultaneously, we propose a new solution via a Hoare-style forward verifier and a term rewriting system (TRS) on Synced Effects. The first contribution is that, by deploying a novel effects logic, the verifier computes the deterministic program behaviour via construction rules at the source level, defining program evaluation syntactically. The second contribution is that, by avoiding the complex translation from LTL formulas to Esterel programs, our purely algebraic TRS efficiently checks temporal properties described by expressive Synced Effects. To demonstrate our method’s feasibility, we prototype this logic, prove its correctness, and provide experimental results and a number of case studies.
Yahui Song, Wei-Ngan Chin
A Design of GPU-Based Quantitative Model Checking
Abstract
In this paper, we implement a GPU-based quantitative model checker and compare its performance with a CPU-based one. Linear Temporal Logic for Control (LTLC) is a quantitative variation of LTL to describe properties of a linear system, and LTLC-Checker [1] is an implementation of its model checking algorithm. In practice, its long and unpredictable execution time has been a concern in applying the technique to real-time applications such as automatic control systems. In this work, we design an LTLC model checker using a GPGPU programming technique. The resulting model checker is not only faster than the CPU-based one, especially when the problem is not simple, but it also shows less variation in execution time. Furthermore, multiple counterexamples can be found together when the CPU-based checker can find only one.
YoungMin Kwon, Eunhee Kim

Open Access

Formal Semantics and Verification of Network-Based Biocomputation Circuits
Abstract
Network-Based Biocomputation Circuits (NBCs) offer a new paradigm for solving complex computational problems by utilizing biological agents that operate in parallel to explore manufactured planar devices. The approach can also have future applications in diagnostics and medicine by combining NBCs’ computational power with the ability to interface with biological material. To realize this potential, devices should be designed in a way that ensures their correctness and robust operation. For this purpose, formal methods and tools can offer significant advantages by allowing investigation of design limitations and detection of errors before manufacturing and experimentation. Here we define a computational model for NBCs by providing formal semantics to NBC circuits. We present a formal verification-based approach and prototype tool that can assist in the design of NBCs by enabling verification of a given design’s correctness. Our tool allows verification of the correctness of NBC designs for several NP-complete problems, including the Subset Sum, Exact Cover and Satisfiability problems, and can be extended to other NBC implementations. Our approach is based on defining transition systems for NBCs and using temporal logic for specifying and proving properties of the design using model checking. Our formal model can also serve as a starting point for computational complexity studies of the power and limitations of NBC systems.
Michelle Aluf-Medina, Till Korten, Avraham Raviv, Dan V. Nicolau Jr., Hillel Kugler
Netter: Probabilistic, Stateful Network Models
Abstract
We study the problem of using probabilistic network models to formally analyze quantitative network properties, such as the effect of different load-balancing strategies on the long-term traffic on a server farm. Compared to prior work, we explore a different design space in terms of tradeoffs between model expressiveness and analysis scalability, which we realize in a language we call Netter. Netter code is compiled to probabilistic automata, undergoing optimization passes to reduce the state space of the generated models, thus helping verification scale. We evaluate Netter on several case studies, including a probabilistic load balancer, a routing scheme reminiscent of MPLS, and a network defense mechanism against link-flooding attacks. Our results show that Netter can analyze quantitative properties of interesting routing schemes that prior work had not addressed, for networks of small size (4–9 nodes and a few different types of flows). Moreover, when specialized to simpler, stateless networks, Netter can match the performance of previous state-of-the-art tools, scaling up to millions of nodes.
Han Zhang, Chi Zhang, Arthur Azevedo de Amorim, Yuvraj Agarwal, Matt Fredrikson, Limin Jia
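As a flavour of the kind of model involved (a toy simulation, not Netter syntax or its compiled probabilistic automata): a load balancer that routes each request to a uniformly random one of two servers, whose long-run queue imbalance can be estimated as below.

# Toy simulation (not Netter syntax or its compiled probabilistic automata):
# a load balancer routes each arriving request to a uniformly random one of
# two servers; each non-empty queue completes a request with probability 0.6
# per step, and we estimate the long-run imbalance between the two queues.
import random

def average_imbalance(steps=100_000, service_probability=0.6):
    queues = [0, 0]
    total_imbalance = 0
    for _ in range(steps):
        queues[random.randrange(2)] += 1               # random routing
        for i in range(2):                             # probabilistic service
            if queues[i] > 0 and random.random() < service_probability:
                queues[i] -= 1
        total_imbalance += abs(queues[0] - queues[1])
    return total_imbalance / steps

print(f"estimated long-run queue imbalance: {average_imbalance():.2f}")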

Decision Procedures

Frontmatter
Deciding the Bernays-Schoenfinkel Fragment over Bounded Difference Constraints by Simple Clause Learning over Theories
Abstract
Simple clause learning over theories SCL(T) is a decision procedure for the Bernays-Schoenfinkel fragment over bounded difference constraints BS(BD). The BS(BD) fragment consists of clauses built from first-order literals without function symbols together with simple bounds or difference constraints, where for the latter it is required that the variables of the difference constraint are bounded from below and above. The SCL(T) calculus builds model assumptions over a fixed finite set of fresh constants. The model assumptions consist of ground foreground first-order and ground background theory literals. The model assumptions guide inferences on the original clauses with variables. We prove that all clauses generated this way are non-redundant. As a consequence, expensive testing for tautologies and forward subsumption is completely obsolete, and termination with respect to a fixed finite set of constants follows. We prove SCL(T) to be sound and refutationally complete for the combination of the Bernays-Schoenfinkel fragment with any compact theory. Refutational completeness is obtained by enlarging the set of considered constants. For the case of BS(BD) we prove an abstract finite model property such that the size of a sufficiently large set of constants can be fixed a priori.
Martin Bromberger, Alberto Fiori, Christoph Weidenbach
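An illustrative BS(BD) clause (not taken from the paper) combines function-free first-order literals with a difference constraint whose variables are bounded, e.g. \(P(x,y) \vee \lnot Q(y) \vee x - y \le 3\) under the bounds \(0 \le x \le 10\) and \(0 \le y \le 10\).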
Incremental Search for Conflict and Unit Instances of Quantified Formulas with E-Matching
Abstract
We present a new method to find conflicting instances of quantified formulas in the context of SMT solving. Our method splits the search for such instances in two parts. In the first part, E-matching is used to find candidate instances of the quantified formulas. In principle, any existing incremental E-matching technique can be used. The incrementality avoids duplicating work for each small change of the E-graph. Together with the candidate instance, E-matching also provides an existing node in the E-graph corresponding to each term in this instance. In the second part, these nodes are used to evaluate the candidate instance, i.e., without creating new terms. The evaluation can be done in constant time per instance. Our method detects conflicting instances and unit-propagating instances (clauses that propagate new literals). This makes our method suitable for a tight integration with the DPLL(\(\mathcal {T}\)) framework, very much in the style of an additional theory solver.
Jochen Hoenicke, Tanja Schindler
On Preprocessing for Weighted MaxSAT
Abstract
Modern competitive solvers employ various preprocessing techniques to efficiently tackle complex problems. This work introduces two preprocessing techniques to improve solving weighted partial MaxSAT problems: Generalized Boolean Multilevel Optimization (GBMO) and Trimming MaxSAT (TrimMaxSAT).
GBMO refines and extends Boolean Multilevel Optimization (BMO), splitting instances, based on their distribution of weights, into multiple less complex subproblems that are solved one after the other to obtain the overall solution.
The second technique, TrimMaxSAT, finds unsatisfiable soft clauses and removes them from the instance. This reduces the complexity of the MaxSAT instance and works especially well in combination with GBMO. The proposed algorithm works incrementally in a binary search fashion, testing the satisfiability of every soft clause. Furthermore, as a by-product, an initial weight close to the maximum is typically found, which is in turn advantageous w.r.t. the size of, e.g., the Dynamic Polynomial Watchdog (DPW) encoding.
Both techniques can be used by all MaxSAT solvers, though our focus lies on Pseudo Boolean constraint based MaxSAT solvers. Experimental results show the effectiveness of both techniques on a large set of benchmarks from a hardware security application and from the 2019 MaxSAT Evaluation. In particular for the hardest of the application benchmarks, the solver Pacose with GBMO and TrimMaxSAT performs best compared to the MaxSAT Evaluation solvers of 2019. For the benchmarks of the 2019 MaxSAT Evaluation, we show that with the proposed techniques the top solver combination solves significantly more instances.
Tobias Paxian, Pascal Raiola, Bernd Becker
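To illustrate the multilevel idea that GBMO builds on, the sketch below implements a classical BMO-style weight stratification (illustrative only; GBMO refines and extends this): weights are grouped, largest first, closing a level whenever its smallest weight exceeds the sum of all weights still to come, so that the levels can be optimised one after the other.

# Illustrative sketch of classical BMO weight stratification (GBMO refines
# and extends this idea): group soft-clause weights, largest first, and close
# a level whenever its smallest weight exceeds the sum of all weights still
# to come; the resulting levels can then be optimised one after the other.
def bmo_levels(weights):
    levels, current = [], []
    remaining = sum(weights)
    for w in sorted(weights, reverse=True):
        remaining -= w
        current.append(w)
        if w > remaining:          # this level dominates everything below it
            levels.append(current)
            current = []
    if current:                    # leftover weights form the last level
        levels.append(current)
    return levels

print(bmo_levels([1000, 100, 10, 10, 10, 1, 1]))
# [[1000], [100], [10, 10, 10], [1, 1]]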
Compositional Satisfiability Solving in Separation Logic
Abstract
We introduce a novel decision procedure for the satisfiability problem in array separation logic combined with general inductively defined predicates and arithmetic. Our proposal differentiates itself from existing works by solving satisfiability through compositional reasoning. First, following Fermat’s method of infinite descent, it infers for every inductive definition a “base” that precisely characterises its satisfiability. It then utilises the base to derive such a base for any formula in which these inductive predicates occur. In particular, we identify an expressive decidable fragment for the compositionality. We have implemented the proposal in a tool and evaluated it over challenging problems. The experimental results show that compositional satisfiability solving is efficient and that our tool is effective and efficient compared with existing solvers.
Quang Loc Le
Backmatter
Metadata
Title
Verification, Model Checking, and Abstract Interpretation
Editors
Prof. Fritz Henglein
Sharon Shoham
Yakir Vizel
Copyright Year
2021
Electronic ISBN
978-3-030-67067-2
Print ISBN
978-3-030-67066-5
DOI
https://doi.org/10.1007/978-3-030-67067-2
