2020 | Book

Theory and Applications of Satisfiability Testing – SAT 2020

23rd International Conference, Alghero, Italy, July 3–10, 2020, Proceedings

About this book

This book constitutes the proceedings of the 23rd International Conference on Theory and Applications of Satisfiability Testing, SAT 2020, which was planned to take place in Alghero, Italy, during July 5-9, 2020. Due to the coronavirus COVID-19 pandemic, the conference was held virtually.

The 25 full, 9 short, and 2 tool papers presented in this volume were carefully reviewed and selected from 69 submissions. They deal with SAT interpreted in a broad sense, including theoretical advances (such as exact algorithms, proof complexity, and other complexity issues), practical search algorithms, knowledge compilation, implementation-level details of SAT solvers and SAT-based systems, problem encodings and reformulations, applications (including both novel application domains and improvements to existing approaches), as well as case studies and reports on findings based on rigorous experimentation.

Table of Contents

Frontmatter
Sorting Parity Encodings by Reusing Variables

Parity reasoning is challenging for CDCL solvers: refuting a formula consisting of two contradictory, differently ordered parity constraints of modest size is hard. Two alternative methods can solve these reordered parity formulas efficiently: binary decision diagrams and Gaussian elimination (which requires detection of the parity constraints). Yet implementations of these techniques either lack support for proof logging or introduce many extension variables. The compact, commonly used encoding of parity constraints uses Tseitin variables. We present a technique for short clausal proofs that exploits these Tseitin variables to reorder the constraints within the DRAT system. The size of our refutations of reordered parity formulas is $\mathcal{O}(n \log n)$.
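
To make the encoding concrete, here is a minimal sketch (in Python, not the authors' tooling) of the standard chained Tseitin-style CNF encoding of a parity constraint; the DIMACS-style integer literals and variable numbering are illustrative assumptions.

    # Sketch: CNF for x1 xor x2 xor ... xor xn = rhs using fresh Tseitin variables.
    def xor_eq(a, b, c):
        # clauses enforcing c <-> (a xor b)
        return [[-a, -b, -c], [a, b, -c], [a, -b, c], [-a, b, c]]

    def encode_parity(xs, rhs, next_var):
        clauses, acc = [], xs[0]
        for x in xs[1:]:
            t, next_var = next_var, next_var + 1
            clauses += xor_eq(acc, x, t)   # t <-> (acc xor x)
            acc = t
        clauses.append([acc] if rhs == 1 else [-acc])  # fix the final parity
        return clauses, next_var

    # Example: x1 xor x2 xor x3 xor x4 = 1, fresh variables start at 5.
    cnf, top = encode_parity([1, 2, 3, 4], 1, 5)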

Leroy Chew, Marijn J. H. Heule
Community and LBD-Based Clause Sharing Policy for Parallel SAT Solving

Modern parallel SAT solvers rely heavily on effective clause sharing policies for their performance. The core problem being addressed by these policies can be succinctly stated as “the problem of identifying high-quality learnt clauses”. These clauses, when shared between the worker nodes of parallel solvers, should lead to better performance. The term “high-quality clauses” is often defined in terms of metrics that solver designers have identified over years of empirical study. Some of the better-known metrics for identifying high-quality clauses for sharing include clause length, literal block distance (LBD), and clause usage in propagation. In this paper, we propose a new metric aimed at identifying high-quality learnt clauses and a concomitant clause-sharing policy based on a combination of LBD and the community structure of Boolean formulas. The concept of community structure has been proposed as a possible explanation for the extraordinary performance of SAT solvers on industrial instances. Hence, it is a natural candidate as the basis for a metric to identify high-quality clauses. More precisely, our metric identifies clauses that have low LBD and low community number as high-quality for applications such as verification and testing. The community number of a clause C measures the number of different communities of a formula that the variables in C span. We perform an extensive empirical analysis of our metric and clause-sharing policy, and show that our method significantly outperforms state-of-the-art techniques on the benchmarks from the parallel track of the last four SAT competitions.
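
As an illustration of the metric just described, the following sketch (hypothetical names, not the authors' implementation) computes the community number of a clause from a precomputed variable-to-community map and combines it with LBD in a simple sharing filter; the thresholds are arbitrary placeholders.

    # Sketch: clause-quality filter combining LBD and community number.
    def community_number(clause, var_community):
        # number of distinct communities spanned by the clause's variables
        return len({var_community[abs(lit)] for lit in clause})

    def should_share(clause, lbd, var_community, max_lbd=4, max_communities=3):
        return (lbd <= max_lbd and
                community_number(clause, var_community) <= max_communities)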

Vincent Vallade, Ludovic Le Frioux, Souheib Baarir, Julien Sopena, Vijay Ganesh, Fabrice Kordon
Clause Size Reduction with all-UIP Learning

Almost all CDCL SAT solvers use the 1-UIP clause learning scheme for learning new clauses from conflicts, and our current understanding of SAT solving provides good reasons for using that scheme. In particular, the 1-UIP scheme yields asserting clauses, and these asserting clauses have minimum LBD among all possible asserting clauses. As a result of these advantages, other clause learning schemes, like i-UIP and all-UIP, that were proposed in early work are not used in modern solvers. In this paper, we propose a new technique for exploiting the all-UIP clause learning scheme. Our technique is to employ all-UIP learning under the constraint that the learnt clause's LBD does not increase (over the minimum established by the 1-UIP clause). Our method can learn clauses that are significantly smaller than the 1-UIP clause while preserving the minimum LBD. Unlike previous clause minimization methods, our technique is not limited to learning a sub-clause of the 1-UIP clause. We show empirically that our method can improve the performance of state-of-the-art solvers.

Nick Feng, Fahiem Bacchus
Trail Saving on Backtrack

A CDCL SAT solver can backtrack a large distance when it learns a new clause; e.g., when the new learnt clause is a unit clause, the solver has to backtrack to level zero. When the length of the backtrack is large, the solver can end up reproducing many of the same decisions and propagations when it redescends the search tree. Different techniques have been proposed to reduce this potential redundancy, e.g., partial/chronological backtracking and trail saving on restarts. In this paper we present a new trail saving technique that is not restricted to restarts, unlike prior trail saving methods. Our technique makes a copy of the part of the trail that is backtracked over. This saved copy can then be used to improve the efficiency of the solver's subsequent redescent. Furthermore, the saved trail also provides the solver with the ability to look ahead along the previous trail, which can be exploited to improve its efficiency. Our new trail saving technique offers different tradeoffs in comparison with chronological backtracking and often yields superior performance. We also show that our technique is able to improve the performance of state-of-the-art solvers.
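
A rough sketch of the idea (with assumed solver interfaces, not the paper's code): before backtracking, copy the segment of the trail that will be discarded, and consult it during the redescent.

    # Sketch: save the backtracked trail segment and reuse it as a hint.
    def backtrack_with_saving(solver, target_level):
        cut = solver.trail_start_of_level(target_level + 1)   # assumed helper
        solver.saved_trail = [(lit, solver.reason(lit)) for lit in solver.trail[cut:]]
        solver.pop_trail_to(cut)                               # assumed helper

    def replay_saved_trail(solver):
        # Re-enqueue saved literals whose saved reason clause still propagates
        # them under the current (partial) assignment.
        for lit, reason in solver.saved_trail:
            if reason is not None and solver.propagates(reason, lit):  # assumed helper
                solver.enqueue(lit, reason)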

Randy Hickey, Fahiem Bacchus
Four Flavors of Entailment

We present a novel approach for enumerating partial models of a propositional formula, inspired by how theory solvers and the SAT solver interact in lazy SMT. Using various forms of dual reasoning allows our CDCL-based algorithm to enumerate partial models with no need for exploring and shrinking full models. Our focus is on model enumeration without repetition, with potential applications in weighted model counting and weighted model integration for probabilistic inference over Boolean and hybrid domains. Chronological backtracking renders the use of blocking clauses obsolete. We provide a formalization and examples. We further discuss important design choices for a future implementation related to the strength of dual reasoning, including unit propagation, using SAT or QBF oracles.

Sibylle Möhle, Roberto Sebastiani, Armin Biere
Designing New Phase Selection Heuristics

CDCL-based SAT solvers have transformed the field of automated reasoning owing to their demonstrated efficiency at handling problems arising from diverse domains. The success of CDCL solvers is owed to the design of clever heuristics that enable the tight coupling of different components. One of the core components is phase selection, wherein the solver, during branching, decides the polarity of the branch to be explored for a given variable. Most state-of-the-art CDCL SAT solvers employ phase saving as their phase selection heuristic, which was proposed to address the potential inefficiencies arising from far-backtracking. In light of the emergence of chronological backtracking in CDCL solvers, we re-examine the efficiency of phase saving. Our empirical evaluation leads to a surprising conclusion: using the saved phase and choosing the polarity at random for decisions following a chronological backtracking lead to indistinguishable runtime performance in terms of instances solved and PAR-2 score. We introduce the Decaying Polarity Score (DPS) to capture the trend of the polarities attained by a variable and, upon observing no performance improvement due to DPS, we turn to a more sophisticated heuristic seeking to capture the activity of literals and the trend of polarities: Literal State Independent Decaying Sum (LSIDS). We find that the 2019 winning SAT solver, Maple_LCM_Dist_ChronoBTv3, augmented with LSIDS solves 6 more instances while achieving a reduction of over 125 seconds in PAR-2 score, a significant improvement in the context of the SAT competition.
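
The following sketch (an illustrative approximation, not the solver's actual code) shows the general shape of a literal-activity-based phase heuristic in the spirit of LSIDS: a decaying score is kept per literal, literals in learnt clauses are bumped, and the branching polarity of a variable is the one whose literal has the larger score.

    # Sketch: literal-activity-based phase selection.
    from collections import defaultdict

    lit_score = defaultdict(float)
    bump_value, decay = 1.0, 0.95

    def on_learnt_clause(clause):
        global bump_value
        for lit in clause:
            lit_score[lit] += bump_value
        bump_value /= decay          # exponential decay via a growing increment

    def pick_phase(var):
        # choose the polarity (var or -var) with the higher literal score
        return var if lit_score[var] >= lit_score[-var] else -var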

Arijit Shaw, Kuldeep S. Meel
On the Effect of Learned Clauses on Stochastic Local Search

There are two competing paradigms in successful SAT solvers: conflict-driven clause learning (CDCL) and stochastic local search (SLS). CDCL uses systematic exploration of the search space and has the ability to learn new clauses. SLS examines the neighborhood of the current complete assignment. Unlike CDCL, it lacks the ability to learn from its mistakes. This work revolves around the question of whether it is beneficial for SLS to add new clauses to the original formula. We experimentally demonstrate that clauses with a large number of correct literals w.r.t. a fixed solution are beneficial to the runtime of SLS; we call such clauses high-quality clauses. Empirical evaluations show that short clauses learned by CDCL possess the high-quality attribute. We study several domains of randomly generated instances and deduce the most beneficial strategies for adding high-quality clauses as a preprocessing step. The strategies are implemented in an SLS solver, and it is shown that this considerably improves the state of the art on randomly generated instances. The results are statistically significant.
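
For concreteness, the notion of clause quality used above can be illustrated by a small helper (hypothetical, assuming a fixed satisfying assignment represented as a dict from variable to Boolean):

    # Fraction of literals in a clause that agree with a fixed solution.
    def clause_quality(clause, solution):
        correct = sum(1 for lit in clause if solution[abs(lit)] == (lit > 0))
        return correct / len(clause)

    # Example: clause (x1 or not x2 or x3) under solution x1=True, x2=True, x3=False
    print(clause_quality([1, -2, 3], {1: True, 2: True, 3: False}))  # 1/3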

Jan-Hendrik Lorenz, Florian Wörz
SAT Heritage: A Community-Driven Effort for Archiving, Building and Running More Than Thousand SAT Solvers

SAT research has a long history of source code and binary releases, thanks to competitions organized every year. However, since every cycle of competitions has its own set of rules and an ad hoc way of publishing source code and binaries, compiling or even running any solver may be harder than it seems. Moreover, more than a thousand solvers have been published so far, some of them released in the early 90's. If the SAT community wants to archive and keep track of all the solvers that made its history, it urgently needs to undertake a substantial effort. We propose to initiate a community-driven effort to archive all SAT solvers released so far and to allow them to be compiled and run easily. We rely on the best tools for archiving and building binaries (Docker, GitHub, and Zenodo) and provide a consistent and easy way to do so. Thanks to our tool, building (or running) a solver from its source (or from its binary) can be done in one line.

Gilles Audemard, Loïc Paulevé, Laurent Simon
Distributed Cube and Conquer with Paracooba

Cube and conquer is currently the most effective approach to solve hard combinatorial problems in parallel. It organizes the search in two phases. First, a look-ahead solver splits the problem into many sub-problems, called cubes, which are then solved in parallel by incremental CDCL solvers. In this tool paper we present the first fully integrated and automatic distributed cube-and-conquer solver Paracooba targeting cluster and cloud computing. Previous work was limited to multi-core parallelism or relied on manual orchestration of the solving process. Our approach uses one master per problem to initialize the solving process and automatically discovers and releases compute nodes through elastic resource usage. Multiple problems can be solved in parallel on shared compute nodes, controlled by a custom peer-to-peer based load-balancing protocol. Experiments show the scalability of our approach.

Maximilian Heisinger, Mathias Fleury, Armin Biere
Reproducible Efficient Parallel SAT Solving

In this paper, we propose a new reproducible and efficient parallel SAT solving algorithm. Unlike sequential SAT solvers, most parallel solvers do not guarantee reproducible behavior, as they maximize performance instead. The unstable and non-deterministic behavior of parallel SAT solvers hinders their wider adoption in practical applications. In order to achieve robust and efficient parallel SAT solving, we propose two techniques that significantly reduce idle time in deterministic parallel SAT solving: delayed clause exchange and accurate estimation of the execution time of clause exchange intervals between solvers. The experimental results show that our reproducible parallel SAT solver has performance comparable to non-deterministic parallel SAT solvers even in a many-core environment.

Hidetomo Nabeshima, Katsumi Inoue
Improving Implementation of SAT Competitions 2017–2019 Winners

The results of annual SAT competitions are often viewed as milestones showcasing the progress of SAT solvers. However, their competitive nature leads to a situation in which the majority of this year's solvers are based on the previous year's winner. And since the main focus is always on novelty, there are times when some implementation details have potential for improvement but are simply inherited from solver to solver for several years in a row. In this study we propose small modifications to the implementations of existing heuristics in several related SAT solvers. These modifications mostly consist of employing a deterministic strategy for switching between branching heuristics and of augmenting the treatment of Tier2 and Core clauses. In our experiments we show that the proposed changes have a positive effect on the solvers' performance both individually and in combination with each other.

Stepan Kochemazov
On CDCL-Based Proof Systems with the Ordered Decision Strategy

We prove that CDCL SAT solvers with the ordered decision strategy and the DECISION learning scheme are equivalent to ordered resolution. We also prove that, by replacing this learning scheme with its opposite, which learns the first possible non-conflict clause, they become equivalent to general resolution. In both results, we allow nondeterminism in the solver's ability to perform unit propagation, conflict analysis, and restarts, in a way that is similar to previous works in the literature. To aid the presentation of our results, and possibly future research, we define a model and language for CDCL-based proof systems, particularly those with nonstandard features, that allow for succinct and precise theorem statements.

Nathan Mull, Shuo Pang, Alexander Razborov
Equivalence Between Systems Stronger Than Resolution

In recent years there has been increasing interest in studying proof systems stronger than Resolution, with the aim of building more efficient SAT solvers based on them. In defining these proof systems, we try to find a balance between the power of the proof system (the size of the proofs required to refute a formula) and the difficulty of finding the proofs. Among those proof systems we can mention Circular Resolution, MaxSAT Resolution with Extensions, and MaxSAT Resolution with the Dual-Rail encoding. In this paper we study the relative power of those proof systems from a theoretical perspective. We prove that Circular Resolution and MaxSAT Resolution with Extensions are polynomially equivalent proof systems. This result is generalized to arbitrary sets of inference rules with proof constructions based on circular graphs or on weighted clauses. We also prove that when we restrict the Split rule (which both systems use) to bounded-size clauses, these two restricted systems are also equivalent. Finally, we show the relationship between these two restricted systems and Dual-Rail MaxSAT Resolution.

Maria Luisa Bonet, Jordi Levy
Simplified and Improved Separations Between Regular and General Resolution by Lifting

We give a significantly simplified proof of the exponential separation between regular and general resolution of Alekhnovich et al. (2007) as a consequence of a general theorem lifting proof depth to regular proof length in resolution. This simpler proof then allows us to strengthen the separation further, and to construct families of theoretically very easy benchmarks that are surprisingly hard for SAT solvers in practice.

Marc Vinyals, Jan Elffers, Jan Johannsen, Jakob Nordström
Mycielski Graphs and Proofs

Mycielski graphs are a family of triangle-free graphs $M_k$ with arbitrarily high chromatic number. $M_k$ has chromatic number $k$, and there is a short informal proof of this fact, yet finding proofs of it via automated reasoning techniques has proved to be a challenging task. In this paper, we study the complexity of clausal proofs of the uncolorability of $M_k$ with $k-1$ colors. In particular, we consider variants of the PR (propagation redundancy) proof system that are without new variables, and with or without deletion. These proof systems are of interest due to their potential uses for proof search. As our main result, we present a sublinear-length and constant-width PR proof without new variables or deletion. We also implement a proof generator and verify the correctness of our proof. Furthermore, we consider formulas extended with clauses from the proof until a short resolution proof exists, and investigate the performance of CDCL in finding the short proof. This turns out to be difficult for CDCL with the standard heuristics. Finally, we describe an approach inspired by SAT sweeping to find proofs of these extended formulas.
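
For readers unfamiliar with the graph family, the Mycielskian construction itself is short; the sketch below (illustrative, not the paper's generator) builds $M_{k+1}$ from $M_k$, starting from $M_2 = K_2$.

    # Sketch: the Mycielskian of a graph on vertices 0..n-1 (edges as pairs).
    def mycielskian(edges, n):
        new_edges = list(edges)
        for (u, v) in edges:
            new_edges += [(u, n + v), (v, n + u)]   # connect shadow copies to originals
        w = 2 * n                                    # the new apex vertex
        new_edges += [(n + i, w) for i in range(n)]
        return new_edges, 2 * n + 1

    # M_2 = K_2; repeated application yields M_3 (C_5), M_4 (the Groetzsch graph), ...
    edges, n = [(0, 1)], 2
    for _ in range(2):
        edges, n = mycielskian(edges, n)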

Emre Yolcu, Xinyu Wu, Marijn J. H. Heule
Towards a Better Understanding of (Partial Weighted) MaxSAT Proof Systems

MaxSAT is a very popular language for discrete optimization with many domains of application. While there has been a lot of progress in MaxSAT solvers during the last decade, the theoretical analysis of MaxSAT inference has not kept pace. Aiming to compensate for that imbalance, in this paper we take a proof complexity approach to MaxSAT resolution-based proof systems. First, we give some basic definitions of completeness and show that refutational completeness makes completeness redundant, as happens in SAT. Then we take three inference rules such that adding them sequentially allows us to navigate from the weakest to the strongest resolution-based MaxSAT system available (i.e., from standalone MaxSAT resolution to the recently proposed ResE), each rule making the system stronger. Finally, we show that the strongest system captures the recently proposed concept of Circular Proof while being conceptually simpler, since weights, which are intrinsic in MaxSAT, naturally guarantee the flow condition required in the SAT case.

Javier Larrosa, Emma Rollon
Towards a Complexity-Theoretic Understanding of Restarts in SAT Solvers

Restarts are a widely-used class of techniques integral to the efficiency of Conflict-Driven Clause Learning (CDCL) Boolean SAT solvers. While the utility of such policies has been well established empirically, a theoretical understanding of whether restarts are indeed crucial to the power of CDCL solvers is missing. In this paper, we prove a series of theoretical results that characterize the power of restarts for various models of SAT solvers. More precisely, we make the following contributions. First, we prove an exponential separation between a drunk randomized CDCL solver model with restarts and the same model without restarts, using a family of satisfiable instances. Second, we show that the configuration of a CDCL solver with VSIDS branching and restarts (with activities erased after restarts) is exponentially more powerful than the same configuration without restarts for a family of unsatisfiable instances. To the best of our knowledge, these are the first separation results involving restarts in the context of SAT solvers. Third, we show that restarts do not add any proof-complexity-theoretic power vis-à-vis a number of models of CDCL and DPLL solvers with non-deterministic static variable and value selection.

Chunxiao Li, Noah Fleming, Marc Vinyals, Toniann Pitassi, Vijay Ganesh
On the Sparsity of XORs in Approximate Model Counting

Given a Boolean formula $\varphi$, the problem of model counting, also referred to as #SAT, is to compute the number of solutions of $\varphi$. Hashing-based techniques for approximate counting have emerged as a dominant approach, promising both scalability and rigorous theoretical guarantees. The standard construction of strongly 2-universal hash functions employs dense XORs (i.e., involving half of the variables in expectation), which are widely known to degrade the runtime performance of state-of-the-art SAT solvers. Consequently, the past few years have witnessed intense activity in the design of sparse XORs as hash functions. Such constructions have been proposed in the belief that they provide runtime performance improvements along with theoretical guarantees similar to those of dense XORs. The primary contribution of this paper is a rigorous theoretical and empirical analysis of the effect of the sparsity of XORs. Contrary to the prior belief that the analysis of sparse hash functions applies to all hashing-based techniques, we prove a negative result: the best-known bounds obtained for sparse XORs are still too weak to yield theoretical guarantees for a large class of hashing-based techniques, including the state-of-the-art approach ApproxMC3. We then turn to a rigorous empirical analysis of the performance benefits of sparse hash functions. To this end, we first design, to the best of our knowledge, the most efficient algorithm using sparse hash functions, called SparseCount2, which achieves up to two orders of magnitude performance improvement over its predecessor. Contradicting current beliefs, we observe that SparseCount2 still falls short of ApproxMC3 in runtime performance despite the use of dense XORs in ApproxMC3. In conclusion, our work shows that the question of whether it is possible to use short XORs to achieve scalability while providing strong theoretical guarantees is still wide open.
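
The density distinction discussed above can be illustrated with a tiny sketch (illustrative only): each hash constraint is a random XOR over the formula's variables, with each variable included with probability 1/2 for dense XORs and a smaller probability for sparse ones.

    # Sketch: sampling a random XOR constraint of a given density.
    import random

    def random_xor(num_vars, density=0.5):
        chosen = [v for v in range(1, num_vars + 1) if random.random() < density]
        rhs = random.randint(0, 1)
        return chosen, rhs            # meaning: XOR of the chosen variables = rhs

    dense_xor = random_xor(100)          # ~50 variables in expectation
    sparse_xor = random_xor(100, 0.1)    # ~10 variables: easier for SAT solvers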

Durgesh Agrawal, Bhavishya, Kuldeep S. Meel
A Faster Algorithm for Propositional Model Counting Parameterized by Incidence Treewidth

The propositional model counting problem (#SAT) is known to be fixed-parameter tractable (FPT) when parameterized by the width $k$ of a given tree decomposition of the incidence graph. The running time of the fastest known FPT algorithm contains the exponential factor $4^k$. We improve this factor to $2^k$ by utilizing fast algorithms for computing the zeta transform and covering product of functions representing partial model counts, thereby achieving the same running time as FPT algorithms that are parameterized by the less general treewidth of the primal graph. Our new algorithm is asymptotically optimal unless the Strong Exponential Time Hypothesis (SETH) fails.
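
As background for the transform mentioned above, the fast zeta transform over subsets of a $k$-element ground set runs in $\mathcal{O}(2^k \cdot k)$ time; a generic standalone sketch (not the paper's algorithm) follows.

    # Sketch: fast zeta transform; g[S] = sum of f[T] over all subsets T of S.
    def zeta_transform(f, k):
        g = list(f)                      # f is indexed by bitmasks 0 .. 2^k - 1
        for i in range(k):
            for s in range(1 << k):
                if s & (1 << i):
                    g[s] += g[s ^ (1 << i)]
        return g

    # Example over a ground set of size 3:
    print(zeta_transform([1, 0, 2, 0, 0, 0, 0, 5], 3))  # -> [1, 1, 3, 3, 1, 1, 3, 8]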

Friedrich Slivovsky, Stefan Szeider
Abstract Cores in Implicit Hitting Set MaxSat Solving

Maximum Satisfiability (MaxSat) solving is an active area of research motivated by numerous successful applications to solving NP-hard combinatorial optimization problems. One of the most successful approaches to solving MaxSat instances arising from real world applications is the Implicit Hitting Set (IHS) approach. IHS solvers are complete MaxSat solvers that harness the strengths of both Boolean Satisfiability (SAT) and Integer Linear Programming (IP) solvers by decoupling core-extraction and optimization. While such solvers show state-of-the-art performance on many instances, it is known that there exist MaxSat instances on which IHS solvers need to extract an exponential number of cores before terminating. Motivated by the structure of the simplest of these problematic instances, we propose a technique we call abstract cores that provides a compact representation for a potentially exponential number of regular cores. We demonstrate how to incorporate abstract core reasoning into the IHS algorithm and report on an empirical evaluation demonstrating that including abstract cores into a state-of-the-art IHS solver improves its performance enough to surpass the best performing solvers of the most recent 2019 MaxSat Evaluation.

Jeremias Berg, Fahiem Bacchus, Alex Poole
MaxSAT Resolution and Subcube Sums

We study the MaxRes rule in the context of certifying unsatisfiability. We show that it can be exponentially more powerful than tree-like resolution, and when augmented with weakening (the system MaxResW), p-simulates tree-like resolution. In devising a lower bound technique specific to MaxRes (and not merely inheriting lower bounds from Res), we define a new semialgebraic proof system called the SubCubeSums proof system. This system, which p-simulates MaxResW, is a special case of the Sherali–Adams proof system. In expressivity, it is the integral restriction of conical juntas studied in the contexts of communication complexity and extension complexity. We show that it is not simulated by Res. Using a proof technique qualitatively different from the lower bounds that MaxResW inherits from Res, we show that Tseitin contradictions on expander graphs are hard to refute in SubCubeSums. We also establish a lower bound technique via lifting: for formulas requiring large degree in SubCubeSums, their XOR-ification requires large size in SubCubeSums.

Yuval Filmus, Meena Mahajan, Gaurav Sood, Marc Vinyals
A Lower Bound on DNNF Encodings of Pseudo-Boolean Constraints

Two major considerations when encoding pseudo-Boolean (PB) constraints into SAT are the size of the encoding and its propagation strength, that is, the guarantee that it behaves well under unit propagation. Several encodings with propagation strength guarantees rely upon prior compilation of the constraints into DNNF (decomposable negation normal form), BDD (binary decision diagram), or some other sub-variants. However, it has been shown that there exist PB constraints whose ordered BDD (OBDD) representations, and thus the inferred CNF encodings, all have exponential size. Since DNNFs are more succinct than OBDDs, preferring encodings via DNNF to avoid size explosion seems a legitimate choice. Yet in this paper, we prove the existence of PB constraints whose DNNFs all require exponential size.

Alexis de Colnet
On Weakening Strategies for PB Solvers

Current pseudo-Boolean solvers implement different variants of the cutting planes proof system to infer new constraints during conflict analysis. One of these variants is generalized resolution, which allows the inference of strong constraints but suffers from the growth of the coefficients it generates while combining pseudo-Boolean constraints. Another variant consists of using weakening and division, which is more efficient in practice but may infer weaker constraints. In both cases, weakening is mandatory to derive conflicting constraints. However, its impact on the performance of pseudo-Boolean solvers has not been assessed so far. In this paper, new application strategies for this rule are studied, aiming to infer strong constraints with small coefficients. We implemented them in Sat4j and observed that each of them improves the runtime of the solver. While none of them performs better than the others on all benchmarks, applying weakening on the conflict side shows surprisingly good performance, whereas applying partial weakening and division on both the conflict and the reason sides provides the best results overall.
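
For reference, the two rules discussed above can be stated in their standard cutting-planes form (general textbook formulations, not a description of Sat4j's strategies), where the $\ell_i$ are literals, the $a_i$ and $d$ are non-negative integers, and $c > 0$:

    Weakening (on $\ell_j$):  from $\sum_i a_i \ell_i \ge d$  derive  $\sum_{i \ne j} a_i \ell_i \ge d - a_j$
    Division (by $c$):        from $\sum_i a_i \ell_i \ge d$  derive  $\sum_i \lceil a_i / c \rceil \ell_i \ge \lceil d / c \rceil$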

Daniel Le Berre, Pierre Marquis, Romain Wallon
Reasoning About Strong Inconsistency in ASP

The last decade has witnessed remarkable improvements in the analysis of inconsistent formulas, namely in the case of Boolean Satisfiability (SAT) formulas. However, these successes have been restricted to monotonic logics. Recent work proposed the notion of strong inconsistency for a number of non-monotonic logics, including Answer Set Programming (ASP). This paper shows how algorithms for reasoning about inconsistency in monotonic logics can be extended to the case of ASP programs, in the concrete case of strong inconsistency. Initial experimental results illustrate the potential of the proposed approach.

Carlos Mencía, Joao Marques-Silva
Taming High Treewidth with Abstraction, Nested Dynamic Programming, and Database Technology

Treewidth is one of the most prominent structural parameters. While numerous theoretical results establish tractability under the assumption of fixed treewidth, the practical success of exploiting this parameter is far behind what theoretical runtime bounds have promised. In particular, a naive application of dynamic programming (DP) on tree decompositions (TDs) already struggles on instances of medium width. In this paper, we present several measures to advance this paradigm towards general applicability in practice: We present nested DP, where different levels of abstraction are used to (recursively) compute TDs of a given instance. Further, we integrate the concept of hybrid solving, where subproblems hidden by the abstraction are solved by classical search-based solvers, which leads to an interleaving of parameterized and classical solving. Finally, we provide nested DP algorithms and implementations relying on database technology for variants and extensions of Boolean satisfiability. Experiments indicate that the advancements are promising.

Markus Hecher, Patrick Thier, Stefan Woltran
Reducing Bit-Vector Polynomials to SAT Using Gröbner Bases

We address the satisfiability of systems of polynomial equations over bit-vectors. Instead of conventional bit-blasting, we exploit word-level inference to translate these systems into non-linear pseudo-Boolean constraints. We derive the pseudo-Boolean constraints by simulating bit assignments through the addition of (linear) polynomials and applying a strong form of propagation by computing Gröbner bases. By handling bit assignments symbolically, the number of Gröbner basis calculations, along with the number of assignments, is reduced. The final Gröbner basis yields expressions for the bit-vectors in terms of the symbolic bits, together with non-linear pseudo-Boolean constraints on the symbolic variables, modulo a power of two. The pseudo-Boolean constraints can be solved by translation into classical linear pseudo-Boolean constraints (without a modulo) or by encoding them as propositional formulae, for which a novel translation process is described.

Thomas Seed, Andy King, Neil Evans
Speeding up Quantified Bit-Vector SMT Solvers by Bit-Width Reductions and Extensions

Recent experiments have shown that the satisfiability of a quantified bit-vector formula coming from practical applications almost never changes after reducing all bit-widths in the formula to a small number of bits. This paper proposes a novel technique based on this observation. Roughly speaking, a given quantified bit-vector formula is reduced and sent to a solver, and an obtained model is then extended to the original bit-widths and verified against the original formula. We also present an experimental evaluation demonstrating that this technique can significantly improve the performance of the state-of-the-art SMT solvers Boolector, CVC4, and Q3B on quantified bit-vector formulas from the SMT-LIB repository.
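
Schematically, the approach can be pictured as the following loop (all helper functions are hypothetical placeholders, not the names used by the authors or by any particular solver):

    # Sketch: reduce bit-widths, solve, extend the model, verify, else fall back.
    def solve_with_reduction(formula, small_width=4):
        reduced = reduce_bit_widths(formula, small_width)     # hypothetical helper
        result = solve(reduced)                               # hypothetical SMT call
        if result.is_sat():
            candidate = extend_model(result.model, formula)   # zero-/sign-extend bit-vector values
            if is_model_of(candidate, formula):               # hypothetical verification check
                return "sat", candidate
        full = solve(formula)                                 # fall back to the original formula
        return full.status, full.model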

Martin Jonáš, Jan Strejček
Strong (D)QBF Dependency Schemes via Tautology-Free Resolution Paths

We suggest a general framework to study dependency schemes for dependency quantified Boolean formulas (DQBF). As our main contribution, we exhibit a new tautology-free DQBF dependency scheme that generalises the reflexive resolution path dependency scheme. We establish soundness of the tautology-free scheme, implying that it can be used in any DQBF proof system. We further explore the power of DQBF resolution systems parameterised by dependency schemes and show that our new scheme results in exponentially shorter proofs in comparison to the reflexive resolution path dependency scheme when used in the expansion DQBF system $\forall\textsf{Exp+Res}$. On QBFs, we demonstrate that our new scheme is exponentially stronger than the reflexive resolution path dependency scheme when used in Q-resolution, thus resulting in the strongest QBF dependency scheme known to date.

Olaf Beyersdorff, Joshua Blinkhorn, Tomáš Peitl
Short Q-Resolution Proofs with Homomorphisms

We introduce new proof systems for quantified Boolean formulas (QBFs) by enhancing Q-resolution systems with rules which exploit local and global symmetries. The rules are based on homomorphisms that admit non-injective mappings between literals. This results in systems that are stronger than Q-resolution with (injective) symmetry rules. We further strengthen the systems by utilizing a dependency system D in a way that surpasses Q(D)-resolution in relative strength.

Ankit Shukla, Friedrich Slivovsky, Stefan Szeider
Multi-linear Strategy Extraction for QBF Expansion Proofs via Local Soundness

In applications, QBF solvers are expected to not only decide whether a given formula is true or false but also return a solution in the form of a strategy. Determining whether strategies can be efficiently extracted from proof traces generated by QBF solvers is a fundamental research task. Most resolution-based proof systems are known to implicitly support polynomial-time strategy extraction through a simulation of the evaluation game associated with an input formula, but this approach introduces large constant factors and results in unwieldy circuit representations. In this work, we present an explicit polynomial-time strategy extraction algorithm for the $\forall\textsf{Exp+Res}$ proof system. This system is used by expansion-based solvers that implement counterexample-guided abstraction refinement (CEGAR), currently one of the most effective QBF solving paradigms. Our argument relies on a Curry-Howard style correspondence between strategies and $\forall\textsf{Exp+Res}$ derivations, where each strategy realizes an invariant obtained from an annotated clause derived in the proof system.

Matthias Schlaipfer, Friedrich Slivovsky, Georg Weissenbacher, Florian Zuleger
Positional Games and QBF: The Corrective Encoding

Positional games are a mathematical class of two-player games comprising Tic-tac-toe and its generalizations. We propose a novel encoding of these games into Quantified Boolean Formulas (QBFs) such that a game instance admits a winning strategy for the first player if and only if the corresponding formula is true. Our approach improves over previous QBF encodings of games in multiple ways. First, it is generic and lets us encode other positional games, such as Hex. Second, structural properties of positional games, together with a careful treatment of illegal moves, let us generate more compact instances that can be solved faster by state-of-the-art QBF solvers. We establish the latter fact through extensive experiments. Finally, the compactness of our new encoding makes it feasible to translate realistic game problems. We identify a few such problems of historical significance and put them forward to the QBF community as milestones of increasing difficulty.

Valentin Mayer-Eichberger, Abdallah Saffidine
Matrix Multiplication: Verifying Strong Uniquely Solvable Puzzles

Cohn and Umans proposed a framework for developing fast matrix multiplication algorithms based on embedding the computation in certain group algebras [9]. In subsequent work with Kleinberg and Szegedy, they connected this to the search for combinatorial objects called strong uniquely solvable puzzles (strong USPs) [8]. We begin a systematic computer-aided search for these objects. We develop and implement algorithms based on reductions to SAT and IP to verify that puzzles are strong USPs and to search for large strong USPs. We produce tight bounds on the maximum size of a strong USP for width $k < 6$ and construct puzzles of small width that are larger than those of previous work. Although our work only deals with puzzles of small constant width and does not produce a new, faster matrix multiplication algorithm, we provide evidence that there exist families of strong USPs that imply matrix multiplication algorithms more efficient than those currently known.

Matthew Anderson, Zongliang Ji, Anthony Yang Xu
Satisfiability Solving Meets Evolutionary Optimisation in Designing Approximate Circuits

Approximate circuits that trade the chip area or power consumption for the precision of the computation play a key role in the development of energy-aware systems. Designing complex approximate circuits is, however, very difficult, especially when a given approximation error has to be guaranteed. Evolutionary search algorithms together with SAT-based error evaluation currently represent one of the most successful approaches to automated circuit approximation. In this paper, we apply satisfiability solving not only for circuit evaluation but also for its minimisation. We consider and evaluate several approaches to this task, both inspired by existing works and novel ones. Our experiments show that the combined strategy we propose, integrating evolutionary search and SMT-based sub-circuit minimisation (using the quantified theory of arrays), is able to find complex approximate circuits (e.g. 16-bit multipliers) with considerably better trade-offs between circuit precision and size than existing approaches.

Milan Češka, Jiří Matyáš, Vojtech Mrazek, Tomáš Vojnar
SAT Solving with Fragmented Hamiltonian Path Constraints for Wire Arc Additive Manufacturing

In Wire Arc Additive Manufacturing (WAAM), an object is welded from scratch. Finding feasible welding paths that make use of the potential of the technology is a computationally complex problem, as it requires planning paths in 3D. All parts of the object to be manufactured have to be visited in few welding paths. The search for such welding paths in 3D can be mapped to searching for a fragmented Hamiltonian path in a mathematical graph. We propose a SAT-based approach to finding such fragmented Hamiltonian paths that is suitable for planning WAAM paths. We show how to encode the search for such paths as a mix of SAT clauses and one non-clausal constraint that can be integrated into the SAT solver itself. The reasoning power of the solver enables us to impose additional constraints coming from the application domain on the planned paths, and we show experimentally that in this way we can find welding paths for relatively complex object geometries.

Rüdiger Ehlers, Kai Treutler, Volker Wesling
SAT-Based Encodings for Optimal Decision Trees with Explicit Paths

Decision trees play an important role in both Machine Learning and Knowledge Representation. They are attractive due to their immediate interpretability. In the spirit of Occam's razor, and of interpretability, it is desirable to compute the smallest tree. This, however, has proven to be a challenging task, and greedy approaches are typically used to learn trees in practice. Nevertheless, recent work showed that, by using SAT solvers, one may compute the optimal-size tree for real-world benchmarks. This paper proposes a novel SAT-based encoding that explicitly models paths in the tree, which enables us to control the tree's depth as well as its size. At the level of individual SAT calls, we investigate splitting the search space into tree topologies. Our tool outperforms the existing implementation. Moreover, the experimental results show that minimizing the depth first and then minimizing the number of nodes enables solving a larger set of instances.

Mikoláš Janota, António Morgado
Incremental Encoding of Pseudo-Boolean Goal Functions Based on Comparator Networks

Incremental techniques have been widely used in solving problems reducible to SAT and MaxSAT instances. When an algorithm requires making subsequent runs of a SAT solver on a slightly changing input formula, it is usually beneficial to change the strategy so that the algorithm operates on a single instance of a SAT solver. One way to do this is via a mechanism called assumptions, which allows knowledge to be accumulated and reused from one iteration to the next so that, in consequence, the input formula need not be rebuilt during the computation. In this paper we propose an encoding of a pseudo-Boolean goal function that is based on sorting networks and can be provided to a SAT solver only once. Then, during the optimization process, different bounds on the value of the function can be given to the solver by appropriate sets of assumptions. The experimental results show that the proposed technique is sound, that is, it increases the number of solved instances and reduces the average time and memory used by the solver on solved instances.
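
To illustrate the assumption mechanism in the unweighted (cardinality) special case, here is a hedged sketch using the PySAT interface; encode_sorter is a hypothetical helper that returns CNF clauses for a sorting network over the given input literals together with its sorted output literals (largest first), so that assuming -outputs[k] enforces "at most k inputs are true" without re-encoding.

    # Sketch: one sorter encoding, tightened between calls purely via assumptions.
    from pysat.solvers import Glucose3

    def minimize_true_inputs(hard_clauses, inputs, encode_sorter):
        sorter_cnf, outputs = encode_sorter(inputs)   # hypothetical helper
        best, bound = None, len(inputs)
        with Glucose3(bootstrap_with=hard_clauses + sorter_cnf) as solver:
            # Assuming -outputs[bound..] forces at most 'bound' true inputs.
            while solver.solve(assumptions=[-outputs[i] for i in range(bound, len(outputs))]):
                best = solver.get_model()
                bound = sum(1 for x in inputs if best[x - 1] > 0) - 1  # tighten the bound
                if bound < 0:
                    break
        return best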

Michał Karpiński, Marek Piotrów
Backmatter
Metadata
Title
Theory and Applications of Satisfiability Testing – SAT 2020
Edited by
Luca Pulina
Martina Seidl
Copyright Year
2020
Electronic ISBN
978-3-030-51825-7
Print ISBN
978-3-030-51824-0
DOI
https://doi.org/10.1007/978-3-030-51825-7