Open Access | 2022 | Book

Tools and Algorithms for the Construction and Analysis of Systems

28th International Conference, TACAS 2022, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022, Munich, Germany, April 2–7, 2022, Proceedings, Part II

About this book

This open access book constitutes the proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022, which was held during April 2–7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022.

The 46 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 159 submissions. The proceedings also contain 16 tool papers of the affiliated competition SV-COMP and 1 paper presenting the competition report.

TACAS is a forum for researchers, developers, and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility, and efficiency of tools and algorithms for building computer-controlled systems.

Table of contents

Frontmatter

Probabilistic Systems

Frontmatter

Open Access

A Probabilistic Logic for Verifying Continuous-time Markov Chains

A continuous-time Markov chain (CTMC) execution is a continuous class of probability distributions over states. This paper proposes a probabilistic linear-time temporal logic, namely continuous-time linear logic (CLL), to reason about the probability distribution execution of CTMCs. We define the syntax of CLL on the space of probability distributions. The syntax of CLL includes multiphase timed until formulas, and the semantics of CLL allows time reset to study relative temporal properties. We derive a corresponding model-checking algorithm for CLL formulas. The correctness of the model-checking algorithm depends on Schanuel’s conjecture, a central open problem in transcendental number theory. Furthermore, we provide a running example of CTMCs to illustrate our method.

Ji Guan, Nengkun Yu

Under-Approximating Expected Total Rewards in POMDPs

We consider the problem: is the optimal expected total reward to reach a goal state in a partially observable Markov decision process (POMDP) below a given threshold? We tackle this—generally undecidable—problem by computing under-approximations of these expected total rewards. This is done by abstracting finite unfoldings of the infinite belief MDP of the POMDP. The key issue is to find a suitable under-approximation of the value function. We provide two techniques: a simple (cut-off) technique that uses a good policy on the POMDP, and a more advanced technique (belief clipping) that uses minimal shifts of probabilities between beliefs. We use mixed-integer linear programming (MILP) to find such minimal probability shifts and experimentally show that our techniques scale quite well while providing tight lower bounds on the expected total reward.

Alexander Bork, Joost-Pieter Katoen, Tim Quatmann

Correct Probabilistic Model Checking with Floating-Point Arithmetic

Probabilistic model checking computes probabilities and expected values related to designated behaviours of interest in Markov models. As a formal verification approach, it is applied to critical systems; thus we trust that probabilistic model checkers deliver correct results. To achieve scalability and performance, however, these tools use finite-precision floating-point numbers to represent and calculate probabilities and other values. As a consequence, their results are affected by rounding errors that may accumulate and interact in hard-to-predict ways. In this paper, we show how to implement fast and correct probabilistic model checking by exploiting the ability of current hardware to control the direction of rounding in floating-point calculations. We outline the complications in achieving correct rounding from higher-level programming languages, describe our implementation as part of the Modest Toolset’s mcsta model checker, and exemplify the tradeoffs between performance and correctness in an extensive experimental evaluation across different operating systems and CPU architectures.

Arnd Hartmanns
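
The core idea — running the same computation twice, with all rounding forced downwards and then upwards, to obtain a certified bracket around the true value — can be illustrated outside hardware floating point. The sketch below is not from the paper: it substitutes Python's `decimal` contexts for the IEEE rounding modes that mcsta controls, and uses a made-up three-state Markov chain (from state s: probability 1/3 to the goal, 1/3 to a sink, 1/3 back to s) whose exact reachability probability is 1/2.

```python
from decimal import Decimal, Context, ROUND_FLOOR, ROUND_CEILING

def reach_prob(rounding, iters=200):
    """Value iteration for P(reach goal), with every arithmetic
    operation rounded in the chosen direction.  The fixed point of
    x = 1/3 + (1/3) * x is exactly 1/2."""
    ctx = Context(prec=25, rounding=rounding)
    third = ctx.divide(Decimal(1), Decimal(3))  # 1/3, rounded directionally
    x = Decimal(0)
    for _ in range(iters):
        x = ctx.add(third, ctx.multiply(third, x))
    return x

lo = reach_prob(ROUND_FLOOR)    # certified lower bound on 1/2
hi = reach_prob(ROUND_CEILING)  # certified upper bound on 1/2
```

Because every intermediate result is rounded in the same direction, `lo <= 1/2 <= hi` holds by construction. mcsta instead switches the FPU rounding direction around native double operations, which is where the complications with higher-level languages discussed in the paper arise.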

Correlated Equilibria and Fairness in Concurrent Stochastic Games

Game-theoretic techniques and equilibria analysis facilitate the design and verification of competitive systems. While the algorithmic complexity of equilibria computation has been extensively studied, practical implementation and application of game-theoretic methods is more recent. Tools such as PRISM-games support automated verification and synthesis of zero-sum and (ε-optimal subgame-perfect) social welfare Nash equilibria properties for concurrent stochastic games. However, these methods become inefficient as the number of agents grows and may also generate equilibria that yield significant variations in the outcomes for individual agents. We extend the functionality of PRISM-games to support correlated equilibria, in which players can coordinate through public signals, and introduce a novel optimality criterion of social fairness, which can be applied to both Nash and correlated equilibria. We show that correlated equilibria are easier to compute, are more equitable, and can also improve joint outcomes. We implement algorithms for both normal form games and the more complex case of multi-player concurrent stochastic games with temporal logic specifications. On a range of case studies, we demonstrate the benefits of our methods.

Marta Kwiatkowska, Gethin Norman, David Parker, Gabriel Santos

Omega Automata

Frontmatter

A Direct Symbolic Algorithm for Solving Stochastic Rabin Games

We consider turn-based stochastic 2-player games on graphs with ω-regular winning conditions. We provide a direct symbolic algorithm for solving such games when the winning condition is formulated as a Rabin condition. For a stochastic Rabin game with k pairs over a game graph with n vertices, our algorithm runs in O(n^{k+2} k!) symbolic steps, which improves the state of the art. We have implemented our symbolic algorithm, along with performance optimizations including parallelization and acceleration, in a BDD-based synthesis tool called Fairsyn. We demonstrate the superiority of Fairsyn compared to the state of the art on a set of synthetic benchmarks derived from the VLTS benchmark suite and on a control system benchmark from the literature. In our experiments, Fairsyn performed significantly faster, with up to two orders of magnitude improvement in computation time.

Tamajit Banerjee, Rupak Majumdar, Kaushik Mallik, Anne-Kathrin Schmuck, Sadegh Soudjani

Practical Applications of the Alternating Cycle Decomposition

In 2021, Casares, Colcombet, and Fijalkow introduced the Alternating Cycle Decomposition (ACD) to study properties and transformations of Muller automata. We present the first practical implementation of the ACD in two different tools, Owl and Spot, and adapt it to the framework of Emerson-Lei automata, i.e., ω-automata whose acceptance conditions are defined by Boolean formulas. The ACD provides a transformation of Emerson-Lei automata into parity automata with strong optimality guarantees: the resulting parity automaton is minimal among those automata that can be obtained by duplication of states. Our empirical results show that this transformation is usable in practice. Further, we show how the ACD can generalize many other specialized constructions such as deciding typeness of automata and degeneralization of generalized Büchi automata, providing a framework of practical algorithms for ω-automata.

Antonio Casares, Alexandre Duret-Lutz, Klara J. Meyer, Florian Renkin, Salomon Sickert

Sky Is Not the Limit
Tighter Rank Bounds for Elevator Automata in Büchi Automata Complementation

We propose several heuristics for mitigating one of the main causes of combinatorial explosion in rank-based complementation of Büchi automata (BAs): unnecessarily high bounds on the ranks of states. First, we identify elevator automata, a large class of BAs (generalizing semi-deterministic BAs) that occurs often in practice, in which the ranks of states are bounded according to the structure of strongly connected components. The bounds for elevator automata also carry over to general BAs that contain elevator automata as a sub-structure. Second, we introduce two techniques for refining bounds on the ranks of BA states using data-flow analysis of the automaton. We implement our techniques as an extension of the tool Ranker for BA complementation and show that they indeed greatly prune the generated state space, obtaining significantly better results and outperforming other state-of-the-art tools on a large set of benchmarks.

Vojtěch Havlena, Ondřej Lengál, Barbora Šmahlíková

On-The-Fly Solving for Symbolic Parity Games

Parity games can be used to represent many different kinds of decision problems. In practice, tools that use parity games often rely on a specification in a higher-order logic from which the actual game can be obtained by means of an exploration. For many of these decision problems we are only interested in the solution for a designated vertex in the game. We formalise how to use on-the-fly solving techniques during the exploration process, and show that this can help to decide the winner of such a designated vertex in an incomplete game. Furthermore, we define partial solving techniques for incomplete parity games and show how these can be made resilient to work directly on the incomplete game, rather than on a set of safe vertices. We implement our techniques for symbolic parity games and study their effectiveness in practice, showing that speed-ups of several orders of magnitude are feasible and overhead (if unavoidable) is typically low.

Maurice Laveaux, Wieger Wesselink, Tim A. C. Willemse
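
The building block underneath most parity-game solvers is the attractor: the set of vertices from which one player can force the play into a target set. The explicit-state sketch below is a textbook construction for illustration only — the paper's techniques operate symbolically and on incomplete games, which this toy version does not attempt.

```python
def attractor(vertices, edges, owner, target):
    """Player-0 attractor: vertices from which player 0 can force the
    play into `target`.  A player-0 vertex needs one successor already
    attracted; a player-1 vertex needs all of them.  Iterate to a
    fixed point."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in vertices:
            if v in attr:
                continue
            succ = edges[v]
            if (owner[v] == 0 and any(s in attr for s in succ)) or \
               (owner[v] == 1 and succ and all(s in attr for s in succ)):
                attr.add(v)
                changed = True
    return attr

# Toy game: player 0 owns vertex 1 and can head towards target {4} via
# vertex 2; player 1 keeps vertex 3 out of the attractor via its self-loop.
game_edges = {1: [2, 3], 2: [4], 3: [1, 3], 4: [4]}
game_owner = {1: 0, 2: 1, 3: 1, 4: 0}
win = attractor([1, 2, 3, 4], game_edges, game_owner, {4})
```

On-the-fly solving interleaves such fixed-point computations with the exploration that produces the game in the first place, so a designated vertex can be decided before the game is complete.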

Equivalence Checking

Frontmatter

Distributed Coalgebraic Partition Refinement

Partition refinement is a method for minimizing automata and transition systems of various types. Recently, we have developed a partition refinement algorithm and the tool CoPaR that is generic in the transition type of the input system and matches the theoretical run time of the best known algorithms for many concrete system types. Genericity is achieved by modelling transition types as functors on sets and systems as coalgebras. Experimentation has shown that memory consumption is a bottleneck for handling systems with a large state space, while running times are fast. We have therefore extended an algorithm due to Blom and Orzan, which is suitable for a distributed implementation, to the coalgebraic level of genericity, and implemented it in CoPaR. Experiments show that this allows us to handle much larger state spaces. Running times are low in most experiments, but there is a significant penalty for some.

Fabian Birkmann, Hans-Peter Deifel, Stefan Milius
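
The sequential core that signature-based algorithms like Blom–Orzan's refine is simple: repeatedly compute, for every state, a signature built from the blocks of its successors, and split blocks whose states disagree. The naive explicit sketch below is illustrative only — it hard-codes labelled transition systems, whereas the coalgebraic tool derives the signature from the functor modelling the transition type.

```python
def refine(states, trans):
    """Naive signature-based partition refinement for a labelled
    transition system.  trans: state -> list of (label, successor).
    Returns a block id per state for the coarsest stable partition."""
    block = {s: 0 for s in states}                 # start with one block
    while True:
        # signature: the (label, successor-block) pairs a state can realise
        sig = {s: frozenset((a, block[t]) for a, t in trans[s])
               for s in states}
        ids, new = {}, {}
        for s in states:
            key = (block[s], sig[s])               # split inside old blocks
            ids.setdefault(key, len(ids))
            new[s] = ids[key]
        if len(ids) == len(set(block.values())):   # no block was split
            return new
        block = new

# a and c are bisimilar (both do one x-step to a deadlocked state), b and d too
p = refine('abcd', {'a': [('x', 'b')], 'c': [('x', 'd')], 'b': [], 'd': []})
```

Distributing this means partitioning the states over workers and exchanging the successor-block information needed to evaluate signatures, which is where memory consumption is spread out.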

From Bounded Checking to Verification of Equivalence via Symbolic Up-to Techniques

We present a bounded equivalence verification technique for higher-order programs with local state. This technique combines fully abstract symbolic environmental bisimulations similar to symbolic game semantics, novel up-to techniques, and lightweight state invariant annotations. This yields an equivalence verification technique with no false positives or negatives. The technique is bounded-complete, in that all inequivalences are automatically detected given large enough bounds. Moreover, several hard equivalences are proved automatically or after being annotated with state invariants. We realise the technique in a tool prototype called Hobbit and benchmark it with an extensive set of new and existing examples. Hobbit can prove many classical equivalences including all Meyer and Sieber examples.

Vasileios Koutavas, Yu-Yang Lin, Nikos Tzevelekos

Equivalence Checking for Orthocomplemented Bisemilattices in Log-Linear Time

Motivated by proof checking, we consider the problem of efficiently establishing equivalence of propositional formulas by relaxing the completeness requirements while still providing certain guarantees. We present a quasilinear time algorithm to decide the word problem on a natural class of algebraic structures we call orthocomplemented bisemilattices, a subtheory of Boolean algebra. The starting point for our procedure is a variation of the Aho-Hopcroft-Ullman algorithm for isomorphism of trees, which we generalize to directed acyclic graphs. We combine this algorithm with a term rewriting system we introduce to decide equivalence of terms. We prove that our rewriting system is terminating and confluent, implying the existence of a normal form. We then show that our algorithm computes this normal form in log-linear (and thus sub-quadratic) time. We provide pseudocode and a minimal working implementation in Scala.

Simon Guilloud, Viktor Kunčak
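
The flavour of the Aho-Hopcroft-Ullman idea — assign syntactically different but equivalent terms the same canonical code — can be shown on a small fragment. The sketch below is not the paper's procedure: it covers only associativity, commutativity, and idempotence of ∧/∨ on tree-shaped terms, ignoring orthocomplementation laws, DAG sharing, and the log-linear bookkeeping.

```python
def canon(t):
    """AHU-style canonical code for a formula: flatten nested
    occurrences of the same operator (associativity), then sort and
    deduplicate children (commutativity, idempotence).  Terms equal
    modulo these laws get identical codes."""
    if t[0] == 'var':
        return t
    if t[0] == 'not':
        return ('not', canon(t[1]))
    op, args = t
    kids = []
    for a in args:
        c = canon(a)
        if c[0] == op:
            kids.extend(c[1])        # flatten (x op y) op z
        else:
            kids.append(c)
    kids = tuple(sorted(set(kids)))  # order and repetition are irrelevant
    return kids[0] if len(kids) == 1 else (op, kids)

a, b = ('var', 'a'), ('var', 'b')
lhs = ('and', (('and', (a, b)), a))  # (a ∧ b) ∧ a
rhs = ('and', (b, a))                # b ∧ a  -- same canonical code
```

The full algorithm replaces the recursive sorting with level-by-level bucketing over the term DAG, which is what yields the quasilinear bound.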

Monitoring and Analysis

Frontmatter

A Theoretical Analysis of Random Regression Test Prioritization

Regression testing is an important activity to check software changes by running the tests in a test suite to inform the developers whether the changes lead to test failures. Regression test prioritization (RTP) aims to inform the developers faster by ordering the test suite so that tests likely to fail are run earlier. Many RTP techniques have been proposed and are often compared with the random RTP baseline by sampling some of the n! different test-suite orders for a test suite with n tests. However, there is no theoretical analysis of random RTP. We present such an analysis, deriving probability mass functions and expected values for metrics and scenarios commonly used in RTP research. Using our analysis, we revisit some of the most highly cited RTP papers and find that some presented results may be due to insufficient sampling. Future RTP research can leverage our analysis and need not use random sampling but can use our simple formulas or algorithms to more precisely compare with random RTP.

Pu Yi, Hao Wang, Tao Xie, Darko Marinov, Wing Lam
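
To give a taste of the closed forms such an analysis produces, consider one simple quantity (this particular derivation is ours, for illustration, not a result quoted from the paper): if f of the n tests fail and the suite is ordered uniformly at random, the position of the first failure has mass P(first failure at k) = C(n−k, f−1)/C(n, f), and its expectation is (n+1)/(f+1).

```python
from fractions import Fraction
from math import comb

def expected_first_failure(n, f):
    """Exact expected position of the first failing test when the f
    failing tests land uniformly at random among n positions."""
    # If the first failure is at position k, the remaining f-1 failing
    # tests occupy f-1 of the n-k later positions.
    return sum(Fraction(k * comb(n - k, f - 1), comb(n, f))
               for k in range(1, n - f + 2))

# e.g. n=10 tests, f=2 failures: expected first failure at 11/3 ≈ 3.67
e = expected_first_failure(10, 2)
```

Exact formulas like this replace random sampling of the n! orders, which is precisely the paper's argument against insufficient sampling in comparisons with the random baseline.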

Verified First-Order Monitoring with Recursive Rules

First-order temporal logics and rule-based formalisms are two popular families of specification languages for monitoring. Each family has its advantages and only few monitoring tools support their combination. We extend metric first-order temporal logic (MFOTL) with a recursive let construct, which enables interleaving rules with temporal logic formulas. We also extend VeriMon, an MFOTL monitor whose correctness has been formally verified using the Isabelle proof assistant, to support the new construct. The extended correctness proof covers the interaction of the new construct with the existing verified algorithm, which is subtle due to the presence of the bounded future temporal operators. We demonstrate the recursive let’s usefulness on several example specifications and evaluate our verified algorithm’s performance against the DejaVu monitoring tool.

Sheila Zingg, Srđan Krstić, Martin Raszyk, Joshua Schneider, Dmitriy Traytel

Maximizing Branch Coverage with Constrained Horn Clauses

State-of-the-art solvers for constrained Horn clauses (CHC) are successfully used to generate reachability facts from symbolic encodings of programs. In this paper, we present a new application to test-case generation: if a block of code is provably unreachable, no test case needs to be generated for it, and effort can be directed at exploring other blocks of code. Our new approach uses CHC to incrementally construct different program unrollings and extract test cases from models of satisfiable formulas. At the same time, a CHC solver keeps track of CHCs that represent unreachable blocks of code, which makes the unrolling process more efficient. In practice, this lets our approach terminate early while guaranteeing maximal coverage. Our implementation, called Horntinuum, exhibits promising performance: it generates high coverage in the majority of cases and spends less time on average than state-of-the-art tools.

Ilia Zlatkin, Grigory Fedyukovich

Efficient Analysis of Cyclic Redundancy Architectures via Boolean Fault Propagation

Many safety-critical systems guarantee fault-tolerance by using several redundant copies of their components. When designing such redundancy architectures, it is crucial to analyze their fault trees, which describe combinations of faults of individual components that may cause malfunction of the system. State-of-the-art techniques for fault tree computation use first-order formulas with uninterpreted functions to model the transformations of signals performed by the redundancy system, and an AllSMT query for computation of the fault tree from this encoding. Scalability of the analysis can be further improved by techniques such as predicate abstraction, which reduces the problem to the Boolean case. In this paper, we show that as far as fault trees of redundancy architectures are concerned, signal transformation can be equivalently viewed in a purely Boolean way as fault propagation. This alternative view has important practical consequences. First, it applies also to general redundancy architectures with cyclic dependencies among components, to which the current state-of-the-art methods based on AllSMT are not applicable, and which currently require expensive sequential reasoning. Second, it allows for a simpler encoding of the problem and usage of efficient algorithms for analysis of fault propagation, which can significantly improve the runtime of the analyses. A thorough experimental evaluation demonstrates the superiority of the proposed techniques.

Marco Bozzano, Alessandro Cimatti, Alberto Griggio, Martin Jonáš
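
The Boolean view can be made concrete on a toy architecture: mark each component failed or healthy, evaluate the system as a Boolean function of the fault assignment, and collect the minimal fault combinations — the cut sets of the fault tree. The brute-force sketch below is ours and exponential in the number of components; the paper's contribution is precisely to compute such fault trees efficiently via dedicated fault-propagation algorithms.

```python
from itertools import combinations

def minimal_cut_sets(components, system_fails):
    """Minimal sets of failed components that make the system fail.
    Enumerating smallest-first lets us skip any superset of an
    already-found cut set, so only minimal ones are kept."""
    cuts = []
    for r in range(len(components) + 1):
        for subset in combinations(components, r):
            failed = set(subset)
            if any(cut <= failed for cut in cuts):
                continue                     # superset of a known cut: not minimal
            if system_fails(failed):
                cuts.append(failed)
    return cuts

# Triple modular redundancy: a majority voter masks any single replica fault,
# so the minimal cut sets are exactly the three pairs of replicas.
tmr_fails = lambda failed: sum(c in failed for c in 'ABC') >= 2
fault_tree = minimal_cut_sets('ABC', tmr_fails)
```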

Tools | Optimizations, Repair and Explainability

Frontmatter

Adiar: Binary Decision Diagrams in External Memory

We follow up on the idea of Lars Arge to rephrase the Reduce and Apply operations of Binary Decision Diagrams (BDDs) as iterative I/O-efficient algorithms. We identify multiple avenues to simplify and improve the performance of his proposed algorithms. Furthermore, we extend the technique to other common BDD operations, many of which are not derivable using Apply operations alone. We provide asymptotic improvements to the few procedures that can be derived using Apply. Our work has culminated in a BDD package named Adiar that is able to efficiently manipulate BDDs that outgrow main memory. This makes Adiar surpass the limits of conventional BDD packages that use recursive depth-first algorithms. It is able to do so while still achieving a satisfactory performance compared to other BDD packages: Adiar, in parts using the disk, is on instances larger than 9.5 GiB only 1.47 to 3.69 times slower compared to CUDD and Sylvan, exclusively using main memory. Yet, Adiar is able to obtain this performance at a fraction of the main memory needed by conventional BDD packages to function.

Steffan Christ Sølvsten, Jaco van de Pol, Anna Blume Jakobsen, Mathias Weller Berg Thomasen
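
For reference, the recursive depth-first formulation of Reduce that external-memory packages avoid fits in a few lines. The sketch below is a schematic textbook version (not Adiar's iterative, I/O-efficient one): it hash-conses structurally identical nodes and skips any node whose two branches agree.

```python
def reduce_bdd(nodes, root):
    """Classic depth-first BDD Reduce.  nodes: id -> (var, low, high);
    children are node ids or the terminal strings 'T'/'F'.  Returns the
    reduced root and the reduced node table."""
    unique, table, memo = {}, {}, {}

    def go(u):
        if isinstance(u, str):       # terminal
            return u
        if u in memo:                # shared sub-DAGs reduced only once
            return memo[u]
        var, lo, hi = nodes[u]
        lo, hi = go(lo), go(hi)
        if lo == hi:                 # both branches agree: drop the test
            res = lo
        else:
            key = (var, lo, hi)
            if key not in unique:    # hash-cons structurally equal nodes
                unique[key] = len(unique)
                table[unique[key]] = key
            res = unique[key]
        memo[u] = res
        return res

    return go(root), table

# x1 branches to two identical x2-nodes: they get shared, after which both
# branches of the x1-node agree and it disappears as well.
root, table = reduce_bdd({0: ('x2', 'F', 'T'),
                          1: ('x2', 'F', 'T'),
                          2: ('x1', 0, 1)}, 2)
```

The recursion's pointer-chasing is exactly what thrashes once the diagram outgrows main memory; Arge's reformulation, and Adiar's, replace it with sorted streams of node requests.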

Forest GUMP: A Tool for Explanation

In this paper, we present Forest GUMP (for Generalized, Unifying Merge Process), a tool for providing tangible experience with three concepts of explanation. Besides the well-known model explanation and outcome explanation, Forest GUMP also supports class characterization, i.e., the precise characterization of all samples with the same classification. The key technology to achieve these results is algebraic aggregation, i.e., the transformation of a Random Forest into a semantically equivalent, concise white-box representation in terms of Algebraic Decision Diagrams (ADDs). The paper sketches the method and illustrates the use of Forest GUMP along an illustrative example taken from the literature. This way, readers should acquire an intuition about the tool and how it should be used to increase the understanding not only of the considered dataset, but also of the character of Random Forests and the ADD technology, here enriched to comprise infeasible path elimination.

Alnis Murtovi, Alexander Bainczyk, Bernhard Steffen

Alpinist: An Annotation-Aware GPU Program Optimizer

GPU programs are widely used in industry. To obtain the best performance, a typical development process involves the manual or semi-automatic application of optimizations prior to compiling the code. To avoid the introduction of errors, we can augment GPU programs with (pre- and postcondition-style) annotations to capture functional properties. However, keeping these annotations correct when optimizing GPU programs is labor-intensive and error-prone. This paper introduces Alpinist, an annotation-aware GPU program optimizer. It applies frequently-used GPU optimizations, but besides transforming code, it also transforms the annotations. We evaluate Alpinist, in combination with the VerCors program verifier, to automatically optimize a collection of verified programs and reverify them.

Ömer Şakar, Mohsen Safari, Marieke Huisman, Anton Wijs

Automatic Repair for Network Programs

Debugging imperative network programs is a difficult task for operators as it requires understanding various network modules and complicated data structures. For this purpose, this paper presents an automated technique for repairing network programs with respect to unit tests. Given as input a faulty network program and a set of unit tests, our approach localizes the fault through symbolic reasoning, and synthesizes a patch ensuring that the repaired program passes all unit tests. It applies domain-specific abstraction to simplify network data structures and exploits function summary reuse for modular symbolic analysis. We have implemented the proposed techniques in a tool called NetRep and evaluated it on 10 benchmarks adapted from real-world software-defined network controllers. The evaluation results demonstrate the effectiveness and efficiency of NetRep for repairing network programs.

Lei Shi, Yuepeng Wang, Rajeev Alur, Boon Thau Loo

11th Competition on Software Verification: SV-COMP 2022

Frontmatter

Progress on Software Verification: SV-COMP 2022

The 11th edition of the Competition on Software Verification (SV-COMP 2022) provides the largest ever overview of tools for software verification. The competition is an annual comparative evaluation of fully automatic software verifiers for C and Java programs. The objective is to provide an overview of the state of the art in terms of effectiveness and efficiency of software verification, establish standards, provide a platform for exchange to developers of such tools, educate PhD students on reproducibility approaches and benchmarking, and provide computing resources to developers that do not have access to compute clusters. The competition consisted of 15 648 verification tasks for C programs and 586 verification tasks for Java programs. Each verification task consisted of a program and a property (reachability, memory safety, overflows, termination). The new category on data-race detection was introduced as a demonstration category. SV-COMP 2022 had 47 participating verification systems from 33 teams from 11 countries.

Dirk Beyer

AProVE: Non-Termination Witnesses for C Programs
(Competition Contribution)

To (dis)prove termination of C programs, AProVE uses symbolic execution to transform the program’s LLVM code into an integer transition system, which is then analyzed by several backends. The transformation steps in AProVE and the tools in the backend only produce sub-proofs in their domains. Hence, we have developed new techniques to automatically combine the essence of these proofs. If non-termination is proved, then they yield an overall witness, which identifies a non-terminating path in the original C program.

Jera Hensel, Constantin Mensendiek, Jürgen Giesl

BRICK: Path Enumeration Based Bounded Reachability Checking of C Program (Competition Contribution)

BRICK is a bounded reachability checker for embedded C programs. BRICK conducts a path-oriented style of checking of the bounded state space of the program, enumerating and checking all the possible paths of the program within the threshold one by one. To alleviate the path explosion problem, BRICK locates and records unsatisfiable core path segments during the checking of each path and uses them to prune the search space. Furthermore, derivative-free optimization-based falsification and loop induction are introduced to handle complex program features like nonlinear path conditions and loops efficiently.

Lei Bu, Zhunyi Xie, Lecheng Lyu, Yichao Li, Xiao Guo, Jianhua Zhao, Xuandong Li

A Prototype for Data Race Detection in CSeq 3
(Competition Contribution)

We sketch a sequentialization-based technique for bounded detection of data races under sequential consistency, and summarise the major improvements to our verification framework over the last years.

Alex Coto, Omar Inverso, Emerson Sales, Emilio Tuosto

Dartagnan: SMT-based Violation Witness Validation (Competition Contribution)

The validation of violation witnesses is an important step during software verification. It hides false alarms raised by verifiers from engineers, which in turn helps them concentrate on critical issues and improves the verification experience. Until the 2021 edition of the Competition on Software Verification (SV-COMP), CPAchecker was the only witness validator for the ConcurrencySafety category. This article describes how we extended the Dartagnan verifier to support the validation of violation witnesses. The results of the 2022 edition of the competition show that, for witnesses generated by different verifiers, Dartagnan succeeds in the validation of witnesses where CPAchecker does not. Our extension thus improves the validation possibilities for the overall competition. We discuss Dartagnan's strengths and weaknesses as a validation tool and describe possible ways to improve it in the future.

Hernán Ponce-de-León, Thomas Haas, Roland Meyer

Deagle: An SMT-based Verifier for Multi-threaded Programs (Competition Contribution)

Deagle is an SMT-based multi-threaded program verification tool. It is built on top of CBMC (front-end) and MiniSAT (back-end). The basic idea of Deagle is to integrate into the SMT solver an ordering consistency theory that handles ordering relations over the shared variable accesses in the program. The front-end encodes the input program into an extended propositional formula that contains ordering constraints. The back-end is reinforced with a solver for the ordering consistency theory. This paper presents the basic idea, architecture, installation, and usage of Deagle.

Fei He, Zhihang Sun, Hongyu Fan

The Static Analyzer Frama-C in SV-COMP (Competition Contribution)

Frama-C is a well-known platform for source-code analysis of programs written in C. It can be extended via its plug-in architecture by various analysis backends and features an extensive annotation language called ACSL. So far it was hard to compare Frama-C to other software verifiers. Our competition participation contributes an adapter named Frama-C-SV, which makes it possible to evaluate Frama-C against other software verifiers. The adapter transforms standard verification tasks (from the well-known SV-Benchmarks collection) in a way that can be understood by Frama-C and produces a verification witness as output. While Frama-C provides many different analyses, we focus on the Evolved Value Analysis (EVA), which uses a combination of different domains to over-approximate the behavior of the analyzed program.

Dirk Beyer, Martin Spiessl

GDart: An Ensemble of Tools for Dynamic Symbolic Execution on the Java Virtual Machine (Competition Contribution)

GDart is an ensemble of tools allowing dynamic symbolic execution of JVM programs. The dynamic symbolic execution engine is decomposed into three different components: a symbolic decision engine (DSE), a concolic executor (SPouT), and an SMT solver backend allowing meta-strategy solving of SMT problems (JConstraints). The symbolic decision component is loosely coupled with the executor by a newly introduced communication protocol. At SV-COMP 2022, GDart solved 471 of 586 tasks, finding more correct false results (302) than correct true results (169). It scored fourth place.

Malte Mues, Falk Howar

Graves-CPA: A Graph-Attention Verifier Selector (Competition Contribution)

Graves-CPA is a verification tool which uses algorithm selection to decide an ordering of underlying verifiers to most effectively verify a given program. Graves-CPA represents programs using an amalgam of traditional program graph representations and uses state-of-the-art graph neural network techniques to dynamically decide how to run a set of verification techniques. The Graves technique is implementation-agnostic, but its competition submission, Graves-CPA, is built using several CPAchecker configurations as its underlying verifiers.

Will Leeson, Matthew B. Dwyer

GWIT: A Witness Validator for Java based on GraalVM (Competition Contribution)

GWIT is a validator for violation witnesses produced by Java verifiers in the SV-COMP software verification competition. GWIT weaves assumptions documented in a witness into the source code of a program, effectively restricting the part of the program that is explored by a program analysis. It then uses the GDart tool (dynamic symbolic execution) to search for reachable errors in the modified program.

Falk Howar, Malte Mues

The Static Analyzer Infer in SV-COMP (Competition Contribution)

We present Infer-sv, a wrapper that adapts Infer for SV-COMP. Infer is a static-analysis tool for C and other languages, developed by Facebook and used by multiple large companies. It is strongly aimed at industry and the internal use at Facebook. Despite its popularity, there are no reported numbers on its precision and efficiency. With Infer-sv, we take a first step towards an objective comparison of Infer with other SV-COMP participants from academia and industry.

Matthias Kettl, Thomas Lemberger

LART: Compiled Abstract Execution
(Competition Contribution)

lart – llvm abstraction and refinement tool – originates from the divine model-checker [5, 7], in which it was employed as an abstraction toolchain for the llvm interpreter. In this contribution, we present a stand-alone tool that does not need a verification backend but performs the verification natively. The core idea is to instrument abstract semantics directly into the program and compile it into a native binary that performs program analysis. This approach provides a performance gain of native execution over the interpreted analysis and allows compiler optimizations to be employed on abstracted code, further extending the analysis efficiency. Compilation-based abstraction introduces new challenges solved by lart, like domain interaction of concrete and abstract values, simulation of nondeterministic runtime, or constraint propagation.

Henrich Lauko, Petr Ročkai

Symbiotic 9: String Analysis and Backward Symbolic Execution with Loop Folding
(Competition Contribution)

The development of Symbiotic 9 focused mainly on two components. One is the symbolic executor Slowbeast, which newly supports backward symbolic execution, including its extension called loop folding. This technique can infer inductive invariants from backward symbolic execution states. Thanks to these invariants, Symbiotic 9 is able to produce non-trivial correctness witnesses, a feature that is missing in previous versions of Symbiotic. We have also extended forward symbolic execution in Slowbeast with basic support for parallel programs. The second component with significant improvements is the instrumentation module. In particular, we have extended the static analysis of accesses to arrays with features designed for programs that manipulate C strings. Symbiotic 9 is the Overall winner of SV-COMP 2022. Moreover, it also won the categories MemSafety and SoftwareSystems, and placed third in FalsificationOverall.

Marek Chalupa, Vincent Mihalkovič, Anna Řechtáčková, Lukáš Zaoral, Jan Strejček


Symbiotic-Witch: A Klee-Based Violation Witness Checker
(Competition Contribution)

Symbiotic-Witch is a new tool for checking violation witnesses in the GraphML-based format used at SV-COMP since 2015. Roughly speaking, Symbiotic-Witch symbolically executes a given program with Klee and simultaneously tracks the set of nodes the witness automaton can be in. Moreover, it reads the return values of nondeterministic functions specified in the witness and uses them to prune the symbolic execution. The violation witness is confirmed if the symbolic execution reaches an error and the current set of witness nodes contains a matching violation node. Symbiotic-Witch currently supports violation witnesses of reachability safety, memory safety, memory cleanup, and overflow properties.

Paulína Ayaziová, Marek Chalupa, Jan Strejček


Theta: portfolio of CEGAR-based analyses with dynamic algorithm selection
(Competition Contribution)

Theta is a model checking framework based on abstraction refinement algorithms. In SV-COMP 2022, we introduce: 1) reasoning at the source-level via a direct translation from C programs; 2) support for concurrent programs with interleaving semantics; 3) mitigation for non-progressing refinement loops; 4) support for SMT-LIB-compliant solvers. We combine all of the aforementioned techniques into a portfolio with dynamic algorithm selection.

Zsófia Ádám, Levente Bajczi, Mihály Dobos-Kovács, Ákos Hajdu, Vince Molnár


Ultimate GemCutter and the Axes of Generalization
(Competition Contribution)

Ultimate GemCutter verifies concurrent programs using the CEGAR paradigm, by generalizing from spurious counterexample traces to larger sets of correct traces. We integrate classical CEGAR generalization with orthogonal generalization across interleavings. Thereby, we are able to prove correctness of programs otherwise out of reach for interpolation-based verification. The competition results show significant advantages over other concurrency approaches in the Ultimate family.

Dominik Klumpp, Daniel Dietsch, Matthias Heizmann, Frank Schüssele, Marcel Ebbinghaus, Azadeh Farzan, Andreas Podelski


Wit4Java: A Violation-Witness Validator for Java Verifiers
(Competition Contribution)

We describe and evaluate a violation-witness validator for Java verifiers called Wit4Java. It takes a Java program with a safety property and the respective violation witness output by a Java verifier, and generates a new Java program whose execution deterministically violates the property. We extract the values of the program variables from the counterexample represented by the violation witness and feed this information back into the original program. In addition, we provide two implementations for instantiating source programs by injecting counterexamples. Experimental results show that Wit4Java can correctly validate the violation witnesses produced by JBMC and GDart in a few seconds.

Tong Wu, Peter Schrammel, Lucas C. Cordeiro
Backmatter
Metadata
Title
Tools and Algorithms for the Construction and Analysis of Systems
edited by
Dr. Dana Fisman
Grigore Rosu
Copyright year
2022
Electronic ISBN
978-3-030-99527-0
Print ISBN
978-3-030-99526-3
DOI
https://doi.org/10.1007/978-3-030-99527-0