
About this Book

This book constitutes the thoroughly refereed proceedings of the 20th International Symposium on Static Analysis, SAS 2013, held in Seattle, WA, USA, in June 2013. The 23 revised full papers presented together with 2 invited talks were selected from 56 submissions. The papers address all aspects of static analysis, including abstract domains, abstract interpretation, abstract testing, bug detection, data flow analysis, model checking, new applications, program transformation, program verification, security analysis, theoretical frameworks, and type checking.



Static Analysis by Abstract Interpretation of Numerical Programs and Systems, and FLUCTUAT

This invited lecture surveys our work over the last 12 years or so on the precise analysis of numerical programs, essentially control programs such as those found in the aerospace, nuclear, and automotive industries.

Our approach is now based on a rather generic abstract domain built on “zonotopes” or “affine forms” [7], but with some specificities. For instance, our zonotopic domain provides a functional abstraction [16,13], i.e. an abstraction of the input-output relationships between the values of variables, allowing for test generation and modular verification [21]. Our domain also handles both the real-number and the finite-precision (for instance, floating-point or fixed-point) semantics [14,17]. It is used in practice in FLUCTUAT [20,9,4] to prove functional properties of programs, generate (counter-)examples, and identify the discrepancy between the real-number and the finite-precision semantics, as well as its origin.
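As a rough editorial illustration of the zonotopic idea (a sketch, not code from FLUCTUAT), an affine form represents a value as a center plus linear terms over shared noise symbols; the sharing of symbols is what yields input-output relations:

```python
# Minimal sketch of an affine form, the one-dimensional building block of a
# zonotopic domain: a value is a center plus linear terms over shared noise
# symbols eps_i ranging over [-1, 1]. Shared symbols make the abstraction
# relational, i.e. it tracks input-output relationships.
class AffineForm:
    def __init__(self, center, deps=None):
        self.center = center
        self.deps = dict(deps or {})  # noise symbol -> coefficient

    def __add__(self, other):
        deps = dict(self.deps)
        for sym, coef in other.deps.items():
            deps[sym] = deps.get(sym, 0.0) + coef
        return AffineForm(self.center + other.center, deps)

    def __sub__(self, other):
        return self + other.scale(-1.0)

    def scale(self, k):
        return AffineForm(k * self.center,
                          {s: k * c for s, c in self.deps.items()})

    def interval(self):
        # concretization: center plus/minus the sum of absolute coefficients
        rad = sum(abs(c) for c in self.deps.values())
        return (self.center - rad, self.center + rad)

# x in [0, 2] is abstracted as 1 + eps1; because y = x - x reuses eps1,
# the domain proves y is exactly 0, where plain intervals give [-2, 2]
x = AffineForm(1.0, {"eps1": 1.0})
y = x - x
```

Because `y = x - x` reuses the noise symbol of `x`, the domain concludes `y == 0` exactly, which a non-relational interval analysis cannot.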

Eric Goubault

Static Analysis in the Continuously Changing World

In this talk, we examine static analysis techniques for continuous-time dynamical systems. Continuous-time systems arise in many domains, including engineered control systems and physical and biological systems. They are increasingly of interest to the static analysis community, due to the focus on hybrid (cyber-physical) systems that capture discrete programs interacting with a continuous external environment. We examine two types of properties that are typically verified, reachability and stability, and explore parallels between commonly used static analysis approaches and a variety of approaches to prove/disprove reachability and stability properties.

Sriram Sankaranarayanan

Abstract Interpretation over Non-lattice Abstract Domains

The classical theoretical framework for static analysis of programs is abstract interpretation. Much of the power and elegance of that framework rests on the assumption that an abstract domain is a lattice. Nonetheless, and for good reason, the literature on program analysis provides many examples of non-lattice domains, including non-convex numeric domains. The lack of domain structure, however, has negative consequences, both for the precision of program analysis and for the termination of standard Kleene iteration. In this paper we explore these consequences and present general remedies.

Graeme Gange, Jorge A. Navas, Peter Schachte, Harald Søndergaard, Peter J. Stuckey

Localizing Widening and Narrowing

We present two strategies that can easily be applied to standard abstract interpretation-based static analyzers. They consist in (1) restricting the scope of widening, and (2) intertwining the computation of ascending and descending chains. Using these optimizations it is possible to improve the precision of the analysis without any change to the abstract domains.
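The strategies operate on standard Kleene iteration; a minimal sketch (a hypothetical loop, not the authors' implementation) of interval iteration with widening followed by a descending narrowing step:

```python
import math

# Interval analysis of:  i = 0; while (i < 100) i = i + 1;
# First an ascending iteration with widening (which overshoots to +inf),
# then one descending (narrowing) step that recovers a precise invariant.
def widen(a, b):
    lo = a[0] if a[0] <= b[0] else -math.inf
    hi = a[1] if a[1] >= b[1] else math.inf
    return (lo, hi)

def join(a, b):
    if a is None:
        return b
    if b is None:
        return a
    return (min(a[0], b[0]), max(a[1], b[1]))

def loop_body(iv):
    # one loop iteration: filter by the guard i < 100, then add 1
    lo, hi = iv
    hi = min(hi, 99)
    if lo > hi:
        return None  # guard unsatisfiable: bottom
    return (lo + 1, hi + 1)

init = (0, 0)        # i = 0 on loop entry
x = init
while True:          # ascending chain with widening at the loop head
    nxt = widen(x, join(init, loop_body(x)))
    if nxt == x:
        break
    x = nxt
# x is now (0, inf); a single descending step refines it
x = join(init, loop_body(x))
```

The ascending phase jumps to (0, inf); the descending step then recovers the precise loop-head invariant (0, 100), illustrating why intertwining ascending and descending computations pays off.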

Gianluca Amato, Francesca Scozzari

The Abstract Domain of Segmented Ranking Functions

We present a parameterized abstract domain for proving program termination by abstract interpretation. The domain automatically synthesizes piecewise-defined ranking functions and infers sufficient conditions for program termination. The analysis uses over-approximations but we prove its soundness, meaning that all program executions respecting these sufficient conditions are indeed terminating.

The abstract domain is parameterized by a numerical abstract domain for environments and a numerical abstract domain for functions. This parameterization makes it easy to tune the trade-off between precision and cost of the analysis. We describe an instantiation of this generic domain with intervals and affine functions. We define all abstract operators, including a widening to ensure convergence.

To illustrate the potential of the proposed framework, we have implemented a research prototype static analyzer for a small imperative language, which has yielded interesting preliminary results.
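As a toy illustration of the idea, far simpler than the paper's domain, a piecewise-defined ranking function over an interval partition can be checked to strictly decrease along loop iterations:

```python
# A piecewise-defined ranking function for:  while (x > 0) x = x - 2;
# following the shape of the instantiation described above: intervals
# partition the environment, affine functions give the rank. This toy
# checker is an editorial illustration, not the paper's analysis.
pieces = [
    ((1, None), lambda x: x),  # on x >= 1: rank x, strictly decreasing
    ((None, 0), lambda x: 0),  # on x <= 0: loop has exited, rank 0
]

def rank(x):
    for (lo, hi), f in pieces:
        if (lo is None or x >= lo) and (hi is None or x <= hi):
            return f(x)

def step(x):
    # one loop iteration
    return x - 2 if x > 0 else x

# the rank strictly decreases on every iteration, so the loop terminates
x = 9
steps = 0
while x > 0:
    assert rank(step(x)) < rank(x)
    x = step(x)
    steps += 1
```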

Caterina Urban

Symbolic Automata for Static Specification Mining

We present a formal framework for static specification mining. The main idea is to represent partial temporal specifications as symbolic automata – automata where transitions may be labeled by variables, and a variable can be substituted by a letter, a word, or a regular language. Using symbolic automata, we construct an abstract domain for static specification mining, capturing both the partialness and the precision of a specification. We show interesting relationships between the lattice operations of this domain and common operators for manipulating partial temporal specifications, such as building a more informative specification by consolidating two partial specifications.

Hila Peleg, Sharon Shoham, Eran Yahav, Hongseok Yang

Predicate Abstraction for Relaxed Memory Models

We present a novel approach for predicate abstraction of programs running on relaxed memory models. Our approach consists of two steps.

First, we reduce the problem of verifying a program P running on a memory model M to the problem of verifying a program P_M that captures an abstraction of M as part of the program.

Second, we present a new technique for discovering predicates that enable verification of P_M. The core idea is to extrapolate from the predicates used to verify P under sequential consistency. A key new concept is that of cube extrapolation: it successfully avoids exponential state explosion when abstracting P_M.
We implemented our approach for the x86 TSO and PSO memory models and showed that predicates discovered via extrapolation are powerful enough to verify several challenging concurrent programs. This is the first time some of these programs have been verified for a model as relaxed as PSO.

Andrei Marian Dan, Yuri Meshman, Martin Vechev, Eran Yahav

On Solving Universally Quantified Horn Clauses

Program proving can be viewed as solving for unknown relations (such as loop invariants, procedure summaries and so on) that occur in the logical verification conditions of a program, such that the verification conditions are valid. Generic logical tools exist that can solve such problems modulo certain background theories, and can therefore be used for program analysis. Here, we extend these techniques to solve for universally quantified relations. This makes it possible to guide the solver by constraining the form of the proof, allowing it to converge when it otherwise would not. We show how to simulate existing abstract domains in this way, without having to directly implement program analyses or make certain heuristic choices, such as the terms and predicates that form the parameters of the abstract domain. Moreover, the approach gives us the flexibility to go beyond these domains and experiment quickly with various invariant forms.
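To make the setting concrete, here is a deliberately naive, finite-domain sketch (not how actual Horn-clause solvers work) of treating a loop invariant as an unknown relation constrained by Horn clauses, with the search guided by restricting candidates to a fixed form:

```python
# Verification conditions for: x = 0; while (x < 5) x += 1; assert x == 5
# written as Horn clauses over an unknown relation Inv(x):
#   x == 0            -> Inv(x)        (initiation)
#   Inv(x) and x < 5  -> Inv(x + 1)    (consecution)
#   Inv(x) and x >= 5 -> x == 5        (safety)
# Checked here over a small finite domain purely for illustration.
clauses = [
    lambda inv: all(inv(x) for x in range(-10, 11) if x == 0),
    lambda inv: all(inv(x + 1) for x in range(-10, 10) if inv(x) and x < 5),
    lambda inv: all(x == 5 for x in range(-10, 11) if inv(x) and x >= 5),
]

def solves(inv):
    return all(clause(inv) for clause in clauses)

# guide the solver by constraining the form of the proof:
# search only over interval-shaped candidates 0 <= x <= hi
candidates = [lambda x, hi=hi: 0 <= x <= hi for hi in range(0, 8)]
solution = next(c for c in candidates if solves(c))
```

The search finds the interval invariant 0 <= x <= 5; constraining the candidate form is what such an extension lets a real solver do symbolically.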

Nikolaj Bjørner, Ken McMillan, Andrey Rybalchenko

From Concrete Examples to Heap Manipulating Programs

Data-structure manipulation is not just a perplexing ordeal for newbies, but also a tedious errand for seasoned programmers. Even after a programmer gets the "hang of things", programming complex pointer manipulations (like reversing a linked list) still makes one reach for a notebook to draw some diagrams to work out the low-level pointer jugglery. These diagrams are, not surprisingly, used as a basic tool to introduce heap manipulations in introductory programming courses.

We propose a synthesis technology to automatically create programs that manipulate heap data-structures from such diagrams. The programmer needs only to provide a set of concrete examples of her high-level strategy for the low-level manipulations to be discharged automatically. We plan the synthesis task as a sequence of "fast" stages, making it usable in an integrated development environment. We envisage that such a tool will be useful to programmers, comfortably tucked away in their favorite integrated development environment.

Subhajit Roy

Local Shape Analysis for Overlaid Data Structures

We present a shape analysis for programs that manipulate overlaid data structures which share sets of objects. The abstract domain contains Separation Logic formulas that (1) combine a per-object separating conjunction with a per-field separating conjunction and (2) constrain a set of variables interpreted as sets of objects. The definition of the abstract domain operators is based on a notion of homomorphism between formulas, viewed as graphs, used recently to define optimal decision procedures for fragments of Separation Logic. Based on a Frame Rule that supports the two versions of the separating conjunction, the analysis is able to reason in a modular manner about non-overlaid data structures and then compose information only at a few program points, e.g., procedure returns. We have implemented this analysis in a prototype tool and applied it to several interesting case studies that manipulate overlaid and nested linked lists.

Cezara Drăgoi, Constantin Enea, Mihaela Sighireanu

Quantified Data Automata on Skinny Trees: An Abstract Domain for Lists

We propose a new approach to heap analysis through an abstract domain of automata, called automatic shapes. Automatic shapes are modeled after a particular version of quantified data automata on skinny trees (QSDAs), which allow us to define universally quantified properties of programs manipulating acyclic heaps with a single pointer field, including data structures such as singly-linked lists. To ensure convergence of the abstract fixed-point computation, we introduce a subclass of QSDAs called elastic QSDAs, which forms an abstract domain. We evaluate our approach on several list-manipulating programs and we show that the proposed domain is powerful enough to prove a large class of these programs correct.

Pranav Garg, P. Madhusudan, Gennaro Parlato

Static Validation of Dynamically Generated HTML Documents Based on Abstract Parsing and Semantic Processing

Abstract parsing is a static-analysis technique for a program that, given a reference LR(k) context-free grammar, statically checks whether or not every dynamically generated string output by the program conforms to the grammar. The technique operates by applying an LR(k) parser for the reference language to data-flow equations extracted from the program, immediately parsing all the possible string outputs to validate their syntactic well-formedness.

In this paper, we extend abstract parsing to do semantic-attribute processing and apply this extension to statically verify that HTML documents generated by JSP or PHP are always valid according to the HTML DTD. This application is necessary because the HTML DTD cannot be fully described as an LR(k) grammar. We completely define the HTML 4.01 Transitional DTD in an attributed LALR(1) grammar, carry out experiments for selected real-world JSP and PHP applications, and expose numerous HTML validation errors in the applications. In the process, we experimentally show that semantic properties defined by attribute grammars can also be verified using our technique.

Hyunha Kim, Kyung-Goo Doh, David A. Schmidt

Byte-Precise Verification of Low-Level List Manipulation

We propose a new approach to shape analysis of programs with linked lists that use low-level memory operations. Such operations include pointer arithmetic, safe usage of invalid pointers, block operations with memory, reinterpretation of the memory contents, address alignment, etc. Our approach is based on a new representation of sets of heaps, which is to some degree inspired by works on separation logic with higher-order list predicates, but it is graph-based and uses a more fine-grained (byte-precise) memory model in order to support the various low-level memory operations. The approach was implemented in the Predator tool and successfully validated on multiple non-trivial case studies that are beyond the capabilities of other current fully automated shape analysis tools.

Kamil Dudka, Petr Peringer, Tomáš Vojnar

Abstract Semantic Differencing for Numerical Programs

We address the problem of computing semantic differences between a program and a patched version of the program. Our goal is to obtain a precise characterization of the difference between program versions, or establish their equivalence when no difference exists.

We focus on computing semantic differences in numerical programs where the values of variables have no a-priori bounds, and use abstract interpretation to compute an over-approximation of program differences. Computing differences and establishing equivalence under abstraction requires abstracting relationships between variables in the two programs. Towards that end, we first construct a correlating program in which these relationships can be tracked, and then use a correlating abstract domain to compute a sound approximation of these relationships. To better establish equivalence between correlated variables and precisely capture differences, our domain has to represent non-convex information using a partially-disjunctive abstract domain. To balance precision and cost of this representation, our domain over-approximates numerical information while preserving equivalence between correlated variables by dynamically partitioning the disjunctive state according to equivalence criteria.

We have implemented our approach in a tool called DIZY, and applied it to a number of real-world examples, including programs from the GNU core utilities, Mozilla Firefox and the Linux Kernel. Our evaluation shows that DIZY often manages to establish equivalence, computes a precise approximation of semantic differences when a difference exists, and reports only a few false differences.

Nimrod Partush, Eran Yahav

Precise Slicing in Imperative Programs via Term-Rewriting and Abstract Interpretation

We propose a new approach for producing precise constrained slices of programs in a language such as C. We build upon a previous term-rewriting-based approach to this problem, which primarily targets loop-free fragments and is fully precise in that setting. We incorporate abstract interpretation into term-rewriting, using a given arbitrary abstract lattice, resulting in a novel technique for slicing loops whose precision is linked to the power of the given abstract lattice. We address pointers in a first-class manner, including when they are used within loops to traverse and update recursive data structures. Finally, we illustrate the comparative precision of our slices over those of previous approaches using representative examples.

Raghavan Komondoor

Automatic Synthesis of Deterministic Concurrency

Many parallel programs are meant to be deterministic: for the same input, the program must produce the same output, regardless of scheduling choices. Unfortunately, due to complex parallel interaction, programmers make subtle mistakes that lead to violations of determinism.

In this paper, we present a framework for static synthesis of deterministic concurrency control: given a non-deterministic parallel program, our synthesis algorithm introduces synchronization that transforms the program into a deterministic one. The main idea is to statically compute inter-thread ordering constraints that guarantee determinism and preserve program termination. Then, given the constraints and a set of synchronization primitives, the synthesizer produces a program that enforces the constraints using the provided synchronization primitives.

To handle realistic programs, our synthesis algorithm uses two abstractions: a thread-modular abstraction, and an abstraction for memory locations that can track array accesses. We have implemented our algorithm and successfully applied it to synthesize deterministic control for a number of programs inspired by those used in the high-performance computing community. For most programs, the synthesizer produced synchronization that is as good or better than the handcrafted synchronization inserted by the programmer.

Veselin Raychev, Martin Vechev, Eran Yahav

Witnessing Program Transformations

We study two closely related problems: (a) showing that a program transformation is correct and (b) propagating an invariant through a program transformation. The second problem is motivated by an application which utilizes program invariants to improve the quality of compiler optimizations. We show that both problems can be addressed by augmenting a transformation with an auxiliary witness generation procedure. For every application of the transformation, the witness generator constructs a relation which guarantees the correctness of that instance. We show that stuttering simulation is a sound and complete witness format. Completeness means that, under mild conditions, every correct transformation induces a stuttering simulation witness which is strong enough to prove that the transformation is correct. A witness is self-contained, in that its correctness is independent of the optimization procedure which generates it. Any invariant of a source program can be turned into an invariant of the target of a transformation by suitably composing it with its witness. Stuttering simulations readily compose, forming a single witness for a sequence of transformations. Witness generation is simpler than a formal proof of correctness, and it is comprehensive, unlike the heuristics used for translation validation. We define witnesses for a number of standard compiler optimizations; this exercise shows that witness generators can be implemented quite easily.

Kedar S. Namjoshi, Lenore D. Zuck

Formal Verification of a C Value Analysis Based on Abstract Interpretation

Static analyzers based on abstract interpretation are complex pieces of software implementing delicate algorithms. Even if static analysis techniques are well understood, their implementation on real languages is still error-prone.

This paper presents a formal verification using the Coq proof assistant: a formalization of a value analysis (based on abstract interpretation), and a soundness proof of the value analysis. The formalization relies on generic interfaces. The mechanized proof is facilitated by a translation validation of a Bourdoncle fixpoint iterator.

The work has been integrated into the CompCert verified C compiler. Our verified analysis directly operates over an intermediate language of the compiler that has the same expressiveness as C. The automatic extraction of our value analysis into OCaml yields a program with competitive results, as shown by experiments on a number of benchmarks and comparisons with the Frama-C tool.

Sandrine Blazy, Vincent Laporte, André Maroneze, David Pichardie

Efficient Generation of Correctness Certificates for the Abstract Domain of Polyhedra

Polyhedra form an established abstract domain for inferring runtime properties of programs using abstract interpretation. Computations on them need to be certified for the whole static analysis results to be trusted. In this work, we look at how far we can get down the road of a posteriori verification to lower the overhead of certifying the abstract domain of polyhedra. We demonstrate methods for making the cost of inclusion certificate generation negligible. From a performance point of view, our single-representation, constraints-based implementation is competitive with state-of-the-art implementations.

Alexis Fouilhe, David Monniaux, Michaël Périn

Static Provenance Verification for Message Passing Programs

Provenance information records the source and ownership history of an object. We study the problem of provenance tracking in concurrent programs, in which several principals execute concurrent processes and exchange messages over unbounded but unordered channels. The provenance of a message, roughly, is a function of the sequence of principals that have transmitted the message in the past. The provenance verification problem is to statically decide, given a message-passing program and a set of allowed provenances, whether the provenance of all messages in all possible program executions belongs to the allowed set.

We formalize the provenance verification problem abstractly in terms of well-structured provenance domains, and show a general decidability result for it. In particular, we show that if the provenance of a message is a sequence of principals who have sent the message, and a provenance query asks if the provenance lies in a regular set, the problem is decidable and EXPSPACE-complete.

While the theoretical complexity is high, we show an implementation of our technique that performs efficiently on a set of JavaScript examples tracking provenances in Firefox extensions. Our experiments show that many browser extensions store and transmit user information even when the user sets the browser to private mode.

Rupak Majumdar, Roland Meyer, Zilong Wang

Verification as Learning Geometric Concepts

We formalize the problem of program verification as a learning problem, showing that invariants in program verification can be regarded as geometric concepts in machine learning. Safety properties define bad states: states a program should not reach. Program verification explains why a program’s set of reachable states is disjoint from the set of bad states. In Hoare Logic, these explanations are predicates that form inductive assertions. Using samples of reachable and bad states and by applying well-known machine learning algorithms for classification, we are able to generate inductive assertions. By relaxing the search for an exact proof to classifiers, we obtain complexity-theoretic improvements. Further, we extend the learning algorithm to obtain a sound procedure that can generate proofs containing invariants that are arbitrary boolean combinations of polynomial inequalities. We have evaluated our approach on a number of challenging benchmarks and the results are promising.
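As a simple stand-in for the paper's learning algorithms, a plain perceptron on a hypothetical two-variable loop shows how a classifier separating sampled reachable states from sampled bad states doubles as a candidate assertion:

```python
# Sketch: learn a linear separator between sampled reachable states (+1)
# and bad states (-1); the resulting halfspace is a candidate invariant.
# The perceptron here is a stand-in for the classifiers used in the paper.
def perceptron(samples, epochs=500):
    # samples: list of ((x, y), label) with label +1 (reachable) / -1 (bad)
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        converged = True
        for (x, y), label in samples:
            if label * (w[0] * x + w[1] * y + b) <= 0:  # misclassified
                w[0] += label * x
                w[1] += label * y
                b += label
                converged = False
        if converged:
            break
    return w, b

# reachable states of "x = y = 0; while *: x += 1; y += 1" (points y == x)
# versus hypothetical bad states where y > x
reach = [((i, i), +1) for i in range(5)]
bad = [((i, i + 1), -1) for i in range(5)]
w, b = perceptron(reach + bad)
# candidate invariant: w[0]*x + w[1]*y + b > 0, roughly "y <= x"
```

The samples are linearly separable, so the perceptron converges; the learned halfspace excludes every sampled bad state and is then a candidate to check for inductiveness.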

Rahul Sharma, Saurabh Gupta, Bharath Hariharan, Alex Aiken, Aditya V. Nori

Interpolation-Based Verification of Floating-Point Programs with Abstract CDCL

One approach for SMT solvers to improve efficiency is to delegate reasoning to abstract domains. Solvers using abstract domains, however, do not support interpolation and cannot be used for interpolation-based verification. We extend Abstract Conflict Driven Clause Learning (ACDCL) solvers with proof generation and interpolation. Our results lead to the first interpolation procedure for floating-point logic and, subsequently, the first interpolation-based verifiers for programs with floating-point variables. We demonstrate the potential of this approach by verifying a number of programs which are challenging for current verification tools.

Martin Brain, Vijay D’Silva, Alberto Griggio, Leopold Haller, Daniel Kroening

Concise Analysis Using Implication Algebras for Task-Local Memory Optimisation

OpenMP is a pragma-based extension to C to support parallelism. The OpenMP standard recently added support for task-based parallelism but in a richer way than languages such as Cilk. Naïve implementations give each task its own stack for task-local memory, which is very inefficient.

We detail a program analysis for OpenMP to enable tasks to share stacks without synchronisation—either unconditionally or dependent on some cheap run-time condition which is very likely to hold in busy systems.

The analysis is based on a novel implication-algebra generalisation of logic programming which allows concise but easily readable encodings of the various constraints. The formalism enables us to show that the analysis has a unique solution and polynomial-time complexity.

We conclude with performance figures.

Leo White, Alan Mycroft

Automatic Verification of Erlang-Style Concurrency

This paper presents an approach to verify safety properties of Erlang-style, higher-order concurrent programs automatically. Inspired by Core Erlang, we introduce λActor, a prototypical functional language with pattern-matching algebraic data types, augmented with process creation and asynchronous message-passing primitives. We formalise an abstract model of λActor programs called Actor Communicating System (ACS), which has a natural interpretation as a vector addition system, for which some verification problems are decidable. We give a parametric abstract interpretation framework for λActor and use it to build a polytime computable, flow-based, abstract semantics of λActor programs, which we then use to bootstrap the ACS construction, thus deriving a more accurate abstract model of the input program.

We evaluate the method, which we implemented in the prototype Soter. We find that in practice our abstraction technique is accurate enough to verify an interesting range of safety properties. Though the ACS coverability problem is EXPSPACE-complete, Soter can analyse non-trivial programs in a matter of seconds.

Emanuele D’Osualdo, Jonathan Kochems, C. -H. Luke Ong

Contextual Locking for Dynamic Pushdown Networks

Contextual locking is a scheme for synchronizing between possibly recursive processes that was recently proposed by Chadha et al. Contextual locking allows for arbitrary usage of locks within the same procedure call, and Chadha et al. show that control-point reachability for two processes adhering to contextual locking is decidable in polynomial time. Here, we complement these results. We show that in the presence of contextual locking, control-point reachability becomes PSPACE-hard already when the number of processes is increased to three. On the other hand, we show that PSPACE is both necessary and sufficient for deciding control-point reachability of n processes for n > 2, and that this upper bound remains valid even if dynamic spawning of new processes is allowed. Furthermore, we consider the problem of regular reachability, i.e., whether a configuration within a given regular set can be reached. Here, we show that this problem is decidable for recursive processes with dynamic thread creation and contextual locking. Finally, we generalize this result to processes that additionally use a form of join operations.

Peter Lammich, Markus Müller-Olm, Helmut Seidl, Alexander Wenner

