
2015 | Book

Hardware and Software: Verification and Testing

11th International Haifa Verification Conference, HVC 2015, Haifa, Israel, November 17-19, 2015, Proceedings


About this book

This book constitutes the refereed proceedings of the 11th International Haifa Verification Conference, HVC 2015, held in Haifa, Israel, in November 2015.

The 17 revised full papers and 4 invited talks presented were carefully reviewed and selected from numerous submissions. The papers are organized in topical sections on hybrid systems; tools; verification of robotics; symbolic execution; model checking; timed systems; SAT solving; multi domain verification; and synthesis.

Table of contents

Frontmatter

Hybrid Systems

Frontmatter
XSpeed: Accelerating Reachability Analysis on Multi-core Processors
Abstract
We present XSpeed, a parallel state-space exploration algorithm for continuous systems with linear dynamics and nondeterministic inputs. The motivation for parallel algorithms is to exploit the computational power of multi-core processors to speed up performance. The parallelization is achieved on two fronts. First, we propose a parallel implementation of the support function algorithm by sampling functions in parallel. Second, we propose a parallel state-space exploration by slicing the time horizon and computing the reachable states in the time slices in parallel. The second method, however, can be applied only to a class of linear systems with invertible dynamics and fixed input. A GP-GPU implementation is also presented, following a lazy evaluation strategy on support functions. The parallel algorithms are implemented in the tool XSpeed. We evaluated the performance on two benchmarks, including a 28-dimensional helicopter model. Comparison with the sequential counterpart shows a maximum speed-up of almost 7\(\times \) on a 6-core, 12-thread Intel Xeon CPU E5-2420 processor. Our GP-GPU implementation shows a maximum speed-up of 12\(\times \) over the sequential implementation and 53\(\times \) over SpaceEx (LGG scenario), the state-of-the-art tool for reachability analysis of linear hybrid systems. Experiments illustrate that our parallel algorithm with time slicing not only speeds up performance but also improves precision.
Rajarshi Ray, Amit Gurung, Binayak Das, Ezio Bartocci, Sergiy Bogomolov, Radu Grosu
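The first parallelization front described above rests on the fact that support-function evaluations in different template directions are independent of one another. A minimal sketch of that idea, assuming a vertex-represented polytope and a thread pool (illustrative only, not the paper's implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def support(vertices, direction):
    """Support function of a polytope given by its vertices:
    the maximum dot product of any vertex with the direction."""
    return max(sum(v * d for v, d in zip(vertex, direction))
               for vertex in vertices)

def support_parallel(vertices, directions, workers=4):
    """Evaluate the support function in many template directions
    at once -- the evaluations are independent, so they parallelize."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda d: support(vertices, d), directions))

# Unit square, queried in the four axis-aligned directions.
square = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
dirs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
print(support_parallel(square, dirs))  # [1.0, 0.0, 1.0, 0.0]
```

Each direction yields one facet of an outer polyhedral approximation of the reachable set, which is why sampling more directions in parallel can also improve precision.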
Abstraction-Based Parameter Synthesis for Multiaffine Systems
Abstract
Multiaffine hybrid automata (MHA) represent a powerful formalism to model complex dynamical systems. This formalism is particularly suited for the representation of biological systems which often exhibit highly non-linear behavior. In this paper, we consider the problem of parameter identification for MHA. We present an abstraction of MHA based on linear hybrid automata, which can be analyzed by the SpaceEx model checker. This abstraction enables a precise handling of time-dependent properties. We demonstrate the potential of our approach on a model of a genetic regulatory network and a myocyte model.
Sergiy Bogomolov, Christian Schilling, Ezio Bartocci, Gregory Batt, Hui Kong, Radu Grosu

Tools

Frontmatter
Combining Static and Dynamic Analyses for Vulnerability Detection: Illustration on Heartbleed
Abstract
Security of modern information and communication systems has become a major concern. This tool paper presents Flinder-SCA, an original combined tool for vulnerability detection, implemented on top of Frama-C, a platform for collaborative verification of C programs, and Search Lab’s Flinder testing tool. Flinder-SCA includes three steps. First, abstract interpretation and taint analysis are used to detect potential vulnerabilities (alarms), then program slicing is applied to reduce the initial program, and finally a testing step tries to confirm detected alarms by fuzzing on the reduced program. We describe the proposed approach and the tool, illustrate its application for the recent OpenSSL/HeartBeat Heartbleed vulnerability, and discuss the benefits and industrial application perspectives of the proposed verification approach.
Balázs Kiss, Nikolai Kosmatov, Dillon Pariente, Armand Puccetti
The Verification Cockpit – Creating the Dream Playground for Data Analytics over the Verification Process
Abstract
The Verification Cockpit (VC) is a consolidated platform for planning, tracking, analysis, and optimization of large scale verification projects. Its prime role is to provide decision support from planning to on-going operations of the verification process. The heart of the VC is a holistic centralized data model for the arsenal of verification tools used in modern verification processes. This enables connection of the verification tools and provides rich reporting capabilities as well as hooks to advanced data analytics engines. This paper describes the concept of the Verification Cockpit, its architecture, and implementation. We also include examples of its use in the verification of a high-end processor, while highlighting the capabilities of the platform and the benefits of its use.
Moab Arar, Michael Behm, Odellia Boni, Raviv Gal, Alex Goldin, Maxim Ilyaev, Einat Kermany, John Reysa, Bilal Saleh, Klaus-Dieter Schubert, Gil Shurek, Avi Ziv

Verification of Robotics

Frontmatter
Coverage-Driven Verification — An Approach to Verify Code for Robots that Directly Interact with Humans
Abstract
Collaborative robots could transform several industries, such as manufacturing and healthcare, but they present a significant challenge to verification. The complex nature of their working environment necessitates testing in realistic detail under a broad range of circumstances. We propose the use of Coverage-Driven Verification (CDV) to meet this challenge. By automating the simulation-based testing process as far as possible, CDV provides an efficient route to coverage closure. We discuss the need, practical considerations, and potential benefits of transferring this approach from microelectronic design verification to the field of human-robot interaction. We demonstrate the validity and feasibility of the proposed approach by constructing a custom CDV testbench and applying it to the verification of an object handover task.
Dejanira Araiza-Illan, David Western, Anthony Pipe, Kerstin Eder
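The automated test-until-closure loop at the heart of CDV can be sketched in a few lines. This is a hypothetical toy, not the authors' testbench: the handover coverage bins and the `stimulate` function are invented for illustration.

```python
import random

def cdv_loop(coverage_bins, stimulate, max_tests=10000, seed=0):
    """Minimal coverage-driven loop: run randomized tests, accumulate
    the coverage bins each test exercises, and stop at closure."""
    rng = random.Random(seed)
    hit = set()
    tests = 0
    while hit != coverage_bins and tests < max_tests:
        hit |= stimulate(rng)   # bins exercised by one randomized test
        tests += 1
    return tests, len(hit) / len(coverage_bins)

# Hypothetical handover task with four abstract scenario bins.
bins = {'gaze', 'reach', 'grasp', 'release'}
def stimulate(rng):
    return {rng.choice(sorted(bins))}  # each test hits one random bin

tests, coverage = cdv_loop(bins, stimulate)
assert coverage == 1.0
```

In a real testbench the random generation would be constrained and biased toward uncovered bins, which is what makes the route to closure efficient.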

Symbolic Execution

Frontmatter
PANDA: Simultaneous Predicate Abstraction and Concrete Execution
Abstract
We present a new verification algorithm, Panda, that combines predicate abstraction with concrete execution and dynamic analysis. Both the concrete and abstract state spaces of an input program are traversed simultaneously, guiding each other through on-the-fly mutual interaction. Panda performs dynamic on-the-fly pruning of those branches in the abstract state space that diverge from the corresponding concrete trace. If the abstract branch is actually feasible for a different concrete trace, Panda discovers the covering trace by exploring different data choices. Candidate spurious errors may also arise, for example, due to overapproximation of the points-to relation between heap objects. We eliminate all the spurious errors using the well-known approach based on lazy abstraction refinement with interpolants. Results of experiments with our prototype implementation show that Panda can successfully verify programs that feature loops, recursion, and manipulation of objects and arrays. It has competitive performance and does not report any spurious error for our benchmarks.
Jakub Daniel, Pavel Parízek
TSO to SC via Symbolic Execution
Abstract
Modern multi-core processors equipped with weak memory models like TSO exhibit executions which – due to store buffers – seemingly reorder program operations. Thus, they deviate from the commonly assumed sequential consistency (SC) semantics. Analysis techniques for concurrent programs consequently need to take reorderings into account. For TSO, this is often accomplished by explicitly modelling store buffers.
In this paper, we present an approach for reducing TSO-verification of concurrent programs (with fenced or write-free loops) to SC-verification, thereby being able to reuse standard verification tools. To this end, we transform a given program P into a new program \(P'\) whose SC-semantics is (bisimulation-) equivalent to the TSO-semantics of P. The transformation proceeds via a symbolic execution of P, however, only with respect to store buffer contents. Out of the thus obtained abstraction of P, we generate the SC program \(P'\) which can then be the target of standard analysis tools.
Heike Wehrheim, Oleg Travkin
Parallel Symbolic Execution: Merging In-Flight Requests
Abstract
The strength of symbolic execution is the systematic analysis and validation of all possible control flow paths of a program and their respective properties, which is done by use of a solver component. Thus, it can be used for program testing in many different domains, e.g., test generation, fault discovery, information leakage detection, or energy consumption analysis. But major challenges remain, notably the huge (up to infinite) number of possible paths and the high computation costs incurred by the solver in checking the satisfiability of the constraints imposed by the paths. To tackle these challenges, researchers proposed the parallelization of symbolic execution by dividing the state space and handling the parts independently. Although this approach scales out well, we can further improve it by proposing a thread-based parallelized approach. It allows us to reuse shared resources like caches more efficiently – a vital part of reducing the solving costs. More importantly, this architecture enables us to use a new technique, which merges parallel incoming solver requests, leveraging the incremental solving capabilities provided by modern solvers. Our results show a reduction of the solver time by up to 50 % compared to the multi-threaded execution.
Martin Nowack, Katja Tietze, Christof Fetzer

Model Checking

Frontmatter
Limited Mobility, Eventual Stability
Abstract
The IPv6 Mobility protocol, an archetypal system for supporting communication amongst mobile devices, presents challenging verification problems. While model-checking techniques have previously been used to illustrate subtle oversights and flaws in the informal specifications, the more difficult question — whether it is possible to verify the correctness of the core architecture by checking properties on a small model — has not been adequately examined. In this paper we present a novel technique combining ideas from verification of parameterised systems, abstraction, model-checking of temporal logic properties, and simulation relations found in process algebras. The technique relies on the fact that the system can be considered to eventually stabilise to a form more amenable to techniques used for model-checking parameterised systems, allowing the checking of arbitrary LTL properties.
Lenore D. Zuck, Sanjiva Prasad
A New Refinement Strategy for CEGAR-Based Industrial Model Checking
Abstract
This paper presents a novel refinement strategy in the setting of counterexample-guided abstraction refinement (CEGAR)-based model checking. More specifically, the approach shown builds on lazy abstraction in the context of predicate abstraction. While the concept of interpolants is typically used for refinement, this paper employs unsatisfiability cores together with weakest preconditions. The new refinement technique is especially applicable in the setting where interpolants are hard to compute, such as for McCarthy’s theory of arrays or for the theory of fixed-width bit vectors. It is implemented within a model-checking tool developed for the verification of industrial-critical systems and outperforms current refinement strategies on several examples.
Martin Leucker, Grigory Markin, Martin R. Neuhäußer

Timed Systems

Frontmatter
Quasi-equal Clock Reduction: Eliminating Assumptions on Networks
Abstract
Quasi-equal clock reduction for networks of timed automata replaces clocks in equivalence classes by representative clocks. An existing approach, which reduces quasi-equal clocks without constraining the properties of networks it supports, yields significant reductions of the overall verification time of properties. However, this approach requires strong assumptions on networks in order to soundly apply the reduction of clocks. In this work we propose a transformation which requires no assumptions on networks and does not constrain the supported properties of networks. We demonstrate that the cost of verifying properties is much lower in transformed networks than in their original counterparts with quasi-equal clocks.
Christian Herrera, Bernd Westphal
Resource-Parameterized Timing Analysis of Real-Time Systems
Abstract
Cyber-Physical Systems (CPS) are subject to platform-given constraints on resources such as CPU, memory, and bus when executing their functionalities. This causes the behavior of a verified application to deviate from its intended timing behavior when the application is integrated on a specific platform. For the same reason, a configuration of platforms cannot be independent from applications in most cases. This paper proposes a new analysis framework for real-time systems in which an application and a platform can be analyzed in a fully independent way, such that not only the application but also the platform, once verified, can be exploited by various applications. The dependent behaviors of application and platform are also analyzed by exploiting their individual models transformed from their independent models. To this end, we provide a highly configurable platform model that can be parameterized by various resource configurations. For the analysis of application and platform models, we use two model checking techniques: the symbolic and statistical model checking techniques of Uppaal. Our framework is demonstrated by a case study in which a turn indicator system is analyzed with respect to various platform resource constraints.
Jin Hyun Kim, Axel Legay, Kim G. Larsen, Marius Mikučionis, Brian Nielsen

SAT Solving

Frontmatter
SAT-Based Explicit LTL Reasoning
Abstract
We present here a new explicit reasoning framework for linear temporal logic (LTL), which is built on top of propositional satisfiability (SAT) solving. As a proof-of-concept of this framework, we describe a new LTL satisfiability algorithm. We implemented the algorithm in a tool, Aalta_v2.0, which is built on top of the Minisat SAT solver. We tested the effectiveness of this approach by demonstrating that Aalta_v2.0 significantly outperforms all existing LTL satisfiability solvers.
Jianwen Li, Shufang Zhu, Geguang Pu, Moshe Y. Vardi
Understanding VSIDS Branching Heuristics in Conflict-Driven Clause-Learning SAT Solvers
Abstract
Conflict-Driven Clause-Learning (CDCL) SAT solvers crucially depend on the Variable State Independent Decaying Sum (VSIDS) branching heuristic for their performance. Although VSIDS was proposed nearly fifteen years ago, and many other branching heuristics for SAT solving have since been proposed, VSIDS remains one of the most effective branching heuristics. Despite its widespread use and repeated attempts to understand it, this additive bumping and multiplicative decay branching heuristic has remained an enigma.
In this paper, we advance our understanding of VSIDS by answering the following key questions. The first question we pose is “what is special about the class of variables that VSIDS chooses to additively bump?” In answering this question, we show that VSIDS overwhelmingly picks, bumps, and learns bridge variables, defined as the variables that connect distinct communities in the community structure of SAT instances. This is surprising, since VSIDS was invented more than a decade before the link between community structure and SAT solver performance was discovered. Additionally, we show that VSIDS, viewed as a ranking function, correlates strongly with temporal graph centrality measures. Putting these two findings together, we conclude that VSIDS picks high-centrality bridge variables. The second question we pose is “what role does multiplicative decay play in making VSIDS so effective?” We show that the multiplicative decay behaves like an exponential moving average (EMA) that favors variables that persistently occur in conflicts (the signal) over variables that occur intermittently (the noise). The third question we pose is whether VSIDS is temporally and spatially focused. We show that VSIDS disproportionately picks variables from a few communities, unlike, say, the random branching heuristic. We put these findings together to invent a new adaptive VSIDS branching heuristic that solves more instances than one of the best-known VSIDS variants over the SAT Competition 2013 benchmarks.
Jia Hui Liang, Vijay Ganesh, Ed Zulkoski, Atulan Zaman, Krzysztof Czarnecki
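The bump-and-decay mechanism the paper analyzes can be illustrated with a toy activity table. This sketch decays all activities after every conflict; production solvers achieve the same effect more cheaply by growing the bump increment instead, but the EMA-like behavior — recent conflicts dominate, old ones fade exponentially — is the same.

```python
def vsids_scores(conflict_sequence, variables, bump=1.0, decay=0.95):
    """Toy VSIDS activity: additively bump the variables involved in
    each conflict, then multiplicatively decay every activity. Older
    conflicts thus contribute exponentially less, so persistently
    conflicting variables (the signal) outrank intermittent ones."""
    activity = {v: 0.0 for v in variables}
    for conflict in conflict_sequence:
        for v in conflict:
            activity[v] += bump          # additive bump
        for v in activity:
            activity[v] *= decay         # multiplicative decay
    return activity

# 'a' occurs in every conflict; 'b' only in one early conflict.
conflicts = [{'a', 'b'}, {'a'}, {'a'}, {'a'}]
scores = vsids_scores(conflicts, ['a', 'b'])
assert scores['a'] > scores['b']
```

The branching heuristic then simply picks the unassigned variable with the highest activity, so in this run it would branch on `a`.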

Multi-Domain Verification

Frontmatter
Multi-Domain Verification of Power, Clock and Reset Domains
Abstract
Multi-Domain Verification (MDV) is a comprehensive approach that specializes in verifying design logic that straddles heterogeneous domains. An integrated circuit design can be conceptually decomposed into multiple types of partitions for domain analysis. For example, a modern design typically has a power domain partition, a clock domain partition, and a reset domain partition. Historically, domain analysis has been confined to verification within the same type of domain (homogeneous domains): for example, power domain verification and clock domain crossing verification are performed separately. As designs become highly sophisticated and domains grow interdependent, this practice is no longer sufficient. Interactions between different types of domains (heterogeneous domains) are exceptionally complex and critical to the success of the device. Hence, a new methodology is required to verify them effectively. Multi-domain verification uses power domain information from the Unified Power Format (UPF) specifications, clock domain information from the clock tree models, and reset domain information from the reset tree models. It employs specialized domain analysis and methodologies to examine the complex interactions of logic that straddles domain boundaries—among both homogeneous domains and heterogeneous domains. Multi-domain verification is an efficient way to ensure that all inter-domain issues are explored and verified with complete confidence.
Ping Yeung, Eugene Mandel

Synthesis

Frontmatter
FudgeFactor: Syntax-Guided Synthesis for Accurate RTL Error Localization and Correction
Abstract
Functional verification occupies a significant amount of the digital circuit design cycle. In this paper, we present a novel approach to improve circuit debugging which not only localizes errors with high confidence, but can also provide semantically-meaningful source code corrections. Our method, which we call FudgeFactor, starts with a buggy design, at least one failing and several correct test vectors, and a list of suspect bug locations. We obtain the suspect locations from a state-of-the-art debugging tool, and this list includes a significant number of false positives. Using this list and a library of rules empirically characterizing typical source-code mistakes, we instrument the buggy design to allow each potential error location to either be left unchanged, or replaced with a set of possible corrections. FudgeFactor then combines the instrumented design with the test vectors and solves a 2QBF-SAT problem to find the minimum number of source-level changes from the original code which correct the bug. Our 13 benchmarks demonstrate that our method is able to correct a sizable portion of realistic bugs within a reasonable computational time. With the aid of available golden reference designs, we show that those corrections are, at least on these benchmarks, always valid and non-trivial fixes. We believe that our technique significantly improves over other debugging tools in two respects: when we succeed, we obtain a much more precise bug localization with no false positives and little or no ambiguity. Additionally, we offer bug corrections that are inherently meaningful to the designers and enable designers to quickly recognize and understand the root cause of the bug with a high level of confidence.
Andrew Becker, Djordje Maksimovic, David Novo, Mohsen Ewaida, Andreas Veneris, Barbara Jobstmann, Paolo Ienne
On Switching Aware Synthesis for Combinational Circuits
Abstract
We propose a synthesis algorithm for combinational circuits which optimizes the expected number of gate switchings induced by typical sequences of input vectors. Our algorithm, which is based on simple observations concerning AND gates, performs quite well on sequences produced by the same probabilistic models used to generate the training sequences.
Jan Lanik, Oded Maler
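The "simple observations concerning AND gates" can be grounded in the textbook zero-delay switching model: an AND gate's output is 1 only when both inputs are, so its signal probability — and hence its expected toggling — drops sharply. This sketch uses the standard independence assumptions and is illustrative, not necessarily the model of the paper.

```python
def and_output_prob(p_a, p_b):
    """Probability the AND output is 1, assuming independent inputs."""
    return p_a * p_b

def expected_switching(p_one):
    """Expected toggles per cycle for a signal that is 1 with
    probability p, assuming successive cycles are independent:
    a toggle is a 0-to-1 or a 1-to-0 transition, each with
    probability p * (1 - p)."""
    return 2.0 * p_one * (1.0 - p_one)

# Inputs that are 1 half the time each switch 2*0.5*0.5 = 0.5 times
# per cycle, but the AND output is 1 only a quarter of the time and
# switches 2*0.25*0.75 = 0.375 times per cycle.
p_out = and_output_prob(0.5, 0.5)
print(expected_switching(p_out))  # 0.375
```

A switching-aware synthesizer can exploit this by restructuring AND trees so that low-probability signals gate high-activity ones early, reducing the expected number of toggles downstream.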
Backmatter
Metadata
Titel
Hardware and Software: Verification and Testing
Edited by
Nir Piterman
Copyright Year
2015
Electronic ISBN
978-3-319-26287-1
Print ISBN
978-3-319-26286-4
DOI
https://doi.org/10.1007/978-3-319-26287-1
