
2013 | Book

Hardware and Software: Verification and Testing

9th International Haifa Verification Conference, HVC 2013, Haifa, Israel, November 5-7, 2013, Proceedings

Edited by: Valeria Bertacco, Axel Legay

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science

About this book

This book constitutes the refereed proceedings of the 9th International Haifa Verification Conference, HVC 2013, held in Haifa, Israel, in November 2013. The 24 revised full papers presented were carefully reviewed and selected from 49 submissions. The papers are organized in topical sections on SAT and SMT-based verification, software testing, supporting dynamic verification, specification and coverage, abstraction, and model representation.

Table of Contents

Frontmatter

Session 1: SAT and SMT-Based Verification

Backbones for Equality
Abstract
This paper generalizes the notion of the backbone of a CNF formula to also capture equations between literals. Each such equation can be applied to remove a variable from the original formula, thus simplifying the formula without changing its satisfiability or the number of its satisfying assignments. We prove that for a formula with n variables, the generalized backbone is computed with at most n + 1 satisfiable calls and exactly one unsatisfiable call to the SAT solver. We illustrate the integration of generalized backbone computation to facilitate the encoding of finite domain constraints to SAT. In this context, generalized backbones are computed for small groups of constraints and then propagated to simplify the entire constraint model. A preliminary experimental evaluation is provided.
Michael Codish, Yoav Fekete, Amit Metodi
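To make the generalization concrete, here is a deliberately naive sketch of backbone and literal-equality detection using the PySAT library. It issues many more solver calls than the paper's algorithm, which needs at most n + 1 satisfiable calls and one unsatisfiable call; the function name and the tiny example formula are illustrative.

```python
# Naive sketch, NOT the paper's (n+1)-call algorithm: detect backbone
# literals and literal equalities with repeated SAT calls (PySAT).
from pysat.formula import CNF
from pysat.solvers import Glucose3

def backbone_and_equalities(cnf):
    with Glucose3(bootstrap_with=cnf.clauses) as s:
        if not s.solve():
            return None                        # formula is unsatisfiable
        model = s.get_model()
        # a literal is backbone iff the formula is UNSAT under its negation
        backbone = [lit for lit in model
                    if not s.solve(assumptions=[-lit])]
        free = [l for l in model if l not in backbone]
        equalities = []
        for i, a in enumerate(free):
            for b in free[i + 1:]:
                # a <-> b holds in every model iff both XOR cases are UNSAT
                if not s.solve(assumptions=[a, -b]) and \
                   not s.solve(assumptions=[-a, b]):
                    equalities.append((a, b))
        return backbone, equalities

# (x1 <-> x2): no backbone, but the equality lets one variable be removed
print(backbone_and_equalities(CNF(from_clauses=[[-1, 2], [1, -2]])))
# ([], [(1, 2)])
```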
PASS: String Solving with Parameterized Array and Interval Automaton
Abstract
The problem of solving string constraints together with numeric constraints has received increasing interest recently. Existing methods use either bit-vectors or automata (or their combination) to model strings, and reduce string constraints to bit-vector constraints or automaton operations, which are then solved in the respective domain. Unfortunately, they often fail to achieve a good balance between efficiency, accuracy, and comprehensiveness. In this paper we illustrate a new technique that uses parameterized arrays as the main data structure to model strings, and converts string constraints into quantified expressions that are solved through quantifier elimination. We present an efficient and sound quantifier elimination algorithm. In addition, we use an automaton model to handle regular expressions and reason about string values faster. Our method does not need to enumerate string lengths (as bit-vector based methods do), or concrete string values (as automaton based methods do). Hence, it can achieve much better accuracy and efficiency. In particular, it can identify unsatisfiable cases quickly. Our solver (named PASS) supports most of the popular string operations, including string comparisons, string-numeric conversions, and regular expressions. Experimental results demonstrate the advantages of our method.
Guodong Li, Indradeep Ghosh
Increasing Confidence in Liveness Model Checking Results with Proofs
Abstract
Model checking is an established technique to get confidence in the correctness of a system when testing is not sufficient. Validating safety-critical systems is one of the use cases for model checking. As model checkers themselves are quite complicated pieces of software, there is room for doubt about the correctness of the model checking result. The model checker might contain programming errors that influence the result of the analysis.
When a model checker finds a counter-example, it is straightforward to simulate the model and check that the counter-example is valid. Some model checking algorithms are also capable of providing proofs of validity. In this paper we describe a way to get proofs of correctness for liveness properties. This works by transforming the liveness property into a safety property using a reduction, and then getting a proof for that safety property. This changes the need to trust the model checker into the need to trust our reduction and a proof checker, which are much simpler programs than model checkers. Our method is intended to be usable in practice, and we provide experimental data to support this. We only handle properties that hold: counter-examples should be detected with other methods.
Tuomas Kuismin, Keijo Heljanko
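The reduction behind this approach can be pictured at the explicit-state level: a run violating a property of the form "eventually always good" exists iff some reachable bad state lies on a cycle, which is a pure reachability (safety-style) question. The sketch below is a toy under that simplification; the paper works on symbolic models and additionally produces checkable proofs.

```python
# Toy explicit-state rendering of the liveness-to-safety idea.
# successors: state -> iterable of states; bad: state -> bool.
def has_lasso(initial, successors, bad):
    # 1. collect all reachable states
    reachable, stack = set(), [initial]
    while stack:
        s = stack.pop()
        if s in reachable:
            continue
        reachable.add(s)
        stack.extend(successors(s))
    # 2. safety question: can some bad reachable state reach itself?
    def reaches(src, target):
        seen, stack = set(), list(successors(src))
        while stack:
            s = stack.pop()
            if s == target:
                return True
            if s in seen:
                continue
            seen.add(s)
            stack.extend(successors(s))
        return False
    return any(bad(s) and reaches(s, s) for s in reachable)

# 1 -> 2 -> 3 -> 2 with state 2 "bad": a bad state on a cycle => violation
succ = {1: [2], 2: [3], 3: [2]}
print(has_lasso(1, lambda s: succ[s], lambda s: s == 2))   # True
```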
Speeding Up the Safety Verification of Programmable Logic Controller Code
Abstract
Programmable logic controllers (PLCs) are widely used in industries ranging from assembly lines, power plants, and chemical processes to mining and rail automation. Such systems usually exhibit high safety requirements, and downtimes due to software errors entail intolerably high economic costs. Hence, their control programs are particularly suited for applying formal methods; in particular, bounded model checking (BMC) techniques based on satisfiability modulo theories promise to be highly beneficial in this domain.
In this paper, we investigate adaptations and extensions of property-dependent static analyses which operate on a novel form of intermediate code and which are tailored specifically to model checking PLC programs. In our experiments, our program transformations drastically reduce the size of the formulae for BMC and speed up verification times by orders of magnitude.
Tim Lange, Martin R. Neuhäußer, Thomas Noll
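A minimal illustration of the BMC side, using the Z3 SMT solver; the toy scan-cycle program and property below are assumptions for illustration, not the paper's intermediate code or benchmarks.

```python
# Minimal BMC sketch with Z3: unroll a toy PLC scan cycle k times and
# ask for a property violation; UNSAT at every depth means "no bug up to k".
from z3 import Int, Solver, And, Or, sat

def bmc(k):
    s = Solver()
    c = [Int(f"c_{i}") for i in range(k + 1)]     # counter state per cycle
    s.add(c[0] == 0)                              # initial state
    for i in range(k):
        # one scan cycle: count up, wrap to 0 once 10 is reached
        s.add(Or(And(c[i] < 10, c[i + 1] == c[i] + 1),
                 And(c[i] >= 10, c[i + 1] == 0)))
    s.add(Or([c[i] > 10 for i in range(k + 1)]))  # negated safety property
    return s.check() == sat                       # True = counter-example

print(any(bmc(k) for k in range(1, 15)))          # False: property holds
```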

Session 2: Software Testing I

Modeling Firmware as Service Functions and Its Application to Test Generation
Abstract
The term firmware refers to software that is tied to a specific hardware platform, e.g., low-level drivers that physically interface with the peripherals. More recently, this has grown to include software that manages critical hardware platform functions such as power management. This growing body of firmware needs to be shipped with the hardware and shares many of the same critical design concerns as the hardware. The two that we address in this paper are: co-design with the other system components, and validation of the firmware interactions with the connected hardware modules. To this end we introduce a specific Service-Function Transaction-Level Model (TLM) for modeling the firmware and interacting hardware components. A service function provides a service in response to a specific trigger, much like an interrupt-service routine responding to an interrupt. While TLM has been used in the past for HW-SW codesign, we show how the particular structure of the proposed service-function-based model is useful in the context of firmware design. Specifically, we show its application in automatic test generation. Recently, concolic testing has emerged as an automated technique for test generation for single-threaded software. This technique cannot be used directly for firmware, which, by definition, runs in parallel with the interacting hardware modules. We show how the service function model proposed here can be used to analyze these interactions and how single-threaded concolic testing can still be used for an important class of these interaction patterns. The model and the test generation are illustrated through a non-trivial case study of the open-source Rockbox MP3 player.
Sunha Ahn, Sharad Malik
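The service-function abstraction itself is easy to picture. The sketch below is an assumed, heavily simplified rendering in plain Python; the paper builds on SystemC-style transaction-level models, and all names here are invented.

```python
# Illustrative sketch: each firmware service reacts to exactly one
# hardware trigger, much like an interrupt-service routine.
class ServiceFunction:
    def __init__(self, trigger, handler):
        self.trigger, self.handler = trigger, handler

class Firmware:
    def __init__(self):
        self.services = {}

    def register(self, trigger, handler):
        self.services[trigger] = ServiceFunction(trigger, handler)

    def dispatch(self, event, payload):
        # hardware raises an event; the matching service provides the service
        svc = self.services.get(event)
        return svc.handler(payload) if svc else None

fw = Firmware()
fw.register("DMA_DONE", lambda buf: f"processed {len(buf)} bytes")
print(fw.dispatch("DMA_DONE", b"\x00" * 64))   # processed 64 bytes
```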
Symbolic Model-Based Testing for Industrial Automation Software
Abstract
In industrial automation, software controls systems whose failure can be critical and expensive. Testing this software is crucial, but it has so far been done manually, an expensive and not very thorough method. Model-based testing is an emerging concept in computer science for automatically testing a real implementation. It uses a formal specification describing the system behaviour; this specification is the blueprint against which an implementation is tested. This paper presents how to use model-based testing in industrial automation. In detail, it shows how known concepts such as sequential function charts, used in industrial automation to describe a system, can be translated to the format required for model-based testing, including the automatic derivation of test cases and their execution. A concrete case study illustrates the strength of this approach.
Sabrina von Styp, Liyong Yu

Session 3: Software Testing II

Online Testing of LTL Properties for Java Code
Abstract
LTL specifications are commonly used in runtime verification to describe requirements on the system behavior. Efficient techniques derive, from LTL specifications, monitors that can check whether system executions respect these properties. In this paper we present an online testing approach based on LTL properties of Java programs. We present an algorithm able to derive and execute test cases from monitors for LTL specifications. Our technique actively tests a Java class, avoids false failures, and is able to check the correctness of outputs even in the presence of nondeterminism. We devise several coverage criteria and strategies for visiting the monitors, providing different trade-offs in terms of test size, testing time, and fault detection capability.
Paolo Arcaini, Angelo Gargantini, Elvinia Riccobene
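To give a flavor of the monitor-as-oracle idea, here is a hand-rolled monitor for one fixed property, G(open -> F close) over finite test runs; the paper derives such monitors from arbitrary LTL specifications automatically, so this is only an assumed illustration.

```python
# A hand-written monitor used as a test oracle for G(open -> F close).
class GOpenFCloseMonitor:
    def __init__(self):
        self.pending = 0          # opens not yet matched by a close

    def step(self, event):
        if event == "open":
            self.pending += 1
        elif event == "close" and self.pending:
            self.pending -= 1

    def verdict(self):
        # at the end of the test: still-pending opens falsify "F close"
        return "pass" if self.pending == 0 else "fail"

mon = GOpenFCloseMonitor()
for ev in ["open", "write", "close"]:   # events observed from the SUT
    mon.step(ev)
print(mon.verdict())                    # pass
```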
Modbat: A Model-Based API Tester for Event-Driven Systems
Abstract
Model-based testing derives test executions from an abstract model that describes the system behavior. However, existing approaches are not tailored to event-driven or input/output-driven systems. In particular, there is a need to support non-blocking I/O operations and operations that throw exceptions when communication is disrupted.
Our new tool “Modbat” is specialized for testing systems where these issues are common. Modbat uses extended finite-state machines to model system behavior. Unlike most existing tools, Modbat offers a domain-specific language that supports state machines and exceptions as first-class constructs. Our model notation also handles non-determinism in the system under test, and supports alternative continuations of test cases depending on the outcome of non-deterministic operations.
These features allow us to model a number of interesting libraries succinctly. Our experiments show the flexibility of Modbat and how language support for model features benefits their correct use.
Cyrille Valentin Artho, Armin Biere, Masami Hagiya, Eric Platon, Martina Seidl, Yoshinori Tanabe, Mitsuharu Yamamoto
Predictive Taint Analysis for Extended Testing of Parallel Executions
Abstract
Dynamic information flow analysis is highly useful in several contexts. Its adaptation to parallel executions of multi-threaded programs has to overcome the non-deterministic serialization of memory accesses. We propose an offline, sliding-window-based prediction algorithm for taint analysis. It infers explicit taint propagations that could have occurred under plausible serializations of an observed execution.
Emmanuel Sifakis, Laurent Mounier
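The prediction idea can be mimicked in a few lines. The following is an over-approximating toy with an invented event format; the actual algorithm analyzes real memory accesses offline, and taint here only accumulates (no kill on overwrite), a deliberate simplification.

```python
# Toy sliding-window prediction: writes from different threads may be
# reordered within the window, as long as each thread's own order is kept.
from itertools import permutations

def predicted_taint(trace, sources, window=3):
    """trace: (thread, dst, src) copy events; sources: initially tainted."""
    tainted = set(sources)
    for i in range(len(trace)):
        win = list(enumerate(trace[i:i + window]))
        for order in permutations(win):
            pos = {idx: k for k, (idx, _) in enumerate(order)}
            # discard orders that violate some thread's program order
            if any(pos[a] > pos[b]
                   for a, (ta, _, _) in win for b, (tb, _, _) in win
                   if ta == tb and a < b):
                continue
            t = set(tainted)
            for _, (_, dst, src) in order:
                if src in t:
                    t.add(dst)
            tainted |= t
    return tainted

trace = [("T1", "x", "zero"), ("T2", "y", "x"), ("T1", "x", "secret")]
# the observed order never taints y, but a plausible reordering does
print(predicted_taint(trace, {"secret"}))   # {'secret', 'x', 'y'}
```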
Continuous Integration for Web-Based Software Infrastructures: Lessons Learned on the webinos Project
Abstract
Testing web-based software infrastructures is challenging. The need to interact with different services running on different devices, with different expectations for security and privacy, contributes not only to the complexity of the infrastructure, but also to the approaches necessary to test it. Moreover, as large-scale systems, such infrastructures may be developed by distributed teams simultaneously making changes to APIs and critical components that implement them. In this paper, we describe our experiences testing one such infrastructure – the webinos software platform – and the lessons learned tackling the challenges faced. While not all of these challenges could be fully overcome, this paper explores the techniques that worked most effectively and makes recommendations for developers and teams in similar situations. In particular, our experiences with continuous integration and automated testing processes are described and analysed.
Tao Su, John Lyle, Andrea Atzeni, Shamal Faily, Habib Virji, Christos Ntanos, Christos Botsikas

Session 4: Supporting Dynamic Verification

SLAM: SLice And Merge - Effective Test Generation for Large Systems
Abstract
As hardware systems continue to grow exponentially, existing functional verification methods are lagging behind, consuming a growing amount of manual effort and simulation time. In response to this inefficiency gap, we developed SLAM, a novel method for test case generation for large systems. Our verification solution combines several scenarios to run in parallel, while preserving each one intact. This is done by automatically and randomly slicing the system model into sub-systems termed slices, and assigning a different scenario to each slice. SLAM increases simulation efficiency by exercising the different system components simultaneously in varied scenarios. It reduces manual effort of test preparation by allowing reuse and mix of test scenarios. We show how to integrate SLAM into the verification cycle to save simulation time and increase coverage. We present real-life results from the use of our solution in the verification process of the latest IBM System p server.
Tali Rabetti, Ronny Morad, Alex Goryachev, Wisam Kadry, Richard D. Peterson
Improving Post-silicon Validation Efficiency by Using Pre-generated Data
Abstract
Post-silicon functional validation poses unique challenges that must be overcome by bring-up tools. One such major challenge is the requirement to reduce overhead associated with the testing procedures, thereby ensuring that the expensive silicon platform is utilized to its utmost extent. Another crucial requirement is to conduct high-quality validation that guarantees the design is bug-free prior to its shipment to customers. Our work addresses these issues in the realm of software-based self-tests.
We propose a novel solution that satisfies these two requirements. The solution calls for shifting the preparation of complex data from runtime to an offline phase that takes place before the software test is compiled and loaded onto the silicon platform. This data is then used in varied ways by the test case to ensure high testing quality while maintaining silicon utilization.
We demonstrate the applicability of our method in the context of bare-metal functional exercisers. An exerciser is a unique type of self-test that, once loaded onto the system, continuously generates test-cases, executes them, and checks their results. To ensure high silicon utilization, the exerciser’s software must be kept lightweight and simple. This requirement contradicts the demand for high-quality validation, as the latter calls for complex test generation, which in turn implies extensive, complex computation. Our solution bridges these conflicting requirements.
We implemented our proposed solution scheme in IBM’s Threadmill post-silicon exerciser, and we demonstrate the value of this scheme in two highly important domains: address translation path generation and memory access management.
Wisam Kadry, Anatoly Koyfman, Dmitry Krestyashyn, Shimon Landa, Amir Nahir, Vitali Sokhin
Development and Verification of Complex Hybrid Systems Using Synthesizable Monitors
Abstract
Using simulation monitors that are formally defined and automatically synthesized is already part of the standard methodology of hardware design and verification. However, this is not yet the case in the domain of systems engineering for cyber-physical systems. The growing trend towards model-based systems engineering is making the use of simulation monitors more relevant and possible. Recent related work focuses almost exclusively on the aspects of requirements specification. In this work, we explain how monitors can play a much more pervasive role in systems engineering, going beyond merely checking requirements. We describe how monitors can be used along the entire product lifecycle, from early design-alternative analysis to final field testing. This work also covers the special considerations that must be addressed when designing a monitor specification language, specifically in the context of systems engineering. We focus on the practical issues related to the use of monitors and describe a prototype monitor specification and synthesis platform applied to the hybrid simulation of an automotive subsystem.
Andreas Abel, Allon Adir, Torsten Blochwitz, Lev Greenberg, Tamer Salman
Assertion Checking Using Dynamic Inference
Abstract
We present a technique for checking assertions in code that combines model checking and dynamic analysis. Our technique first constructs an abstraction by summarizing code fragments in the form of pre- and postconditions. Spurious counterexamples are then analyzed by Daikon, a dynamic analysis engine, to infer invariants over the fragments. These invariants, representing a set of traces, are used to partition the summary, with one partition consisting of the observed spurious behaviour. Partitioning summaries in this manner increases the precision of the abstraction and accelerates the refinement loop. Our technique is sound and compositional, allowing us to scale model checking engines to larger code sizes, as seen from the experiments.
Anand Yeolekar, Divyesh Unadkat
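The dynamic-inference step can be pictured with a much smaller invariant vocabulary than Daikon's, here just value ranges and pairwise equalities over observed valuations; the format of the observations is an assumption for illustration.

```python
# Toy Daikon-style inference: derive simple invariants from concrete runs.
def infer_invariants(observations):
    """observations: list of dicts mapping variable name -> value."""
    names = sorted(observations[0])
    invs = []
    for v in names:                      # range invariants per variable
        vals = [o[v] for o in observations]
        invs.append(f"{min(vals)} <= {v} <= {max(vals)}")
    for i, a in enumerate(names):        # pairwise equality invariants
        for b in names[i + 1:]:
            if all(o[a] == o[b] for o in observations):
                invs.append(f"{a} == {b}")
    return invs

runs = [{"x": 1, "y": 1}, {"x": 3, "y": 3}, {"x": 2, "y": 2}]
print(infer_invariants(runs))   # ['1 <= x <= 3', '1 <= y <= 3', 'x == y']
```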

Session 5: Specification and Coverage

Formal Specification of an Erase Block Management Layer for Flash Memory
Abstract
This work presents a formal specification and an implementation of an erase block management layer and a formal model of the flash driver interface. It is part of our effort to construct a verified file system for flash memory. The implementation supports wear-leveling, handling of bad blocks and asynchronous erasure of blocks. It uses additional data structures in RAM for efficiency and relies on a model of the flash driver, which is similar to the Memory Technology Device (MTD) layer of Linux. We specify the effects of unexpected power failure and subsequent recovery. All models are mechanized in the interactive theorem prover KIV.
Jörg Pfähler, Gidon Ernst, Gerhard Schellhorn, Dominik Haneberg, Wolfgang Reif
Attention-Based Coverage Metrics
Abstract
Over the last decade, extensive research has been conducted on coverage metrics for model checking. The most common coverage metrics are based on mutations, where one examines the effect of small modifications of the system on the satisfaction of the specification. While it is commonly accepted that mutation-based coverage provides adequate means for assessing the exhaustiveness of the model-checking procedure, the incorporation of coverage checks in industrial model checking tools is still very partial. One reason for this is the typically overwhelming number of non-covered mutations, which requires the user to somehow filter those that are most likely to point to real errors or overlooked behaviors.
We address this problem and propose to filter mutations according to the attention the designer has paid to the mutated components in the model. We formalize the attention intuition using a multi-valued setting, where the truth values of the signals in the model describe their level of importance. Non-covered mutations of signals of high importance are then more alarming than non-covered mutations of signals of low importance. Given that such “importance information” is usually not available in practice, we suggest two new coverage metrics that automatically approximate it. The idea behind both metrics is the observation that designers tend to modify the value of signals only when there is a reason to do so. We demonstrate the advantages of both metrics and describe algorithms for calculating them.
Shoham Ben-David, Hana Chockler, Orna Kupferman

Keynote Presentation

Synthesizing, Correcting and Improving Code, Using Model Checking-Based Genetic Programming
Abstract
The use of genetic programming, in combination with model checking and testing, provides a powerful way to synthesize programs. Whereas classical algorithmic synthesis yields alarmingly high complexity and undecidability results, the genetic approach provides a surprisingly successful heuristic. We describe several versions of a method for synthesizing sequential and concurrent systems. To cope with the constraints of model checking and of theorem proving, we combine such exhaustive verification methods with testing. We show several examples where we used our approach to synthesize, improve, and correct code.
Gal Katz, Doron Peled
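In its smallest possible form, such a loop looks as follows. This toy draws candidates from a fixed pool and scores them by testing alone, whereas the talk's method evolves program trees and scores them with model checking as well; every name here is invented.

```python
# Toy verification-guided search loop in the spirit of GP-based synthesis.
import random

OPS = ["x+1", "x-1", "x*2", "0", "x"]   # fixed candidate pool (a toy)

def fitness(prog, spec_tests):
    # "verification" here is plain testing; the talk combines testing
    # with model checking and theorem proving
    f = eval("lambda x: " + prog)
    return sum(f(x) == want for x, want in spec_tests)

def evolve(spec_tests, generations=200, seed=1):
    rng = random.Random(seed)
    best = rng.choice(OPS)
    for _ in range(generations):
        cand = rng.choice(OPS) if rng.random() < 0.5 else best
        if fitness(cand, spec_tests) >= fitness(best, spec_tests):
            best = cand
    return best

print(evolve([(0, 1), (3, 4), (7, 8)]))   # converges to "x+1"
```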

Session 6: Abstraction

Domain Types: Abstract-Domain Selection Based on Variable Usage
Abstract
The success of software model checking depends on finding an appropriate abstraction of the program to verify. The choice of the abstract domain and the analysis configuration is currently left to the user, who may not be familiar with the tradeoffs and performance details of the available abstract domains. We introduce the concept of domain types, which classify the program variables into types that are more fine-grained than standard declared types (e.g., ‘int’ and ‘long’) to guide the selection of an appropriate abstract domain for a model checker. Our implementation on top of an existing verification framework determines the domain type for each variable in a pre-analysis step, based on the usage of variables in the program, and then assigns each variable to an abstract domain. Based on a series of experiments on a comprehensive set of verification tasks from international verification competitions, we demonstrate that the choice of the abstract domain per variable (we consider one explicit and one symbolic domain) can substantially improve the verification in terms of performance and precision.
Sven Apel, Dirk Beyer, Karlheinz Friedberger, Franco Raimondi, Alexander von Rhein
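As an assumed, simplified rendering of the pre-analysis (the paper targets C programs inside a verification framework; here Python's ast module stands in, and the classification is reduced to a single distinction):

```python
# Sketch of a usage-based domain-type pre-analysis: variables never used
# in arithmetic can live in a cheap equality/explicit-value domain.
import ast

def domain_types(src):
    arith, anywhere = set(), set()
    for node in ast.walk(ast.parse(src)):
        if isinstance(node, ast.Name):
            anywhere.add(node.id)
        if isinstance(node, ast.BinOp):        # variable used arithmetically
            for n in ast.walk(node):
                if isinstance(n, ast.Name):
                    arith.add(n.id)
    return {v: ("arithmetic" if v in arith else "equality-only")
            for v in anywhere}

src = "mode = 3\nif mode == 3:\n    total = total + step\n"
print(domain_types(src))
# e.g. {'mode': 'equality-only', 'total': 'arithmetic', 'step': 'arithmetic'}
```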
Efficient Analysis of Reliability Architectures via Predicate Abstraction
Abstract
The overall safety of critical systems is often based on the use of redundant architectural patterns, such as Triple Modular Redundancy. Certification procedures in various application domains require an explicit evaluation of the reliability and the production of various artifacts. Particularly interesting are Fault Trees (FTs), which represent in a compact form all the combinations of (basic) faults that are required to cause a (system-level) failure. Yet, such activities are essentially based on manual analysis, and are thus time-consuming and error-prone.
A recently proposed approach opens the way to the automated analysis of reliability architectures. The approach is based on the use of Satisfiability Modulo Theories (SMT), using the theory of Equality and Uninterpreted Functions (\(\mathcal{EUF}\)) to represent block diagrams. Within this framework, the construction of FTs is based on the existential quantification of an \(\mathcal{EUF}\) formula. Unfortunately, the off-the-shelf application of available techniques, based on the translation into an AllSMT problem, suffers from severe scalability issues.
In this paper, we propose a compositional method to solve this problem, based on the use of predicate abstraction. We prove that our method is sound and complete for a wide class of system architectures. The presented approach greatly improves the overall scalability with respect to the monolithic case, obtaining speed-ups of various orders of magnitude. In practice, this approach allows for the verification of architectures of realistic systems.
Marco Bozzano, Alessandro Cimatti, Cristian Mattarei
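The enumeration problem itself can be rendered minimally as follows, using brute-force Z3 queries on a Triple Modular Redundancy block; the paper's contribution is a compositional predicate-abstraction method that avoids exactly this kind of exhaustive enumeration, so the sketch only fixes the problem statement.

```python
# Minimal cut sets of a TMR block by exhaustive SAT queries (Z3).
from itertools import combinations
from z3 import Bools, Solver, And, Or, Not, sat

faults = Bools("f1 f2 f3")
f1, f2, f3 = faults
# the voted output fails iff at least two of the three modules fail
failure = Or(And(f1, f2), And(f1, f3), And(f2, f3))

def is_cut_set(subset):
    # does the failure occur when exactly these faults are active?
    s = Solver()
    s.add(failure)
    s.add(*[f if any(f.eq(g) for g in subset) else Not(f) for f in faults])
    return s.check() == sat

cuts = [set(map(str, c))
        for k in range(1, len(faults) + 1)
        for c in combinations(faults, k) if is_cut_set(c)]
minimal = [c for c in cuts if not any(o < c for o in cuts)]
print(minimal)   # [{'f1', 'f2'}, {'f1', 'f3'}, {'f2', 'f3'}]
```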
Lazy Symbolic Execution through Abstraction and Sub-space Search
Abstract
We present an approach to address a main performance bottleneck in symbolic execution. Despite being a powerful method to produce test cases with high coverage, symbolic execution often suffers from the problem of exploring a huge number of paths without (1) significantly increasing the coverage, and (2) going deep enough to hit hot spots. The situation becomes worse for modern programming languages such as C/C++, which extensively use library calls and shared code. In this paper we use a novel “lazy” execution approach to evaluate functions, library calls, and other entities commonly used in a high level language. Specifically, the symbolic executor uses high level abstractions and sub-space search to control and guide symbolic execution so that only necessary paths are visited to produce valid test cases. This method is able to avoid exploring many useless or duplicate paths. Experimental results show that it can help solve path constraints and produce test cases in much less time. For many programs, it can improve the performance by several orders of magnitude while maintaining the same source code coverage.
Guodong Li, Indradeep Ghosh
SPIN as a Linearizability Checker under Weak Memory Models
Abstract
Linearizability is the key correctness criterion for concurrent data structures like stacks, queues or sets. Consequently, much effort has been spent on developing techniques for showing linearizability. However, most of these approaches assume a sequentially consistent memory model whereas today’s multicore processors provide relaxed out-of-order execution semantics.
In this paper, we present a new approach for checking linearizability of concurrent algorithms under weak memory models, in particular the TSO memory model. Our technique first compiles the algorithm into intermediate low-level code. To achieve out-of-order execution, we (abstractly) model the processor’s architecture with shared memory and local buffers. The low-level code as well as the architecture model are given as input to the model checker SPIN, which checks whether the out-of-order execution of the particular algorithm is linearizable. We report on experiments with different algorithms.
Oleg Travkin, Annika Mütze, Heike Wehrheim
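The TSO behaviour the checker must account for is easy to demonstrate: each thread's writes pass through a FIFO store buffer before reaching shared memory, so another thread may still read a stale value. Below is a toy model of that mechanism, an assumed simplification rather than the paper's Promela code.

```python
# Toy TSO semantics: per-thread FIFO store buffers in front of memory.
from collections import deque

class TSOThread:
    def __init__(self, memory):
        self.memory, self.buffer = memory, deque()

    def write(self, addr, val):
        self.buffer.append((addr, val))      # buffered, not yet visible

    def read(self, addr):
        # reads see the thread's own latest buffered write first
        for a, v in reversed(self.buffer):
            if a == addr:
                return v
        return self.memory.get(addr, 0)

    def flush_one(self):
        if self.buffer:
            a, v = self.buffer.popleft()
            self.memory[a] = v               # becomes globally visible

mem = {}
t1, t2 = TSOThread(mem), TSOThread(mem)
t1.write("x", 1)
print(t2.read("x"), t1.read("x"))   # 0 1 -> write not yet visible to t2
t1.flush_one()
print(t2.read("x"))                 # 1
```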

Session 7: Model Representation

Arithmetic Bit-Level Verification Using Network Flow Model
Abstract
The paper presents a new approach to functional, bit-level verification of arithmetic circuits. The circuit is modeled as a network of adders and basic Boolean gates, and the computation performed by the circuit is viewed as a flow of binary data through such a network. The verification problem is cast as a Network Flow problem and solved using symbolic term rewriting and simple algebraic techniques. Functional correctness is proved by showing that the symbolic flow computed at the primary inputs is equal to the flow computed at the primary outputs. Experimental results show a potential application of the method to certain classes of arithmetic circuits.
Maciej Ciesielski, Walter Brown, André Rossi
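The flow view rests on the algebraic identity of a full adder; summed over the network, functional correctness reduces to equality of the symbolic flows at the circuit boundary. The n-bit ripple-carry instance below is an illustrative assumption, not taken from the paper.

```latex
% flow conservation at a single full adder (inputs x, y, z; sum s, carry c):
x + y + z = 2c + s
% summed over an n-bit adder network with inputs a, b, sum s, carry-out c_n:
\sum_{i=0}^{n-1} 2^{i}(a_i + b_i) = 2^{n} c_n + \sum_{i=0}^{n-1} 2^{i} s_i
```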
Performance Evaluation of Process Partitioning Using Probabilistic Model Checking
Abstract
Consider the problem of partitioning a number of concurrent interacting processes onto a smaller number of physical processors. The performance and efficiency of such a system critically depend on the tasks that the processes perform and on the partitioning scheme. Although empirical measurements have been used extensively a posteriori to assess the success of a partitioning, the results only focus on a subset of possible executions and cannot be generalized. In this paper, we propose a probabilistic state exploration method to evaluate a priori the efficiency of a set of partitions in terms of the speedup they achieve for a given model. Our experiments show that our method is quite effective in identifying partitions that result in better levels of parallelism.
Saddek Bensalem, Borzoo Bonakdarpour, Marius Bozga, Doron Peled, Jean Quilbeuf
Improving Representative Computation in ExpliSAT
Abstract
This paper proposes a new algorithm for computing path representatives in the concolic software verification tool ExpliSAT. A path representative is a useful technique for reducing the number of calls to a decision procedure by maintaining an explicit instantiation of symbolic variables that matches the current control flow path. In the current implementation in ExpliSAT, the whole representative is guessed at the beginning of an execution and then recomputed if it does not match the currently traversed control flow path. In this paper we suggest a new algorithm for computation of a representative, where the instantiation is done “on demand”, that is, only when a specific value is required in order to determine feasibility. Experimental results show that using representatives improves performance.
Hana Chockler, Dmitry Pidan, Sitvanit Ruah
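The "on demand" instantiation can be sketched as follows with the Z3 solver; the class and its methods are invented for illustration and are not ExpliSAT's actual interface.

```python
# Sketch of on-demand representative computation: a concrete value is
# drawn from the solver only when feasibility actually requires one,
# then pinned on the current path.
from z3 import Int, Solver, sat

class Representative:
    def __init__(self):
        self.solver = Solver()     # accumulated path condition
        self.values = {}           # lazily instantiated variables

    def assume(self, constraint):  # a branch taken on the current path
        self.solver.add(constraint)
        assert self.solver.check() == sat, "path infeasible"

    def value_of(self, var):
        key = str(var)
        if key not in self.values:
            self.solver.check()
            v = self.solver.model().eval(var, model_completion=True)
            self.values[key] = v
            self.solver.add(var == v)   # pin the chosen value
        return self.values[key]

x = Int("x")
rep = Representative()
rep.assume(x > 5)            # no concrete value chosen yet
print(rep.value_of(x))       # solver consulted only now, e.g. 6
```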
Backmatter
Metadata
Title
Hardware and Software: Verification and Testing
Edited by
Valeria Bertacco
Axel Legay
Copyright year
2013
Publisher
Springer International Publishing
Electronic ISBN
978-3-319-03077-7
Print ISBN
978-3-319-03076-0
DOI
https://doi.org/10.1007/978-3-319-03077-7