
2004 | Book

Formal Methods in Computer-Aided Design

5th International Conference, FMCAD 2004, Austin, Texas, USA, November 15-17, 2004. Proceedings

Edited by: Alan J. Hu, Andrew K. Martin

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


Table of Contents

Frontmatter
Challenges in System-Level Design
Abstract
This paper summarizes some of the challenges presented by very large integrated circuits. Today’s embedded applications require not just complex algorithms but multi-tasking systems that perform several different types of operations on the data. Those applications are run on systems-on-chips that embody complex, heterogeneous architectures. Furthermore, systems-on-chips are connected into even larger systems. The large amounts of state in systems-on-chips, along with the interactions between traditional functionality and performance, mean that we must expand the scope of system design and verification activities. We still need to solve all the traditional design problems, but we must also develop new methodologies that allow us to design and verify the long-term behavior of the system.
Wayne Wolf
Generating Fast Multipliers Using Clever Circuits
Abstract
New insights into the general structure of partial product reduction trees are combined with the notion of clever circuits to give a novel method of writing simple but flexible and highly parameterised data-path generators.
Mary Sheeran
Verification of Analog and Mixed-Signal Circuits Using Hybrid System Techniques
Abstract
In this paper we demonstrate a potential extension of formal verification methodology in order to deal with time-domain properties of analog and mixed-signal circuits whose dynamic behavior is described by differential algebraic equations. To model and analyze such circuits under all possible input signals and all values of parameters, we build upon two techniques developed in the context of hybrid (discrete-continuous) control systems. First, we extend our algorithm for approximating reachable sets of dense-time continuous systems to deal with differential algebraic equations (DAEs) and apply it to a biquad low-pass filter. To analyze more complex circuits, we resort to bounded horizon verification. We use optimal control techniques to check whether a Δ-Σ modulator, modeled as a discrete-time hybrid automaton, admits an input sequence of bounded length that drives it to saturation.
Thao Dang, Alexandre Donzé, Oded Maler
A Methodology for the Formal Verification of FFT Algorithms in HOL
Abstract
This paper addresses the formal specification and verification of fast Fourier transform (FFT) algorithms at different abstraction levels based on the HOL theorem prover. We make use of existing theories in HOL on real and complex numbers, IEEE standard floating-point, and fixed-point arithmetics to model the FFT algorithms. Then, we derive, by proving theorems in HOL, expressions for the accumulation of roundoff error in floating- and fixed-point FFT designs with respect to the corresponding ideal real and complex numbers specification. The HOL formalization and proofs are found to be in good agreement with the theoretical paper-and-pencil counterparts. Finally, we use a classical hierarchical proof approach in HOL to prove that the FFT implementations at the register transfer level (RTL) imply the corresponding high-level fixed-point algorithmic specification.
Behzad Akbarpour, Sofiène Tahar
A Functional Approach to the Formal Specification of Networks on Chip
Abstract
We present a functional approach, based on the ACL2 logic, for the specification of system-on-a-chip communication architectures. Our decomposition of the communications allows the method to be modular for both system definition and validation. When performed in the context of the ACL2 logic, all the definitions and theorems are not only reusable, but also constitute an executable and proven valid specification for the system. We illustrate the approach on a state-of-the-art network on chip: the Octagon. We prove that messages travel over this network without being modified and eventually reach their expected destination.
Julien Schmaltz, Dominique Borrione
Proof Styles in Operational Semantics
Abstract
We relate two well-studied methodologies in deductive verification of operationally modeled sequential programs, namely the use of inductive invariants and clock functions. We show that the two methodologies are equivalent and one can mechanically transform a proof of a program in one methodology to a proof in the other. Both partial and total correctness are considered. This mechanical transformation is compositional; different parts of a program can be verified using different methodologies to achieve a complete proof of the entire program. The equivalence theorems have been mechanically checked by the ACL2 theorem prover and we implement automatic tools to carry out the transformation between the two methodologies in ACL2.
Sandip Ray, J. Strother Moore
Integrating Reasoning About Ordinal Arithmetic into ACL2
Abstract
Termination poses one of the main challenges for mechanically verifying infinite state systems. In this paper, we develop a powerful and extensible framework based on the ordinals for reasoning about termination in a general purpose programming language. We have incorporated our work into the ACL2 theorem proving system, thereby greatly extending its ability to automatically reason about termination. The resulting technology has been adopted into the newly released ACL2 version 2.8. We discuss the creation of this technology and present two case studies illustrating its effectiveness.
Panagiotis Manolios, Daron Vroon
Combining Equivalence Verification and Completion Functions
Abstract
This work presents a new method for verifying optimized register-transfer-level implementations of pipelined circuits. We combine the robust, yet limited, capabilities of combinational equivalence verification with the modular and composable verification strategy of completion functions. We have applied this technique to a 32-bit OpenRISC processor and a Sobel edge-detector circuit. Each case study required less than fifteen verification obligations and each obligation could be checked in less than one minute. We believe that our approach will be applicable to a large class of pipelines with in-order execution.
Mark D. Aagaard, Vlad C. Ciubotariu, Jason T. Higgins, Farzad Khalvati
Synchronization-at-Retirement for Pipeline Verification
Abstract
Much automatic pipeline verification research of the last decade has been based on some form of “Burch-Dill flushing” [BD94]. In this work, we study synchronization-at-retirement, an alternative formulation of correctness for pipelines. In this formulation, the proof obligations can also be verified automatically but have significantly reduced verification complexity compared to flushing. We present an approach for systematically generating invariants, addressing one of the most difficult aspects of pipeline verification. We establish by proof that the synchronization-at-retirement and Burch-Dill flushing correctness statements are equivalent under reasonable side conditions. Finally, we provide experimental evidence of the reduced complexity of our approach for a pipelined processor with ALU operations, memory operations, stalls, jumps, and branch prediction.
Mark D. Aagaard, Nancy A. Day, Robert B. Jones
Late Design Changes (ECOs) for Sequentially Optimized Esterel Designs
Abstract
Late changes in silicon design (ECOs) are a common although undesired practice. The need for ECOs exists even in high-level design flows, since bugs may occur in the specifications, in the compilation, or due to late specification changes. Esterel compilation deploys sequential optimization to improve the delay and area of the netlist. This makes it harder to find where in the netlist manual changes should be made and to trace circuit changes back to the high-level specification. We show that all sequential optimizations used in Esterel compilation can be made reversible, and demonstrate that an ECO problem can be reduced to a commonly solved combinational ECO problem. This is achieved by reconstructing some of the suppressed registers in order to backannotate to the original code. We demonstrate that the cost of reversibility is negligible.
Laurent Arditi, Gerard Berry, Michael Kishinevsky
Non-miter-based Combinational Equivalence Checking by Comparing BDDs with Different Variable Orders
Abstract
This paper describes a new method that is useful in combinational equivalence checking with very challenging industrial designs. The method does not build a miter; instead it builds BDDs of reference and implementation circuits independently – so that each BDD can have its best order while building the BDD – then compares the two BDDs directly without transforming one variable order to the other. In the comparison, checking containment of two BDDs is necessary to handle don’t cares efficiently. Even though there are polynomial algorithms for checking equality of two BDDs, those algorithms are not extendible to containment checking. Thus we also present an efficient algorithm, of polynomial complexity, to check both equality and containment of two BDDs with different variable orders. Our non-miter-based method was able to verify many hard industrial designs previously infeasible with existing miter-based state-of-the-art techniques. Experimental results show that the non-miter-based method is very efficient for designs that are especially difficult to check due to dissimilarities and don’t cares. The current work does not suggest an alternative that replaces conventional equivalence checking algorithms, but rather an approach that augments their capability for designs that are very hard to check.
In-Ho Moon, Carl Pixley
Scalable Automated Verification via Expert-System Guided Transformations
Abstract
Transformation-based verification has been proposed to synergistically leverage various transformations to successively simplify and decompose large problems to ones which may be formally discharged. While powerful, such systems require a fair amount of user sophistication and experimentation to yield greatest benefits – every verification problem is different, hence the most efficient transformation flow differs widely from problem to problem. Finding an efficient proof strategy not only enables exponential reductions in computational resources, it often makes the difference between obtaining a conclusive result or not. In this paper, we propose the use of an expert system to automate this proof strategy development process. We discuss the types of rules used by the expert system, and the type of feedback necessary between the algorithms and expert system, all oriented towards yielding a conclusive result with minimal resources. Experimental results are provided to demonstrate that such a system is able to automatically discover efficient proof strategies, even on large and complex problems with more than 100,000 state elements in their respective cones of influence. These results also demonstrate numerous types of algorithmic synergies that are critical to the automation of such complex proofs.
Hari Mony, Jason Baumgartner, Viresh Paruthi, Robert Kanzelman, Andreas Kuehlmann
Simple Yet Efficient Improvements of SAT Based Bounded Model Checking
Abstract
In this paper, we show how proper benchmarking, which matches day-to-day use of formal methods, allows us to assess direct improvements in the use of SAT for formal methods. Proper use of our benchmark allowed us to show that previous results on tuning SAT solvers for Bounded Model Checking (BMC) were overly optimistic and that a simpler algorithm was in fact more efficient.
Emmanuel Zarpas
Simple Bounded LTL Model Checking
Abstract
We present a new and very simple translation of the bounded model checking problem which is linear both in the size of the formula and the length of the bound. The resulting CNF-formula has a linear number of variables and clauses.
Timo Latvala, Armin Biere, Keijo Heljanko, Tommi Junttila
QuBE++: An Efficient QBF Solver
Abstract
In this paper we describe QuBE++, an efficient solver for Quantified Boolean Formulas (QBFs). To the extent of our knowledge, QuBE++ is the first QBF reasoning engine that uses lazy data structures both for unit clause propagation and for pure literal detection. QuBE++ also features non-chronological backtracking and a branching heuristic that leverages the information gathered during the backtracking phase. Owing to such techniques and to a careful implementation, QuBE++ turns out to be an efficient and robust solver, whose performance exceeds that of other state-of-the-art QBF engines and is comparable with that of the best engines currently available on SAT instances.
Enrico Giunchiglia, Massimo Narizzano, Armando Tacchella
Bounded Probabilistic Model Checking with the Murφ Verifier
Abstract
In this paper we present an explicit verification algorithm for Probabilistic Systems defining discrete time/finite state Markov Chains. We restrict ourselves to verification of Bounded PCTL formulas (BPCTL), that is, PCTL formulas in which all Until operators are bounded, possibly with different bounds. This means that we consider only paths (system runs) of bounded length. Given a Markov Chain \({\cal M}\) and a BPCTL formula Φ, our algorithm checks if Φ is satisfied in \({\cal M}\). This allows us to verify important properties, such as reliability in Discrete Time Hybrid Systems.
We present an implementation of our algorithm within a suitable extension of the Murφ verifier. We call FHP-Murφ (Finite Horizon Probabilistic Murφ) such extension of the Murφ verifier.
We give experimental results comparing FHP-Murφ with (a finite horizon subset of) PRISM, a state-of-the-art symbolic model checker for Markov Chains. Our experimental results show that FHP-Murφ can effectively handle verification of BPCTL formulas for systems that are out of reach for PRISM, namely those involving arithmetic operations on the state variables (e.g. hybrid systems).
Giuseppe Della Penna, Benedetto Intrigila, Igor Melatti, Enrico Tronci, Marisa Venturini Zilli
Increasing the Robustness of Bounded Model Checking by Computing Lower Bounds on the Reachable States
Abstract
Most symbolic model checkers are based on either Binary Decision Diagrams (BDDs), which may grow exponentially large, or Satisfiability (SAT) solvers, whose time requirements rapidly increase with the sequential depth of the circuit. We investigate the integration of BDD-based methods with SAT to speed up the verification of safety properties of the form G f, where f is either propositional or contains only the next-time temporal operator X. We use BDD-based reachability analysis to find lower bounds on the reachable states and the states that reach the bad states. Then, we use these lower bounds to shorten the counterexample or reduce the depth of the induction step (termination depth). We present experimental results that compare our method to a pure BDD-based method and a pure SAT-based method. Our method can prove properties that are hard for both the BDD-based and the SAT-based methods.
Mohammad Awedh, Fabio Somenzi
Bounded Verification of Past LTL
Abstract
Temporal logics with past operators are gaining increasing importance in several areas of formal verification for their ability to concisely express useful properties. In this paper we propose a new approach to bounded verification of PLTL, the linear time temporal logic extended with past temporal operators. Our approach is based on the transformation of PLTL into Separated Normal Form, which in turn is amenable for reduction to propositional satisfiability. An experimental evaluation shows that our approach induces encodings which are significantly smaller and more easily solved than previous approaches, in the cases of both model checking and satisfiability problems.
Alessandro Cimatti, Marco Roveri, Daniel Sheridan
A Hybrid of Counterexample-Based and Proof-Based Abstraction
Abstract
Counterexample- and proof-based refinement are complementary approaches to iterative abstraction. In the former case, a single abstract counterexample is eliminated by each refinement step, while in the latter case, all counterexamples of a given length are eliminated. In counterexample-based abstraction, the concretization and refinement problems are relatively easy, but the number of iterations tends to be large. Proof-based abstraction, on the other hand, puts a greater burden on the refinement step, which can then become the performance bottleneck. In this paper, we show that counterexample- and proof-based refinement are extremes of a continuum, and propose a hybrid approach that balances the cost and quality of refinement. In a study of a large number of industrial verification problems, we find that there is a strong relation between the effort applied in the refinement phase and the number of refinement iterations. For this reason, proof-based abstraction is substantially more efficient than counterexample-based abstraction. However, a judicious application of the hybrid approach can lessen the refinement effort without unduly increasing the number of iterations, yielding a method that is somewhat more robust overall.
Nina Amla, Ken L. McMillan
Memory Efficient All-Solutions SAT Solver and Its Application for Reachability Analysis
Abstract
This work presents a memory-efficient All-SAT engine which, given a propositional formula over sets of important and non-important variables, returns the set of all the assignments to the important variables, which can be extended to solutions (satisfying assignments) to the formula. The engine is built using elements of modern SAT solvers, including a scheme for learning conflict clauses and non-chronological backtracking. Re-discovering solutions that were already found is avoided by the search algorithm itself, rather than by adding blocking clauses. As a result, the space requirements of a solved instance do not increase when solutions are found. Finding the next solution is as efficient as finding the first one, making it possible to solve instances for which the number of solutions is larger than the size of the main memory.
We show how to exploit our All-SAT engine for performing image computation and use it as a basic block in achieving full reachability which is purely SAT-based (no BDDs involved).
We implemented our All-SAT solver and reachability algorithm using the state-of-the-art SAT solver Chaff [19] as a code base. The results show that our new scheme significantly outperforms All-SAT algorithms that use blocking clauses, as measured by the execution time, the memory requirement, and the number of steps performed by the reachability analysis.
Orna Grumberg, Assaf Schuster, Avi Yadgar
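The blocking-clause baseline that this abstract contrasts against can be illustrated with a toy enumerator: every satisfying assignment is projected onto the "important" variables and the distinct projections are collected. A blocking-clause All-SAT solver would add one clause per projection found, which is exactly the memory growth the paper's engine avoids. This is a brute-force sketch for intuition only; all names are illustrative and it is not the paper's algorithm.

```python
from itertools import product

def all_sat_projected(clauses, n_vars, important):
    """clauses: list of lists of DIMACS-style integer literals
    (positive = variable true, negative = variable false).
    Returns the set of solution projections onto `important` variables."""
    projections = set()
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: bits[i] for i in range(n_vars)}
        # a clause is satisfied if any of its literals is satisfied
        if all(any(assign[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            projections.add(tuple(assign[v] for v in important))
    return projections

# (x1 or x2) and (not x1 or x3), important variables: x1, x2
sols = all_sat_projected([[1, 2], [-1, 3]], 3, (1, 2))
```

Here `sols` contains three projections: the two full solutions with x1 false collapse to a single important-variable assignment, which is the kind of redundancy a projecting All-SAT engine exploits.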
Approximate Symbolic Model Checking for Incomplete Designs
Abstract
We consider the problem of checking whether an incomplete design can still be extended to a complete design satisfying a given CTL formula and whether the property is satisfied for all possible extensions. Motivated by the fact that well-known model checkers like SMV or VIS produce incorrect results when handling unknowns by using the programs’ non-deterministic signals, we present a series of approximate, yet sound algorithms to process incomplete designs with increasing quality and computational resources. Finally we give a series of experimental results demonstrating the effectiveness and feasibility of the presented methods.
Tobias Nopper, Christoph Scholl
Extending Extended Vacuity
Abstract
There has been a growing interest in detecting whether a logic specification holds in the system vacuously. For example, a specification “every request is eventually followed by an acknowledgment” holds vacuously on those systems that never generate requests. In a recent paper, Armoni et al. have argued against previous definitions of vacuity, defined as sensitivity with respect to syntactic perturbation. They suggested that vacuity should be robust, i.e., insensitive to trivial changes in the logic and in the model, and is better described as sensitivity with respect to semantic perturbation, represented by universal propositional quantification. In this paper, we extend the above suggestion by giving a formal definition of robust vacuity that allows us to define and detect vacuous satisfaction and vacuous failure for arbitrary CTL* properties, even with respect to multiple occurrences of subformulas. We discuss complexity of our approaches and study the relationship between vacuity and abstraction.
Arie Gurfinkel, Marsha Chechik
Parameterized Vacuity
Abstract
In model checking, a specification is vacuously true, if some subformula can be modified without affecting the truth value of the specification. Intuitively, this means that the property expressed in this subformula is satisfied for a trivial reason, and likely not the intended one. It has been shown by Kupferman and Vardi that vacuity detection can be reduced to model checking of simplified specifications where the subformulas of interest are replaced by constant truth values.
In this paper, we argue that the common definition describes extreme cases of vacuity where the subformula indeed collapses to a constant truth value. We suggest a refined notion of vacuity (weak vacuity) which is parameterized by a user-defined class of vacuity causes. Under this notion, a specification is vacuously true, if a subformula collapses to a vacuity cause. Our analysis exhibits a close relationship between vacuity detection and temporal logic query solving. We exploit this relationship to obtain vacuity detection algorithms in symbolic, automata-theoretic, and multi-valued frameworks.
Marko Samer, Helmut Veith
An Operational Semantics for Weak PSL
Abstract
Extending linear temporal logic by adding regular expressions increases its expressiveness. However, as problems in recent versions of Accellera’s Property Specification Language (PSL), as well as in OpenVera’s ForSpec and other property languages, show, it is a non-trivial task to give the resulting logic a formal denotational semantics with desirable properties. In this paper, we argue that specifying an operational semantics may be helpful in guiding this work, and as a bonus leads to an implementation of the logic for free. We give a concrete operational semantics for Weak PSL, which is the safety property subset of PSL. We also propose a denotational semantics which we show to be equivalent to the operational one. This semantics is inspired by a new denotational semantics proposed in recent related work.
Koen Claessen, Johan Mårtensson
Accepting Predecessors Are Better than Back Edges in Distributed LTL Model-Checking
Abstract
We present a new distributed-memory algorithm for enumerative LTL model-checking that is designed to be run on a cluster of workstations communicating via MPI. The detection of accepting cycles is based on computing maximal accepting predecessors and the subsequent decomposition of the graph into independent predecessor subgraphs induced by maximal accepting predecessors. Several optimizations of the basic algorithm are presented and the influence of the ordering on the algorithm performance is discussed. Experimental implementation of the algorithm shows promising results.
Luboš Brim, Ivana Černá, Pavel Moravec, Jiří Šimša
Bloom Filters in Probabilistic Verification
Abstract
Probabilistic techniques for verification of finite-state transition systems offer huge memory savings over deterministic techniques. The two leading probabilistic schemes are hash compaction and the bitstate method, which stores states in a Bloom filter. Bloom filters have been criticized for being slow, inaccurate, and memory-inefficient, but in this paper, we show how to obtain Bloom filters that are simultaneously fast, accurate, memory-efficient, scalable, and flexible. The idea is that we can introduce large dependences among the hash functions of a Bloom filter with almost no observable effect on accuracy, and because computation of independent hash functions was the dominant computational cost of accurate Bloom filters and model checkers based on them, our savings are tremendous. We present a mathematical analysis of Bloom filters in verification in unprecedented detail, which enables us to give a fresh comparison between hash compaction and Bloom filters. Finally, we validate our work and analyses with extensive testing using 3SPIN, a model checker we developed by extending SPIN.
Peter C. Dillinger, Panagiotis Manolios
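The dependent-hash-function idea above can be sketched with "double hashing," where all k probe positions are derived from just two base hash values instead of k independent ones. This is a minimal illustrative Bloom filter, not the paper's implementation; the class and parameter names are assumptions.

```python
import hashlib

class BloomFilter:
    def __init__(self, m_bits, k_hashes):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray((m_bits + 7) // 8)

    def _positions(self, item):
        # two base hashes from one digest; k probes are h1 + i*h2 mod m
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big") | 1  # odd step avoids short cycles
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))
```

Only one digest is computed per query, so the cost of membership testing no longer grows with the number of independent hash functions — the performance effect the abstract describes, at almost no cost in accuracy.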
A Simple Method for Parameterized Verification of Cache Coherence Protocols
Abstract
We present a simple method for verifying the safety properties of cache coherence protocols with arbitrarily many nodes. Our presentation begins with two examples. The first example describes in intuitive terms how the German protocol with arbitrarily many nodes can be verified using a combination of Murphi model checking and apparently circular reasoning. The second example outlines a similar proof of the FLASH protocol. These are followed by a simple theory based on the classical notion of simulation proofs that justifies the apparently circular reasoning. We conclude the paper by discussing what remains to be done and by comparing our method with other approaches to the parameterized verification of cache coherence protocols, such as compositional model checking, machine-assisted theorem proving, predicate abstraction, invisible invariants, and cut-off theorems.
Ching-Tsun Chou, Phanindra K. Mannava, Seungjoon Park
A Partitioning Methodology for BDD-Based Verification
Abstract
The main challenge in BDD-based verification is dealing with the memory explosion problem during reachability analysis. In this paper we advocate a methodology to handle this problem based on state space partitioning of functions as well as relations. We investigate the key questions of how to perform partitioning in reachability based verification and provide suitable algorithms. We also address the problem of instability of BDD-based verification by automatically picking the best configuration from different short traces of the reachability computation. Our approach drastically decreases verification time, often by orders of magnitude.
Debashis Sahoo, Subramanian Iyer, Jawahar Jain, Christian Stangier, Amit Narayan, David L. Dill, E. Allen Emerson
Invariant Checking Combining Forward and Backward Traversal
Abstract
In invariant checking two directions of state space traversal are possible: Forward from initial states or backward starting from potential error states. It is not clear in advance, which direction will be computationally easier or will terminate in fewer steps. This paper presents a dynamic approach based on OBDDs for interleaving forward and backward traversal. The approach increases the chance for selecting the shorter direction and at the same time limits the overhead due to redundant computation. Additionally, a second approach using two OBDDs with different variable orders is presented, providing improved completion at the cost of some additional overhead. These approaches result in a dramatic gain in efficiency over unidirectional traversal. For the first time all benchmarks of the VIS-Verilog suite have been finished using a BDD-based method.
Christian Stangier, Thomas Sidle
Variable Reuse for Efficient Image Computation
Abstract
Image computation, that is, computing the set of states reachable from a given set in one step, is a crucial component in typical tools for BDD-based symbolic reachability analysis. It has been shown that the size of the intermediate BDDs during image computation can be dramatically reduced via conjunctive partitioning of the transition relation and ordering the conjuncts for facilitating early quantification. In this paper, we propose to enhance the effectiveness of these techniques by reusing the quantified variables. Given an ordered set of conjuncts, if the last conjunct that uses a variable u appears before the first conjunct that uses another variable v, then v can be renamed to u, assuming u will be quantified immediately after its last use. In general, multiple variables can share the same identifier so the BDD nodes that are inactive but not garbage collected may be activated. We give a polynomial-time algorithm for generating the optimum number of variables that are required for image computation and show how to modify the image computation accounting for variable reuse. The savings for image computation are demonstrated on ISCAS’89 and Texas’97 benchmark models.
Zijiang Yang, Rajeev Alur
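The renaming rule above — v may reuse u's identifier when u's last conjunct precedes v's first conjunct, because u is quantified out immediately after its last use — is essentially a register-allocation problem over variable lifetimes. The following greedy sketch of that idea is illustrative only; the paper gives a polynomial-time algorithm for the optimum variable count.

```python
def reuse_map(conjuncts):
    """conjuncts: collections of variable names, in quantification order.
    Returns (renaming dict mapping each variable to a shared slot, slot count)."""
    first, last = {}, {}
    for i, vs in enumerate(conjuncts):
        for v in vs:
            first.setdefault(v, i)   # first conjunct using v
            last[v] = i              # last conjunct using v
    free, busy = [], []              # free slots; (last_use, slot) still alive
    renaming, slots = {}, 0
    for v in sorted(first, key=lambda v: first[v]):
        # release slots whose holder was quantified out before v's first use
        for end, slot in busy[:]:
            if end < first[v]:
                busy.remove((end, slot))
                free.append(slot)
        slot = free.pop() if free else slots
        slots = max(slots, slot + 1)
        renaming[v] = slot
        busy.append((last[v], slot))
    return renaming, slots
```

For the conjunct sequence {a,b}, {b,c}, {c,d}, four variables fit in two slots: a's last use (conjunct 0) precedes c's first use (conjunct 1), so c takes over a's identifier, and d likewise reuses b's.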
Backmatter
Metadata
Title
Formal Methods in Computer-Aided Design
Edited by
Alan J. Hu
Andrew K. Martin
Copyright year
2004
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-30494-4
Print ISBN
978-3-540-23738-9
DOI
https://doi.org/10.1007/b102264