


Invited Papers

Automatic Requirement Extraction from Test Cases

This paper describes a method for extracting functional requirements from tests, where tests take the form of vectors of inputs (supplied to the system) and outputs (produced by the system in response to inputs). The approach uses data-mining techniques to infer invariants from the test data, and an automated-verification technology to determine which of these proposed invariants are indeed invariant and may thus be seen as requirements. Experimental results from a pilot study involving an automotive-electronics application show that using tests that fully cover the structure of the software yields more complete invariants than structurally agnostic black-box tests.
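The inference step can be pictured as follows. This is a minimal, hedged sketch of the general idea (candidate templates checked against every test vector), not the authors' actual tool; the templates and field names are illustrative assumptions.

```python
# Sketch of data-mining-style invariant inference from test vectors.
# The candidate templates below are hypothetical, not from the paper.

def infer_invariants(tests):
    """tests: list of (inputs, outputs) dict pairs; returns surviving candidates."""
    # Candidate invariants are (name, predicate) pairs over one test vector.
    candidates = [
        ("out >= 0",       lambda i, o: o["out"] >= 0),
        ("out >= in",      lambda i, o: o["out"] >= i["in"]),
        ("out == abs(in)", lambda i, o: o["out"] == abs(i["in"])),
    ]
    # Keep a candidate only if no test vector falsifies it; a separate
    # verification step would then decide which survivors are true invariants.
    return [name for name, pred in candidates
            if all(pred(i, o) for i, o in tests)]

tests = [({"in": -3}, {"out": 3}), ({"in": 5}, {"out": 5}), ({"in": 0}, {"out": 0})]
print(infer_invariants(tests))  # all three candidates survive on this data
```

Better-covering test suites falsify more spurious candidates, which is why structural coverage yields more trustworthy invariants.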

Chris Ackermann, Rance Cleaveland, Samuel Huang, Arnab Ray, Charles Shelton, Elizabeth Latronico

Code Contracts for .NET: Runtime Verification and So Much More

The Code Contracts project [1] comes from the Research in Software Engineering (RiSE) group [5] at Microsoft Research. We took the lessons we learned from the Spec# project [3,4] and have applied them in a setting available to all .NET programmers, without the need for them to adopt an experimental programming language or the Spec# programming methodology. Code Contracts has been available since early 2009 with a commercial-use license on the DevLabs [7] web site. Since then there have been about 20,000 downloads, with an active forum of users.

Mike Barnett

Visual Debugging for Stream Processing Applications

Stream processing is a new computing paradigm that enables continuous and fast analysis of massive volumes of streaming data. Debugging streaming applications is not trivial, since they are typically distributed across multiple nodes and handle large amounts of data. Traditional debugging techniques such as breakpoints often rely on a stop-the-world approach, which may be useful for debugging single-node applications but is insufficient for streaming applications. We propose a new visual and analytic environment to support debugging, performance analysis, and troubleshooting for stream processing applications. Our environment provides several visualization methods to study, characterize, and summarize the flow of tuples between stream processing operators. The user can interactively indicate points in the streaming application from which tuples will be traced and visualized as they flow through different operators, without stopping the application. To substantiate our discussion, we describe several of these features in the context of a financial engineering application.

Wim De Pauw, Mihai Leţia, Buğra Gedik, Henrique Andrade, Andy Frenkiel, Michael Pfeifer, Daby Sow

Runtime Verification in Context: Can Optimizing Error Detection Improve Fault Diagnosis?

Runtime verification has primarily been developed and evaluated as a means of enriching the software testing process. While many researchers have pointed to its potential applicability in online approaches to software fault tolerance, there has been a dearth of work exploring the details of how that might be accomplished.

In this paper, we describe how a component-oriented approach to software health management exposes the connections between program execution, error detection, fault diagnosis, and recovery. We identify both research challenges and opportunities in exploiting those connections. Specifically, we describe how recent approaches to reducing the overhead of runtime monitoring aimed at error detection might be adapted to reduce the overhead and improve the effectiveness of fault diagnosis.

Matthew B. Dwyer, Rahul Purandare, Suzette Person

Contracts for Scala

Contracts are partial specifications that can be added to program code. They are checked by compilers or other tools, or, more commonly, by runtime checks. Languages such as Eiffel [8], JML [7], or Spec# [1] support contracts natively. Scala [11] does not. Instead, Scala provides flexible syntax that helps in writing high-level libraries which can often mimic true language extensions. An interesting question is to what degree this syntactic flexibility supports the embedding of contracts.

Martin Odersky

Runtime Analysis and Instrumentation for Securing Software

The past decade has witnessed an explosive increase in the scale, intensity and sophistication of cyber attacks. While software vendors have significantly increased their efforts on security, they are almost always playing “catch up.” As a result, security-conscious organizations and individuals have come to expect their system administrators to deploy an array of tools and techniques to stay a step ahead of the hackers. While developer-oriented security tools rely mainly on static analysis, runtime analysis and policy enforcement are the mechanisms of choice in administrator-oriented tools. Runtime techniques offer increased automation and precision over static analysis, thus addressing the needs of administrators who don’t have the time or resources needed to acquire extensive knowledge about the internals of a software system.

In this talk, I will begin by summarizing some of the significant advances that have been achieved in the past few years in the context of software vulnerability mitigation, including buffer overflow defenses, and more recently, the impressive results that have been achieved using dynamic information-flow analysis for blocking the most popular exploits today, including SQL and command injection and cross-site scripting. I will then proceed to describe dynamic analysis and enforcement techniques aimed at another high-profile security problem faced today, namely, malware defense. Our initial focus in this regard has been on dynamic analysis techniques for extracting high-level models of program behavior. These models could be used in a variety of applications such as intrusion detection, vulnerability analysis, and security policy verification. More recently, interesting advances have been made in the context of security policy development, where combinations of static and dynamic analysis techniques have been developed to synthesize low-level, enforceable policies that achieve a high-level goal such as protecting system integrity. Finally, I will conclude the talk with a discussion of some research opportunities and challenges in software security.

R. Sekar


Run-Time Verification of Networked Software

Most applications that are in use today interoperate with other applications, so-called peers, over a network. The analysis of such distributed applications requires that the effect of the communication with peers be included. This can be achieved by writing or generating stubs of peers, or by including all processes in the execution environment. The latter approach also requires special treatment of network communication primitives.

We present an alternative approach, which analyzes a networked application by recording and caching its communication with peers. Caching becomes useful when several traces of the application are analyzed. It dispenses with the need to generate a new peer application execution for each different execution of the main application. Such a caching framework for input/output has been implemented on the Java PathFinder platform, which can be used to verify executions of non-deterministic applications at run-time.

Cyrille Valentin Artho

Clara: Partially Evaluating Runtime Monitors at Compile Time

Tutorial Supplement


Clara is a novel static-analysis framework for partially evaluating finite-state runtime monitors at compile time. Clara uses static typestate analyses to automatically convert any AspectJ monitoring aspect into a residual runtime monitor that only monitors events triggered by program locations that the analyses failed to prove safe. If the static analysis succeeds on all locations, this gives strong static guarantees. If not, the efficient residual runtime monitor is guaranteed to capture property violations at runtime. Researchers can use Clara with most runtime-monitoring tools that implement monitors as AspectJ aspects.

In this tutorial supplement, we provide references to related reading material that will allow the reader to obtain in-depth knowledge about the context in which Clara can be applied and about the techniques that underlie the Clara framework.

Eric Bodden, Patrick Lam

You Should Better Enforce Than Verify

This tutorial deals with runtime enforcement and advocates its use as an extension of runtime verification. While research efforts in runtime verification have been mainly concerned with detection of misbehaviors and acknowledgement of desired behaviors, runtime enforcement aims mainly to circumvent misbehaviors of systems and to guarantee desired behaviors. First, we propose a comparison between runtime verification and runtime enforcement. We then present previous theoretical models of runtime enforcement mechanisms and their expressive power with respect to enforcement. Next, we give an overview of existing work on runtime enforcement monitor synthesis. Finally, we propose some future challenges for runtime enforcement.
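The distinction between verification and enforcement can be made concrete with a toy example, which is an assumed illustration and not taken from the tutorial: for the property "no write before login", a verification monitor would only flag the offending event, whereas a suppression-style enforcement monitor removes it from the output sequence.

```python
# Sketch: a suppression-style enforcement monitor for the toy
# property "no 'write' event may occur before a 'login' event".
def enforce(trace):
    logged_in = False
    output = []
    for event in trace:
        if event == "login":
            logged_in = True
        if event == "write" and not logged_in:
            continue            # circumvent the misbehavior: suppress the event
        output.append(event)    # desired behavior passes through unchanged
    return output

print(enforce(["write", "login", "write"]))  # ['login', 'write']
```

Theoretical models of enforcement mechanisms classify which properties such suppressing (or delaying, or inserting) monitors can enforce.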

Yliès Falcone

Runtime Verification for the Web

A Tutorial Introduction to Interface Contracts in Web Applications

This tutorial presents an introduction to the monitoring of web applications. These applications run in a user’s web browser and exchange requests and responses with a server in the background to update their display. A demo application, called the Beep Store, illustrates why complex properties on this exchange must be verified at runtime. These properties can be formalized using an extension of Linear Temporal Logic called LTL-FO+. The tutorial concludes with the presentation of BeepBeep, a lightweight runtime monitor for web applications.
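A flavor of the kind of request/response property such a monitor checks can be given with a small sketch. The property here (every response must answer some pending request) is an assumed toy example, not one of the tutorial's actual Beep Store properties.

```python
# Sketch: runtime monitoring of a browser/server exchange. Messages are
# (kind, id) pairs; the toy property is that every response carries the
# id of a request that is still pending.
def monitor(messages):
    pending = set()
    for kind, msg_id in messages:
        if kind == "request":
            pending.add(msg_id)
        elif kind == "response":
            if msg_id not in pending:
                return False        # violation: unsolicited or duplicate response
            pending.remove(msg_id)
    return True

print(monitor([("request", 1), ("response", 1), ("response", 2)]))  # False
```

Properties that correlate message contents across time, as this one does with ids, are what motivate a first-order temporal logic such as LTL-FO+ rather than plain LTL.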

Sylvain Hallé, Roger Villemaire

Statistical Model Checking: An Overview

Quantitative properties of stochastic systems are usually specified in logics that allow one to compare the measure of executions satisfying certain temporal properties with thresholds. The model checking problem for stochastic systems with respect to such logics is typically solved by a numerical approach [31,8,35,22,21,5] that iteratively computes (or approximates) the exact measure of paths satisfying relevant subformulas; the algorithms themselves depend on the class of systems being analyzed as well as the logic used for specifying the properties. Another approach to solving the model checking problem is to simulate the system for finitely many executions, and use hypothesis testing to infer whether the samples provide statistical evidence for the satisfaction or violation of the specification. In this tutorial, we survey the statistical approach, and outline its main advantages in terms of efficiency, uniformity, and simplicity.
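The sampling approach can be sketched in a few lines. This is a hedged toy illustration of the idea (sample finitely many executions, then compare the observed satisfaction frequency with a threshold); a real statistical model checker would use a proper hypothesis test such as the SPRT, with controlled error bounds. The toy system and all names are assumptions.

```python
import random

# Sketch of statistical model checking on a toy stochastic system:
# "a fair coin lands heads within 3 tries" (true probability 0.875).
def simulate(rng):
    return any(rng.random() < 0.5 for _ in range(3))

def statistical_check(threshold, samples=10000, seed=42):
    rng = random.Random(seed)
    hits = sum(simulate(rng) for _ in range(samples))
    # Decide whether the sampled satisfaction frequency clears the threshold.
    return hits / samples >= threshold

print(statistical_check(0.8))   # the true probability 0.875 exceeds 0.8
```

Unlike the numerical approach, this only requires the ability to execute the system, which is what gives the statistical approach its uniformity across system classes.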

Axel Legay, Benoît Delahaye, Saddek Bensalem

Runtime Verification with the RV System

The RV system is the first system to merge the benefits of Runtime Monitoring with Predictive Analysis. The Runtime Monitoring portion of RV is based on the successful Monitoring Oriented Programming system developed at the University of Illinois [6,7,9,21,5], while the Predictive Analysis capability is a vastly expanded version of the jPredictor System also developed at the University of Illinois [11,14].

With the RV system, runtime monitoring is supported and encouraged as a fundamental principle for building reliable software: monitors are automatically synthesized from specified properties and integrated into the original system to check its dynamic behaviors. When certain conditions of interest occur, such as a violation of a specification, user-defined actions are triggered, which can be any code, from information logging to runtime recovery. The RV system supports the monitoring of parametric properties that may specify a relationship between objects. Properties may be defined using one of several logical formalisms, such as extended regular languages, context-free patterns, deterministic finite state machines, linear temporal logic, and past-time linear temporal logic. The system is designed in such a way that adding new logical formalisms is a relatively simple task.
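Parametric monitoring means one small monitor instance per parameter binding. The sketch below illustrates the idea with the classic "call hasNext before next" iterator property, a standard example in this line of work; it is a hedged toy, not the RV system's synthesized monitor code.

```python
from collections import defaultdict

# Sketch of parametric monitoring: one tiny finite-state monitor per
# monitored object. Property: hasNext() must precede every next() call.
def check(trace):
    safe = defaultdict(bool)        # per-object state: hasNext seen?
    violations = []
    for obj, event in trace:
        if event == "hasNext":
            safe[obj] = True
        elif event == "next":
            if not safe[obj]:
                violations.append(obj)
            safe[obj] = False       # hasNext must be called again before next next()
    return violations

print(check([("i1", "hasNext"), ("i1", "next"), ("i2", "next")]))  # ['i2']
```

In the real system the per-binding automaton is synthesized from the chosen formalism (regular expression, LTL formula, etc.) rather than hand-written.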

The predictive capabilities allow any of these monitoring specifications to be extended to check not just the actual runtime traces of program execution, but any trace that may be inferred from a constructed causal model. The Predictive Analysis also features built-in algorithms for detecting data races and atomicity violations, which are both highly useful in concurrent system design and difficult to specify in terms of formal specification languages.

Patrick Meredith, Grigore Roşu

Regular and Short Papers

A Meta-Aspect Protocol for Developing Dynamic Analyses

Dynamic aspect-oriented programming has been widely used for the development of dynamic analyses to abstract over low-level program instrumentation. Due to particular feature requirements in different analysis domains like debugging or testing, many different aspect languages were developed from scratch or by extensive compiler or interpreter extensions. We introduce another level of abstraction in the form of a meta-aspect protocol to separate the host language from the analysis domain. A language expert can use this protocol to tailor an analysis-specific aspect language, based on which a domain expert can develop a particular analysis. Our design enables a flexible specification of the join point model, configurability of aspect deployment and scoping, and extensibility of the pointcut and advice language. We present the application of our design to different dynamic analysis domains.

Michael Achenbach, Klaus Ostermann

Behavior Abstraction in Malware Analysis

We present an approach for proactive malware detection that works by abstraction of program behaviors. Our technique consists in abstracting program traces by rewriting given subtraces into abstract symbols representing their functionality. Traces are captured dynamically by code instrumentation, which allows us to handle packed or self-modifying malware. Suspicious behaviors are detected by comparing trace abstractions to reference malicious behaviors. The expressive power of abstraction allows us to handle general suspicious behaviors rather than specific malware code, and thus to detect malware mutations. We present and discuss an implementation validating our approach.

Philippe Beaucamps, Isabelle Gnaedig, Jean-Yves Marion

Clara: A Framework for Partially Evaluating Finite-State Runtime Monitors Ahead of Time

Researchers have developed a number of runtime verification tools that generate runtime monitors in the form of AspectJ aspects. In this work, we present Clara, a novel framework to statically optimize such monitoring aspects with respect to a given program under test. Clara uses a sequence of increasingly precise static analyses to automatically convert a monitoring aspect into a residual runtime monitor. The residual monitor only watches events triggered by program locations that the analyses failed to prove safe at compile time. In two-thirds of the cases in our experiments, the static analysis succeeds on all locations, proving that the program fulfills the stated properties and completely obviating the need for runtime monitoring. In the remaining cases, the residual runtime monitor is usually much more efficient than a full monitor, yet still captures all property violations at runtime.
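The net effect of partial evaluation on the monitor can be pictured very simply: program points proven safe by the static analysis are dropped from instrumentation, so the residual monitor sees a filtered event stream. This is a hedged toy rendering of the concept, not Clara's actual mechanism, which removes the instrumentation at compile time rather than filtering at runtime.

```python
# Sketch of the "residual monitor" idea: events from statically proven-safe
# program locations never reach the runtime monitor.
def residual_events(trace, proven_safe):
    """trace: list of (location, event); proven_safe: set of locations."""
    return [(loc, ev) for loc, ev in trace if loc not in proven_safe]

trace = [("L1", "open"), ("L2", "read"), ("L3", "close")]
print(residual_events(trace, proven_safe={"L2"}))  # [('L1', 'open'), ('L3', 'close')]
```

If the analysis proves every location safe, the residual event stream is empty and monitoring disappears entirely, which is the two-thirds case reported above.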

Eric Bodden, Patrick Lam, Laurie Hendren

Checking the Correspondence between UML Models and Implementation

UML class and sequence diagrams are used as the basis for runtime profiling, along with either offline or online analysis, to determine whether the execution conforms to the diagrams. Situations where sequence diagrams are intended to characterize all possible executions are described. The approach generates an execution tree of all possible sequences, using a detailed collection of graph transformations that represent a precise operational semantics for sequence diagrams, including treatment of polymorphism, multiple activations, references to other diagrams, and the use of frames in sequence diagrams. The sequence diagrams are also used to determine the information that should be gathered about method calls in the system. Aspects that can follow the flow of messages in a distributed system are generated, and the results of execution are recorded. The execution tree is used to automatically check the recorded execution to detect operations that do not correspond to any of the diagrams. These represent either new types of sequence diagrams that should be added to the collection, or implementation errors where the system is not responding as designed. In either case, it is important to identify such situations.

Selim Ciraci, Somayeh Malakuti, Shmuel Katz, Mehmet Aksit

Compensation-Aware Runtime Monitoring

To avoid the large overheads induced by runtime monitoring, asynchronous log-based monitoring is sometimes adopted, even though this implies that the system may proceed further despite having reached an anomalous state. Any actions performed by the system after the error occurs are undesirable, since, for instance, an unchecked malicious user may perform unauthorized actions. Since stopping such actions is not feasible, in this paper we investigate the use of compensations to enable the undoing of actions, thus enriching asynchronous monitoring with the ability to restore the system to the original state in which the anomaly occurred. Furthermore, we show that the monitor can be allowed to adaptively synchronise and desynchronise with the system, and we report on the use of the approach on an industrial case study of a financial transaction system.
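The compensation mechanism can be sketched as follows. This is a hedged toy model of the concept (each action carries an undo, and actions performed after the anomalous state are compensated in reverse order); the paper's actual setting is transaction-based, and all names here are illustrative.

```python
# Sketch: compensation-aware recovery. Actions performed after the point
# where the (asynchronous) monitor detects an anomaly are undone in
# reverse order, restoring the state at which the anomaly occurred.
def run_with_compensations(actions, error_detected_after):
    state, log = [], []
    for i, (do, undo) in enumerate(actions):
        do(state)
        if i > error_detected_after:
            log.append(undo)          # action past the anomaly: remember its undo
    for undo in reversed(log):        # compensate, newest action first
        undo(state)
    return state

actions = [(lambda s: s.append("a"), lambda s: s.remove("a")),
           (lambda s: s.append("b"), lambda s: s.remove("b")),
           (lambda s: s.append("c"), lambda s: s.remove("c"))]
print(run_with_compensations(actions, error_detected_after=0))  # ['a']
```

Reverse order matters because a later action may depend on the effects of an earlier one; compensations must be unwound like a stack.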

Christian Colombo, Gordon J. Pace, Patrick Abela

Recovery Tasks: An Automated Approach to Failure Recovery

We present a new approach for developing robust software applications that breaks dependences on the failed parts of an application’s execution to allow the rest of the application to continue executing. When a failure occurs, the recovery algorithm uses information from a static analysis to characterize the intended behavior of the application had it not failed. It then uses this characterization to recover as much of the application’s execution as possible.

We have implemented this approach in the Bristlecone compiler. We have evaluated our implementation on a multiplayer game, a web portal, and a MapReduce framework. We found that in the presence of injected failures, the recovery task version provided substantially better service than the control versions. Moreover, the recovery task version of the game benchmark successfully recovered from a real fault that we accidentally introduced during development, while the same fault caused the two control versions to crash.

Brian Demsky, Jin Zhou, William Montaz

Formally Efficient Program Instrumentation

The term “instrumentation” refers to modification of a program or its runtime environment to make hidden details of execution visible. Instrumentation can severely compromise program execution speed. Frameworks like DTrace (Sun Microsystems) and VProbes (VMware) offer practical ways of addressing performance concerns, but there has been no formal understanding of what it means for instrumentation to be efficient. To fill this gap, we propose a criterion based on that of Popek and Goldberg for virtual machines and on our previous work relating this to Jones optimality of program specializers. We further suggest linguistic constraints on instrumentation code to enable more aggressive static optimization of dynamically instrumented programs.

Boris Feigin, Alan Mycroft

Interval Analysis for Concurrent Trace Programs Using Transaction Sequence Graphs

Concurrent trace programs (CTPs) are slices of concurrent programs that generate concrete program execution traces, where the inter-thread event orders specific to the given traces are relaxed. For such CTPs, we introduce the transaction sequence graph (TSG) as a model for efficient concurrent data flow analysis. The TSG is a digraph of thread-local control nodes, with edges corresponding to transactions and possible context switches. Such a graph captures all the representative interleavings of these nodes/transactions. We use a mutually atomic transaction (MAT) based partial order reduction to construct the TSG. We also present a non-trivial improvement to the original MAT analysis that further reduces TSG sizes. As an application, we have used interval analysis in our experiments to show that the TSG leads to more precise intervals and more time/space-efficient concurrent data flow analysis than standard models such as the concurrent control flow graph.

Malay K. Ganai, Chao Wang

Causality Analysis in Contract Violation

Establishing liabilities in component-based systems is a challenging task, as it requires establishing convincing evidence with respect to the occurrence of a fault, and the causality relation between the fault and the damage. The second issue is especially complex when several faults are detected and the impact of these faults on the occurrence of the failure has to be assessed. In this paper we propose a formal framework for reasoning about logical causality between contract violations.

Gregor Gössler, Daniel Le Métayer, Jean-Baptiste Raclet

Reducing Configurations to Monitor in a Software Product Line

A software product line is a family of programs where each program is defined by a unique combination of features. Product lines, like conventional programs, can be checked for safety properties through execution monitoring. However, because a product line induces a number of programs that is potentially exponential in the number of features, it would be very expensive to use existing monitoring techniques: one would have to apply those techniques to every single program. Doing so would also be wasteful, because many programs can provably never violate the stated property. We introduce a monitoring technique dedicated to product lines that, given a safety property, statically determines the feature combinations that cannot possibly violate the property, thus reducing the number of programs to monitor. Experiments show that our technique is effective, particularly for safety properties that crosscut many optional features.
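The pruning idea can be made concrete with a toy model, which is a hedged assumption-laden sketch rather than the paper's static analysis: if a product lacks a feature the property's events depend on, that product can provably never violate the property and need not be monitored.

```python
from itertools import chain, combinations

# Sketch: a product line over 3 optional features induces 2^3 = 8 products.
def products(features):
    return chain.from_iterable(combinations(features, r)
                               for r in range(len(features) + 1))

def needs_monitoring(product, property_features):
    # Toy pruning rule (assumed): only products containing every feature
    # the property depends on can possibly violate it.
    return property_features <= set(product)

features = ["Logging", "Crypto", "Net"]
to_monitor = [p for p in products(features)
              if needs_monitoring(p, {"Crypto", "Net"})]
print(len(list(products(features))), len(to_monitor))  # 8 2
```

Here monitoring effort drops from 8 products to 2; the real technique derives which feature combinations matter from the safety property itself rather than from a given feature set.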

Chang Hwan Peter Kim, Eric Bodden, Don Batory, Sarfraz Khurshid

Runtime Instrumentation for Precise Flow-Sensitive Type Analysis

We describe a combination of runtime information and static analysis for checking properties of complex and configurable systems. The basic idea of our approach is to 1) let the program execute and thereby read the important dynamic configuration data, then 2) invoke static analysis from this runtime state to detect possible errors that can happen in the continued execution. This approach improves analysis precision, particularly with respect to types of global variables and nested data structures. It also enables the resolution of modules that are loaded based on dynamically computed information.

We describe an implementation of this approach in a tool that statically computes possible types of variables in PHP applications, including detailed types of nested maps (arrays). PHP is a dynamically typed language; PHP programs extensively use nested value maps, as well as 'include' directives whose arguments are dynamically computed file names. We have applied our analysis tool to over 50,000 lines of PHP code, including the popular DokuWiki software, which has a plug-in architecture. The analysis identified 200 problems in the code and in the type hints of the original source code base. Some of these problems can cause exploits, infinite loops, and crashes. Our experiments show that dynamic information simplifies the development of the analysis and decreases the number of false alarms compared to a purely static analysis approach.

Etienne Kneuss, Philippe Suter, Viktor Kuncak

Trace Recording for Embedded Systems: Lessons Learned from Five Industrial Projects

This paper presents experiences from five industry collaboration projects performed between 2004 and 2009, in which solutions for embedded-systems trace recording were developed and evaluated: in four cases for specific industrial systems, and in the last case as a generic solution for a commercial real-time operating system, in collaboration with the RTOS company. The experiences include technical solutions regarding efficient instrumentation and logging, technology-transfer issues, and evaluation results regarding CPU and RAM overhead. A brief overview of the Tracealyzer tool is also presented, a result of the first project (2004), which is still used by ABB Robotics and is now being commercialized.

Johan Kraft, Anders Wall, Holger Kienle

Verification of an AFDX Infrastructure Using Simulations and Probabilities

Until recently, there was no strong need for networking inside aircraft: communications were mainly cabled and handled by Ethernet protocols. The evolution of avionics embedded systems and the number of functions integrated in civilian aircraft have changed the situation: these functionalities imply a huge increase in the quantity of data exchanged, and thus in the number of connections between functions. Among the mechanisms available to handle this new complexity, one finds Avionics Full Duplex Switched Ethernet (AFDX), a protocol that makes it possible to simulate a point-to-point network between a source and one or more destinations. The core idea in AFDX is that of Virtual Links (VLs), which are used to simulate point-to-point communication between devices. One of the main challenges is to show that the total delivery time for packets on a VL is bounded by some predefined value. This is a difficult problem that also requires a formal, yet easily evolvable, model of the AFDX network. In this paper, we propose to use a component-based design methodology to describe the behavior of the model. We then propose a stochastic abstraction that not only simplifies the verification process but also provides quantitative information on the protocol.

Ananda Basu, Saddek Bensalem, Marius Bozga, Benoît Delahaye, Axel Legay, Emmanuel Sifakis

Copilot: A Hard Real-Time Runtime Monitor

We address the problem of runtime monitoring for hard real-time programs—a domain in which correctness is critical yet has largely been overlooked in the runtime monitoring community. We describe the challenges to runtime monitoring for this domain as well as an approach to satisfy the challenges. The core of our approach is a language and compiler called Copilot. Copilot is a stream-based dataflow language that generates small constant-time and constant-space C programs, implementing embedded monitors. Copilot also generates its own scheduler, obviating the need for an underlying real-time operating system.
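The constant-time, constant-space discipline of a stream-based monitor can be illustrated with a small sketch. This is a hedged toy in Python (Copilot itself generates C from a Haskell-embedded language); the bounded-jump property and all names are assumptions for the example.

```python
# Sketch: a stream-based monitor with O(1) state per step, checking that
# consecutive sensor samples never differ by more than a fixed bound.
def make_monitor(max_jump):
    prev = None
    def step(sample):
        nonlocal prev
        ok = prev is None or abs(sample - prev) <= max_jump
        prev = sample            # only constant state is retained across steps
        return ok
    return step

monitor = make_monitor(max_jump=10)
print([monitor(x) for x in [100, 105, 103, 150]])  # [True, True, True, False]
```

Because each step touches a fixed amount of state and does a fixed amount of work, the worst-case execution time of the monitor is statically bounded, which is what hard real-time scheduling requires.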

Lee Pike, Alwyn Goodloe, Robin Morisset, Sebastian Niller

StealthWorks: Emulating Memory Errors

A study of Google’s data center revealed that the incidence of main memory errors is surprisingly high. These errors can lead to application and system corruption, impacting reliability. The high error rate is an indication that new resiliency techniques will be vital in future memories. To develop such approaches, a framework is needed to conduct flexible and repeatable experiments. This paper describes such a framework, StealthWorks, to facilitate research on software resilience by behaviorally emulating memory errors in a live system. We illustrate it to study program tolerance to random errors and in the development of a new software technique to continuously test memory for errors.

Musfiq Rahman, Bruce R. Childers, Sangyeun Cho

Efficient Data Race Detection for Async-Finish Parallelism

A major productivity hurdle for parallel programming is the presence of

data races

. Data races can lead to all kinds of harmful program behaviors, including determinism violations and corrupted memory. However, runtime overheads of current dynamic data race detectors are still prohibitively large (often incurring slowdowns of 10× or larger) for use in mainstream software development.

In this paper, we present an efficient dynamic race detector algorithm targeting the async-finish task-parallel programming model. The async and finish constructs are at the core of languages such as X10 and Habanero Java (HJ). These constructs generalize the spawn-sync constructs used in Cilk, while still ensuring that all computation graphs are deadlock-free.

We have implemented our algorithm in a tool and evaluated it on a suite of 12 benchmarks. To reduce the overhead of the dynamic analysis, we have also implemented various static optimizations in the tool. Our experimental results indicate that our approach performs well in practice, incurring an average slowdown of 3.05× compared to a serial execution in the optimized case.

Raghavan Raman, Jisheng Zhao, Vivek Sarkar, Martin Vechev, Eran Yahav

Run-Time Verification of Optimistic Concurrency

Assertion-based specifications are not suitable for optimistic concurrency, where concurrent operations are performed assuming no conflict among threads and correctness is cast in terms of the absence or presence of conflicts that happen in the future. What is needed is a formalism that allows expressing constraints about the future. In previous work, we introduced tressa claims and incorporated prophecy variables as one such formalism. We investigated the static verification of tressa claims and how tressa claims improve reduction proofs.

In this paper, we consider tressa claims in the run-time verification of optimistic concurrency implementations. We formalize, via a simple grammar, the annotation of a program with tressa claims. Our method relieves the user from dealing with explicit manipulation of prophecy variables. We demonstrate the use of tressa claims in expressing complex properties with simple syntax.

We develop a run-time verification framework which enables the user to evaluate the correctness of tressa claims. To this end, we first describe the algorithms for monitor synthesis which can be used to evaluate the satisfaction of a tressa claim over a given execution. We then describe our tool implementing these algorithms. We report our initial test results.

Ali Sezgin, Serdar Tasiran, Kivanc Muslu, Shaz Qadeer

Who Guards the Guardians? — Toward V&V of Health Management Software

(Short Paper)

Highly complex and safety-critical systems are not only present in traditional areas such as the aerospace and nuclear power industries, but are also becoming ubiquitous in other areas, including the automobile, health care, and manufacturing industries. Integrated System Health Management (ISHM) systems are being created to detect, diagnose, predict, and potentially mitigate adverse events during the operation of such systems. ISHM systems obtain data from multiple sensors and perform reasoning about the state of health of the system based on data and physical models. Usually, major electromechanical or hydraulic subsystems in aircraft are monitored by an ISHM system.

Johann Schumann, Ashok N. Srivastava, Ole J. Mengshoel

Aspect-Oriented Instrumentation with GCC

We present the InterAspect instrumentation framework for GCC, a widely used compiler infrastructure. The addition of plug-in support in the latest release of GCC makes it an attractive platform for runtime instrumentation, as GCC plug-ins can directly add instrumentation by transforming the compiler’s intermediate representation. Such transformations, however, require expert knowledge of GCC internals. InterAspect addresses this situation by allowing instrumentation plug-ins to be developed using the familiar vocabulary of Aspect-Oriented Programming: pointcuts, join points, and advice functions. InterAspect also supports powerful customized instrumentation, where specific information about each join point in a pointcut, as well as results of static analysis, can be used to customize the inserted instrumentation. We introduce the InterAspect API and present several examples that illustrate how it can be applied to useful runtime verification problems.

Justin Seyster, Ketan Dixit, Xiaowan Huang, Radu Grosu, Klaus Havelund, Scott A. Smolka, Scott D. Stoller, Erez Zadok
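
The pointcut/join-point/advice vocabulary the abstract invokes can be illustrated with a short, self-contained Python analogy. This is only a sketch of the programming model, not the framework's actual API (which is a C API inside GCC); all names here are invented for illustration.

```python
import functools

def pointcut(predicate):
    """Return an 'advice attacher' for functions whose names match the predicate."""
    def attach(advice):
        def wrap(fn):
            if not predicate(fn.__name__):
                return fn                      # join point not matched: untouched
            @functools.wraps(fn)
            def instrumented(*args, **kwargs):
                advice(fn.__name__, args)      # 'before' advice at the join point
                return fn(*args, **kwargs)
            return instrumented
        return wrap
    return attach

calls = []
def trace(name, args):
    calls.append((name, args))

# Pointcut: every function whose name starts with "handle_".
trace_handlers = pointcut(lambda name: name.startswith("handle_"))(trace)

@trace_handlers
def handle_request(x):
    return x + 1

@trace_handlers
def helper(x):
    return x * 2

print(handle_request(41), helper(3))   # 42 6
print(calls)                           # [('handle_request', (41,))]
```

The point of the model is visible even at this scale: the instrumentation (`trace`) and the pointcut that selects where it applies are written separately from the instrumented code, which is what the framework enables for GCC's intermediate representation.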

Runtime Verification for Software Transactional Memories

Software transactional memories (STMs) promise simple and efficient concurrent programming. Several correctness properties have been proposed for STMs. Building on a bounded conflict-graph algorithm for verifying the correctness of STMs, we develop a tool for the runtime verification of STM implementations. The novelty of the tool lies in the way it combines coarse and precise runtime analyses to guarantee sound and complete verification in an efficient manner. We implement the tool in the TL2 STM implementation and evaluate its performance on the STAMP benchmarks. While a precise runtime verification technique based on conflict graphs results in an average slowdown of 60x, our two-level approach performs complete verification with an average slowdown of around 25x across the benchmarks.

Vasu Singh
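
To make the conflict-graph idea concrete, the following sketch builds a conflict graph from a transaction history and checks it for cycles. This is the textbook conflict-serializability check, not the tool's bounded algorithm; the history format and names are invented for illustration.

```python
from collections import defaultdict

def conflict_graph(history):
    """history: list of (txn, op, var) with op in {'r', 'w'}, in execution order."""
    edges = defaultdict(set)
    for i, (t1, op1, v1) in enumerate(history):
        for t2, op2, v2 in history[i + 1:]:
            # A conflict needs the same variable, different transactions,
            # and at least one write; it orders t1 before t2.
            if v1 == v2 and t1 != t2 and 'w' in (op1, op2):
                edges[t1].add(t2)
    return edges

def has_cycle(edges):
    """Iterative-style DFS with white/grey/black colouring; grey back edge = cycle."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = defaultdict(int)
    def visit(n):
        colour[n] = GREY
        for m in edges[n]:
            if colour[m] == GREY:
                return True          # back edge: conflict cycle found
            if colour[m] == WHITE and visit(m):
                return True
        colour[n] = BLACK
        return False
    return any(colour[n] == WHITE and visit(n) for n in list(edges))

# T1 reads x before T2 writes it, while T2 writes y before T1 reads it:
# edges T1->T2 and T2->T1, so the run is not conflict-serializable.
bad = [('T1', 'r', 'x'), ('T2', 'w', 'y'), ('T1', 'r', 'y'), ('T2', 'w', 'x')]
print(has_cycle(conflict_graph(bad)))   # True
ok = [('T1', 'r', 'x'), ('T1', 'w', 'x'), ('T2', 'r', 'x')]
print(has_cycle(conflict_graph(ok)))    # False
```

The quadratic pairwise scan above is exactly the kind of precise-but-expensive analysis that motivates combining it with a cheaper coarse check, as the abstract describes.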

Optimized Temporal Monitors for SystemC

SystemC is a modeling language built as an extension of C++. Its growing popularity and the increasing complexity of designs have motivated research efforts aimed at the verification of SystemC models using assertion-based verification (ABV), where the designer asserts properties that capture the design intent in a formal language such as PSL or SVA. The model can then be verified against the properties using runtime or formal verification techniques. In this paper we focus on the automated generation of runtime monitors from temporal properties, aiming to minimize runtime overhead rather than monitor size or monitor-generation time. We identify four issues in monitor generation: state minimization, alphabet representation, alphabet minimization, and monitor encoding. We conduct extensive experimentation on a synthetic workload and identify a configuration that offers the best performance in terms of runtime overhead.

Deian Tabakov, Moshe Y. Vardi
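
A generated runtime monitor is, at bottom, an automaton over signal valuations encoded as a transition table. The sketch below hand-writes one such monitor for an assumed property, "every req is followed by ack within 2 cycles"; the property, alphabet, and encoding are all invented for illustration, and the paper's concern is precisely how to choose these encodings for minimal overhead.

```python
REJECT = -1

def make_monitor():
    # States: 0 = idle, 1 = req seen (2 cycles left), 2 = req seen (1 cycle left).
    # A new req while one is pending keeps the tighter deadline.
    table = {
        (0, 'req'): 1, (0, 'ack'): 0, (0, 'idle'): 0,
        (1, 'ack'): 0, (1, 'idle'): 2, (1, 'req'): 2,
        (2, 'ack'): 0, (2, 'idle'): REJECT, (2, 'req'): REJECT,
    }
    state = 0
    def step(symbol):
        nonlocal state
        if state != REJECT:
            state = table[(state, symbol)]
        return state != REJECT   # False once the safety property is violated
    return step

m = make_monitor()
print(all(m(s) for s in ['idle', 'req', 'idle', 'ack']))  # True: ack in time
m = make_monitor()
print(all(m(s) for s in ['req', 'idle', 'idle']))         # False: deadline missed
```

Each simulation step costs one table lookup, which is why the choice of state encoding and alphabet representation, rather than monitor size, dominates runtime overhead.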

Runtime Verification of Stochastic, Faulty Systems

We desire a capability for the lifelong verification of complex embedded systems that degrade over time, such as a semi-autonomous car. The field of runtime verification has developed many tools for monitoring the safety of software systems in real time. However, these tools do not allow for uncertainty in the system’s state or failure, both of which are essential for monitoring hardware as it degrades. This work augments runtime verification with techniques from model-based estimation in order to provide a capability for monitoring the safety criteria of mixed hardware/software systems that is robust to uncertainty and hardware failure.

We begin by framing the problem as runtime verification of stochastic, faulty, hidden-state systems. We solve this problem by performing belief state estimation over the combined state of the Büchi automata representing the safety requirements and the probabilistic hierarchical constraint automata representing the embedded system. This method provides a clean framing of safety monitoring of mixed stochastic systems as an instance of Bayesian filtering.

Cristina M. Wilcox, Brian C. Williams
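
The "safety monitoring as Bayesian filtering" framing can be sketched with a two-state discrete filter. The model below (a healthy/failed plant with a noisy sensor) is an invented toy; the paper's combination of Büchi automata with probabilistic hierarchical constraint automata is far richer, but the belief update has the same predict-correct shape.

```python
STATES = ['ok', 'failed']
T = {'ok': {'ok': 0.95, 'failed': 0.05},
     'failed': {'ok': 0.0, 'failed': 1.0}}          # failure is permanent
O = {'ok': {'nominal': 0.9, 'alarm': 0.1},
     'failed': {'nominal': 0.2, 'alarm': 0.8}}      # noisy sensor model

def update(belief, obs):
    """One step of discrete Bayes filtering: predict, correct, normalise."""
    predicted = {s: sum(belief[p] * T[p][s] for p in STATES) for s in STATES}
    corrected = {s: predicted[s] * O[s][obs] for s in STATES}
    z = sum(corrected.values())
    return {s: corrected[s] / z for s in STATES}

belief = {'ok': 1.0, 'failed': 0.0}
for obs in ['nominal', 'alarm', 'alarm']:
    belief = update(belief, obs)
    # A monitor would flag the run once P(unsafe) crosses a threshold.
    print(round(belief['failed'], 3), belief['failed'] > 0.5)
```

Unlike a classic boolean monitor, the verdict here is a probability that degrades gracefully with sensor noise, which is exactly the robustness to uncertainty the abstract argues for.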

Low-Overhead Bug Fingerprinting for Fast Debugging

There is a gap between the information available at the time of a software failure and the information actually shipped to developers in the corresponding bug report. As a result, identifying the cause of the bug based on this bug report is often difficult. To close this gap, we propose bug fingerprints — an augmentation of classic automated bug reports with runtime information about how the reported bug occurred in production.

Classic automated bug reporting systems contain at most a coredump that describes the final manifestation of a bug. In contrast, bug fingerprints contain additional small amounts of highly relevant runtime information that helps understand how the bug occurred. We show how these “fingerprints” can be used to speed up both manual and automated debugging. As a proof of concept, we present DCop, a system for collecting such runtime information about deadlocks and including it in the corresponding bug reports. The runtime overhead introduced by DCop is negligible (less than 0.17% for the Apache Web server), so it is suitable for use in production.

Cristian Zamfir, George Candea
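
For deadlocks, a fingerprint can be as small as the lock-order cycle involved. The sketch below is an illustrative reconstruction of that idea, not DCop's actual implementation: per-acquisition bookkeeping is a few dictionary updates (which is why overhead can stay negligible), and the cycle found is what would be attached to the bug report.

```python
from collections import defaultdict

held = defaultdict(list)      # thread -> locks currently held
order = defaultdict(set)      # lock -> locks acquired while holding it

def on_acquire(thread, lock):
    """Runtime hook: record lock ordering with a few cheap dict updates."""
    for h in held[thread]:
        order[h].add(lock)
    held[thread].append(lock)

def on_release(thread, lock):
    held[thread].remove(lock)

def fingerprint():
    """Return a lock-order cycle (the deadlock 'fingerprint'), if any."""
    for a in list(order):
        for b in order[a]:
            if a in order.get(b, set()):
                return (a, b)
    return None

# Thread 1 takes A then B; thread 2 takes B then A: the classic deadlock pattern.
on_acquire('t1', 'A'); on_acquire('t1', 'B'); on_release('t1', 'B'); on_release('t1', 'A')
on_acquire('t2', 'B'); on_acquire('t2', 'A')
print(fingerprint())   # ('A', 'B'): the cycle to ship with the bug report
```

A coredump taken after the hang shows only the final blocked state; the recorded acquisition order explains how the threads got there, which is the gap fingerprints close.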

Tool Demonstrations

ESAT: A Tool for Animating Logic-Based Specifications of Evolvable Component Systems

An increasingly important area of runtime monitoring is the incorporation of techniques for diagnosis and repair, for example, in autonomic control applications [9], in robotics, and in e-business process change [12]. In particular, a runtime monitor becomes a 'supervisor': a process which not only monitors but may evolve the running system dynamically. In [4], a framework for the logical modelling of hierarchically structured supervised component systems was set out. The modelling captures the following key behavioural concepts: at runtime, a supervisory component can (i) monitor its supervisee to ensure conformance against desired behaviour, (ii) analyse reasons for non-conformance, should that arise, (iii) evolve its supervisee in a pre-programmed way following diagnosis, or via external stimulus received from higher-level supervisory components. Structurally, components may contain sub-components, actions over the state of the component, and programs over the actions. In this logical framework, components are specified by first-order logic theories. Actions are either basic revisions to the state of the component or combinations of actions. Crucially, a supervisory component is treated as a logical theory meta to its supervisee, thus providing access to all facets of the supervisee's structure. A supervisory component program is executed meta to its supervisee's program. Synchronisation between the two may occur through a variety of schemes, from lock-step synchronisation to asynchronous execution with defined synchronisation points. A supervisory program action may evolve its supervisee by making changes to its state, to its actions, to its sub-components, or to its program. This occurs in the logical framework via a theory change induced from the meta-level.

Djihed Afifi, David E. Rydeheard, Howard Barringer
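
The monitor-diagnose-evolve loop can be caricatured in a few lines. The classes, invariant, and repair below are invented toys (ESAT works with first-order logic theories, not Python objects), shown only to make the supervisor/supervisee pattern tangible.

```python
class Supervisee:
    def __init__(self):
        self.state = 0
        self.actions = {'inc': lambda s: s + 1}
    def step(self, action):
        self.state = self.actions[action](self.state)
        return self.state

class Supervisor:
    def __init__(self, supervisee, invariant):
        self.supervisee = supervisee
        self.invariant = invariant     # desired behaviour, held at the meta level
    def run(self, action):
        result = self.supervisee.step(action)     # lock-step synchronisation
        if not self.invariant(result):
            # Diagnose and evolve: replace the offending action and repair state.
            self.supervisee.actions[action] = lambda s: max(0, s - 1)
            self.supervisee.state = result - 2    # pre-programmed repair
        return self.supervisee.state

sup = Supervisor(Supervisee(), invariant=lambda s: s <= 2)
print([sup.run('inc') for _ in range(5)])   # [1, 2, 1, 0, 0]
```

The key structural point survives the caricature: the supervisor has full access to the supervisee's state, actions, and program, and evolution happens by rewriting them from that meta level.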

A Tool Which Mines Partial Execution Traces to Improve Static Analysis

We present a tool that performs abstract-interpretation-based static analysis of numerical variables. The novelty is that the analysis is parametric, and parameters are chosen by applying a variant of principal component analysis to partial execution traces of programs.

Gianluca Amato, Maurizio Parton, Francesca Scozzari
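
The following sketch shows the underlying idea in two dimensions: mine a dominant direction from observed program states and use it as a template constraint for the numerical analysis. It is an illustration of plain PCA on a toy trace, not the paper's variant or analyser; all names and data are invented.

```python
import math

def dominant_direction(points):
    """Principal direction of 2-D data: eigenvector of the covariance matrix."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    a = sum((p[0] - mx) ** 2 for p in points) / n             # var(x)
    c = sum((p[1] - my) ** 2 for p in points) / n             # var(y)
    b = sum((p[0] - mx) * (p[1] - my) for p in points) / n    # cov(x, y)
    # Angle of the major axis of the covariance ellipse.
    theta = 0.5 * math.atan2(2 * b, a - c)
    return (math.cos(theta), math.sin(theta))

# Observed (x, y) states from runs of a loop where y stays close to 2*x.
trace = [(0.0, 0.1), (1.0, 2.1), (2.0, 3.9), (3.0, 6.0), (4.0, 8.1)]
d = dominant_direction(trace)
print(d)   # roughly (0.45, 0.89), i.e. along y = 2x
# A template constraint for the abstract domain: bounds on d . (x, y).
proj = [d[0] * x + d[1] * y for x, y in trace]
print(round(min(proj), 2), round(max(proj), 2))
```

The mined direction becomes a constraint template (here, bounds on a linear combination of x and y) that a generic interval analysis over the raw variables would miss.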

LarvaStat: Monitoring of Statistical Properties

Execution paths expose non-functional information such as system reliability and performance, which can be collected using runtime verification techniques. Statistics gathering and evaluation can be very useful for processing such information in areas ranging from performance profiling to user modelling and intrusion detection. In this paper, we give an overview of LarvaStat — a runtime verification tool extending Larva [2] with the ability to straightforwardly specify real-time related statistical properties. Being automaton-based, LarvaStat also makes explicit the overhead induced by monitoring.

Christian Colombo, Andrew Gauci, Gordon J. Pace
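
A statistical runtime property differs from a boolean one in that the verdict depends on an aggregate over the trace. The sketch below encodes one assumed example — "the moving average of the last three response times must stay below 100 ms" — in plain Python rather than LarvaStat's automaton notation; names and thresholds are invented.

```python
from collections import deque

def make_stat_monitor(window=3, limit=100.0):
    recent = deque(maxlen=window)       # sliding window of observations
    def observe(response_ms):
        recent.append(response_ms)
        avg = sum(recent) / len(recent)
        return avg < limit              # False signals a statistical violation
    return observe

mon = make_stat_monitor()
verdicts = [mon(t) for t in [40.0, 80.0, 90.0, 150.0, 160.0]]
print(verdicts)   # [True, True, True, False, False]
```

No single slow response violates the property; only the aggregate does — which is why such properties need dedicated statistical support rather than plain event matching.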

WS-PSC Monitor: A Tool Chain for Monitoring Temporal and Timing Properties in Composite Service Based on Property Sequence Chart

Web service composition is a new paradigm for developing distributed and reactive software-intensive systems. Because each basic service is autonomous, validation of composite services must be extended from design time to run-time. In this paper, we describe a novel tool chain called WS-PSC Monitor for monitoring temporal and timing properties of composite services, based on the graphical specifications Property Sequence Chart and Timed Property Sequence Chart. The tool chain provides a completely graphical front-end, so that software designers do not have to deal with any particular textual or logical formalism.

Pengcheng Zhang, Zhiyong Su, Yuelong Zhu, Wenrui Li, Bixin Li
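
Behind the graphical front-end, a timed property boils down to a check over a timestamped event trace. The sketch below checks one assumed property of this kind — every request must be answered within a deadline — and is only an illustration of the check itself, not the tool's chart semantics.

```python
def check_timed_response(trace, deadline):
    """trace: list of (timestamp, event); returns the first violation or None."""
    pending = None   # timestamp of the oldest unanswered request
    for ts, event in trace:
        if pending is not None and ts - pending > deadline:
            return ('deadline missed', pending)
        if event == 'request' and pending is None:
            pending = ts
        elif event == 'response':
            pending = None
    return None

ok = [(0, 'request'), (3, 'response'), (10, 'request'), (14, 'response')]
bad = [(0, 'request'), (9, 'response')]
print(check_timed_response(ok, deadline=5))    # None
print(check_timed_response(bad, deadline=5))   # ('deadline missed', 0)
```

In the tool chain, the designer draws the timed sequence chart and the monitor performing checks of this shape is generated automatically.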

