
2013 | Book

Runtime Verification

4th International Conference, RV 2013, Rennes, France, September 24-27, 2013. Proceedings

Editors: Axel Legay, Saddek Bensalem

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 4th International Conference on Runtime Verification, RV 2013, held in Rennes, France, in September 2013. The 24 revised full papers presented together with 3 invited papers, 2 tool papers, and 6 tutorials were carefully reviewed and selected from 58 submissions. The papers address a wide range of topics: specification languages and formalisms for traces; specification mining; program instrumentation; monitor construction techniques; logging, recording, and replay; fault detection, localization, recovery, and repair; program steering and adaptation; metrics and statistical information gathering; combination of static and dynamic analyses; and program execution visualization.

Table of Contents

Frontmatter

Invited Paper

Executing Specifications Using Synthesis and Constraint Solving

Specifications are key to improving software reliability as well as documenting precisely the intended behavior of software. Writing specifications is still perceived as expensive. Of course, writing implementations is at least as expensive, but is hardly questioned because there is currently no real alternative. Our goal is to give specifications a more balanced role compared to implementations, enabling the developers to compile, execute, optimize, and verify against each other mixed code fragments containing both specifications and implementations. To make specification constructs executable we combine deductive synthesis with run-time constraint solving, in both cases leveraging modern SMT solvers. Our tool decomposes specifications into simpler fragments using a cost-driven deductive synthesis framework. It compiles as many fragments as possible into conventional functional code; it executes the remaining fragments by invoking our constraint solver that extends an SMT solver to handle recursive functions. Using this approach we were able to execute constraints that describe the desired properties of integers, sets, maps and algebraic data types.

Viktor Kuncak, Etienne Kneuss, Philippe Suter

Regular Papers

Informative Types and Effects for Hybrid Migration Control

Flow policy confinement is a property of programs whose declassifications respect the allowed flow policy of the context in which they execute. In a distributed setting where computation domains enforce different allowed flow policies, code migration between domains implies dynamic changes to the relevant allowed policy. Furthermore, when programs consist of more than one thread running concurrently, the same program might need to comply with more than one allowed flow policy simultaneously. In this scenario, confinement can be enforced as a migration control mechanism. In the present work we compare three type-based enforcement mechanisms for confinement with regard to the precision and efficiency of the analysis. In particular, we propose an efficient hybrid mechanism based on statically annotating programs with the declassification effect of migrating code. This is done by means of an informative type and effect pre-processing of the program, and is used for supporting runtime decisions.

Ana Almeida Matos, Jan Cederquist
Monitoring of Temporal First-Order Properties with Aggregations

Compliance policies often stipulate conditions on aggregated data. Current policy monitoring approaches are limited in the kind of aggregations that they can handle. We rectify this as follows. First, we extend metric first-order temporal logic with aggregation operators. This extension is inspired by the aggregation operators common in database query languages like SQL. Second, we provide a monitoring algorithm for this enriched policy specification language. Finally, we experimentally evaluate our monitor’s performance.

David Basin, Felix Klaedtke, Srdjan Marinovic, Eugen Zălinescu
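
To make the flavour of such aggregated policies concrete, here is a minimal Python sketch that checks a sliding-window SUM constraint over a timestamped event stream. It is illustrative only, not the authors' MFOTL monitor; the 30-day window, the withdrawal limit, and all names are invented.

```python
from collections import defaultdict, deque

WINDOW = 30 * 24 * 3600   # 30 days in seconds (invented policy, not from the paper)
LIMIT = 10_000            # maximum aggregated withdrawal amount per user

events = deque()          # (timestamp, user, amount), ordered by timestamp
totals = defaultdict(int) # running per-user sum over the current window

def observe(ts, user, amount):
    """Process one 'withdraw' event and report a policy violation, if any."""
    # Slide the window: discard events older than WINDOW seconds.
    while events and events[0][0] <= ts - WINDOW:
        _, old_user, old_amount = events.popleft()
        totals[old_user] -= old_amount
    events.append((ts, user, amount))
    totals[user] += amount
    if totals[user] > LIMIT:
        print(f"violation at {ts}: user {user} withdrew {totals[user]} within the window")

# Example trace
observe(0, "alice", 6000)
observe(100, "alice", 5000)   # aggregated sum 11000 > 10000 -> reported
```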
From Propositional to First-Order Monitoring

The main purpose of this paper is to introduce a first-order temporal logic, $\mathrm{LTL}^{\mathrm{FO}}$, and a corresponding monitor construction based on a new type of automaton, called spawning automaton.

Specifically, we show that monitoring a specification in $\mathrm{LTL}^{\mathrm{FO}}$ boils down to an undecidable decision problem. The proof of this result revolves around specific ideas on what we consider a “proper” monitor. As these ideas are general, we outline them first in the setting of standard LTL, before lifting them to the setting of first-order logic and $\mathrm{LTL}^{\mathrm{FO}}$. Although due to the above result one cannot hope to obtain a complete monitor for $\mathrm{LTL}^{\mathrm{FO}}$, we prove the soundness of our automata-based construction and give experimental results from an implementation. These seem to substantiate our hypothesis that the automata-based construction leads to efficient runtime monitors whose size does not grow with increasing trace lengths (as is often observed in similar approaches). However, we also discuss formulae for which growth is unavoidable, irrespective of the chosen monitoring approach.

Andreas Bauer, Jan-Christoph Küster, Gil Vegliach
Right-Universality of Visibly Pushdown Automata

Visibly pushdown automata (VPAs) express properties on structures with a nesting relation such as program traces with nested method calls. In the context of runtime verification, we are interested in the following problem: given $u$, the beginning of a program trace, and $\mathcal{A}$, a VPA expressing a property to be checked on this trace, can we ensure that any extension $uv$ of $u$ will be accepted by $\mathcal{A}$? We call this property right-universality w.r.t. $u$. We propose an online algorithm detecting at the earliest position of the trace whether this trace is accepted by $\mathcal{A}$. The decision problem associated with right-universality is ExpTime-complete. Our algorithm uses antichains and other optimizations in order to avoid the exponential blow-up in most cases. This is confirmed by promising experiments conducted on a prototype implementation.

Véronique Bruyère, Marc Ducobu, Olivier Gauwin
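
To fix intuitions about the visibly pushdown model itself, the following Python sketch simulates a trivial VPA (one state, one stack symbol) on a nested call/return trace; it only illustrates the nesting discipline and is not the antichain-based right-universality algorithm of the paper.

```python
# In a VPA the alphabet is partitioned into call, return and internal symbols:
# calls push a stack symbol, returns pop one, internal symbols leave it untouched.
CALLS = {"call"}
RETURNS = {"ret"}

def run_vpa(trace):
    """Check the simple visibly pushdown property 'calls and returns are well nested'."""
    stack = []
    for symbol in trace:
        if symbol in CALLS:
            stack.append(symbol)          # push on a call
        elif symbol in RETURNS:
            if not stack:
                return False              # unmatched return
            stack.pop()                   # pop on a matching return
        # internal symbols (e.g. "log") do not touch the stack
    return not stack                      # accept iff every call was returned

print(run_vpa(["call", "log", "call", "ret", "ret"]))  # True
print(run_vpa(["call", "ret", "ret"]))                 # False
```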
Distributed Finite-State Runtime Monitoring with Aggregated Events

Security information and event management (SIEM) systems usually consist of a centralized monitoring server that processes events sent from a large number of hosts through a potentially slow network. In this work, we discuss how monitoring efficiency can be increased by switching to a model of aggregated traces, where monitored hosts buffer events into lossy but compact batches. In our trace model, such batches retain the number and types of events processed, but not their order. We present an algorithm for automatically constructing, out of a regular finite-state property definition, a monitor that can process such aggregated traces.

We discuss the resultant monitor’s complexity and prove that it determines the set of possible next states without producing false negatives and with a precision that is optimal given the reduced information the trace carries.

Kevin Falzon, Eric Bodden, Rahul Purandare
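
As a toy illustration of processing an order-less batch of events against a finite-state property, the following Python snippet computes every DFA state reachable under some ordering of the batch. It is a naive hand-rolled sketch, not the monitor construction from the paper, and the file-access property is invented.

```python
from collections import Counter

# Toy DFA over events {"open", "close"}: property "never close an unopened file".
DELTA = {
    ("closed", "open"): "opened",
    ("opened", "close"): "closed",
    ("opened", "open"): "error",
    ("closed", "close"): "error",
}

def states_after_batch(start, batch):
    """All DFA states reachable by *some* ordering of the events in the batch.
    The batch records only event counts, not their order (cf. aggregated traces)."""
    frontier = {(start, tuple(sorted(batch.items())))}
    results = set()
    while frontier:
        state, remaining = frontier.pop()
        if not any(count for _, count in remaining):
            results.add(state)            # batch exhausted along this ordering
            continue
        for i, (event, count) in enumerate(remaining):
            if count == 0:
                continue
            nxt = DELTA.get((state, event), "error")
            rest = list(remaining)
            rest[i] = (event, count - 1)  # consume one occurrence of this event
            frontier.add((nxt, tuple(rest)))
    return results

print(states_after_batch("closed", Counter({"open": 1, "close": 1})))
# prints a set containing 'closed' and 'error': the order is unknown, so both are possible
```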
Synthesising Correct Concurrent Runtime Monitors
(Extended Abstract)

We study the correctness of automated synthesis for concurrent monitors. We adapt sHML, a subset of the Hennessy-Milner logic with recursion, to specify safety properties of Erlang programs, and define an automated translation from sHML formulas to Erlang monitors so as to detect formula violations at runtime. We then formalise monitor correctness for our concurrent setting and describe a technique that allows us to prove monitor correctness in stages; this technique is used to prove the correctness of our automated monitor synthesis.

Adrian Francalanza, Aldrin Seychell
Practical Interruptible Conversations
Distributed Dynamic Verification with Session Types and Python

The rigorous and comprehensive verification of communication-based software is an important engineering challenge in distributed systems. Drawn from our industrial collaborations [33,28] on Scribble, a choreography description language based on multiparty session types, this paper proposes a dynamic verification framework for structured interruptible conversation programming. We first present our extension of Scribble to support the specification of asynchronously interruptible conversations. We then implement a concise API for conversation programming with interrupts in Python that enables session types properties to be dynamically verified for distributed processes. Our framework ensures the global safety of a system in the presence of asynchronous interrupts through independent runtime monitoring of each endpoint, checking the conformance of the local execution trace to the specified protocol. The usability of our framework for describing and verifying choreographic communications has been tested by integration into the large scientific cyberinfrastructure developed by the Ocean Observatories Initiative. Asynchronous interrupts have proven expressive enough to represent and verify their main classes of communication patterns, including asynchronous streaming and various timeout-based protocols, without requiring additional synchronisation mechanisms. Benchmarks show conversation programming and monitoring can be realised with little overhead.

Raymond Hu, Rumyana Neykova, Nobuko Yoshida, Romain Demangeon, Kohei Honda
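
The endpoint-monitoring idea can be pictured with a small, hand-written local protocol FSM. This is a conceptual sketch in plain Python, not the Scribble toolchain or the conversation API from the paper; the protocol, states, and message labels are invented.

```python
# Local protocol for one endpoint: state -> {observed action: next state}.
LOCAL_PROTOCOL = {
    "start":   {("send", "request"): "waiting"},
    "waiting": {("recv", "reply"): "done",
                ("recv", "interrupt"): "done"},   # an asynchronous interrupt is allowed here
}

def monitor(trace):
    """Check that the endpoint's local trace of send/receive actions conforms to the FSM."""
    state = "start"
    for action in trace:
        if action not in LOCAL_PROTOCOL.get(state, {}):
            return f"protocol violation in state {state}: {action}"
        state = LOCAL_PROTOCOL[state][action]
    return f"conforming so far (state {state})"

print(monitor([("send", "request"), ("recv", "interrupt")]))  # conforming so far (state done)
```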
Runtime Verification with Particle Filtering

We introduce Runtime Verification with Particle Filtering (RVPF), a powerful and versatile method for controlling the tradeoff between uncertainty and overhead in runtime verification. Overhead and accuracy are controlled by adjusting the frequency and duration of observation gaps, during which program events are not monitored, and by adjusting the number of particles used in the RVPF algorithm. We succinctly represent the program model, the program monitor, their interaction, and their observations as a dynamic Bayesian network (DBN). Our formulation of RVPF in terms of DBNs is essential for a proper formalization of peek events: low-cost observations of parts of the program state, which are performed probabilistically at the end of observation gaps. Peek events provide information that our algorithm uses to reduce the uncertainty in the monitor state after gaps.

We estimate the internal state of the DBN using particle filtering (PF) with sequential importance resampling (SIR). PF uses a collection of conceptual particles (random samples) to estimate the probability distribution for the system’s current state: the probability of a state is given by the sum of the importance weights of the particles in that state. After an observed event, each particle chooses a state transition to execute by sampling the DBN’s joint transition probability distribution; particles are then redistributed among the states that best predicted the current observation. SIR exploits the DBN structure and the current observation to reduce the variance of the PF and increase its performance.

We experimentally compare the overhead and accuracy of our RVPF algorithm with two previous approaches to runtime verification with state estimation: an exact algorithm based on the forward algorithm for HMMs, and an approximate version of that algorithm, which uses precomputation to reduce runtime overhead. Our results confirm RVPF’s versatility, showing how it can be used to control the tradeoff between execution time and memory usage while, at the same time, being the most accurate of the three algorithms.

Kenan Kalajdzic, Ezio Bartocci, Scott A. Smolka, Scott D. Stoller, Radu Grosu
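
For readers unfamiliar with SIR particle filtering, the following self-contained Python sketch estimates the hidden state of a tiny two-state model from noisy observations. The states, transition, and emission probabilities are invented stand-ins for the paper's DBN, and observation gaps and peek events are not modelled.

```python
import random

STATES = ["safe", "error"]
TRANS = {"safe": {"safe": 0.9, "error": 0.1}, "error": {"safe": 0.2, "error": 0.8}}
EMIT = {"safe": {"ok": 0.8, "warn": 0.2}, "error": {"ok": 0.3, "warn": 0.7}}

def sample(dist):
    """Draw one key from a {value: probability} dictionary."""
    return random.choices(list(dist), weights=list(dist.values()))[0]

def particle_filter(observations, n_particles=500):
    particles = ["safe"] * n_particles            # assume monitoring starts in 'safe'
    for obs in observations:
        # 1. propagate each particle through the transition model
        particles = [sample(TRANS[p]) for p in particles]
        # 2. weight each particle by the likelihood of the current observation
        weights = [EMIT[p][obs] for p in particles]
        # 3. resample in proportion to the weights (sequential importance resampling)
        particles = random.choices(particles, weights=weights, k=n_particles)
    # estimated state distribution after the observed trace
    return {s: particles.count(s) / n_particles for s in STATES}

print(particle_filter(["ok", "warn", "warn"]))
```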
An Optimized Memory Monitoring for Runtime Assertion Checking of C Programs

Runtime assertion checking provides a powerful, highly automatizable technique to detect violations of specified program properties. However, monitoring of annotations for pointers and memory locations (such as being valid, initialized, in a particular block, with a particular offset, etc.) is not straightforward and requires systematic instrumentation and monitoring of memory-related operations.

This paper describes the runtime memory monitoring library we developed for execution support of E-ACSL, the executable specification language for C programs offered by the Frama-C platform for analysis of C code. We present the global architecture of our solution as well as various optimizations we realized to make memory monitoring more efficient. Our experiments confirm the benefits of these optimizations and illustrate the bug detection potential of runtime assertion checking with E-ACSL.

Nikolai Kosmatov, Guillaume Petiot, Julien Signoles
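
The bookkeeping behind validity annotations can be sketched in a few lines. The following is a conceptual Python model of block-level shadow state, not the actual E-ACSL library (which instruments C code and uses optimized data structures); the fake address space is an assumption of the example.

```python
blocks = {}          # base address -> block size, for every live allocation
next_base = 0x1000   # fake address space for the example

def record_malloc(size):
    """Record an allocation and return its (fake) base address."""
    global next_base
    base = next_base
    blocks[base] = size
    next_base += size
    return base

def record_free(base):
    blocks.pop(base, None)

def valid(addr, nbytes):
    """Is [addr, addr + nbytes) entirely inside one currently allocated block?"""
    return any(base <= addr and addr + nbytes <= base + size
               for base, size in blocks.items())

p = record_malloc(8)
assert valid(p, 8)
assert not valid(p + 4, 8)   # access overflows the block
record_free(p)
assert not valid(p, 1)       # dangling pointer
```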
Impartiality and Anticipation for Monitoring of Visibly Context-Free Properties

We study monitoring of visibly context-free properties. These properties reflect the common concept of nesting which arises naturally in software systems. They can be expressed e.g. in the temporal logic CaRet which extends LTL by means of matching calls and returns. The future fragment of CaRet enables us to give a direct unfolding-based automaton construction, similar to LTL. We provide a four-valued, impartial semantics on finite words which is particularly suitable for monitoring. This allows us to synthesize monitors in terms of deterministic push-down Mealy machines. To go beyond impartiality, we develop a construction for anticipatory monitors from visibly push-down ω-automata by utilizing a decision procedure for emptiness.

Normann Decker, Martin Leucker, Daniel Thoma
Accelerating Data Race Detection Utilizing On-Chip Data-Parallel Cores

Programmers are taking advantage of the increasing availability of on-chip parallelism to meet the rising performance demands of diverse applications. Support of tools that can facilitate the detection of incorrect program execution when concurrent threads are involved is critical to this evolution. Many concurrency bugs manifest as some form of data race condition, and their runtime detection is inherently difficult due to the high overhead of the required memory trace comparisons. Various software and hardware tools have been proposed to detect concurrency bugs at runtime. However, software-based schemes lead to significant performance overhead, while hardware-based schemes require significant hardware modifications. To enable cost-efficient design of data race detectors, it is desirable to utilize available on-chip resources. The recent integration of CPU cores with data-parallel accelerator cores, such as GPUs, provides the opportunity to offload the task of data race detection to these accelerator cores. In this paper, we explore this opportunity by designing a GPU Accelerated Data Race Detector (GUARD) that utilizes GPU cores to process memory traces and detect data races in parallel applications executing on the CPU cores. GUARD further explores various optimization techniques for: (i) reducing the size of memory traces by employing signatures; and (ii) improving the accuracy of signatures using coherence-based filtering. Overall, GUARD achieves the performance of hardware-based data race detection mechanisms with minimal hardware modifications.

Vineeth Mekkat, Anup Holey, Antonia Zhai
Efficient Model to Query and Visualize the System States Extracted from Trace Data

Any application, database server, telephony server, or operating system maintains different states for its internal elements and resources. When tracing is enabled on such systems, the corresponding events in the trace logs can be used to extract and model the different state values of the traced modules in order to analyze their runtime behavior. In this paper, a generic method and corresponding data structures are proposed to model and manage the system state values, allowing efficient storage and access. The proposed state organization mechanism generates state intervals from trace events and stores them in a tree-based state history database. The state history database can then be used to extract the state of any system resource (e.g., CPU, process, memory, file) at any timestamp. The extracted state values can be used to track system problems (e.g., performance degradation). The proposed system is usable in both offline tracing mode, when there is a set of trace files, and online tracing mode, when there is a stream of trace events. The proposed system has been implemented and used to interactively display and analyze various information extracted from very large traces, on the order of 1 TB.

Alexandre Montplaisir, Naser Ezzati-Jivan, Florian Wininger, Michel Dagenais
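
A stripped-down version of the idea, building state intervals from trace events and answering "what was the state of resource R at time t?", might look as follows. This is plain Python with a sorted interval list in place of the paper's tree-based state history database; the resource names and values are invented.

```python
import bisect

class StateHistory:
    def __init__(self):
        self.intervals = {}   # resource -> list of closed intervals (start, end, value)
        self.current = {}     # resource -> (start, value) of the ongoing state

    def on_event(self, ts, resource, new_value):
        """A trace event changes the state of a resource: close the previous interval."""
        if resource in self.current:
            start, value = self.current[resource]
            self.intervals.setdefault(resource, []).append((start, ts, value))
        self.current[resource] = (ts, new_value)

    def state_at(self, resource, ts):
        """Return the state of the resource at timestamp ts, or None if unknown."""
        closed = self.intervals.get(resource, [])
        i = bisect.bisect_right([iv[0] for iv in closed], ts) - 1
        if i >= 0 and closed[i][0] <= ts < closed[i][1]:
            return closed[i][2]
        if resource in self.current and self.current[resource][0] <= ts:
            return self.current[resource][1]
        return None

h = StateHistory()
h.on_event(0, "cpu0", "idle")
h.on_event(5, "cpu0", "running")
h.on_event(9, "cpu0", "idle")
print(h.state_at("cpu0", 7))   # 'running'
```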
Repair Abstractions for More Efficient Data Structure Repair

Despite the substantial advances in techniques for finding and removing bugs, code is often deployed with (unknown or known) bugs, which pose a fundamental problem for software reliability. A promising approach to address this problem is data structure repair, a runtime approach designed to perform repair actions, i.e., mutations of erroneous data structures to repair (certain) errors in program state, allowing the program to recover from those errors and continue to execute. While data structure repair holds much promise, current techniques for repair do not scale to real applications.

This paper introduces repair abstractions for more efficient data structure repair. Our key insight is that if an error in the program state is due to a fault in software or hardware, a similar error may occur again, say when the same buggy code segment is executed again or when the same faulty memory location is accessed again. Conceptually, repair abstractions capture how erroneous program executions are repaired using concrete mutations, to enable faster repair of similar errors in the future. Experimental results using a suite of complex data structures show how repair abstractions allow more efficient repair than previous techniques.

Razieh Nokhbeh Zaeem, Muhammad Zubair Malik, Sarfraz Khurshid
To Run What No One Has Run Before: Executing an Intermediate Verification Language

When program verification fails, it is often hard to understand what went wrong in the absence of concrete executions that expose parts of the implementation or specification responsible for the failure. Automatic generation of such tests would require “executing” the complex specifications typically used for verification (with unbounded quantification and other expressive constructs), something beyond the capabilities of standard testing tools.

This paper presents a technique to automatically generate executions of programs annotated with complex specifications, and its implementation for the Boogie intermediate verification language. Our approach combines symbolic execution and SMT constraint solving to generate small tests that are easy to read and understand. The evaluation on several program verification examples demonstrates that our test case generation technique can help understand failed verification attempts in conditions where traditional testing is not applicable, thus making formal verification techniques easier to use in practice.

Nadia Polikarpova, Carlo A. Furia, Scott West
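
The underlying recipe, solving a path condition together with a negated assertion to obtain a concrete failing input, can be shown directly with the Z3 Python bindings. This uses Z3's standard API but is not the Boogie-based tool from the paper; the program fragment and the assertion are invented.

```python
from z3 import Int, Solver, And, Not, sat

x, y = Int("x"), Int("y")

# Imagined path: we took the branch 'x > 0', then computed y = x + 1,
# and we want an input that violates the assertion 'y < 10'.
path_condition = And(x > 0, y == x + 1)
assertion = y < 10

s = Solver()
s.add(path_condition, Not(assertion))
if s.check() == sat:
    m = s.model()
    print("counterexample input:", m[x])   # e.g. x = 9, giving y = 10
else:
    print("the assertion holds on this path")
```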
Optimizing Nop-shadows Typestate Analysis by Filtering Interferential Configurations

Nop-shadows Analysis (NSA) is an efficient static typestate analysis, which can be used to eliminate unnecessary monitoring instrumentations for runtime monitors. In this paper, we propose two optimizations to improve the precision of NSA. Both of the optimizations filter interferential configurations when determining whether a monitoring instrumentation is necessary. We have implemented our optimization methods in Clara and conducted extensive experiments on the DaCapo benchmark. The experimental results indicate that the optimized NSA can further remove unnecessary instrumentations after the original NSA in more than half of the cases, without a significant overhead. In addition, for two cases, all the instrumentations are removed, which implies the program is proved to satisfy the typestate property.

Chengsong Wang, Zhenbang Chen, Xiaoguang Mao
A Causality Analysis Framework for Component-Based Real-Time Systems

We propose an approach to enhance the fault diagnosis in black-box component-based systems, in which only events on component interfaces are observable, and assume that causal dependencies between component interface events within components are not known. For such systems, we describe a causality analysis framework that helps us establish the causal relationship between component failures and system failures, given an observed system execution trace. The analysis is based on a formalization of counterfactual reasoning, and applicable to real-time systems. We illustrate the analysis with a case study from the medical device domain.

Shaohui Wang, Anaheed Ayoub, BaekGyu Kim, Gregor Gössler, Oleg Sokolsky, Insup Lee
Reducing Monitoring Overhead by Integrating Event- and Time-Triggered Techniques

Runtime verification is a formal technique used to check whether a program under inspection satisfies its specification by using a runtime monitor. Existing monitoring approaches use one of two ways for evaluating a set of logical properties: (1) event-triggered, where the program invokes the monitor when the state of the program changes, and (2) time-triggered, where the monitor periodically preempts the program and reads its state. Realizing the former is straightforward, but the runtime behaviour of event-triggered monitors is difficult to predict. Time-triggered monitoring (designed for real-time embedded systems), on the other hand, provides predictable monitoring behaviour and overhead bounds at run time. Our previous work shows that time-triggered monitoring can potentially reduce the runtime overhead, provided that the monitor samples the program state at a low frequency.

In this paper, we propose a hybrid method that leverages the benefits of both event- and time-triggered methods to reduce the overall monitoring overhead. We formulate an optimization problem whose solution is a set of instrumentation instructions that switches between event-triggered and time-triggered modes of monitoring at run time; the solution may indicate the use of exactly one mode or a combination of the two modes. We fully implemented this method to produce instrumentation schemes for C programs that run on an ARM Cortex-M3 processor, and experimental results validate the effectiveness of this approach.

Chun Wah Wallace Wu, Deepak Kumar, Borzoo Bonakdarpour, Sebastian Fischmeister
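
The difference between the two modes can be illustrated with a toy Python monitor that supports both an event-triggered callback and a time-triggered sampler over a buffer of recorded changes. This is a conceptual sketch only; the paper's optimization problem and its C/ARM instrumentation are not modelled, and the speed-limit property is invented.

```python
import time, threading

class Monitor:
    def __init__(self):
        self.buffer = []
        self.lock = threading.Lock()

    # --- event-triggered mode: instrumented code invokes the monitor on every change ---
    def on_change(self, var, value):
        self.check([(var, value)])

    # --- time-triggered mode: instrumented code only records the change ... ---
    def record(self, var, value):
        with self.lock:
            self.buffer.append((var, value))

    # ... and a sampler drains the buffer at a fixed period.
    def sample(self, period, rounds):
        for _ in range(rounds):
            time.sleep(period)
            with self.lock:
                changes, self.buffer = self.buffer, []
            self.check(changes)

    def check(self, changes):
        for var, value in changes:
            if var == "speed" and value > 120:
                print("property violated: speed =", value)

m = Monitor()
m.on_change("speed", 130)          # event-triggered check, fires immediately
m.record("speed", 125)             # buffered for the next sampling point
m.sample(period=0.01, rounds=1)    # time-triggered check, fires at the sample
```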

Short Papers

A Scala DSL for Rete-Based Runtime Verification

Runtime verification (RV) consists in part of checking execution traces against formalized specifications. Several systems have emerged, most of which support specification notations based on state machines, regular expressions, temporal logic, or grammars. The field of Artificial Intelligence (AI) has for an even longer period of time studied rule-based production systems, which on closer inspection appear relevant to RV, although seemingly focused on slightly different application domains, such as business processes and expert systems. The core algorithm in many of these systems is the Rete algorithm. We have implemented a Rete-based runtime verification system, named LogFire (originally intended for offline log analysis but also applicable to online analysis), as an internal DSL in the Scala programming language, using Scala’s support for defining DSLs. This combination appears attractive from a practical point of view. Our contribution is in part conceptual: we argue that such rule-based frameworks originating from AI may be well suited for RV.

Klaus Havelund
A Late Treatment of C Precondition in Dynamic Symbolic Execution Testing Tools

This paper presents a novel technique for handling a precondition in dynamic symbolic execution (DSE) testing tools. It delays precondition constraints until the end of the program path evaluation. This method allows PathCrawler, a DSE tool for C programs, to accept a precondition defined as a C function. It provides a convenient way to express a precondition even for developers who are not familiar with specification formalisms. Our initial experiments show that it is more efficient than early precondition treatment, and has a limited overhead compared to a native treatment of a precondition directly expressed in constraints. It has also proven useful for combinations of static and dynamic analysis.

Mickaël Delahaye, Nikolai Kosmatov
Towards a Generic Non-intrusive Fault Detection Framework

Temporal dependencies between programming library API operations form a protocol that can be used to automatically detect incorrect use of abstractions provided by the API. Traditionally, aliasing of abstraction instances is one of the main problems of detecting this kind of protocol violations. In this paper we describe our runtime fault detection approach that uses dynamic data-flow tracking to cope with the aliasing problem. In addition, we present a proof-of-concept fault detection framework for integrating our approach into a development environment.

Jukka Julku, Mika Rautila
A Witnessing Compiler: A Proof of Concept

In prior work we proposed a mechanism of “witness generation and propagation” to construct proofs of the correctness of program transformations. Here we present a simpler theory, and describe our experience with an initial implementation based on the LLVM open-source compiler and the Z3 SMT solver.

Kedar S. Namjoshi, Giacomo Tagliabue, Lenore D. Zuck
Runtime Monitoring of Temporal Logic Properties in a Platform Game

We report on the use of runtime monitoring to automatically discover gameplay bugs in the execution of video games. In this context, the expected behaviour of game objects is expressed as a set of temporal logic formulæ on sequences of game events. Initial empirical results indicate that, in time, the use of a runtime monitor may greatly speed up the testing phase of a video game under development, by automating the detection of bugs when the game is being played.

Simon Varvaressos, Dominic Vaillancourt, Sébastien Gaboury, Alexandre Blondin Massé, Sylvain Hallé
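
As a toy example of the kind of per-object temporal check involved, one might monitor that a jump is always resolved by a landing before the character dies. The property, event names, and trace below are invented for illustration and are not taken from the paper.

```python
def check_trace(events):
    """Check the invented property: every 'jump' must be followed by a 'land'
    before the next 'death' event occurs."""
    airborne = False
    for i, e in enumerate(events):
        if e == "jump":
            airborne = True
        elif e == "land":
            airborne = False
        elif e == "death" and airborne:
            return f"bug at event {i}: character died while a jump was still unresolved"
    return "trace OK"

print(check_trace(["jump", "land", "jump", "death"]))
```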

Tool Papers

SMock — A Test Platform for Monitoring Tools

In the absence of a test framework for runtime verification tools, the evaluation and testing of such tools is an onerous task. In this paper we present the tool SMock: an easily and highly configurable mock system based on a domain-specific language, providing profiling reports and enabling behaviour replayability, and specifically built to support the testing and evaluation of runtime verification tools.

Christian Colombo, Ruth Mizzi, Gordon J. Pace
SPY: Local Verification of Global Protocols

This paper presents a toolchain for designing deadlock-free multiparty global protocols, and their run-time verification through automatically generated, distributed endpoint monitors. Building on the theory of multiparty session types, our toolchain implementation validates communication safety properties on the global protocol, but enforces them via independent monitoring of each endpoint process. Each monitor can be internally embedded in or externally deployed alongside the endpoint runtime, and detects the occurrence of illegal communication actions and message types that do not conform to the protocol. The global protocol specifications can be additionally elaborated to express finer-grained and higher-level requirements, such as logical assertions on message payloads and security policies, supported by third-party plugins. Our demonstration use case is the verification of choreographic communications in a large cyberinfrastructure for oceanography [10].

Rumyana Neykova, Nobuko Yoshida, Raymond Hu

Tutorials

Instrumenting Android and Java Applications as Easy as abc

Program instrumentation is a widely used mechanism in different software engineering areas. It can be used for creating profilers and debuggers, for detecting programming errors at runtime, or for securing programs through inline reference monitoring.

This paper presents a tutorial on instrumenting Android applications using Soot and the AspectBench compiler (abc). We show how two well-known monitoring languages, Tracematches and AspectJ, can be used for instrumenting Android applications. Furthermore, we describe the more flexible approach of manual imperative instrumentation directly using Soot’s intermediate representation, Jimple. In all three cases, no source code of the target application is required.

Steven Arzt, Siegfried Rasthofer, Eric Bodden
On Signal Temporal Logic

Temporal Logic (TL) is a popular formalism, introduced into systems design [Pnu77] as a language for specifying acceptable behaviors of reactive systems. Traditionally, it has been used for formal verification, either by deductive methods [MP95] or by algorithmic methods (Model Checking [CGP99,QS82]). In this framework, the behaviors in question are typically discrete, that is, sequences of states and/or events. In recent years, several trends suggest alternative ways to use TL.

Alexandre Donzé
Runtime Verification and Refutation for Biological Systems

Advanced computational models are transforming the way research is done in biology by providing quantitative means to assess the validity of theories and hypotheses and by enabling predictive capabilities. This raises an urgent need to systematically and efficiently analyze runtime properties of models. In this tutorial I describe key biological applications, modeling formalisms, property specification languages, and computational tools utilized in this domain; survey the techniques and research from the formal verification, machine learning, and simulation communities that are currently being used; and outline opportunities for the runtime verification community to contribute new scalable methods.

Hillel Kugler
A Lesson on Runtime Assertion Checking with Frama-C

Runtime assertion checking provides a powerful, highly automatizable technique to detect violations of specified program properties. This paper provides a lesson on runtime assertion checking with Frama-C, a publicly available toolset for analysis of C programs. We illustrate how a C program can be specified in the executable specification language E-ACSL and how this specification can be automatically translated into instrumented C code suitable for monitoring and runtime verification of specified properties. We show how various errors can be automatically detected on the instrumented code, including C runtime errors, failures in postconditions, assertions, and preconditions of called functions, and memory leaks. Benefits of combining runtime assertion checking with other Frama-C analyzers are illustrated as well.

Nikolai Kosmatov, Julien Signoles
With Real-Time Performance Analysis and Monitoring to Timing Predictable Use of Multi-core Architectures

With future societies and individuals possibly becoming more and more dependent on highly capable (control) systems, the ability to carry out precise system analysis and to enforce the expected behaviour of systems at run-time, i.e., during their operation, could become an irrevocable requirement. Who would accept that a computer takes full or partial control over one’s car without believing that the system is always reacting as expected?

Correctness of system designs and their implementations is only one aspect. The other major aspect is the capability to build cost-effective systems. Advances in (consumer) electronics have brought about multi-core micro-controllers equipped with considerably large memory. This will stimulate the building of embedded (control) systems where multiple applications can be integrated into a single controller and thereby effectively lower the per-unit costs of a system as a whole.

Kai Lampka
Dynamic Analysis and Debugging of Binary Code for Security Applications

Dynamic analysis techniques have made a significant impact in security practice, e.g. by automating some of the most tedious processes in detecting vulnerabilities. However, a significant gap remains between existing software tools and what many security applications demand. In this paper, we present our work on developing a cross-platform interactive analysis tool, which leverages techniques such as symbolic execution and taint tracking to analyze binary code on a range of platforms. The tool builds upon IDA, a popular reverse engineering platform, and provides a unified analysis engine to handle various instruction sets and operating systems. We have evaluated the tool on a set of real-world applications and shown that it can help identify the root causes of security vulnerabilities quickly.

Lixin Li, Chao Wang
Backmatter
Metadata
Title
Runtime Verification
Editors
Axel Legay
Saddek Bensalem
Copyright Year
2013
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-40787-1
Print ISBN
978-3-642-40786-4
DOI
https://doi.org/10.1007/978-3-642-40787-1
