
2014 | Book

Leveraging Applications of Formal Methods, Verification and Validation. Specialized Techniques and Applications

6th International Symposium, ISoLA 2014, Imperial, Corfu, Greece, October 8-11, 2014, Proceedings, Part II

Edited by: Tiziana Margaria, Bernhard Steffen

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

The two-volume set LNCS 8802 and LNCS 8803 constitutes the refereed proceedings of the 6th International Symposium on Leveraging Applications of Formal Methods, Verification and Validation, ISoLA 2014, held in Imperial, Corfu, Greece, in October 2014. The total of 67 full papers was carefully reviewed and selected for inclusion in the proceedings. Featuring a track introduction to each section, the papers are organized in topical sections named: evolving critical systems; rigorous engineering of autonomic ensembles; automata learning; formal methods and analysis in software product line engineering; model-based code generators and compilers; engineering virtualized systems; statistical model checking; risk-based testing; medical cyber-physical systems; scientific workflows; evaluation and reproducibility of program analysis; processes and data integration in the networked healthcare; semantic heterogeneity in the formal development of complex systems. In addition, part I contains a tutorial on automata learning in practice; as well as the preliminary manifesto to the LNCS Transactions on the Foundations for Mastering Change with several position papers. Part II contains information on the industrial track and the doctoral symposium and poster session.

Table of Contents

Frontmatter

Engineering Virtualized Systems

Introduction to Track on Engineering Virtualized Services

Virtualization is a key technology enabler for cloud computing. Despite the added value and compelling business drivers of cloud computing, this new paradigm poses considerable new challenges that have to be addressed to render its usage effective for industry. Virtualization makes elastic amounts of resources available to application-level services; for example, the processing capacity allocated to a service may be changed according to demand. Current software development methods, however, do not support the modeling and validation of services running on virtualized resources in a satisfactory way. This seriously limits the potential for fine-tuning services to the available virtualized resources as well as for designing services for scalability and dynamic resource management. The track on Engineering Virtualized Services aims to discuss key challenges that need to be addressed to enable software development methods to target resource-aware virtualized services.

Reiner Hähnle, Einar Broch Johnsen
Erlang-Style Error Recovery for Concurrent Objects with Cooperative Scheduling

Re-establishing a safe program state after an error occurred is a known problem. Manually written error-recovery code is both more difficult to test and less often executed than the main code paths, hence errors are prevalent in these parts of a program. This paper proposes a failure model for concurrent objects with cooperative scheduling that automatically re-establishes object invariants after program failures, thereby eliminating the need to manually write this problematic code. The proposed model relies on a number of features of actor-based object-oriented languages, such as asynchronous method calls, co-operative scheduling with explicit synchronization points, and communication via future variables. We show that this approach can be used to implement Erlang-style process linking, and implement a supervision tree as a proof-of-concept.

Georg Göri, Einar Broch Johnsen, Rudolf Schlatte, Volker Stolz
Fault Model Design Space for Cooperative Concurrency

This paper critically discusses the different choices that have to be made when defining a fault model for an object-oriented programming language. We consider in particular the ABS language, and analyze the interplay between the fault model and the main features of ABS, namely its cooperative concurrency model, based on asynchronous method invocations whose results are returned via futures, and its emphasis on static analysis based on invariants.

Ivan Lanese, Michael Lienhardt, Mario Bravetti, Einar Broch Johnsen, Rudolf Schlatte, Volker Stolz, Gianluigi Zavattaro
Programming with Actors in Java 8

There exist numerous languages and frameworks that support an implementation of a variety of actor-based programming models in Java using concurrency utilities and threads. Java 8 is released with fundamental new features: lambda expressions and further dynamic invocation support. We show in this paper that such features in Java 8 allow for a high-level actor-based methodology for programming distributed systems which supports the programming to interfaces discipline. The embedding of our actor-based Java API is shallow in the sense that it abstracts from the actual thread-based deployment models. We further discuss different concurrent execution and thread-based deployment models and an extension of the API for its actual parallel and distributed implementation. We present briefly the results of a set of experiments which provide evidence of the potential impact of lambda expressions in Java 8 regarding the adoption of the actor concurrency model in large-scale distributed applications.

Behrooz Nobakht, Frank S. de Boer
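
As a rough illustration of the idea sketched in the abstract above (a toy sketch, not the paper's API; ToyActor, send and all other names are invented here), Java 8 lambdas let messages be written as functions over an actor's private state, while a single-threaded executor plays the role of the mailbox and abstracts from the thread-based deployment:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.function.Consumer;

    final class ToyActor<S> {
        // One worker thread per actor: messages are processed one at a time, in send order.
        private final ExecutorService mailbox = Executors.newSingleThreadExecutor();
        private S state;

        ToyActor(S initial) { this.state = initial; }

        // Asynchronous send: the message is a lambda applied to the actor's state.
        void send(Consumer<S> message) {
            mailbox.execute(() -> message.accept(state));
        }

        void shutdown() { mailbox.shutdown(); }
    }

    public class ActorDemo {
        public static void main(String[] args) {
            ToyActor<StringBuilder> logger = new ToyActor<>(new StringBuilder());
            logger.send(s -> s.append("hello "));
            logger.send(s -> s.append("world"));
            logger.send(s -> System.out.println(s));   // prints "hello world"
            logger.shutdown();
        }
    }

Processing is asynchronous and sequential per actor, which is the essential guarantee of the actor model; the API described in the paper additionally supports programming to interfaces and alternative deployment models, which this sketch does not attempt to reproduce.
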
Contracts in CML

We describe the COMPASS Modelling Language (CML), which is used to model large-scale Systems of Systems and the contracts that bind them together. The language can be used to document the interfaces to constituent systems using formal, precise, and verifiable specifications including preconditions, postconditions, and invariants. The semantics of CML directly supports the use of these contracts for all language constructs, including the use of communication channels, parallel processes, and processes that run forever. Every process construct in CML has an associated contract, allowing clients and suppliers to check that the implementations of constituent systems conform to their interface specifications.

Jim Woodcock, Ana Cavalcanti, John Fitzgerald, Simon Foster, Peter Gorm Larsen
Distributed Energy Management Case Study: A Formal Approach to Analyzing Utility Functions

The service-oriented paradigm has been established to enable quicker development of new applications from already existing services. Service negotiation is a key technique for deciding on and choosing the most suitable service out of possibly many services delivering similar functionality but having different response times, resource usages, prices, etc. In this paper, we present a formal approach to client-provider negotiation in distributed energy management. The models are described in our recently introduced Remes Hdcl language, whose timed automata semantics allows us to apply Uppaal-based tools for model-checking various scenarios of service negotiation. Our target is to compute ways of reaching the price- and reliability-optimal values of the utility function at the end of the service negotiation.

Aida Čaušević, Cristina Seceleanu, Paul Pettersson
Towards the Typing of Resource Deployment

In cloud computing, resources such as files, databases, applications, and virtual machines may either scale or move from one machine to another in response to load increases and decreases (resource deployment). We study a type-based technique for analysing the deployment of resources in cloud computing. In particular, we design a type system for a concurrent object-oriented language with dynamic resource creation and movement. The type of a program is behavioural, namely it expresses the resource deployments over periods of (logical) time. Our technique admits the inference of types and may underlie the optimisation of the costs and consumption of resources.

Elena Giachino, Cosimo Laneve
Static Inference of Transmission Data Sizes in Distributed Systems

We present a static analysis to infer the amount of data that a distributed system may transmit. The different locations of a distributed system communicate and coordinate their actions by posting tasks among them. A task is posted by building a message with the task name and the data on which such task has to be executed. When the task completes, the result can be retrieved by means of another message from which the result of the computation can be obtained. Thus, the transmission data size of a distributed system mainly depends on the amount of messages posted among the locations of the system, and the sizes of the data transferred in the messages. Our static analysis has two main parts: (1) we over-approximate the sizes of the data at the program points where tasks are spawned and where the results are received, and (2) we over-approximate the total number of messages. Knowledge of the transmission data sizes is essential, among other things, to predict the bandwidth required to achieve a certain response time, or conversely, to estimate the response time for a given bandwidth. A prototype implementation in the SACO system demonstrates the accuracy and feasibility of the proposed analysis.

Elvira Albert, Jesús Correas, Enrique Martin-Martin, Guillermo Román-Díez
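
To make the shape of such a result concrete (the notation below is invented here for illustration and is not taken from the paper): if, for each program point p where a task is spawned or a result is fetched, the analysis infers an upper bound count(p) on the number of messages sent at p and an upper bound size(p) on the size of the data each such message carries, then the data transmitted between two locations ℓ1 and ℓ2 is over-approximated by a sum of products:

    T(\ell_1, \ell_2) \;\le\; \sum_{p \,\in\, \mathrm{points}(\ell_1,\ell_2)} \mathit{count}(p) \cdot \mathit{size}(p)

Parts (1) and (2) of the analysis described in the abstract above supply the two factors.
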
Fully Abstract Operation Contracts

Proof reuse in formal software verification is crucial in presence of constant evolutionary changes to the verification target. Contract-based verification makes it possible to verify large programs, because each method in a program can be verified against its contract separately. A small change to some contract, however, invalidates all proofs that rely on it, which makes reuse difficult. We introduce fully abstract contracts and class invariants which permit to completely decouple reasoning about programs from the applicability check of contracts. We implemented tool support for abstract contracts as part of the KeY verification system and empirically show the considerable reuse potential of our approach.

Richard Bubel, Reiner Hähnle, Maria Pelevina

Statistical Model Checking

Statistical Model Checking Past, Present, and Future
(Track Introduction)

This short note introduces statistical model checking and gives a brief overview of the Statistical Model Checking: Past, Present, and Future session at ISoLA 2014.

Kim G. Larsen, Axel Legay
An Effective Heuristic for Adaptive Importance Splitting in Statistical Model Checking

Statistical model checking avoids the intractable growth of states associated with numerical model checking by estimating the probability of a property from simulations. Rare properties pose a challenge because the relative error of the estimate is unbounded. In [13] we describe how importance splitting may be used with SMC to overcome this problem. The basic idea is to decompose a logical property into nested properties whose probabilities are easier to estimate. To improve performance it is desirable to decompose the property into many equi-probable levels, but logical decomposition alone may be too coarse.

In this article we make use of the notion of a score function to improve the granularity of a logical property. We show that such a score function may take advantage of heuristics, so long as it also rigorously respects certain properties. To demonstrate our importance splitting approach we present an optimal adaptive importance splitting algorithm and a heuristic score function. We give experimental results that demonstrate a significant improvement in performance over alternative approaches.

Cyrille Jegourel, Axel Legay, Sean Sedwards
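
For readers unfamiliar with importance splitting, the standard identity it rests on (a textbook fact of rare-event simulation, not specific to this paper) decomposes the rare property A into nested levels A = A_m ⊆ … ⊆ A_1 ⊆ A_0, typically defined by a score function s and increasing thresholds τ_k:

    P(A) \;=\; \prod_{k=1}^{m} P(A_k \mid A_{k-1}),
    \qquad A_k = \{\omega \mid s(\omega) \ge \tau_k\}, \quad A_0 = \Omega.

Each conditional probability is comparatively large and therefore cheap to estimate by simulation; an adaptive algorithm such as the one mentioned above chooses the thresholds on the fly so that the levels are roughly equi-probable.
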
A Formalism for Stochastic Adaptive Systems

Complex systems such as systems of systems result from the combination of several components that are organized in a hierarchical manner. One of the main characteristics of these systems is their ability to adapt to new situations by modifying their architecture. Such systems have recently been the subject of a series of works in the software engineering community, most of which do not consider quantitative features. The objective of this paper is to propose a modeling language for adaptive systems whose behaviors depend on stochastic features. Our language relies on an extension of stochastic transition systems equipped with (1) an adaptive operator that allows us to reason about the probability that a system has to adapt its architecture over time, and (2) dynamic interactions between processes. As a second contribution, we propose a contract-based extension of probabilistic linear temporal logic suited to reasoning about assumptions and guarantees of such systems. Our work has been implemented in the Plasma-Lab tool developed at Inria. This tool allows us to define stochastic adaptive systems with an extension of the Prism language, and properties with patterns. In addition, Plasma-Lab offers a simulation-based model checking procedure to reason about finite executions of the system. First experiments on a large case study coming from an industry-driven European project give encouraging results.

Benoît Boyer, Axel Legay, Louis-Marie Traonouez
A Review of Statistical Model Checking Pitfalls on Real-Time Stochastic Models

Statistical model checking (SMC) is a technique inspired by Monte-Carlo simulation for verifying time-bounded temporal logical properties. SMC originally focused on fully stochastic models such as Markov chains, but its scope has recently been extended to cover formalisms that mix functional real-time aspects, concurrency and non-determinism. We show by various examples using the tools UPPAAL SMC and Modes that combining the stochastic interpretation of such models with SMC algorithms is extremely subtle and may yield significant discrepancies in the analysis results. As these subtleties are not obvious to the end-user, we present five semantic caveats and give a classification scheme for SMC algorithms. We argue that caution is needed and believe that the caveats and classification scheme in this paper serve as a guiding reference for thoroughly understanding them.

Dimitri Bohlender, Harold Bruintjes, Sebastian Junges, Jens Katelaan, Viet Yen Nguyen, Thomas Noll
Formal Analysis of the Wnt/β-catenin Pathway through Statistical Model Checking

The Wnt/β-catenin signalling pathway plays an important role in the proliferation of neural cells, and hence it is the main focus of several research efforts aimed at understanding neurodegenerative pathologies. In this paper we consider a compact model of the basic mechanisms of the Wnt/β-catenin pathway and we analyse its dynamics by application of an expressive temporal logic formalism, namely the Hybrid Automata Stochastic Logic. This allows us to formally characterise, and effectively assess, sophisticated aspects of the Wnt/β-catenin pathway dynamics.

Paolo Ballarini, Emmanuelle Gallet, Pascale Le Gall, Matthieu Manceny
Battery-Aware Scheduling of Mixed Criticality Systems

Wireless systems such as satellites and sensor networks are often battery-powered. To operate optimally they must therefore take the performance properties of real batteries into account. Additionally, these systems, and therefore their batteries, are often exposed to loads with uncertain timings. Mixed-criticality and soft real-time systems may accept deadline violations and therefore enable trade-offs and the evaluation of performance by criteria such as the number of tasks that can be completed with a given battery. We model a task set in combination with the kinetic battery model as a stochastic hybrid system and study its performance under battery-aware scheduling strategies. We believe that this evaluation does not scale with current verification techniques for stochastic hybrid systems; instead, statistical model checking provides a viable alternative with statistical guarantees. Based on our model we also calculate an upper bound on the attainable number of task instances from a battery, and we provide a battery-aware scheduler that wastes no energy on instances that are not guaranteed to make their deadlines.

Erik Ramsgaard Wognsen, René Rydhof Hansen, Kim Guldstrand Larsen
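
For reference, one standard formulation of the kinetic battery model (KiBaM) mentioned above splits the charge into an available part a(t) and a bound part b(t), with only the available part serving the load i(t); whether the paper uses exactly this variant is an assumption here:

    \dot a(t) = -i(t) + k\left(\frac{b(t)}{1-c} - \frac{a(t)}{c}\right),
    \qquad
    \dot b(t) = -k\left(\frac{b(t)}{1-c} - \frac{a(t)}{c}\right),

where c ∈ (0,1) is the fraction of the capacity held in the available well and k is the diffusion rate between the two wells. The battery is considered empty as soon as a(t) reaches 0, even if bound charge remains, which is why the timing of loads matters for how many task instances a battery can serve.
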
Using Statistical Model Checking for Measuring Systems

State spaces represent the way a system evolves through its different possible executions. Automatic verification techniques are used to check whether the system satisfies certain properties, expressed using automata or logic-based formalisms. This provides a Boolean indication of the system’s fitness. It is sometimes desirable to obtain other indications, measuring, e.g., duration, energy or probability. Certain measurements are inherently harder than others. This can be explained by appealing to the difference in complexity of checking CTL and LTL properties. While the former can be done in time linear in the size of the property, the latter is PSPACE in the size of the property; hence practical algorithms take exponential time. While CTL-type properties measure specifications that are based on adjacency of states (up to a fixpoint calculation), LTL properties have the flavor of imposing multiple complicated requirements on each execution sequence. In order to quickly measure LTL-style properties of a structure, we use a form of statistical model checking; we exploit the fact that LTL-style properties on a path behave like CTL-style properties on a structure. We then use CTL-based measuring on paths, and generalize the measurement results to the full structure using optimal Monte Carlo estimation techniques. To experimentally validate our framework, we present measurements for a flocking model of bird-like agents.

Radu Grosu, Doron Peled, C. R. Ramakrishnan, Scott A. Smolka, Scott D. Stoller, Junxing Yang
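
As a hedged reminder of the basic Monte Carlo machinery behind such measurements (the optimal estimation techniques referred to in the abstract above are not reproduced here): sampling N independent executions π_1, …, π_N and measuring each yields the estimator below, and for measurements normalised to [0,1] the Chernoff–Hoeffding bound tells how many samples suffice for additive error ε with confidence 1 − δ:

    \hat m = \frac{1}{N}\sum_{i=1}^{N} m(\pi_i),
    \qquad
    N \ge \frac{1}{2\varepsilon^2}\ln\frac{2}{\delta}
    \;\Rightarrow\;
    \Pr\bigl(|\hat m - m| > \varepsilon\bigr) \le \delta.
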
Blocking Advertisements on Android Devices Using Monitoring Techniques

This paper explores the effectiveness and challenges of using monitoring techniques, based on Aspect-Oriented Programming, to block adware at the library level, on mobile devices based on Android. Our method is systematic and general: it can be applied to block advertisements from existing and future advertisement networks. We also present miAdBlocker, an industrial proof-of-concept application, based on this technique, for disabling advertisements on a per-application basis. Our experimental results show a high success rate on most applications. Finally, we present the lessons learned from this experience, and we identify some challenges when applying runtime monitoring techniques to real-world case studies.

Khalil El-Harake, Yliès Falcone, Wassim Jerad, Mattieu Langet, Mariem Mamlouk
Monitoring with Data Automata

We present a form of automaton, referred to as data automata, suited for monitoring sequences of data-carrying events, for example emitted by an executing software system. This form of automaton allows states to be parameterized with data, forming named records, which are stored in an efficiently indexed data structure, a form of database. This very explicit approach differs from other automaton-based monitoring approaches. Data automata are also characterized by allowing transition conditions to refer to other parameterized states, and by allowing transition sequences. The presented automaton concept is inspired by rule-based systems, especially the Rete algorithm, which is one of the well-established algorithms for executing rule-based systems. We present an optimized external DSL for data automata, as well as a comparable unoptimized internal DSL (API) in the Scala programming language, in order to compare the two solutions. An evaluation compares these two solutions to several other monitoring systems.

Klaus Havelund
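
To give a flavour of parameterized monitor states (a toy Java sketch, deliberately not the paper's Scala internal DSL or its external DSL; the property and all names are invented here): a fact such as "file f is open" is a record parameterized by f, kept in an indexed map so that each incoming event updates or discharges exactly the matching facts.

    import java.util.HashMap;
    import java.util.Map;

    // Toy monitor for the invented property "every open(f) is eventually closed(f)".
    public class ToyDataMonitor {
        // Parameterized state: one "Open" fact per file name, indexed for fast lookup.
        private final Map<String, Long> openFacts = new HashMap<>();

        public void open(String file, long timestamp) { openFacts.put(file, timestamp); }

        public void close(String file) { openFacts.remove(file); }

        // End of trace: every remaining fact witnesses a violation.
        public void end() {
            openFacts.forEach((file, t) ->
                System.out.println("violation: " + file + " opened at " + t + " but never closed"));
        }

        public static void main(String[] args) {
            ToyDataMonitor monitor = new ToyDataMonitor();
            monitor.open("a.log", 1);
            monitor.open("b.log", 2);
            monitor.close("a.log");
            monitor.end();   // reports only b.log
        }
    }
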

Risk-Based Testing

Risk-Based Testing
(Track Introduction)

In many development projects, testing has to be done under severe pressure due to limited resources, a challenging time schedule, and the demand to guarantee security and safety of the released software system. Risk-based testing, which utilizes identified risks of a software system for testing purposes, has a high potential to improve testing in this context. It optimizes the allocation of resources and time, is a means for mitigating risks, helps to identify critical areas early, and provides decision support for the management [1, 2].

Michael Felderer, Marc-Florian Wendland, Ina Schieferdecker
A Technique for Risk-Based Test Procedure Identification, Prioritization and Selection

We present a technique for risk-based test procedure identification, prioritization, and selection. The technique takes a risk model in the form of a risk graph as input, and produces a list of prioritized selected test procedures as output. The technique is general in the sense that it can be used with many existing risk documentation languages and many kinds of likelihood and risk types.

Fredrik Seehusen
A Risk Assessment Framework for Software Testing

In industry, testing has to be performed under severe pressure due to limited resources. Risk-based testing which uses risks to guide the test process is applied to allocate resources and to reduce product risks. Risk assessment, i.e., risk identification, analysis and evaluation, determines the significance of the risk values assigned to tests and therefore the quality of the overall risk-based test process. In this paper we provide a risk assessment model and its integration into an established test process. This framework is derived on the basis of best practices extracted from published risk-based testing approaches and applied to an industrial test process.

Michael Felderer, Christian Haisjackl, Viktor Pekar, Ruth Breu
Data Driven Testing of Open Source Software

The increasing adoption of open source software (OSS) components in software systems introduces new quality risks and testing challenges. OSS components are developed and maintained by open communities, and the fluctuation of community members and structures can result in instability of the software quality. Hence, an investigation is necessary to analyze the impact of open community dynamics on the quality of the OSS, considering factors such as the level and trends in internal communications and content distribution. The analysis results provide inputs to drive selective testing for effective validation and verification of OSS components. The paper suggests an approach for continuously monitoring community dynamics, including communications like email and blogs, and repositories of bugs and fixes. Detection of patterns in the monitored behavior, such as changes in traffic levels within and across clusters, can in turn be used to drive testing efforts. Our proposal is demonstrated on the XWiki OSS, a Java-based environment that allows for the storing of structured data and the execution of server-side scripts within the wiki interface. We illustrate the concepts and methods behind this approach to risk-based testing of OSS.

Inbal Yahav, Ron S. Kenett, Xiaoying Bai
Combining Risk Analysis and Security Testing

A systematic integration of risk analysis and security testing allows for optimizing the test process as well as the risk assessment itself. The results of the risk assessment, i.e. the identified vulnerabilities, threat scenarios and unwanted incidents, can be used to guide the test identification and may complement requirements engineering results with systematic information concerning the threats and vulnerabilities of a system and their probabilities and consequences. This information can be used to weight threat scenarios and thus helps identify the ones that need to be treated and tested more carefully. On the other hand, risk-based testing approaches can help to optimize the risk assessment itself by gaining empirical knowledge on the existence of vulnerabilities, the applicability and consequences of threat scenarios, and the quality of countermeasures. This paper outlines a tool-based approach for risk-based security testing that combines the notion of risk assessment with a pattern-based approach for automatic test generation relying on test directives and strategies, and shows how results from the testing are systematically fed back into the risk assessment.

Jürgen Großmann, Martin Schneider, Johannes Viehmann, Marc-Florian Wendland
Risk-Based Vulnerability Testing Using Security Test Patterns

This paper introduces an original security testing approach guided by risk assessment, by means of risk coverage, to perform and automate vulnerability testing for Web applications. This approach, called Risk-Based Vulnerability Testing, adapts Model-Based Testing techniques, which are mostly used currently to address functional features. It also extends Model-Based Vulnerability Testing techniques by driving the testing process using security test patterns selected from risk assessment results. The adaptation of such techniques for Risk-Based Vulnerability Testing defines novel features in this research domain. In this paper, we describe the principles of our approach, which is based on a mixed modeling of the System Under Test: the model used for automated test generation captures some behavioral aspects of the Web applications, but also includes vulnerability test purposes to drive the test generation process.

Julien Botella, Bruno Legeard, Fabien Peureux, Alexandre Vernotte

Medical Cyber-Physical Systems

Medical Cyber-Physical Systems
(Track Introduction)

Rapid progress in modern medical technologies has led to a new generation of healthcare devices and treatment strategies. Examples include electro-anatomical mapping and intervention, bio-compatible and implantable devices, minimally invasive embedded devices, and robotic prosthetics.

Ezio Bartocci, Sicun Gao, Scott A. Smolka
Compositional, Approximate, and Quantitative Reasoning for Medical Cyber-Physical Systems with Application to Patient-Specific Cardiac Dynamics and Devices

The design of bug-free and safe medical device software is challenging, especially in complex implantable devices that control and actuate organs whose response is not fully understood. Safety recalls of pacemakers and implantable cardioverter defibrillators between 1990 and 2000 affected over 600,000 devices. Of these, 200,000 (41%) were due to firmware issues, which continue to increase in frequency. According to the FDA, software failures resulted in 24% of all medical device recalls in 2011. There is currently no formal methodology or open experimental platform to test and verify the correct operation of medical-device software within the closed-loop context of the patient.

The goal of this effort is to develop the foundations of modeling, synthesis and development of verified medical device software and systems from verified closed-loop models of the device and organ(s). Our research spans both implantable medical devices such as cardiac pacemakers and physiological control systems such as drug infusion pumps, which involve multiple networked medical systems. These devices are physically connected to the body and exert direct control over the physiology and safety of the patient. The focus of this effort is on (a) extending current binary safety properties to quantitative verification; (b) development of patient-specific models and therapies; (c) multi-scale modeling of complex physiological phenomena and compositional reasoning across a range of model abstractions and refinements; and (d) bridging formal reasoning and automated generation of safe and effective software for future medical devices.

Radu Grosu, Elizabeth Cherry, Edmund M. Clarke, Rance Cleaveland, Sanjay Dixit, Flavio H. Fenton, Sicun Gao, James Glimm, Richard A. Gray, Rahul Mangharam, Arnab Ray, Scott A. Smolka
On Quantitative Software Quality Assurance Methodologies for Cardiac Pacemakers

Embedded software is at the heart of implantable medical devices such as cardiac pacemakers, and rigorous software design methodologies are needed to ensure their safety and reliability. This paper gives an overview of ongoing research aimed at providing software quality assurance methodologies for pacemakers. A model-based framework has been developed based on hybrid automata, which can be configured with a variety of heart and pacemaker models. The framework supports a range of quantitative verification techniques for the analysis of safety, reliability and energy usage of pacemakers. It also provides techniques for parametric analysis of personalised physiological properties that can be performed in silico, which can reduce the cost and discomfort of testing new designs on patients. We describe the framework, summarise the results obtained, and identify future research directions in this area.

Marta Kwiatkowska, Alexandru Mereacre, Nicola Paoletti
Model Checking Hybrid Systems
(Invited Talk)

We present the framework of delta-complete analysis for bounded reachability problems of hybrid systems. We perform bounded reachability checking by solving delta-decision problems over the reals. The techniques take into account robustness properties of the systems under numerical perturbations. Our implementation of the techniques scales well on several highly nonlinear hybrid system models that arise in biomedical applications.

Edmund M. Clarke, Sicun Gao
Challenges for the Dynamic Interconnection of Medical Devices

Medical devices, especially when operated in the surgery room, are safety-critical systems, as the patient’s life may depend on them. As such, their manufacturers must meet strict legal requirements. One typical requirement is that whenever medical devices are interconnected, the whole setup has to be approved by the corresponding legal body. For economic reasons, however, it is desirable to interconnect devices from different manufacturers in an individual fashion for each surgery room. In that case, however, no integration test has been carried out a priori, and thus the whole setup cannot have been approved. In other words, such economic demands impose challenges on both the technical and the legal situation. In this contribution, we report on these challenges as well as on first ideas to address them.

Martin Leucker
Temporal Logic Based Monitoring of Assisted Ventilation in Intensive Care Patients

We introduce a novel approach to automatically detect ineffective breathing efforts in intensive care patients subject to assisted ventilation. The method is based on synthesising, from data, temporal logic formulae that are able to discriminate between normal and ineffective breaths. The learning procedure consists in first constructing statistical models of normal and abnormal breath signals, and then looking for an optimally discriminating formula. The space of formula structures and the space of parameters of each formula are searched with an evolutionary algorithm and with a Bayesian optimisation scheme, respectively. We present here our preliminary results and discuss our future research directions.

Sara Bufo, Ezio Bartocci, Guido Sanguinetti, Massimo Borelli, Umberto Lucangelo, Luca Bortolussi

Scientific Workflows

Track Introduction: Scientific Workflows

In recent years, numerous software systems have been developed specifically for supporting the management of scientific processes and workflows (see, e.g., [1] or [2] for surveys). Research in this comparatively new field is currently evolving in interesting new directions. Already at the ISoLA symposium in 2010 we focused on workflow management for scientific applications in the scope of a symposium track on “Tools in scientific workflow composition” [3].

Joost N. Kok, Anna-Lena Lamprecht, Kenneth J. Turner, Katy Wolstencroft
Meta-analysis of Disjoint Sets of Attributes in Large Cohort Studies

We introduce the problem of classification in large cohort studies containing heterogeneous data. The data in a cohort study comes in separate groups, which can be turned on or off; each group consists of data coming from one specific measurement instrument. We provide a “cross-sectional” investigation of this data to see the relative power of the different groups. We also propose a way of improving the classification performance in individual cohort studies with the help of other cohort studies, using an intuitive workflow approach.

Jonathan K. Vis, Joost N. Kok
Towards a Flexible Assessment of Climate Impacts: The Example of Agile Workflows for the ci:grasp Platform

The Climate Impacts: Global and Regional Adaptation Support Platform (ci:grasp) is a web-based climate information service for exploring climate change related information in its geographical context. We have used the jABC workflow modeling and execution framework to make flexibilized versions of the processes implemented in ci:grasp available to the scientific community. The jABC permits us to leverage the processes to an easily accessible conceptual level, which enables users to flexibly define and adapt workflows according to their specific needs. The workflows are suitable as graphical documentation of the processes and are directly repeatable and reusable, which facilitates reproducibility of results and eventually increases the productivity of researchers working on climate impact risk assessment. In this paper, we use variations of workflows for the assessment of the impacts of sea-level rise to demonstrate the flexibility we gained by following this approach.

Samih Al-Areqi, Steffen Kriewald, Anna-Lena Lamprecht, Dominik Reusser, Markus Wrobel, Tiziana Margaria
A Visual Programming Approach to Beat-Driven Humanoid Robot Dancing

The paper presents a workflow-based approach to the classic task of teaching a humanoid robot to dance to a given song with respect to the detected beat. Our goal is to develop workflow components which enable the construction of complex dance choreographies using visual programming only. This eliminates tedious programming of dance moves and their synchronisation to the music, thus enabling the robot animator to design the choreography at a higher conceptual level. The presented work is based on the Choregraphe visual programming environment for the NAO robot platform and the Aubio open source tool for the extraction of annotations from audio signals.

Vid Podpečan
jABCstats: An Extensible Process Library for the Empirical Analysis of jABC Workflows

The jABC is a multi-purpose modeling framework that has been used for model-driven development of workflows and processes in different application domains. In this paper we present jABCstats, an extensible process library for analyzing jABC workflows empirically. We also discuss first results of its application to scientific workflows modeled with the jABC, which give insights into typical workflow sizes and into the kinds of services and the workflow patterns commonly used.

Alexander Wickert, Anna-Lena Lamprecht
Automatic Annotation of Bioinformatics Workflows with Biomedical Ontologies

Legacy scientific workflows, and the services within them, often present scarce and unstructured (i.e. textual) descriptions. This makes it difficult to find, share and reuse them, thus dramatically reducing their value to the community. This paper presents an approach to annotating workflows and their subcomponents with ontology terms, in an attempt to describe these artifacts in a structured way. Despite a dearth of even textual descriptions, we automatically annotated 530 myExperiment bioinformatics-related workflows, including more than 2600 workflow-associated services, with relevant ontological terms. Quantitative evaluation of the Information Content of these terms suggests that, in cases where annotation was possible at all, the annotation quality was comparable to manually curated bioinformatics resources.

Beatriz García-Jiménez, Mark D. Wilkinson

Evaluation and Reproducibility of Program Analysis

Evaluation and Reproducibility of Program Analysis (Track Introduction)

Today’s popular languages have a large number of different language constructs and standardized library interfaces, and the number is further increasing with every new language standard. Most published analyses therefore focus on a subset of such languages or define a language with a few essential constructs of interest. More recently, program-analysis competitions [4,6,7,1] aim to comparatively evaluate implemented analyses on a given set of benchmarks. The comparison of the analyses focuses on various aspects: (a) the quality of established structures and automata describing the behavior of the analyzed program, (b) the verification of various specified program properties, (c) the impact on a client analysis of particular interest, and (d) the impact of analysis precision on program optimizations.

Markus Schordan, Welf Löwe, Dirk Beyer
SWEET – A Tool for WCET Flow Analysis (Extended Abstract)

Worst-Case Execution Time (WCET) analysis [14] aims to estimate the longest possible execution time for a piece of code executing uninterrupted on particular hardware. Such WCET estimates are used when analysing real-time systems with respect to possible deadline violations. For safety-critical real-time systems, safe (surely not underestimating) estimates are desirable. Such estimates can be produced by a static WCET analysis that takes all possible execution paths and corresponding hardware states into account.

Björn Lisper
Test-Driving Static Analysis Tools in Search of C Code Vulnerabilities II
(Extended Abstract)

A large number of tools that automate the process of finding errors in programs have recently emerged in the software development community. Many of them use static analysis as the main method for analyzing and capturing faults in the source code. Static analysis is deployed as an approximation of the programs’ runtime behavior, with inherent limitations regarding its ability to detect actual code errors; the underlying problem belongs to the class of computational problems which are undecidable [2]. For any such analysis, the major issues are: (1) the programming language of the source code to which the analysis is applied, (2) the types of errors to be detected, (3) the effectiveness of the analysis, and (4) the efficiency of the analysis.

George Chatzieleftheriou, Apostolos Chatzopoulos, Panagiotis Katsaros
Construction of Abstract Domains for Heterogeneous Properties (Position Paper)

The aim of static analysis is to infer invariants about programs that are tight enough to establish semantic properties, like the absence of run-time errors. In the last decades, several branches of the static analysis of imperative programs have made significant progress, such as the inference of numeric invariants or the computation of data structure properties (using pointer abstractions or shape analyzers). Although simultaneous inference of shape-numeric invariants is often needed, this case is especially challenging and less well explored. Notably, simultaneous shape-numeric inference raises complex issues in the design of the static analyzer itself. We study the modular construction of static analyzers, based on combinations of atomic abstract domains to describe several kinds of memory properties and value properties.

Xavier Rival, Antoine Toubhans, Bor-Yuh Evan Chang
Verification of Polyhedral Optimizations with Constant Loop Bounds in Finite State Space Computations

As processors gain in complexity and heterogeneity, compilers are asked to perform program transformations of ever-increasing complexity to effectively map an input program to the target hardware. It is critical to develop methods and tools to automatically assert the correctness of programs generated by such modern optimizing compilers.

We present a framework to verify if two programs (one possibly being a transformed variant of the other) are semantically equivalent. We focus on scientific kernels and a state-of-the-art polyhedral compiler implemented in ROSE. We check the correctness of a set of polyhedral transformations by combining the computation of a state transition graph with a rewrite system to transform floating point computations and array update operations of one program such that we can match them as terms with those of the other program. We demonstrate our approach on a collection of benchmarks from the PolyBench/C suite.

Markus Schordan, Pei-Hung Lin, Dan Quinlan, Louis-Noël Pouchet
The Guided System Development Framework: Modeling and Verifying Communication Systems

In a world that increasingly relies on the Internet to function, application developers rely on the implementations of protocols to guarantee the security of transferred data. Whether a chosen protocol gives the required guarantees, and whether the implementation does the same, is usually unclear. The Guided System Development framework contributes to more secure communication systems by aiding the development of such systems. The framework features a simple modelling language, step-wise refinement from models to implementation, interfaces to security verification tools, and code generation from the verified specification. The refinement process thus carries security properties from the model to the implementation. Our approach also supports verification of systems previously developed and deployed. Internally, the reasoning in our framework is based on the Beliefs and Knowledge tool, a verification tool based on belief logics and explicit attacker knowledge.

Jose Quaresma, Christian W. Probst, Flemming Nielson

Processes and Data Integration in the Networked Healthcare

Processes and Data Integration in the Networked Healthcare
(Track Introduction)

Forward-looking issues in Information and Communication Technology (ICT) for healthcare and medical applications include process integration, data annotation, ontologies and semantics, but also any automation approach that is not imposed by IT experts (in-house support or external consultants) but instead allows more flexibility and a more direct ownership of the processes and data by the healthcare professionals themselves. In the ISoLA-Med workshop in Potsdam in June 2009 and in this track at ISoLA 2012 we showed that a set of innovative research topics related to the future of healthcare systems hinges on the notion of simplicity, for both the end users and the designers and developers, and on support for change management and the agility of evolution.

Tiziana Margaria, Christoph Rasche
Simple Management of High Assurance Data in Long-Lived Interdisciplinary Healthcare Research: A Proposal

Healthcare research data is typically produced, curated, and used by scientists, physicians, and other experts who have little or no professional affinity to programming and IT system design. In the context of evidence-based medicine or translational medicine, however, the production, reliability, and long-term availability of high-quality and high-assurance data is of paramount importance. In this paper we reflect on the data management needs we encountered in our experience as associated partners of a large interdisciplinary research project coordinated at the Cancer Metabolism Research Group, Institute of Biomedical Sciences at the University of São Paulo in Brazil. This research project involves extensive collection of detailed sample data within a complicated environment of clinical and research methods; medical, assessment, and measurement equipment; and the regulatory requirements of maintaining privacy, data quality and security.

We use this example as an illustrative case of a category of needs and a diversity of professional and skill profiles that is representative of what happens today in any large-scale research endeavor. We derive a catalogue of requirements that an IT system for the definition and management of data and processes should satisfy, relate these requirements to the IT development and XMDD philosophy, and briefly sketch how the DyWA + jABC combination provides a foundation for meeting those needs.

Tiziana Margaria, Barry D. Floyd, Rodolfo Gonzalez Camargo, Anna-Lena Lamprecht, Johannes Neubauer, Marilia Seelaender
Domain-Specific Business Modeling with the Business Model Developer

Business models of companies and organizations are increasingly discussed on the basis of graphical representations that emphasize essential factors, particularly from a business perspective. However, feasible implementations of business modeling tools are rare, as they tend to be domain-specific but at the same time tailored towards the requirements of a heterogeneous group of stakeholders. We present the Business Model Developer (BMD) and the underlying framework for the creation of domain-specific business models. The approach relies on the definition of a domain-specific library of model components as well as structured parameters with different scopes, i.e. declared areas of applicability. This setup forms the basis for the development of custom techniques for model analysis. We align the development process with the approach of Extreme Model Driven Development (XMDD) to keep it as simple as possible for domain experts to contribute in tailoring the tool towards specific needs. Furthermore, we present a practical application in the healthcare domain, as the BMD has been developed and applied in the course of a joint project in the area of Personalized Medicine.

Steve Boßelmann, Tiziana Margaria
Dr. Watson? Balancing Automation and Human Expertise in Healthcare Delivery

IBM’s Watson, the supercomputer that beat two Jeopardy champions in a televised competition, has inevitably engendered speculation about what this surprising performance bodes for the role of computers in the workplace. How can Dr. Watson be best utilized in medicine and clinical support systems? This paper defines Computer Enhanced Medicine technology for healthcare as any medical application where the required tasks are split between a human being and a computer-controlled device.

Mark Gaynor, George Wyner, Amar Gupta

Semantic Heterogeneity in the Formal Development of Complex Systems

Semantic Heterogeneity in the Formal Development of Complex Systems: An Introduction

System engineering is a complex discipline [1], which is made more and more complicated by the heterogeneity of the subsystem components [2] and of the models involved in their design. This complexity can be managed only through the use of formal methods [3]. However, in general the engineering of software in such systems requires a mix of modelling languages and semantics, and this often leads to unexpected and undesirable interactions between components at all levels of abstraction [4]. There are currently no generally applicable tools for dealing with this heterogeneity of interactions in the engineering of complex systems.

J. Paul Gibson, Idir Ait-Sadoune
Modelling and Verifying an Evolving Distributed Control System Using an Event-Based Approach

Evolving distributed systems are made of several physical devices distributed through a network and a set of functionalities or applications hosted by the physical devices. The configuration of the physical components may be modified over time, hence the continuous evolution of the whole system; this should affect neither the hosted software components nor the global functioning of the whole system. The components of the systems are software or physical components, but for the purpose of modelling and reasoning we consider their abstract models. We show that an event-based approach can be beneficially used to model and verify this kind of evolving control system. The proposed approach is first presented, then the CCTV case study is introduced and modelled. The resulting model is structured as a B abstract machine. The functional properties of the case study are captured, modelled and proved. The refinement technique of Event-B is used to introduce and prove some properties.

Christian Attiogbé
Requirements Driven Data Warehouse Design: We Can Go Further

Data warehouses ($\mathcal{D}\mathcal{W}$) are defined as data integration systems constructed from a set of heterogeneous sources and users’ requirements. Heterogeneity is due to syntactic and semantic conflicts occurring between the concepts used. Existing $\mathcal{D}\mathcal{W}$ design methods associate heterogeneity only with the data sources. We claim in this paper that heterogeneity is also associated with users’ requirements. Actually, requirements are collected from heterogeneous target users, which can cause semantic conflicts between the concepts expressed. Besides, requirements can be analyzed by heterogeneous designers having different design skills, which can cause formalism heterogeneity. Integration is the process that manages heterogeneity in $\mathcal{D}\mathcal{W}$ design. Ontologies are recognized as a key solution for ensuring an automatic integration process. We propose to extend the use of ontologies to resolve conflicts between requirements. A pivot model is proposed for integrating requirements schemas expressed in different formalisms. A $\mathcal{D}\mathcal{W}$ design method is proposed for providing the target $\mathcal{D}\mathcal{W}$ schema (star or snowflake schema) that meets a unified and consistent set of requirements.

Selma Khouri, Ladjel Bellatreche, Stéphane Jean, Yamine Ait-Ameur
On Implicit and Explicit Semantics: Integration Issues in Proof-Based Development of Systems

All software systems execute within an environment or context. Reasoning about the correct behavior of such systems is a ternary relation linking the requirements, system and context models. Formal methods are concerned with providing tool (automated) support for the synthesis and analysis of such models. These methods have quite successfully focused on binary relationships, for example: validation of a formal model against an informal one, verification of one formal model against another formal model, generation of code from a design, and generation of tests from requirements. The contexts of the systems in these cases are treated as second-class citizens: in general, the modelling is implicit and usually distributed between the requirements model and the system model. This paper is concerned with the explicit modelling of contexts as first-class citizens and illustrates concepts related to implicit and explicit semantics on an example using the Event B language.

Yamine Ait-Ameur, J. Paul Gibson, Dominique Méry
Backmatter
Metadata
Title
Leveraging Applications of Formal Methods, Verification and Validation. Specialized Techniques and Applications
Edited by
Tiziana Margaria
Bernhard Steffen
Copyright Year
2014
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-662-45231-8
Print ISBN
978-3-662-45230-1
DOI
https://doi.org/10.1007/978-3-662-45231-8