
2010 | Book

Leveraging Applications of Formal Methods, Verification, and Validation

4th International Symposium on Leveraging Applications, ISoLA 2010, Heraklion, Crete, Greece, October 18-21, 2010, Proceedings, Part II

Edited by: Tiziana Margaria, Bernhard Steffen

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


Table of Contents

Frontmatter

EternalS: Mission and Roadmap

Introduction to the EternalS Track: Trustworthy Eternal Systems via Evolving Software, Data and Knowledge

Recent research in ICT has shown that future systems must be able to adapt to changes in user requirements and application domains. Adaptation and evolution depend on several dimensions, e.g., time, location, and security conditions, which express the diversity of the context in which systems operate.

The Coordination Action (CA) Trustworthy Eternal Systems via Evolving Software, Data and Knowledge (EternalS) aims at coordinating research in the above-mentioned areas by means of a researcher Task Force and community-building activities, with the organization of workshops and conferences among the most important tools used for this purpose. EternalS aims at creating the conditions for mutual awareness and cross-fertilization among the four “ICT Forever Yours” FET projects (FP7-ICT-Call 3): LivingKnowledge, HATS, Connect, and SecureChange. These projects are currently conducting research in key ICT areas: (i) automatic learning of systems capable of analyzing knowledge and diversity with respect to their complex semantic interactions and evolution over time, (ii) exploitation of formal methods for the design and networking of adaptive and evolving software systems, and (iii) design of security policies and of fully connected environments. These projects will help EternalS to actively involve many researchers from both the academic and the industrial world in its action.

Alessandro Moschitti
HATS: Highly Adaptable and Trustworthy Software Using Formal Methods

The HATS project develops a formal method for the design, analysis, and implementation of highly adaptable software systems that are characterized by a high demand for trustworthiness. Existing modeling formalisms leave a gap between highly abstract, largely structural models and executable code on the implementation level. HATS aims to close this gap with an object-oriented, executable modeling language for adaptable, concurrent software components. It comes with a tool suite based on analysis methods developed hand in hand with the language.

Reiner Hähnle
SecureChange: Security Engineering for Lifelong Evolvable Systems

The challenge of SecureChange is to re-invent security engineering for “eternal” systems. The project focuses on methods, techniques and tools to make change a first-class citizen in the software development process so that security is built into software-based systems in a resilient and certifiable way. Solving this challenge requires a significant re-thinking of all development phases, from requirements engineering to design and testing.

Riccardo Scandariato, Fabio Massacci
3DLife: Bringing the Media Internet to Life

“Bringing the Media Internet to Life” - or simply, 3DLife - is a European Union funded project that aims to integrate research conducted within Europe in the field of Media Internet. In this contribution, we give an overview of the project’s main objectives and activities.

Ebroul Izquierdo, Tomas Piatrik, Qianni Zhang
LivingKnowledge: Kernel Methods for Relational Learning and Semantic Modeling

Recent results of statistical learning theory have provided techniques such as pattern analysis and relational learning, which help in modeling system behavior, e.g., the semantics expressed in text, images, and speech for information-search applications (e.g., as carried out by Google or Yahoo), or the semantics encoded in DNA sequences studied in Bioinformatics. These represent distinguished cases of the successful use of statistical machine learning. The reason for this success lies in its ability to overcome the critical limitations of logic/rule-based approaches to semantic modeling: although, from a knowledge engineer’s perspective, hand-crafted rules are a natural way to encode system semantics, the noise, ambiguity and errors affecting dynamic systems prevent them from being effective.

One drawback of statistical approaches is the complexity of modeling world objects in terms of simple parameters. In this paper, we describe kernel methods (KM), one of the most interesting results of statistical learning theory, capable of abstracting system design and making it simpler. We provide an example of the effective use of KM for the design of a natural language application required in the European Project LivingKnowledge.
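As a rough illustration of the kernel-method idea (not the tree or relational kernels developed in LivingKnowledge), the sketch below defines a similarity function over raw sentences and plugs the resulting Gram matrix into an off-the-shelf SVM. The sentences, labels, and bag-of-words kernel are invented for illustration, and scikit-learn and NumPy are assumed to be available.

```python
# Minimal sketch of the kernel-method idea: define a similarity function over
# structured objects (here, sentences) and let an SVM work directly on the
# resulting Gram matrix. Toy data; not the project's tree kernels.
import numpy as np
from sklearn.svm import SVC

def bag_of_words_kernel(a, b):
    """Similarity of two sentences = number of shared word types."""
    return float(len(set(a.split()) & set(b.split())))

train = ["the drug reduces blood pressure",
         "the treaty reduces trade barriers",
         "aspirin lowers blood pressure",
         "tariffs raise trade barriers"]
labels = [1, 0, 1, 0]   # 1 = medical domain, 0 = economics (toy labels)

gram = np.array([[bag_of_words_kernel(a, b) for b in train] for a in train])
clf = SVC(kernel="precomputed").fit(gram, labels)

test = ["statins lower blood pressure"]
gram_test = np.array([[bag_of_words_kernel(t, b) for b in train] for t in test])
print(clf.predict(gram_test))   # likely [1]: closest to the medical examples
```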

Alessandro Moschitti
Task Forces in the EternalS Coordination Action

We describe the scope, organization, and expected outcomes of the Task Forces of the EternalS Coordination Action. The goal of the Task Forces is to provide structure and focus to the activities pursued in EternalS while retaining essential openness to bottom-up initiatives.

Reiner Hähnle
Modeling and Analyzing Diversity
Description of EternalS Task Force 1

We describe the objectives and vision of the EternalS Task Force 1 “Diversity Awareness and Management”. We present its organization and workplan. The goal of the task force is to provide a platform for collaboration towards a unifying framework for modeling and analyzing diversity of system data and behavior in all development phases.

Ina Schaefer
Modeling and Managing System Evolution
Description of EternalS Task Force 2

In this contribution we describe the scope, objectives, and outcomes of the EternalS Task Force 2 (TF2), Time Awareness and Management. Task Force 2 will foster the discussion and collection of novel ideas around the problem of maintaining the quality of software-based systems that evolve over time. Security is a topic of special interest.

Michael Hafner
Self-adaptation and Evolution by Learning
Description of EternalS Task Force 3

The EternalS Task Force 3 focuses on self-adaptation and evolution of software systems and knowledge resources; this includes evolution over time and adaptation to new domains. The primary method for achieving this will be the application of modern machine learning methods.

Richard Johansson
Overview of Roadmapping by EternalS

The roadmapping activity in the EternalS project focuses on developing and contributing a vision and direction for future Framework Programme research, arising from the work of a cluster of current projects working towards Trustworthy Eternal Systems.

Jim Clarke, Keith Howker

Formal Methods in Model-Driven Development for Service-Oriented and Cloud Computing

Adaptive Composition of Conversational Services through Graph Planning Encoding

Service-Oriented Computing supports description, publication, discovery and composition of services to fulfil end-user needs. Yet, service composition processes commonly assume that service descriptions and user needs share the same abstraction level, and that services have been pre-designed to integrate. To release these strong assumptions and to augment the possibilities of composition, we add adaptation features into the service composition process using semantic structures for exchanged data, for service functionalities, and for user needs. Graph planning encodings enable us to retrieve service compositions efficiently. Our composition technique supports conversations for both services and user needs, and it is fully automated thanks to a tool, pycompose, which can interact with state-of-the-art graph planning tools.
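The following toy sketch conveys the planning-graph intuition behind such encodings, not the actual pycompose encoding: services are treated as actions that consume and produce typed data, and layers of reachable data are expanded until the user's goal types appear. The service names and data types are hypothetical.

```python
# Toy forward-reachability view of planning-based service composition:
# each service consumes some data types and produces others; starting from
# the user's inputs, expand "layers" until the requested outputs are reachable.
services = {
    "geocode":   ({"address"}, {"coordinates"}),
    "weather":   ({"coordinates"}, {"forecast"}),
    "translate": ({"forecast", "language"}, {"localized_forecast"}),
}

def compose(available, goal):
    plan, layer = [], set(available)
    while not goal <= layer:
        applicable = [name for name, (needs, gives) in services.items()
                      if needs <= layer and not gives <= layer]
        if not applicable:
            return None                      # goal unreachable: no composition
        for name in applicable:              # add one planning-graph layer
            layer |= services[name][1]
            plan.append(name)
    return plan

print(compose({"address", "language"}, {"localized_forecast"}))
# -> ['geocode', 'weather', 'translate']
```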

Pascal Poizat, Yuhong Yan
Performance Prediction of Service-Oriented Systems with Layered Queueing Networks

We present a method for the prediction of the performance of a service-oriented architecture during its early stage of development. The system under scrutiny is modelled with the UML and two profiles: UML4SOA for specifying the functional behaviour, and MARTE for the non-functional performance-related characterisation. By means of a case study, we show how such a model can be interpreted as a layered queueing network. This target technique has the advantage of employing, as constituent blocks, entities such as threads and processors, which arise very frequently in real deployment scenarios. Furthermore, the analytical methods for the solution of the performance model scale very well with increasing problem sizes, making it possible to efficiently evaluate the behaviour of large-scale systems.
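As a drastically simplified, hypothetical stand-in for the layered queueing network analysis used in the paper, the sketch below applies the textbook M/M/1 response-time formula to show how an analytical queueing model predicts response times as the request rate grows; the 20 ms service demand and the request rates are invented.

```python
# Toy analytical performance prediction with a single M/M/1-style queue:
# given an arrival rate and a mean service demand, compute utilisation and
# mean response time. A drastic simplification of layered queueing networks.
def mm1_response_time(arrival_rate, service_time):
    utilisation = arrival_rate * service_time
    if utilisation >= 1.0:
        raise ValueError("unstable: utilisation >= 1")
    return service_time / (1.0 - utilisation)

# Hypothetical service with a 20 ms mean demand under growing request rates.
for rate in (10, 25, 40, 45):                     # requests per second
    r = mm1_response_time(rate, 0.020)
    print(f"{rate:>3} req/s -> mean response time {r * 1000:.1f} ms")
```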

Mirco Tribastone, Philip Mayer, Martin Wirsing
Error Handling: From Theory to Practice

We describe the different issues that a language designer has to tackle when defining error handling mechanisms for service-oriented computing. We first discuss the issues that have to be considered when developing error handling mechanisms inside a process calculus, i.e. an abstract model. We then analyze how these issues change when moving from a process calculus to a full-fledged language based on it. We consider as an example the language Jolie and the calculus SOCK it is based upon.

Ivan Lanese, Fabrizio Montesi
Modeling and Reasoning about Service Behaviors and Their Compositions

Service-oriented systems have recently emerged as context-independent component-based systems. Unlike components, services can be created, invoked, composed, and destroyed at run-time. Consequently, all services need a way of advertising their capabilities to the entities that will use them, and service-oriented modeling should cater for various kinds of service composition. In this paper, we show how services can be formally described by the resource-aware timed behavioral language Remes, which we extend with service-specific information, such as type, capacity, time-to-serve, etc., as well as boolean constraints on inputs, and output guarantees. Assuming a Hoare-triple model of service correctness, we show how to check it by using the strongest postcondition semantics. To provide means for connecting Remes services, we propose a hierarchical language for service composition, which allows for verifying the latter’s correctness. The approach is applied on an abstracted version of an intelligent shuttle system.
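As a generic illustration of the strongest-postcondition check mentioned above (not tied to Remes or its tooling), the sketch below verifies a Hoare triple for a single assignment with the z3 SMT solver, using sp(x := e, P) = ∃x₀. P[x₀/x] ∧ x = e[x₀/x]; the precondition, assignment, and postcondition are made up.

```python
# Checking a Hoare triple {P} x := x + y {Q} via the strongest postcondition:
# sp(x := e, P) = exists x0. P[x0/x] and x == e[x0/x]; the triple holds iff
# sp implies Q. Uses the z3 SMT solver; example formulas are invented.
from z3 import Ints, Exists, And, Implies, Not, Solver, unsat

x, y, x0 = Ints("x y x0")

P_old = And(x0 >= 0, y >= 1)              # precondition P with x renamed to x0
sp = Exists(x0, And(P_old, x == x0 + y))  # strongest postcondition of x := x + y
Q = x >= 1                                # claimed postcondition

s = Solver()
s.add(Not(Implies(sp, Q)))                # the triple holds iff this is unsat
verdict = s.check()
print("triple holds" if verdict == unsat else f"counterexample: {s.model()}")
```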

Aida Čaušević, Cristina Seceleanu, Paul Pettersson
Design and Verification of Systems with Exogenous Coordination Using Vereofy

The feasibility of formal methods for the analysis of complex systems crucially depends on a modeling framework that supports compositional design, stepwise refinement and abstractions. An important feature is the clear separation of coordination and computation, which permits applying various verification techniques to the computation performed by the components as well as to the interactions and dependencies between them. We report here on a model-checking approach using the tool Vereofy that is based on an exogenous coordination model, where components are represented by their behavioral interfaces. Vereofy supports the verification of the components and their communication structure. Our approach is illustrated by means of a case study of a sensor network, where Vereofy has been used to establish several properties of the sensor nodes and their routing procedures.

Christel Baier, Tobias Blechmann, Joachim Klein, Sascha Klüppelholz, Wolfgang Leister
A Case Study in Model-Based Adaptation of Web Services

Developing systems through the composition of reusable software services is not straightforward in most situations, since different kinds of mismatch may occur among their public interfaces. Service adaptation plays a key role in the development of such systems by solving, as automatically as possible, mismatch cases at the different interoperability levels among interfaces, through the synthesis of a mediating adaptor between services. In this paper, we show the application of model-based adaptation techniques for the construction of service-based systems on a case study. We describe each step of the adaptation process, from the automatic extraction of behavioural models from existing interface descriptions to the generation of the final adaptor implementation for the target platform.

Javier Cámara, José Antonio Martín, Gwen Salaün, Carlos Canal, Ernesto Pimentel

Quantitative Verification in Practice

Quantitative Verification in Practice

Soon after the birth of model checking, the first theoretical achievements were reported on the automated verification of quantitative system aspects such as discrete probabilities and continuous time. These theories have been extended in various dimensions, such as continuous probabilities, cost constraints, discounting, hybrid phenomena, and combinations thereof. Due to unremitting improvements of the underlying algorithms and data structures, together with the availability of more advanced computing engines, these techniques are nowadays applicable to realistic designs. Powerful software tools allow these techniques to be applied by non-specialists, and efforts have been made to embed them into industrial system design processes. Quantitative verification has a broad application potential: successful applications in embedded system design, hardware, security, safety-critical software, schedulability analysis, and systems biology exemplify this. It is fair to say that over the years this application area has grown rapidly, and there is no sign that this will stop. This session reports on applying state-of-the-art quantitative verification techniques and tools to a variety of industrial case studies.
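The toy sketch below (unrelated to the session's case studies) shows the kind of computation at the heart of such probabilistic model checking: the probability of eventually reaching a target state of a discrete-time Markov chain, obtained by solving a small linear equation system. The chain itself is invented.

```python
# Reachability probability in a discrete-time Markov chain, computed by
# solving a linear equation system over the transient states. Toy model.
import numpy as np

# States: 0 = trying, 1 = retrying, 2 = success (target), 3 = failure (sink)
P = np.array([
    [0.0, 0.2, 0.7, 0.1],
    [0.3, 0.0, 0.5, 0.2],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

transient = [0, 1]          # states from which the outcome is still open
target = 2

# x_s = sum over transient t of P[s,t] * x_t, plus P[s,target]; solve (I-A)x = b.
A = P[np.ix_(transient, transient)]
b = P[transient, target]
x = np.linalg.solve(np.eye(len(transient)) - A, b)
print(dict(zip(transient, x.round(4))))   # reachability probability per state
```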

Boudewijn R. Haverkort, Joost-Pieter Katoen, Kim G. Larsen
Ten Years of Performance Evaluation for Concurrent Systems Using CADP

This article comprehensively surveys the work accomplished during the past decade on an approach to analyze concurrent systems qualitatively and quantitatively, by combining functional verification and performance evaluation. This approach lays its foundations on semantic models, such as IMC (Interactive Markov Chain) and IPC (Interactive Probabilistic Chain), at the crossroads of concurrency theory and mathematical statistics. To support the approach, a number of software tools have been devised and integrated within the CADP (Construction and Analysis of Distributed Processes) toolbox. These tools provide various functionalities, ranging from state space generation (Cæsar and Exp.Open) and state space minimization (Bcg_Min and Determinator) to numerical analysis (Bcg_Steady and Bcg_Transient) and simulation (Cunctator). Several applications of increasing complexity have been successfully handled using these tools, namely the Hubble telescope lifetime prediction, performance comparison of mutual exclusion protocols, the SCSI-2 bus arbitration protocol, the Send/Receive and Barrier primitives of MPI (Message Passing Interface) implemented on a cache-coherent multiprocessor architecture, and the xSTream multiprocessor data-flow architecture for embedded multimedia streaming applications.

Nicolas Coste, Hubert Garavel, Holger Hermanns, Frédéric Lang, Radu Mateescu, Wendelin Serwe
Towards Dynamic Adaptation of Probabilistic Systems

Dynamic system adaptation is modeled in the coordination language Paradigm as coordination of collaborating components. A special component, McPal, allows for the addition of new behavior, new constraints, and new control in view of a new collaboration. McPal gradually adapts the system dynamics. It is shown that the approach also applies to the probabilistic setting. For a client-server example, where McPal adds, step-by-step, probabilistic behavior to deterministic components, precise modeling of the changing system dynamics is achieved. This modeling of the transient behavior, spanning the complete migration range from the as-is collaboration to the to-be collaboration, serves as a stepping stone to quantitative analysis of the system during adaptation.

S. Andova, L. P. J. Groenewegen, E. P. de Vink
UPPAAL in Practice: Quantitative Verification of a RapidIO Network

Packet switched networks are widely used for interconnecting distributed computing platforms. RapidIO (Rapid Input/Output) is an industry standard for packet switched networks to interconnect multiple processor boards. Key performance metrics for these platforms include average-case and worst-case packet transfer latencies. We focus on verifying such quantitative properties for a RapidIO based multi-processor platform that executes a motion control application. A performance model is available in the Parallel Object-Oriented Specification Language (POOSL) that allows for simulation-based estimation results. However, the exact worst-case latency must be determined, as the application is time-critical. A model checking approach has been proposed in our previous work which transforms the POOSL model into an UPPAAL model. However, such an approach only works for fairly small systems. We extend the transformation approach with various heuristics to reduce the underlying state space, thereby providing an effective approximation approach that scales to industrial problems of a reasonable complexity.

Jiansheng Xing, Bart D. Theelen, Rom Langerak, Jaco van de Pol, Jan Tretmans, J. P. M. Voeten
Schedulability Analysis Using Uppaal: Herschel-Planck Case Study

We propose a modeling framework for performing schedulability analysis using the Uppaal real-time model checker [2]. The framework is inspired by a case study in which schedulability analysis of a satellite system is performed. The framework assumes single-CPU hardware where a fixed-priority preemptive scheduler is used in combination with two resource sharing protocols, and in addition voluntary task suspension is considered. The contributions include the modeling framework, its application to an industrial case study, and a comparison of the results with classical response time analysis.
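For reference, the classical response-time analysis that the paper compares against can be sketched as the usual fixed-point iteration below. This baseline ignores blocking from the resource sharing protocols and voluntary suspension, which the Uppaal-based framework models explicitly, and the task parameters are made up.

```python
# Classical response-time analysis for fixed-priority preemptive scheduling:
# iterate R = C + sum(ceil(R / T_j) * C_j) over all higher-priority tasks j
# until a fixed point is reached or the deadline is exceeded. Toy task set.
import math

def response_time(task, higher_priority):
    R = task["C"]
    while True:
        interference = sum(math.ceil(R / hp["T"]) * hp["C"]
                           for hp in higher_priority)
        R_next = task["C"] + interference
        if R_next == R:
            return R
        if R_next > task["D"]:
            return None          # deadline exceeded: not schedulable
        R = R_next

# Tasks ordered by decreasing priority: WCET C, period T, deadline D.
tasks = [{"C": 1, "T": 4,  "D": 4},
         {"C": 2, "T": 10, "D": 10},
         {"C": 3, "T": 20, "D": 20}]

for i, t in enumerate(tasks):
    print(f"task {i}: response time = {response_time(t, tasks[:i])}")
```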

Marius Mikučionis, Kim Guldstrand Larsen, Jacob Illum Rasmussen, Brian Nielsen, Arne Skou, Steen Ulrik Palm, Jan Storbank Pedersen, Poul Hougaard
Model-Checking Temporal Properties of Real-Time HTL Programs

This paper describes a tool-supported method for the formal verification of timed properties of HTL programs, based on the automated translation tool HTL2XTA, which extracts from an HTL program (i) an Uppaal model and (ii) a set of properties that state the compliance of the model with certain automatically inferred temporal constraints. These can be manually extended with other temporal properties provided by the user. The paper introduces the details of the proposed mechanisms as well as the results of our experimental validation.

André Carvalho, Joel Carvalho, Jorge Sousa Pinto, Simão Melo de Sousa

CONNECT: Status and Plans

Towards an Architecture for Runtime Interoperability

Interoperability remains a fundamental challenge when connecting heterogeneous systems which encounter and spontaneously communicate with one another in pervasive computing environments. This challenge is exacerbated by the highly heterogeneous technologies employed by each of the interacting parties, i.e., in terms of hardware, operating system, middleware protocols, and application protocols. This paper introduces Connect, a software framework which aims to resolve this interoperability challenge in a fundamentally different way. Connect dynamically discovers information about the running systems, uses learning to build a richer view of a system’s behaviour, and then uses synthesis techniques to generate a connector to achieve interoperability between heterogeneous systems. Here, we introduce the key elements of Connect and describe its application to a distributed marketplace application involving heterogeneous technologies.

Amel Bennaceur, Gordon Blair, Franck Chauvel, Huang Gang, Nikolaos Georgantas, Paul Grace, Falk Howar, Paola Inverardi, Valérie Issarny, Massimo Paolucci, Animesh Pathak, Romina Spalazzese, Bernhard Steffen, Bertrand Souville
On Handling Data in Automata Learning
Considerations from the CONNECT Perspective

Most communication with real-life systems involves data values that are relevant to the communication context and thus influence the observable behavior of the communication endpoints. When applying methods from the realm of automata learning, it is necessary to handle such data occurrences. In this paper, we consider how the techniques of automata learning can be adapted to the problem of learning interaction models in which data parameters are an essential element. In particular, we focus on how test drivers for real-world systems can be generated automatically. Our main contribution is an analysis of (1) the requirements on the information contained in models produced by the learning enabler in the Connect project and (2) the resulting preconditions for generating test drivers automatically.
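The sketch below illustrates, in a generic way, the role such an automatically generated test driver plays: a mapper translates the abstract alphabet used by a learner into concrete, data-carrying calls on the system. All names (SessionSUL, LOGIN_VALID, and so on) are hypothetical; this is not the Connect learning enabler.

```python
# Sketch of a test driver ("mapper") between the abstract alphabet a learning
# algorithm uses and concrete, data-carrying calls on the system under learning.
class SessionSUL:
    """A made-up system whose behaviour depends on data (the password)."""
    def __init__(self):
        self.logged_in = False
    def login(self, user, password):
        self.logged_in = (password == "secret")
        return "ok" if self.logged_in else "denied"
    def get(self, item):
        return "data" if self.logged_in else "error"

class Mapper:
    """Maps abstract symbols to concrete calls with representative values."""
    CONCRETE = {
        "LOGIN_VALID":   lambda sul: sul.login("alice", "secret"),
        "LOGIN_INVALID": lambda sul: sul.login("alice", "wrong"),
        "GET":           lambda sul: sul.get("item42"),
    }
    def run_word(self, word):
        sul = SessionSUL()                       # reset before each query
        return [self.CONCRETE[sym](sul) for sym in word]

# A membership query as a learning algorithm would pose it:
print(Mapper().run_word(["LOGIN_INVALID", "GET", "LOGIN_VALID", "GET"]))
# -> ['denied', 'error', 'ok', 'data']
```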

Falk Howar, Bengt Jonsson, Maik Merten, Bernhard Steffen, Sofia Cassel
A Theory of Mediators for Eternal Connectors

On-the-fly synthesis of mediators is a revolutionary approach to the seamless networking of today’s and future digital systems, which increasingly need to be connected. The resulting emergent mediators (or Connectors) adapt the interaction protocols run by the connected systems to let them communicate. However, although the mediator concept has been studied and used quite extensively to cope with many heterogeneity dimensions, a remaining key challenge is to support on-the-fly synthesis of mediators. Towards this end, this paper introduces a theory of mediators for the ubiquitous networking environment. The proposed formal model: (i) precisely characterizes the problem of interoperability between networked systems, and (ii) paves the way for automated reasoning about protocol matching (interoperability) and related mediator synthesis.

Paola Inverardi, Valérie Issarny, Romina Spalazzese
On-the-Fly Interoperability through Automated Mediator Synthesis and Monitoring

Interoperability is a key and challenging requirement in today’s and future systems, which are often characterized by an extreme level of heterogeneity. To build an interoperability solution between the networked systems populating the environment, both their functional and non-functional requirements have to be met.

Because of the continuous evolution of such systems, mechanisms that are fixed a priori are inadequate to achieve interoperability. In such challenging settings, on-the-fly approaches are best suited.

This paper presents, as an interoperability solution, an approach that integrates an automated technique for the synthesis of mediator protocols with a monitoring mechanism. The former aims to provide interoperability taking care of functional characteristics of the networked systems, whereas the latter makes it possible to assess the non-functional characteristics of the connected system.

Antonia Bertolino, Paola Inverardi, Valérie Issarny, Antonino Sabetta, Romina Spalazzese
Dependability Analysis and Verification for Connected Systems

The Connect project aims to enable the seamless composition of heterogeneous networked systems. In this context, Verification and Validation (V&V) techniques are sought to ensure that the Connected system satisfies dependability requirements. Stochastic model checking and state-based stochastic methods are two appealing V&V approaches to accomplish this task. In this paper, we report on the application of the two approaches in a typical Connect scenario. Specifically, we make clear (i) how the two approaches can be employed to enhance the confidence in the correctness of the analysis, and (ii) how the complementarity of these approaches can be fruitfully exploited to extend the analysis.

Felicita Di Giandomenico, Marta Kwiatkowska, Marco Martinucci, Paolo Masci, Hongyang Qu
Towards a Connector Algebra

Interoperability of heterogeneous networked systems has yet to reach the maturity required by ubiquitous computing due to the technology-dependent nature of solutions. The Connect Integrated Project attempts to develop a novel network infrastructure to allow heterogeneous networked systems to freely communicate with one another by synthesising the required connectors on-the-fly. A key objective of Connect is to build a comprehensive theory of composable connectors, by devising an algebra for rigorously characterising complex interaction protocols in order to support automated reasoning. With this aim in mind, we formalise a high-level algebra for reasoning about protocol mismatches. Basic mismatches can be solved by suitably defined primitives, while complex mismatches can be settled by composition operators that build connectors out of simpler ones. The semantics of the algebra is given in terms of Interface Automata, and an example in the domain of instant messaging is used to illustrate how the algebra can characterise the interaction behaviour of a connector for mediating protocols.

Marco Autili, Chris Chilton, Paola Inverardi, Marta Kwiatkowska, Massimo Tivoli

Certification of Software-Driven Medical Devices

Certification of Software-Driven Medical Devices

This track focuses on the issue of certification for modern medical devices. These devices rely more and more on software and are paradigmatic examples of safety critical systems. Existing approaches to software safety and certification are invariably process based; at best, this gives us only indirect, statistical evidence of safety. Thus, they do not offer the kinds of product based guarantees expected by engineers in other classical disciplines as a basis for product certification. This track focuses on the state of the art and proposals for improvement in this crucial area of research. The track includes four papers on relevant topics and a panel discussing the crucial scientific issues involved in making certification judgements.

Mark Lawford, Tom Maibaum, Alan Wassyng
Arguing for Software Quality in an IEC 62304 Compliant Development Process

Safety regulations for medical device software are stipulated in numerous international standards. IEC 62304 addresses software life-cycle processes and identifies core processes, software development activities, and tasks that aim for high-integrity software as a prerequisite for dependability of medical devices controlled by this software. However, these standards prescribe neither a process model nor particular software engineering methods to accomplish the normative requirements. Hence, the manufacturer has to argue in the software development and quality management plans that the selected methods cover the required tasks and are appropriate in order to accomplish high-quality artifacts.

We propose a method for assessing quality- and engineering-centric arguments in dependability cases to assure IEC 62304-compliant software development. Our method is based on an activity-based quality model representing the impact of facts about methods and design artifacts on development activities. The impact makes the relation between characteristics of design artifacts and activities contributing to the software safety process explicit. It is derived from state-of-the-art software engineering knowledge and best practices recommended in current safety standards like IEC 61508-3.

Michaela Huhn, Axel Zechner
Trustable Formal Specification for Software Certification

Formal methods have emerged as a complementary approach to ensuring quality and correctness of high-confidence medical systems, overcoming limitations of traditional validation techniques such as simulation and testing. In this paper, we propose a new methodology to obtain certification assurance for complex medical systems design, based on the use of formal methods. The methodology consists of five main phases. The first phase structures the informal requirements, resulting in a version of the requirements in which each fragment is classified according to a fixed taxonomy. In the second phase, the informal requirements are represented in a formal modeling language with a precise semantics, and enriched with invariants and temporal constraints. The third phase consists of refinement-based formal verification to test the internal consistency and correctness of the specifications. The fourth phase determines, using a model checker, the degree to which the formal model is an accurate representation of the real world from the perspective of the intended uses of the model. The last phase provides an animation framework for the formal model with real-time data sets instead of toy data, and offers a simple way for specifiers to build a domain-specific visualization that can be used by domain experts to check whether a formal specification corresponds to their expectations. Furthermore, we show the effectiveness of this methodology for the modeling of a cardiac pacemaker system.

Dominique Méry, Neeraj Kumar Singh
Design Choices for High-Confidence Distributed Real-Time Software

Safety-critical distributed real-time systems, such as networked medical devices, must operate according to their specification, because incorrect behaviour can have fatal consequences. A system’s design and architecture influence how difficult it is to provide confidence that the system follows the specification. In this work, we summarize and discuss three design choices and the underlying concepts that aim at increasing predictability and analyzability. We investigate mandatory resource reservation to guarantee resource availability, separation of resource consumption to better manage resource inter-dependency, and enumerative reconfiguration. We use the example of a distributed monitoring system for the human cardiovascular system to substantiate our arguments.

Sebastian Fischmeister, Akramul Azim
Assurance Cases in Model-Driven Development of the Pacemaker Software

We discuss the construction of an assurance case for the pacemaker software. The software is developed following a model-based technique that combines formal modeling of the system, systematic code generation from the formal model, and measurement of the timing behavior of the implementation. We show how the structure of the assurance case reflects our development approach.

Eunkyoung Jee, Insup Lee, Oleg Sokolsky

Modeling and Formalizing Industrial Software for Verification, Validation and Certification

Improving Portability of Linux Applications by Early Detection of Interoperability Issues

This paper presents an approach aimed at simplifying the development of portable Linux applications, suggesting a method for detecting compatibility problems between any Linux application and distribution by means of static analysis of executable files and shared libraries.

In the paper, we consider the notion of successfully launching an application in a distribution. A formal model is constructed that describes the interfaces invoked during program launch. A set of conditions is derived that must be satisfied by the application’s and the distribution’s files in order for the application to launch successfully in the distribution. The Linux Application Checker tool, which supports the approach and allows portability problems to be detected at an early stage of development, is described.
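A much-simplified sketch of the kind of launchability condition the paper derives is shown below: every shared library and dynamic symbol the application requires must be provided by the distribution. The library and symbol sets are invented; a real checker would obtain them by static analysis of the executables and shared libraries, as the paper describes.

```python
# Simplified compatibility check: an application can launch in a distribution
# only if every shared library and every dynamic symbol it requires is
# provided there. Inputs are invented for illustration.
app_requires = {
    "libs":    {"libc.so.6", "libssl.so.1.0"},
    "symbols": {"printf", "SSL_connect", "SSL_library_init"},
}

distro_provides = {
    "libs":    {"libc.so.6", "libssl.so.1.1", "libm.so.6"},
    "symbols": {"printf", "SSL_connect"},
}

def check_compat(app, distro):
    missing_libs = app["libs"] - distro["libs"]
    missing_syms = app["symbols"] - distro["symbols"]
    return missing_libs, missing_syms

libs, syms = check_compat(app_requires, distro_provides)
if libs or syms:
    print("launch would fail; missing libraries:", libs, "missing symbols:", syms)
else:
    print("no interoperability issues detected")
```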

Denis Silakov, Andrey Smachev
Specification Based Conformance Testing for Email Protocols

The paper presents a method for conformance testing of Internet electronic mail protocols. The method is based on formal specification of the standards following the contract specification approach, and on designing tests as traversals of a state machine. The paper presents the implementation of the method for the most widely used e-mail protocols, SMTP, POP3, and IMAP4, and illustrates it with the results of testing popular e-mail servers.
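A minimal sketch of the test-design idea, tests as traversals of a specification state machine, is given below on a drastically simplified, invented POP3-like session model; it only illustrates the traversal step, not the contract-specification machinery of the actual method.

```python
# Test generation as traversal of a protocol state machine, sketched on a
# simplified POP3-like session model (states and commands are illustrative).
# Each path from the initial state yields one test case whose expected
# responses would come from the formal specification.
transitions = {
    ("AUTHORIZATION", "USER"): "AUTHORIZATION",
    ("AUTHORIZATION", "PASS"): "TRANSACTION",
    ("TRANSACTION",   "STAT"): "TRANSACTION",
    ("TRANSACTION",   "RETR"): "TRANSACTION",
    ("TRANSACTION",   "QUIT"): "UPDATE",
}

def test_sequences(start, max_len):
    """Enumerate command sequences of bounded length by graph traversal."""
    stack = [(start, [])]
    while stack:
        state, path = stack.pop()
        if path:
            yield path
        if len(path) < max_len:
            for (s, cmd), nxt in transitions.items():
                if s == state:
                    stack.append((nxt, path + [cmd]))

for seq in test_sequences("AUTHORIZATION", 3):
    print(" -> ".join(seq))
```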

Nikolay Pakulin, Anastasia Tugaenko
Covering Arrays Generation Methods Survey

This paper is a survey of methods for generating covering arrays. Covering arrays are used in test data generation for program interfaces with many parameters. The effectiveness and the range of application of these methods are analysed, and the algorithms they use are analysed for their time complexity and memory usage. Combinatorial, recursive, and greedy algorithms are considered, as are several heuristics for reducing the size of covering arrays and the time needed to construct them.
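As a concrete example of one greedy strategy of the kind surveyed (AETG-style pairwise generation), the sketch below repeatedly picks the row that covers the most still-uncovered value pairs; enumerating all full-factorial rows as candidates is only feasible for this toy parameter set and is not how production tools work.

```python
# Greedy pairwise (strength-2) covering array: at each step pick the row that
# covers the most still-uncovered parameter-value pairs. Toy example only.
import itertools

def pairwise_covering_array(domains):
    idx_pairs = list(itertools.combinations(range(len(domains)), 2))
    uncovered = {((i, vi), (j, vj)) for i, j in idx_pairs
                 for vi in domains[i] for vj in domains[j]}
    all_rows = list(itertools.product(*domains))   # candidate rows (toy scale)
    rows = []
    while uncovered:
        def gain(row):
            return sum(((i, row[i]), (j, row[j])) in uncovered
                       for i, j in idx_pairs)
        best = max(all_rows, key=gain)              # row covering the most pairs
        uncovered -= {((i, best[i]), (j, best[j])) for i, j in idx_pairs}
        rows.append(best)
    return rows

# Three parameters with 3, 2 and 2 values: 16 value pairs to cover with far
# fewer rows than the 12 exhaustive combinations.
array = pairwise_covering_array([["a", "b", "c"], [0, 1], ["x", "y"]])
print(len(array), "rows:", array)
```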

Victor Kuliamin, Alexander Petukhov

Resource and Timing Analysis

A Scalable Approach for the Description of Dependencies in Hard Real-Time Systems

During the design iterations of embedded systems, the schedulability analysis is an important method to verify whether the real-time constraints are satisfied. In order to achieve a wide acceptance in industrial companies, the analysis must be as accurate as possible and as fast as possible. The system context of the tasks has to be considered in order to accomplish an exact analysis. As this leads to longer analysis times, there is a tradeoff between accuracy and runtime. This paper introduces a general approach for the description of dependencies between tasks in distributed hard real-time systems that is able to scale the exactness and the runtime of the analysis. We will show how this concept can be applied to a real automotive application.

Steffen Kollmann, Victor Pollex, Kilian Kempf, Frank Slomka
Verification of Printer Datapaths Using Timed Automata

In multiprocessor systems with many data-intensive tasks, a bus may be among the most critical resources. Typically, allocation of bandwidth to one (high-priority) task may lead to a reduction of the bandwidth of other tasks, and thereby effectively slow down these tasks. WCET analysis for these types of systems is a major research challenge. In this paper, we show how the dynamic behavior of a memory bus and a USB in a realistic printer application can be faithfully modeled using timed automata. We analyze, using Uppaal, the worst case latency of scan jobs with uncertain arrival times in a setting where the printer is concurrently processing an infinite stream of print jobs.

Georgeta Igna, Frits Vaandrager
Resource Analysis of Automotive/Infotainment Systems Based on Domain-Specific Models – A Real-World Example

High-end infotainment units for the Automotive domain have to provide a huge set of features, implemented in complex software on distributed embedded system hardware. In order to analyse the computational resources necessary for such systems, abstract models can be created which are used as a starting point for simulation and static analysis methods. In this paper, a domain-specific modeling method and derived analysis methods will be presented. A real-world scenario will be shown which could also serve as a challenging example for future tools and formal methods.

Klaus Birken, Daniel Hünig, Thomas Rustemeyer, Ralph Wittmann
Source-Level Support for Timing Analysis

Timing analysis is an important prerequisite for the design of embedded real-time systems. In order to get tight and safe bounds for the timing of a program, precise information about its control flow and data flow is needed. While actual timings can only be derived from the machine code, many of the supporting analyses (deriving timing-relevant data such as points-to and loop bound information) operate much more effectively on the source code level. At this level, they can use high-level information that would otherwise be lost during the compilation to machine code.

During the optimization stage, compilers often apply transformations, such as loop unrolling, that modify the program’s control flow. Such transformations can invalidate information derived from the source code. In our approach, we therefore apply such optimizations already at the source-code level and transform the analyzed information accordingly. This way, we can marry the goals of precise timing analysis and optimizing compilation.

In this article we present our implementation of this concept within the SATIrE source-to-source analysis and transformation framework. SATIrE forms the basis for the TuBound timing analyzer. In the ALL-TIMES EU FP7 project we extended SATIrE to exchange timing-relevant analysis data with other European timing analysis tools. In this context, we explain how timing-relevant information from the source code level can be communicated to a wide variety of tools that apply various forms of static and dynamic analysis on different levels.

Gergö Barany, Adrian Prantl
Practical Experiences of Applying Source-Level WCET Flow Analysis on Industrial Code

Code-level timing analysis, such as Worst-Case Execution Time (WCET) analysis, takes place at the binary level. However, much information that is important for the analysis, such as constraints on possible program flows, is easier to derive at the source code level, since this code contains much more information. Therefore, different source-level analyses can provide valuable support for timing analysis. However, source-level analysis is not always smoothly applicable in industrial projects. In this paper we report on the experiences of applying source-level analysis to industrial code in the ALL-TIMES FP7 project: the promises, the pitfalls, and the workarounds that were developed. We also discuss various approaches to tackling the difficulties that were encountered.

Björn Lisper, Andreas Ermedahl, Dietmar Schreiner, Jens Knoop, Peter Gliwa
Worst-Case Analysis of Heap Allocations

In object oriented languages, dynamic memory allocation is a fundamental concept. When using such a language in hard real-time systems, it becomes important to bound both the worst-case execution time and the worst-case memory consumption. In this paper, we present an analysis to determine the worst-case heap allocations of tasks. The analysis builds upon techniques that are well established for worst-case execution time analysis. The difference is that the cost function is not the execution time of instructions in clock cycles, but the allocation in bytes. In contrast to worst-case execution time analysis, worst-case heap allocation analysis is not processor dependent. However, the cost function depends on the object layout of the runtime system. The analysis is evaluated with several real-time benchmarks to establish the usefulness of the analysis, and to compare the memory consumption of different object layouts.

Wolfgang Puffitsch, Benedikt Huber, Martin Schoeberl
Partial Flow Analysis with oRange

In order to ensure that timing constraints are met for a real-time system, a bound on the Worst-Case Execution Time (WCET) of each part of the system must be known. Current WCET computation methods are applied to whole programs, which means that all of the source code must be available. However, embedded software increasingly uses COTS (Components ...), often available only as binary code. Partialisation is a way to solve this problem.

In general, static WCET analysis uses upper bounds on the number of loop iterations. oRange is our method, and its associated tool, which mainly provides loop bound values or equations and other flow-fact information. In this article, we present how partial flow analysis can be performed with oRange in order to obtain partial results for components. These partial results can then be used to compute the flow analysis in the context of a full application. Additionally, we show that the partial analysis enables us to reduce the analysis time while introducing very little pessimism.

Marianne de Michiel, Armelle Bonenfant, Clément Ballabriga, Hugues Cassé
Towards an Evaluation Infrastructure for Automotive Multicore Real-Time Operating Systems

The automotive industry is on the road to multicore and has already included supporting features in its AUTOSAR standard, yet it could not decide on a multicore resource locking protocol. It is crucial for the future acceptance and possibilities of multicore systems to allow for informed decisions on this topic, as it immediately impacts the inter-core communication performance and thereby the value-cost ratio of such systems. We present the design of a real-time operating system simulator that allows the different multicore synchronisation mechanisms of the real-time research community to be evaluated regarding their fitness for automotive hard real-time applications. The key design idea of this simulator can be reused for any simulation-based tool for the early timing evaluation of different real-time mechanisms, e.g. scheduling algorithms.

Jörn Schneider, Christian Eltges
Context-Sensitivity in IPET for Measurement-Based Timing Analysis

The Implicit Path Enumeration Technique (IPET) has become widely accepted as a powerful technique to compute upper bounds on the Worst-Case Execution Time (WCET) of time-critical software components. While the technique works fine whenever fixed execution times can be assumed for the atomic program parts, standard IPET does not consider the context-dependence of execution times. As a result, the obtained WCET bounds can often be overly pessimistic.

The issue of context-dependence has previously been addressed in the field of static timing analysis, where context-dependent execution times of program parts can be extracted from a hardware model. In the case of measurement-based execution time analysis, however, contexts must be derived from timed execution traces.

In this extended abstract we give an overview of our work on the automatic detection and exploitation of context dependencies from timed execution traces.
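For readers unfamiliar with IPET, the standard context-insensitive formulation it builds on can be summarised as the following integer linear program over basic-block costs c_i, block execution counts x_i, and edge counts y_e; this is the textbook baseline, not the context-sensitive, measurement-based variant developed in the paper.

```latex
% Textbook IPET baseline: basic blocks B_i with cost c_i and execution
% count x_i, CFG edge counts y_e, and a single program run.
\begin{align*}
\text{WCET bound} = \max\; & \sum_i c_i \, x_i\\
\text{subject to}\quad
  x_i &= \sum_{e \in \mathit{in}(B_i)} y_e = \sum_{e \in \mathit{out}(B_i)} y_e
      && \text{(flow conservation per block)}\\
  x_{\mathit{entry}} &= 1 && \text{(the program is executed once)}\\
  x_{\mathit{body}} &\le n \cdot x_{\mathit{header}} && \text{(loop bound $n$ from flow facts)}
\end{align*}
```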

Michael Zolda, Sven Bünte, Raimund Kirner
On the Role of Non-functional Properties in Compiler Verification

Work on compiler verification rarely considers non-functional properties such as time and memory consumption. This article shows that there are situations where the functional correctness of a compiler depends on non-functional properties; non-functional properties that are imposed by the target architecture, not the application. We demonstrate that this calls for a new, extended notion of compilation correctness.

Jens Knoop, Wolf Zimmermann
Backmatter
Metadata
Title
Leveraging Applications of Formal Methods, Verification, and Validation
Edited by
Tiziana Margaria
Bernhard Steffen
Copyright year
2010
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-16561-0
Print ISBN
978-3-642-16560-3
DOI
https://doi.org/10.1007/978-3-642-16561-0
