2008 | Book

Leveraging Applications of Formal Methods, Verification and Validation

Third International Symposium, ISoLA 2008, Porto Sani, Greece, October 13-15, 2008. Proceedings

Edited by: Tiziana Margaria, Bernhard Steffen

Publisher: Springer Berlin Heidelberg

Book series: Communications in Computer and Information Science

About this book

This volume contains the conference proceedings of ISoLA 2008, the Third International Symposium on Leveraging Applications of Formal Methods, Verification and Validation, which was held in Porto Sani (Kassandra, Chalkidiki), Greece during October 13–15, 2008, sponsored by EASST and in cooperation with the IEEE Technical Committee on Complex Systems. Following the tradition of its forerunners in 2004 and 2006 in Cyprus, and the ISoLA Workshops in Greenbelt (USA) in 2005 and in Poitiers (France) in 2007, ISoLA 2008 provided a forum for developers, users, and researchers to discuss issues related to the adoption and use of rigorous tools and methods for the specification, analysis, verification, certification, construction, test, and maintenance of systems from the point of view of their different application domains. Thus, the ISoLA series of events serves the purpose of bridging the gap between designers and developers of rigorous tools and users in engineering and other disciplines, and of fostering and exploiting synergetic relationships among scientists, engineers, software developers, decision makers, and other critical thinkers in companies and organizations. In particular, by providing a venue for the discussion of common problems, requirements, algorithms, methodologies, and practices, ISoLA aims at supporting researchers in their quest to improve the utility, reliability, flexibility, and efficiency of tools for building systems, and users in their search for adequate solutions to their problems.

Table of contents

Frontmatter

Invited Talks

Architecture Based Specification and Verification of Embedded Software Systems (Work in Progress)

Large-scale, software-intensive embedded systems, as found today in cars for instance, need structured techniques in terms of comprehensive architectures to master their specification, development, and verification. Comprehensive system architectures provide the appropriate levels of abstraction, separating logical from technical views. We show how the logical architecture provides a systematic focal point for specification and refinement-based development with early verification.

Manfred Broy
Information System Engineering Supporting Observation, Orientation, Decision, and Compliant Action

The majority of today’s software systems and organizational/business structures have been built on the foundation of solving problems via long-term data collection, analysis, and solution design. This traditional approach of solving problems and building corresponding software systems and business processes, falls short in providing the necessary solutions needed to deal with many problems that require agility as the main ingredient of their solution. For example, such agility is needed in responding to an emergency, in military command control, physical security, price-based competition in business, investing in the stock market, video gaming, network monitoring and self-healing, diagnosis in emergency health care, and many other areas that are too numerous to list here. The concept of Observe, Orient, Decide, and Act (OODA) loops is a guiding principal that captures the fundamental issues and approach for engineering information systems that deal with many of these problem areas. However, there are currently few software systems that are capable of supporting OODA. In this talk, we provide a tour of the research issues and state of the art solutions for supporting OODA. In addition, we provide specific examples of OODA solutions we have developed for the video surveillance and emergency response domains.

Dimitrios Georgakopoulos

Keynote

Modelling Coordination and Compensation

Transaction-based services are increasingly being applied in solving many universal interoperability problems. Exception and failure are typical phenomena of the execution of long-running transactions. To accommodate these new program features, we extend the Guarded Command Language [10] by adding compensation and coordination combinators, and enrich the standard design model [15] with new healthiness conditions. This paper shows that such an extension is a conservative one because it preserves the algebraic laws for designs, which can be used to reduce all programs to a normal form algebraically. We also explore a Galois link between the standard design model and our new model, and show that the embedding from the former to the latter is actually a homomorphism.

He Jifeng

Tools and Applications in Industrial Software Quality Control

Animating Event B Models by Formal Data Models

We present a formal approach that makes it possible to animate event B formal models. Invariants and deadlock-freeness properties are expressed and proved on these models. This paper presents an approach that suggests complementing the proof activity in the event B method with an animation activity. The obtained animator may be used to check whether the event B models fulfill user requirements, or to help the developer when describing formal event B models, particularly when defining event B invariants and guards. More precisely, event B models are translated into data models expressed in the EXPRESS formal data modeling technique. The obtained data models are instantiated and provide an animation of the original B models. Following this approach, it becomes possible to trigger event B models, which themselves trigger entity instantiation on the EXPRESS side. As a further step, we show that the B models can be used as a monitoring system raising alarms in case of incorrect system behavior. The proposed approach is operationally implemented in the B2EXPRESS tool, which handles animation of event B models. It has been experimented with for the validation of multimodal human interfaces in the context of the VERBATIM project.

Idir Ait-Sadoune, Yamine Ait-Ameur
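For readers unfamiliar with model animation, the following sketch (ours, not the B2EXPRESS tool) shows the general idea in miniature: guarded events update a state, and the model’s invariant is re-checked after every triggered event, which is essentially what animating an event B model by instantiation amounts to. All event, variable, and bound names here are invented for the example.

```python
# Toy animator for an event-B-style model: guarded events act on a state,
# and the invariant is re-checked after every event. Illustrative only.

state = {"opened": 0, "limit": 3}

def inv(s):
    # Invariant: the counter never exceeds its bound.
    return 0 <= s["opened"] <= s["limit"]

events = {
    "open":  (lambda s: s["opened"] < s["limit"],          # guard
              lambda s: s.update(opened=s["opened"] + 1)), # action
    "close": (lambda s: s["opened"] > 0,
              lambda s: s.update(opened=s["opened"] - 1)),
}

def trigger(name):
    guard, action = events[name]
    if not guard(state):
        print(f"{name}: guard false, event refused")
        return
    action(state)
    assert inv(state), f"invariant violated after {name}"
    print(f"{name}: state = {state}")

# Animate one scenario; the fourth 'open' is refused at the bound.
for ev in ["open", "open", "open", "open", "close"]:
    trigger(ev)
```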
Automated Formal Testing of C API Using T2C Framework

The problem of automated test development for checking the basic functionality of application programming interfaces (APIs) is discussed. Different technologies and corresponding tools are surveyed, and the T2C technology developed at ISPRAS is presented. The technology and associated tools facilitate the development of “medium quality” (and “medium cost”) tests. An important feature of T2C is that it enforces that each check in a developed test is explicitly linked to the corresponding place in the standard; the T2C tools provide convenient means to create such links. The results of using T2C are illustrated by a project for testing the interfaces of Linux system libraries defined by the LSB standard.

Alexey V. Khoroshilov, Vladimir V. Rubanov, Eugene A. Shatokhin

Introduction of Multi-core Systems in Automotive Applications

Tailoring and Optimising Software for Automotive Multicore Systems

The software architecture of embedded systems is heavily influenced by limitations of the underlying hardware. Additionally, real-time requirements constrain the design of applications. On the other hand, embedded systems implement specific functionalities and hence give the designer the opportunity to optimize the system despite these limitations. Multicore systems compromise the predictability of real-time behaviour. Again, with knowledge of the application, the software design can benefit from the multicore architecture. This paper discusses how to base software design decisions on use-cases and shows, by means of an example, new avenues for implementing the design efficiently.

Torsten Polle, Michael Uelschen
Fault Handling Approaches on Dual-Core Microcontrollers in Safety-Critical Automotive Applications

The number of safety-critical applications is increasing in the automotive domain. Accordingly, requirements given by recent safety standards have to be met in these applications. These requirements include a demonstration of sufficient measures for the handling of permanent and transient hardware faults. Moreover, a consideration of software faults is required. In this work, approaches based on dual-core microcontrollers are investigated with respect to their fault handling capabilities. To this end, function monitoring architectures, which are based on a supervision of the implemented function, and generic architectures, which monitor the hardware executing the application, are compared. This comparison is then further illustrated by an application example. Summarizing, both approaches come with their specific advantages and disadvantages, which should be considered during the development of the functional safety concept.

Eva Beckschulze, Falk Salewski, Thomas Siegbert, Stefan Kowalewski
Timing Validation of Automotive Software

Embedded hard real-time systems need reliable guarantees for the satisfaction of their timing constraints. During the last years sophisticated analysis tools for timing analysis at the code level, controller level and networked system level have been developed. This trend is exemplified by two tools: AbsInt’s timing analyzer aiT, and Symtavision’s SymTA/S. aiT determines safe upper bounds for the execution times (WCETs) of non-interrupted tasks. SymTA/S computes the worst-case response times (WCRTs) of an entire system from the task WCETs and from information about possible interrupts and their priorities. A seamless integration between both tools provides for a holistic approach to timing validation: starting from a system model, a designer can perform timing budgeting, performance optimization and timing verification, thus covering both the code and the system aspects. However, the precision of the results and the efficiency of the analysis methods are highly dependent on the predictability of the execution platform. Especially on multi-core architectures this aspect becomes of critical importance. This paper describes an industry-strength tool flow for timing validation, and discusses prerequisites at the hardware level for ascertaining high analysis precision.

Daniel Kästner, Reinhard Wilhelm, Reinhold Heckmann, Marc Schlickling, Markus Pister, Marek Jersak, Kai Richter, Christian Ferdinand

Model-Driven SOA

Towards Using Reo for Compliance-Aware Business Process Modeling

Business process modeling and implementation of process supporting infrastructures are two challenging tasks that are not fully aligned. On the one hand, languages such as the Business Process Modeling Notation (BPMN) exist to capture business processes at the level of domain analysis. On the other hand, programming paradigms and technologies such as Service-Oriented Computing (SOC) and web services have emerged to simplify the development of the distributed web systems that underlie business processes. BPMN is the most recognized language for specifying process workflows at the early design steps. However, it is rather declarative and may lead to executable models that are incomplete or semantically erroneous. Therefore, an approach for expressing and analyzing BPMN models in a formal setting is required. In this paper we describe how BPMN diagrams can be represented by means of a semantically precise channel-based coordination language called Reo, which admits formal analysis using model checking and bisimulation techniques. Moreover, since additional requirements may come from various regulatory/legislative documents, we discuss the opportunities offered by Reo and its mathematical abstractions for expressing process-related constraints such as Quality of Service (QoS) or time-aware conditions on process states.

Farhad Arbab, Natallia Kokash, Sun Meng
On the Risk Management and Auditing of SOA Based Business Processes

SOA-enabled business processes stretch across many cooperating and coordinated systems, possibly crossing organizational boundaries, and technologies like XML and Web services are used for making system-to-system interactions commonplace. Business processes form the foundation for all organizations, and as such, are impacted by industry regulations. This requires organizations to review their business processes and ensure that they meet the compliance standards set forth in legislation. In this paper we sketch a SOA-based service risk management and auditing methodology, including a compliance enforcement and verification system that assures verifiable business process compliance. This is done on the basis of a knowledge-based system that allows integrating internal control systems into business processes in conformance with pre-defined compliance rules, monitoring both the normal process behavior and that of the control systems during process execution, and logging these behaviors to facilitate retrospective auditing.

Bart Orriens, Willem-Jan v/d Heuvel, Mike Papazoglou
SCA and jABC: Bringing a Service-Oriented Paradigm to Web-Service Construction

Extensibility, flexibility, easy maintainability, and long-term robustness are core requirements for modern, highly distributed information and computation systems. Such systems in turn show a steady increase in complexity. In pursuit of these goals, software engineering has seen a rapid evolution of architectural paradigms aiming towards increasingly modular, hierarchical, and compositional approaches: object-orientation, component orientation, middleware components, product-lines, and, recently, service orientation.

We compare two approaches towards a service-oriented paradigm, the Service Component Architecture (SCA) and the jABC.

Georg Jung, Tiziana Margaria, Ralf Nagel, Wolfgang Schubert, Bernhard Steffen, Horst Voigt

Applications of Formal Approaches to Service-Oriented Computing

A Use-Case Driven Approach to Formal Service-Oriented Modelling

We put forward a use-case based approach for SRML – a formal framework that is being defined by the SENSORIA consortium for service-oriented modelling. We expand on the way SRML contributes to the engineering of software systems and we propose a number of extensions to the UML for supporting that approach. We use a mortgage brokerage scenario for illustrating our approach.

Laura Bocchi, José Luiz Fiadeiro, Antónia Lopes
Sensoria Patterns: Augmenting Service Engineering with Formal Analysis, Transformation and Dynamicity

The IST-FET Integrated Project Sensoria is developing a novel comprehensive approach to the engineering of service-oriented software systems where foundational theories, techniques and methods are fully integrated into pragmatic software engineering processes. The techniques and tools of Sensoria encompass the whole software development cycle, from business and architectural design, to quantitative and qualitative analysis of system properties, and to transformation and code generation. The Sensoria approach also takes into account reconfiguration of service-oriented architectures (SOAs) and re-engineering of legacy systems.

In this paper we first give a short overview of Sensoria and then present a pattern language for augmenting service engineering with formal analysis, transformation and dynamicity. The patterns are designed to help software developers choose appropriate tools and techniques to develop service-oriented systems with support from formal methods. They support the whole development process, from the modelling stage to deployment activities, and give an overview of many of the research areas pursued in the Sensoria project.

Martin Wirsing, Matthias Hölzl, Lucia Acciai, Federico Banti, Allan Clark, Alessandro Fantechi, Stephen Gilmore, Stefania Gnesi, László Gönczy, Nora Koch, Alessandro Lapadula, Philip Mayer, Franco Mazzanti, Rosario Pugliese, Andreas Schroeder, Francesco Tiezzi, Mirco Tribastone, Dániel Varró
Safety and Response-Time Analysis of an Automotive Accident Assistance Service

In the present paper we assess both the safety properties and the response-time profile of a subscription service which provides medical assistance to drivers who are injured in vehicular collisions. We use both timed and untimed process calculi cooperatively to perform the required analysis. The formal analysis tools used are hosted on a high-level modelling platform with support for scripting and orchestration which enables users to build custom analysis processes from the general-purpose analysers which are hosted as services on the platform.

Ashok Argent-Katwala, Allan Clark, Howard Foster, Stephen Gilmore, Philip Mayer, Mirco Tribastone
A Framework for Analyzing and Testing the Performance of Software Services

Networks “Beyond the 3rd Generation” (B3G) are characterized by mobile and resource-limited devices that communicate through different kinds of network interfaces. Software services deployed in such networks shall adapt themselves according to possible execution contexts and requirement changes. At the same time, software services have to be competitive in terms of the Quality of Service (QoS) provided, or perceived by the end user.

The PLASTIC project proposes an integrated model-based solution to the development and maintenance of services deployable over B3G networks. Notably, the PLASTIC solution includes formal techniques that combine predictive and empirical evaluation of QoS-aware services.

In this paper we provide an overview of the PLASTIC approach to the assessment of QoS properties. Referring to a complex eHealth service, we first generate and analyze performance models to establish requirements for stand-alone services. Then we use an empirical technique to test the QoS of an orchestration of services even when the actual implementations of the orchestrated services are not available.

Antonia Bertolino, Guglielmo De Angelis, Antinisca Di Marco, Paola Inverardi, Antonino Sabetta, Massimo Tivoli
A Framework for Contract-Policy Matching Based on Symbolic Simulations for Securing Mobile Device Application

There is growing interest in programming models based on the notion of contract. In particular, in the security realm one can imagine a situation where downloaded code or a software service exposes its security-relevant behavior in a contract (that must be fulfilled). Assuming a mechanism is already in place to ensure that the program/service adheres to the contract, it only remains to check that the contract matches the user’s security policy. We refer to this testing procedure as contract-policy matching.

We specialize this framework to mobile devices. The contract and the user policy are formally expressed using (symbolic) transition systems. Contract-policy matching then amounts to simulation checking, i.e., checking that the contract transition system is simulated by the policy one. This means that for each transition corresponding to a certain security action of the contract (and so possibly performed by the program), the policy system has a similar transition, and the resulting contract system is again simulated by the resulting policy one.

After showing some running examples, we eventually present an implementation of the simulation-matching algorithm, developed in J2ME and suitable to run also on smart phones.

Paolo Greci, Fabio Martinelli, Ilaria Matteucci
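The simulation check at the heart of contract-policy matching can be stated very compactly. The sketch below (our illustration under invented state/action names, not the paper’s symbolic algorithm) computes the standard simulation preorder by greatest-fixpoint refinement over explicit transition systems.

```python
# Minimal simulation check between two labelled transition systems: the
# contract matches iff the policy simulates it. Systems are dicts
# state -> list of (action, successor).

def simulates(policy, contract, p0, c0):
    # Start from the full relation and drop pairs (c, p) where some
    # contract move cannot be answered by a policy move into the relation.
    rel = {(c, p) for c in contract for p in policy}
    changed = True
    while changed:
        changed = False
        for (c, p) in set(rel):
            for act, c2 in contract[c]:
                if not any(a == act and (c2, p2) in rel
                           for a, p2 in policy[p]):
                    rel.discard((c, p))
                    changed = True
                    break
    return (c0, p0) in rel

# Contract: may send repeatedly after connecting. Policy: allows that.
contract = {"c0": [("connect", "c1")], "c1": [("send", "c1")]}
policy   = {"p0": [("connect", "p1")], "p1": [("send", "p1"),
                                              ("disconnect", "p0")]}
print(simulates(policy, contract, "p0", "c0"))  # True: policy permits the contract
```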

Trustworthy Computing: Theories, Methods, Tools and Experience in China and South East Asia

ASERE: Assuring the Satisfiability of Sequential Extended Regular Expressions

One purpose of Property Assurance is to check the satisfiability of properties. Sequential Extended Regular Expressions (SEREs) play an important role in composing PSL properties. SEREs are regular expressions with repetition and conjunction. Current assurance methods for LTL formulas are not applicable to SEREs.

In this paper, we present a method for checking the satisfiability of SEREs. We propose an extension of Alternating Finite Automata with internal transitions and logs of universal branches (IAFA). The new representation enables memoryful synchronization of parallel words. The compilation from SEREs to IAFAs is linear in space. An algorithm and two optimizations are proposed for searching for satisfying words of SEREs. They reduce the stepwise search space to the product of the universal branches’ guard sets. Experiments confirm their effectiveness.

Naiyong Jin, Huibiao Zhu
Computing Must and May Alias to Detect Null Pointer Dereference

This paper presents a novel algorithm to detect null pointer dereference errors. The algorithm utilizes both must and may alias information in a compact way to improve the precision of the detection. Using may alias information obtained by a fast flow- and context-insensitive analysis, we compute the must aliases generated by the assignment statements, and the must alias information is in turn used to improve the precision of the may aliases. Must alias information lets us strongly update more expressions, which reduces the false positives of the null pointer dereference detection. We have implemented our algorithm in the SUIF2 compiler infrastructure and the experimental results are as expected.

Xiaodong Ma, Ji Wang, Wei Dong
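The must/may distinction the paper exploits can be illustrated on a toy language. In the sketch below (our simplification, not the paper’s alias algorithm, which works on real C programs in SUIF2), facts that hold on all paths (“must”) justify strong updates and definite error reports, while path-dependent facts (“may”) only yield warnings; all statement forms are invented.

```python
# Toy flow-sensitive null-dereference checker combining must and may facts.

def check(stmts):
    must_null, may_null = set(), set()
    for i, (op, *args) in enumerate(stmts):
        if op == "null":            # x = NULL
            x, = args
            must_null.add(x); may_null.add(x)
        elif op == "new":           # x = malloc(...), assumed non-null
            x, = args
            must_null.discard(x); may_null.discard(x)   # strong update
        elif op == "copy":          # x = y
            x, y = args
            (must_null.add if y in must_null else must_null.discard)(x)
            (may_null.add  if y in may_null  else may_null.discard)(x)
        elif op == "phi":           # x = y or z, e.g. after an if-else join
            x, y, z = args
            if y in must_null and z in must_null: must_null.add(x)
            else: must_null.discard(x)
            if y in may_null or z in may_null: may_null.add(x)
            else: may_null.discard(x)
        elif op == "deref":         # *x
            x, = args
            if x in must_null:   print(f"{i}: definite null deref of {x}")
            elif x in may_null:  print(f"{i}: possible null deref of {x}")

check([("null", "p"), ("copy", "q", "p"), ("new", "p"),
       ("deref", "p"),            # safe after the strong update
       ("phi", "r", "p", "q"),    # r is p or q depending on the path
       ("deref", "r"),            # possible: null on one path only
       ("deref", "q")])           # definite: q still holds the old null
```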
A Partial Order Reduction Technique for Parallel Timed Automaton Model Checking

We propose a partial order reduction technique for timed automaton model checking in this paper. We first show that the symbolic successors w.r.t. partial order paths can be computed using DBMs. An algorithm is presented to compute such successors incrementally. This algorithm can avoid splitting the symbolic states because of the enumeration order of independent transitions. A reachability analysis algorithm based on this successor computation algorithm is presented. Our technique can be combined with some static analysis techniques in the literature. Furthermore, we present a rule to avoid exploring all enabled transitions, so that the space requirements of model checking are further reduced.

Zhao Jianhua, Wang Linzhang, Li Xuandong
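For context, a difference bound matrix (DBM) is the standard symbolic representation of clock zones that the paper’s successor computation builds on. The sketch below is a minimal DBM with the basic zone operations, assuming non-strict bounds only (real DBMs also track strictness); it is background illustration, not the paper’s incremental algorithm.

```python
# Minimal DBM: entry d[i][j] encodes x_i - x_j <= d[i][j]; clock 0 is the
# constant reference clock with value 0.
INF = float("inf")

def canonical(d):
    # Floyd-Warshall tightening: close the set of difference bounds.
    n = len(d)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

def delay(d):
    # Let time elapse: remove the upper bounds x_i - x_0 <= c.
    for i in range(1, len(d)):
        d[i][0] = INF
    return d

def reset(d, k):
    # x_k := 0, i.e. x_k now behaves exactly like the reference clock.
    for i in range(len(d)):
        d[k][i], d[i][k] = d[0][i], d[i][0]
    return d

def constrain(d, i, j, c):
    # Intersect with the guard x_i - x_j <= c, then re-canonicalize.
    d[i][j] = min(d[i][j], c)
    return canonical(d)

# One clock x (index 1), starting at x = 0: symbolic successor of
# "delay, then take a transition guarded by x <= 5 that resets x".
z = [[0, 0], [0, 0]]          # the zone x = 0
z = delay(z)                  # x >= 0, unbounded above
z = constrain(z, 1, 0, 5)     # guard x <= 5
z = reset(z, 1)               # reset x
print(z)                      # [[0, 0], [0, 0]]: back to the zone x = 0
```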
Program Verification by Reduction to Semi-algebraic Systems Solving

The discovery of invariants and ranking functions plays a central role in program verification. In our previous work, we investigated invariant generation and non-linear ranking function discovering of polynomial programs by reduction to semi-algebraic systems solving. In this paper we will first summarize our results on the two topics and then show how to generalize the approach to discovering more expressive invariants and ranking functions, and applying to more general programs.

Bican Xia, Lu Yang, Naijun Zhan
Debugging Statecharts Via Model-Code Traceability

Model-driven software development involves constructing behavioral models from informal English requirements. These models are then used to guide software construction. The compilation of behavioral models into software is the topic of many existing research works. There also exist a number of UML-based modeling tools which support such model compilation. In this paper, we show how Statechart models can be validated/debugged by (a) generating code from the Statechart models, (b) employing established software debugging methods like program slicing on the generated code, and (c) relating the program slice back to the Statechart level. Our study is presented concretely in terms of dynamic slicing of Java code produced from Statechart models. The slice produced at the code level is mapped back to the model level for enhanced design comprehension. We use the open-source JSlice tool for dynamic slicing of Java programs in our experiments. We present results on a wide variety of real-life control systems which are modeled as Statecharts (from the informal English requirements) and debugged using our methodology. We feel that our debugging methodology fits in well with design flows in model-driven software development.

Liang Guo, Abhik Roychoudhury
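The code-level step that gets mapped back to the Statechart can be seen in a small backward dynamic slice. The sketch below is our simplification over an invented trace format: it follows only data dependences, whereas JSlice, used in the paper, also tracks control dependences.

```python
# Back-of-the-envelope dynamic slicer over an execution trace. Each trace
# entry is (line, variables_defined, variables_used); the slice collects,
# walking backwards, every entry the slicing criterion depends on.

def dynamic_slice(trace, criterion_var):
    relevant = {criterion_var}     # variables whose last definition we need
    slice_lines = []
    for line, defs, uses in reversed(trace):
        if relevant & set(defs):
            slice_lines.append(line)
            relevant -= set(defs)  # the defining statement is found...
            relevant |= set(uses)  # ...now we need what it used
    return list(reversed(slice_lines))

# Trace of: a=1; b=2; c=a+b; b=7; print(c)  -- slicing on c at the end.
trace = [(1, ["a"], []), (2, ["b"], []), (3, ["c"], ["a", "b"]),
         (4, ["b"], []), (5, [], ["c"])]
print(dynamic_slice(trace, "c"))   # [1, 2, 3]: line 4 is irrelevant to c
```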
Model Checking CSP Revisited: Introducing a Process Analysis Toolkit

FDR, initially introduced decades ago, is the de facto analyzer for Communicating Sequential Processes (CSP). Model checking techniques have evolved rapidly since then. This paper describes Pat, a process analysis toolkit which complements FDR in several aspects. Pat is designed to analyze event-based compositional system models specified using CSP as well as shared variables and asynchronous message passing. It supports automated refinement checking, model checking of LTL extended with events, etc. In this paper, we highlight how partial order reduction is applied to improve refinement checking in Pat. Experimental results show that Pat outperforms FDR in some cases.

Jun Sun, Yang Liu, Jin Song Dong
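The kind of question Pat and FDR answer can be shown in miniature. The following sketch (ours, under invented process names) checks trace refinement — every implementation trace must also be a specification trace — by determinizing the specification on the fly via subset construction; it applies no partial order reduction, which is the paper’s actual contribution.

```python
# Bare-bones trace-refinement check over explicit transition systems:
# state -> list of (event, successor).
from collections import deque

def refines(impl, spec, i0, s0):
    seen = {(i0, frozenset([s0]))}
    todo = deque(seen)
    while todo:
        i, ss = todo.popleft()
        for act, i2 in impl[i]:
            ss2 = frozenset(s2 for s in ss
                            for a, s2 in spec[s] if a == act)
            if not ss2:
                return False, f"impl does '{act}' where spec refuses it"
            if (i2, ss2) not in seen:
                seen.add((i2, ss2)); todo.append((i2, ss2))
    return True, "all impl traces are spec traces"

spec = {"s0": [("req", "s1")], "s1": [("ack", "s0")]}
impl = {"t0": [("req", "t1")], "t1": [("ack", "t0"), ("req", "t1")]}
print(refines(impl, spec, "t0", "s0"))  # fails: impl allows two reqs in a row
```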
Formal Use of Design Patterns and Refactoring

Design patterns have been used very effectively in object-oriented design for a long time. Refactoring is also widely used for producing better maintainable and reusable designs and programs. In this paper, we investigate how design patterns and refactoring rules can be used in a formal method by formulating and presenting them as refinement laws in the calculus of refinement of component and object-oriented systems known as rCOS. We also combine refactoring and design patterns to provide some big-step rules of pattern-directed refactoring.

Long Quan, Qiu Zongyan, Zhiming Liu
A Component-Based Access Control Monitor

Control of access to information is increasingly becoming necessary as the systems managing this information become more and more open and available through insecure networks. Integrating an access control monitor within a large system is a complex task, since it has to be an “all or nothing” integration; the least error or mistake could jeopardize the whole system. We present a formal specification of an access control monitor using the calculus of refinement of component and object systems (rCOS). We illustrate this implementation with the well-known Role Based Access Control (RBAC) policy and show how to integrate it within a larger system.

Keywords: Component, Access Control, RBAC, Composition

Zhiming Liu, Charles Morisset, Volker Stolz

Non-functional Requirements in Embedded Systems

Navigating the Requirements Jungle

Research on validation and verification of requirements specifications has thus far focused on functional properties. Yet, in embedded systems, functional requirements constitute only a small fraction of the properties that must hold to guarantee proper and safe operation of the system under design.

In this paper we try to shine some light on the kinds of requirements occurring in current embedded systems design processes. We present a set of categories together with real-life examples. For each of them, we briefly describe possible approaches towards formal modeling and automated verification of the respective properties.

Boris Langer, Michael Tautschnig
Non-functional Avionics Requirements

Embedded systems in aerospace become more and more integrated in order to reduce the weight, volume/size, and power of hardware for more fuel-efficiency. Such integration tendencies change architectural approaches of system architectures, which subsequently change non-functional requirements for platforms. This paper provides some insight into the state of the practice of non-functional requirements for developing ultra-critical embedded systems in the aerospace industry, including recent changes and trends. In particular, formal requirement capture and formal analysis of non-functional requirements of avionic systems – including hard real-time, fault-tolerance, reliability, and performance – are exemplified by means of recent developments in SAL and HiLiTE.

Michael Paulitsch, Harald Ruess, Maria Sorea
A Simulation Approach for Performance Validation during Embedded Systems Design

Due to the time-to-market pressure, it is highly desirable to design hardware and software of embedded systems in parallel. However, hardware and software are developed mostly using very different methods, so that performance evaluation and validation of the whole system is not an easy task. In this paper, we propose a simulation approach to bridge the gap between model-driven software development and simulation based hardware design, by merging hardware and software models into a SystemC based simulation environment. An automated procedure has been established to generate software simulation models from formal models, while the hardware design is originally modeled in SystemC. As the simulation models are annotated with timing information, performance issues are tackled in the same pass as system functionality, rather than in a dedicated approach.

For designing real-time systems, although performance evaluation based on simulation cannot provide guarantees of safety, it can provide realistic performance values to validate whether the performance requirements are really satisfied or not and show how pessimistic the static analysis is. Further, the simulative approach is also able to provide the developers an insight into the system architecture to help find bottlenecks of the system. We use the simulative approach as a complement of static analysis and combine them in an integral development cycle.

Zhonglei Wang, Wolfgang Haberl, Andreas Herkersdorf, Martin Wechs
Optimizing Automatic Deployment Using Non-functional Requirement Annotations

Model-driven development has become common practice in design of safety-critical real-time systems. High-level modeling constructs help to reduce the overall system complexity apparent to developers. This abstraction caters for fewer implementation errors in the resulting systems. In order to retain correctness of the model down to the software executed on a concrete platform, human faults during implementation must be avoided. This calls for an automatic, unattended deployment process including allocation, scheduling, and platform configuration.

In this paper we introduce the concept of a systems compiler, using non-functional requirements (NFRs) as guidance for the deployment of real-time systems. The postulated requirements are then used to optimize the allocation decision, i.e., the process of mapping model entities to available computing nodes, as well as the subsequent generation of schedules.

Stefan Kugele, Wolfgang Haberl, Michael Tautschnig, Martin Wechs
Experiences with Evolutionary Timing Test of Automotive Software Components

This paper reports our experiences in estimating the worst case execution time (WCET) of automotive software components with evolutionary testing (ET). The concept maximizes the runtime of software components (SWCs) with internal states by evolving the applied test sequences. We show that the use of timing tests is strongly facilitated by the automotive architecture framework AUTOSAR. A problem of the testing concept is the high temporal effort that comes along with measuring the execution time of a test sequence on the target hardware. An analysis of the evolutionary testability shows that the high number of input parameters makes it hard to find the maximum execution time. The WCET estimates obtained with genetic algorithms (GAs) are inferior to the results of random testing: GAs run into local optima in the case of flat execution time profiles, whereas random testing keeps searching globally. Random testing is in turn outperformed by extended GAs which are adaptive to the underlying optimization problem.

Florian Franz
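The basic evolutionary-testing loop the paper evaluates looks roughly as follows. This is our sketch: the “component” is a stand-in function whose runtime depends on its inputs, whereas on real AUTOSAR targets the fitness would be a hardware time measurement of the SWC; all parameters are invented.

```python
# Small genetic algorithm evolving input sequences to maximize a
# (surrogate) execution-time measurement.
import random

def run_time(seq):                     # stand-in for a timing measurement
    return sum(x * x % 97 for x in seq) + (50 if seq[0] == 7 else 0)

def evolve(pop_size=20, length=8, gens=40):
    pop = [[random.randrange(100) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=run_time, reverse=True)       # fitness = runtime
        survivors = pop[:pop_size // 2]
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, length)      # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.3:              # mutation
                child[random.randrange(length)] = random.randrange(100)
            children.append(child)
        pop = survivors + children
    best = max(pop, key=run_time)
    return best, run_time(best)

seq, t = evolve()
print(f"worst-case candidate {seq} with estimated time {t}")
```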
Measurement-Based Timing Analysis

In this paper we present a measurement-based worst-case execution time (WCET) analysis method. Exhaustive end-to-end execution-time measurements are computationally intractable in most cases. Therefore, we propose to measure execution times of subparts of the application code and then compose these times into a safe WCET bound.

This raises a number of challenges to be solved. First, there is the question of how to define and subsequently calculate adequate subparts. Second, a huge amount of test data is required enforcing the execution of selected paths to perform the desired runtime measurements.

The presented method provides solutions to both problems. In a number of experiments we show the usefulness of the theoretical concepts and the practical feasibility by using current state-of-the-art industrial case studies from project partners.

Ingomar Wenzel, Raimund Kirner, Bernhard Rieder, Peter Puschner
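The composition step — turning measured times of code subparts into an end-to-end bound — reduces, for acyclic control flow, to a longest-path computation. The sketch below is our illustration with an invented CFG and invented per-block measurements, not the paper’s method of deriving the subparts or the test data.

```python
# Compose per-basic-block measured times into an end-to-end WCET bound:
# take the maximum observed time per block, then the longest path through
# the (acyclic) control-flow graph.
import functools

# Maximum observed execution time per basic block (cycles).
block_wcet = {"entry": 4, "then": 12, "else": 7, "join": 5}

# Acyclic CFG: block -> successors.
cfg = {"entry": ["then", "else"], "then": ["join"],
       "else": ["join"], "join": []}

@functools.cache
def wcet(block):
    # Longest path from this block to the exit, weighted by block times.
    succs = cfg[block]
    return block_wcet[block] + (max(map(wcet, succs)) if succs else 0)

print(wcet("entry"))   # 4 + max(12, 7) + 5 = 21 cycles
```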
ALL-TIMES – A European Project on Integrating Timing Technology

ALL-TIMES is a research project within the EC 7th Framework Programme. The project concerns embedded systems that are subject to safety, availability, reliability, and performance requirements. Increasingly, these requirements relate to correct timing. Consequently, the need for appropriate timing analysis methods and tools is growing rapidly. An increasing number of sophisticated and technically mature timing analysis tools and methods are becoming available commercially and in academia. However, tools and methods have historically been developed in isolation, and the potential users lack process-related and continuous tool and methodology support. Due to this fragmentation, the timing analysis tool landscape does not yet fully exploit its potential.

The ALL-TIMES project aims at: combining independent research results into a consistent methodology, integrating available timing tools into a single framework, and developing new timing analysis methods and tools where appropriate.

ALL-TIMES will enable interoperability of the various tools from leading commercial vendors and universities alike, and develop integrated tool chains using as well as creating open tool frameworks and interfaces. In order to evaluate the tool integrations, a number of industrial case studies will be performed. This paper describes the aims of the ALL-TIMES project, the partners, and the planned work.

Jan Gustafsson, Björn Lisper, Markus Schordan, Christian Ferdinand, Peter Gliwa, Marek Jersak, Guillem Bernat

Processes, Methods and Tools for Developing Educational Modules to Support Teaching and Technology Transfer

Weaving a Formal Methods Education with Problem-Based Learning

The idea of weaving formal methods through computing (or software engineering) degrees is not a new one. However, there has been little success in developing and implementing such a curriculum. Formal methods continue to be taught as stand-alone modules and students, in general, fail to see how fundamental these methods are to the engineering of software. A major problem is one of motivation — how can the students be expected to enthusiastically embrace a challenging subject when the learning benefits, beyond passing an exam and achieving curriculum credits, are not clear? Problem-based learning has gradually moved from being an innovative pedagogic technique, commonly used to better motivate students, to being widely adopted in the teaching of many different disciplines, including computer science and software engineering. Our experience shows that a good problem can be re-used throughout a student’s academic life. In fact, the best computing problems can be used with children (young and old), undergraduates and postgraduates. In this paper we present a process for weaving formal methods through a University curriculum that is founded on the application of problem-based learning and a library of good software engineering problems, where students learn about formal methods without sitting a traditional formal methods module. The process of constructing good problems and integrating them into the curriculum is shown to be analogous to the process of engineering software. This approach is not intended to replace more traditional formal methods modules: it will better prepare students for such specialised modules and ensure that all students have an understanding and appreciation for formal methods even if they do not go on to specialise in them.

J Paul Gibson
Encouraging the Uptake of Formal Methods Training in an Industrial Context
(Extended Abstract)

I recently had occasion to revisit a collection of papers edited by myself and Jonathan Bowen, published way back in 1995. The collection, Applications of Formal Methods [1], sprung from the obvious need in the formal methods community for detailed examples, insights from industrial best practice, and experience reports.

Michael G. Hinchey
Computer-Supported Collaborative Learning with Mind-Maps

Collaborative learning comprises various approaches in education that involve a joint intellectual effort by students, or by students and teachers. It is opposed to the traditional ‘direct transmission’ model, in which learners are assumed to be passive, receptive, isolated receivers of knowledge and skills delivered by an external source. Today, Computer-Supported Collaborative Learning (CSCL) is actively developed to support the collaborative learning process with the help of modern information and communication technologies. Mind-maps are a widely known learning technique which can be used in CSCL. In this paper we present a new Web-based mind-mapping tool, Comapping, which provides a wide variety of ways to organize collaborative processes in education. We also describe our experiments with applying the CSCL paradigm using Comapping while teaching Software Engineering at Saint Petersburg State University.

Dmitrij Koznov, Michel Pliskin
Agile IT: Thinking in User-Centric Models

We advocate a new teaching direction for modern CS curricula: extreme model-driven development (XMDD), a new development paradigm designed to continuously involve the customer/application expert throughout the whole system life cycle. Based on the ‘One-Thing Approach’, which works by successively enriching and refining one single artifact, system development becomes in essence a user-centric orchestration of intuitive service functionality. XMDD differs radically from classical software development, which, in our opinion, is no longer adequate for the bulk of application programming – in particular when it comes to heterogeneous, cross-organizational systems which must adapt to rapidly changing market requirements. Thus there is a need for new curricula addressing this model-driven, lightweight, and cooperative development paradigm, which puts the user process in the center of the development and the application expert in control of the process evolution.

Tiziana Margaria, Bernhard Steffen
Specialization and Instantiation Aspects of a Standard Process for Developing Educational Modules

Educational modules can be seen as relevant mechanisms to improve learning processes in general. The goal is to produce quality educational products, capable of motivating the learners and effectively contributing to their knowledge construction process. Despite their relevance, none of the initiatives addressing the problem of creating educational modules considers a systematic process for developing them. The establishment of a well-defined set of guidelines and supporting mechanisms should ease the distributed and cooperative work to create, reuse and evolve educational modules, taking also into account the impact on the learning process. In this work we present a standardized process we have established aiming at creating well-designed, highly flexible and configurable educational modules. We focus on the aspects of process specialization and instantiation, illustrating the practical application of the instantiated process by the development of an educational module for teaching the fundamentals of software testing. In particular, the availability of learning facilities, allied to the development of testing tools, should facilitate the apprenticeship of specific testing theories and skills, promoting better dissemination conditions for the practical evaluation and application of testing strategies, both in academic and industrial settings. The produced module has been applied and preliminarily evaluated in terms of learning effectiveness. The results obtained give us some evidence of the practical use of the standard process as a supporting mechanism for the development of effective educational modules.

Ellen Francine Barbosa, José Carlos Maldonado

Ubiquitous and Context Aware Systems

A Formal Framework for Modeling Context-Aware Behavior in Ubiquitous Computing

A formal framework to contextualize ontologies, proposed in [3], provides several ways of composing ontologies, contexts or both. The proposed algebra can be used to model applications in which the meaning of an entity depends on environment constraints or where dynamic changes in the environment have to be considered. In this article we use this algebra to formalize the problem of interpreting context information in ubiquitous systems, based on a concrete scenario. The main goal is to verify, on the one hand, how the formal approach can contribute to a better understanding of the fundamental concepts of ubiquitous computing and, on the other hand, whether this formal framework is flexible and rich enough to adequately express specific characteristics of the concrete application domain and scenario.

Isabel Cafezeiro, José Viterbo, Alexandre Rademaker, Edward Hermann Haeusler, Markus Endler
Contexts and Context Awareness in View of the Diagram Predicate Framework

The paper presents a formal model of entities and of contexts with entry points, and a formal description of the scenarios an entity enters a context and an entity works in a context. Our model is defined within the Diagram Predicate Framework and is therefore automatically generic, since any instantiation of the framework results in a special modeling technique, including schema-based, graph-based, object-oriented or ontology-based techniques.

Uwe Wolter, Zinovy Diskin
The Use of Adaptive Semantic Hypermedia for Ubiquitous Collaboration Systems

Adaptation techniques often consider applications developed for single users and static scenarios. With the evolution of collaborative environments and the widespread use of mobile technologies, adaptation must take into account the use of systems at any time, anywhere and in a collaborative way. This work suggests a way to extend the Adaptive Semantic Hypermedia Design Model to include these new dimensions.

Patricia Seefelder de Assis, Daniel Schwabe
The Use of Formal Ontology to Specify Context in Ubiquitous Computing

Although context-awareness is a central paradigm for the implementation of ubiquitous systems, it still lacks adequate representation models, methods and tools that support the development of such systems. Particularly, in order to secure interoperability and allow device interaction, software applications are required to provide unambiguous data and device representation models. In this paper we argue in favor of the use of formal ontology as the tool to formalize the notion of context, describe the interplay between systems and environments and, ultimately, enable verification. Ontologies allow machines to process and integrate devices intelligently, enable quick and accurate search, facilitate communication between a multitude of heterogeneous devices and enable reasoning [22].

Karin K. Breitman, Michael G. Hinchey
High Service Availability in MaTRICS for the OCS

Internet services are becoming an integral part of our daily environment. Nobody sees the background configuration of the services and their backend systems, but it is expected that critical services, like flight booking systems, online banking, etc., are resilient against server and service faults. Therefore those services often run on several machines in an Active/Standby configuration in a clustered mode.

We present how we realize remote configuration and fault tolerance of the Online Conference Service (OCS) with our service-oriented framework MaTRICS. Our solution leaves the services untouched and uses the cluster management software heartbeat to provide high availability.

Markus Bajohr, Tiziana Margaria
Supporting Requirements Definition and Quality Assurance in Ubiquitous Software Project

The development of ubiquitous software projects demands the use of specific software technologies to deal with the inherent complexity of this type of project. Despite the advances in the software engineering field, the building of ubiquitous software still represents a grand challenge. For instance, secondary and primary studies indicated the existence of 13 ubiquity characteristics that can influence ubiquitous software projects. Therefore, in this paper we describe these ubiquity characteristics, organized into a body of knowledge regarding ubiquitous computing and used to characterize ubiquitous software projects. In addition, ongoing research concerned with supporting ubiquity requirements definition and verification (checklist-based inspection) activities is also introduced.

Rodrigo O. Spínola, Felipe C. R. Pinto, Guilherme H. Travassos

Formal Methods for Analysing and Verifying Very Large Systems

Squeeze All the Power Out of Your Hardware to Verify Your Software!

The computer industry is undergoing a paradigm shift. Chip manufacturers are shifting development resources away from single-processor chips to a new generation of multi-processor chips, huge clusters of multi-core workstations are easily accessible everywhere, and external memory devices, such as hard disks or solid state disks, are getting more powerful both in terms of capacity and access speed. This fundamental technological shift in core computing architecture will require a fundamental change in how we ensure the quality of software. The key issue is that verification techniques need to undergo a similarly deep technological transition to catch up with the complexity of software designed for the new hardware. In this position paper we advocate the necessity of fully exploiting the power offered by the new computer hardware to make verification techniques capable of handling next-generation software.

Jiří Barnat, Luboš Brim
Static Partial-Order Reduction of Concurrent Systems in Polynomial Time

We present an algorithm for attacking the state explosion problem in analyzing multithreaded programs. Our approach employs partial-order reduction and static virtual coarsening. It uses information on shared variables to generate and interleave blocks of statements.

Our algorithm performs polynomially as long as the number of shared variables is constant.

Robert Mittermayr, Johann Blieberger
An Extensible Space-Based Coordination Approach for Modeling Complex Patterns in Large Systems

Coordination is frequently associated with shared data spaces employing Linda coordination. But in practice, communication between parallel and distributed processes is carried out with message exchange patterns. What, actually, do shared data spaces contribute beyond these? In this paper we present a formal representation for a definition of shared spaces by introducing an “extensible tuple model”, based on existing research on Linda coordination, some Linda extensions, and virtual shared memory. The main enhancements of the extensible tuple model comprise: means for structuring of spaces, Internet-compatible addressing of resources, more powerful coordination capabilities, a clear separation of user data and coordination information, support of symmetric peer application architectures, and extensibility through programmable aspects. The advantage of the extensible tuple model (XTM) is that it allows for the specification of complex coordination patterns.

Eva Kühn, Richard Mordinyi, Christian Schreiber

Tools for Service-Oriented Discovery of Knowledge

On the Design of Knowledge Discovery Services: Design Patterns and Their Application in a Use Case Implementation

As service-orientation becomes more and more popular in the computer science community, more research is done in applying service-oriented applications in specific fields, including knowledge discovery. In this paper we investigate how the service-oriented paradigm can benefit knowledge discovery, and how specific services and the knowledge discovery process as a whole should be designed. We propose a model for the design of a service-oriented knowledge discovery process, and provide guidelines for the types of functionalities it requires. We also provide a case design to show the application and benefits of the proposed model and design pattern in practice.

Jeroen de Bruin, Joost N. Kok, Nada Lavrac, Igor Trajkovski
The ASK System and the Challenge of Distributed Knowledge Discovery

ASK is an industrial software system for connecting people to each other. The system uses intelligent matching functionality and learning mechanisms in order to find effective connections between requesters and responders in a community. Currently, Almende investigates ways to connect multiple distributed configurations of ASK to each other and to different existing systems. Thereby, we face the issue of how to derive knowledge about connections between people in such distributed heterogeneous settings. In this paper, we introduce ASK, indicate its future development and discuss the imposed challenges.

Andries Stam
A Scenario Implementation in R for SubtypeDiscovery Exemplified on Chemoinformatics Data

We developed a methodology that both facilitates and enhances the search for homogeneous subtypes in data. We applied this methodology to medical research on Osteoarthritis and Parkinson’s Disease and to chemoinformatics research on the chemical structure of molecule profiles. We release this methodology as the R package SubtypeDiscovery to enable reproducibility of our analyses. In this paper, we present the package implementation and we illustrate its output on molecular data from chemoinformatics. Our methodology includes different techniques to process the data, a computational approach that repeats data modelling to select the number of subtypes or the type of model, and additional methods to characterize, compare and evaluate the top-ranking models. Therefore, this methodology does not solely cluster data; it also produces a complete set of results to conduct a subtype discovery analysis.

Fabrice Colas, Ingrid Meulenbelt, Jeanine J. Houwing-Duistermaat, Margreet Kloppenburg, Iain Watt, Stephanie M. van Rooden, Martine Visser, Johan Marinus, Edward O. Cannon, Andreas Bender, Jacobus J. van Hilten, P. Eline Slagboom, Joost N. Kok
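The core loop of such an analysis — repeat the clustering under different settings and rank the resulting models — is easy to see in code. The sketch below is our illustration in Python with a tiny k-means and a within-cluster-variance score; the actual SubtypeDiscovery package is in R and uses model-based clustering, so all names and scores here are substitutions for brevity.

```python
# Repeated clustering with model selection: vary the number of clusters
# and the random seed, then rank the models by a quality score.
import random

def kmeans(data, k, seed, iters=50):
    rng = random.Random(seed)
    centers = rng.sample(data, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in data:
            groups[min(range(k), key=lambda i: (x - centers[i]) ** 2)].append(x)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    # Score: total within-cluster squared distance (lower is better).
    score = sum(min((x - c) ** 2 for c in centers) for x in data)
    return centers, score

data = [1.0, 1.2, 0.9, 5.0, 5.3, 4.8, 9.1, 9.0]   # three obvious subtypes
models = [(k, seed, *kmeans(data, k, seed))
          for k in (2, 3, 4) for seed in range(5)]
best = min(models, key=lambda m: m[3] + m[0])      # penalize extra clusters
print(f"k={best[0]}, centers={best[2]}, score={best[3]:.2f}")
```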
Requirements for Ontology Based Design Project Assessment

Ontologies are formal knowledge representations. Looking at ontologies in more detail can reveal concealed details of the ontology or the conceptualized domain. This requires technologies for ontology analysis. By analysing the use case of R&D management, this paper identifies typical metrics on ontologies and evaluates metric calculation mechanisms.

Axel Hahn, Stephan große Austing, Stefan Häusler, Matthias Reinelt
Organizing the World’s Machine Learning Information

All around the globe, thousands of learning experiments are being executed on a daily basis, only to be discarded after interpretation. Yet, the information contained in these experiments might have uses beyond their original intent and, if properly stored, could be of great use to future research. In this paper, we hope to stimulate the development of such learning experiment repositories by providing a bird’s-eye view of how they can be created and used in practice, bringing together existing approaches and new ideas. We draw parallels between how experiments are being curated in other sciences, and consecutively discuss how both the empirical and theoretical details of learning experiments can be expressed, organized and made universally accessible. Finally, we discuss a range of possible services such a resource can offer, either used directly or integrated into data mining tools.

Joaquin Vanschoren, Hendrik Blockeel, Bernhard Pfahringer, Geoff Holmes

Tackling the Challenges of Software Development Process for SMEs with Rigorous Support and Open Source

Workflow Testing

The increased use of workflow management systems and orchestrated services has created the need for frameworks supporting the quality assurance of workflows. In this paper we present WorkflowInspector, a prototype of a workflow test framework for the Windows Workflow Foundation.

R. Breu, A. Lechner, M. Willburger, B. Katt
The jABC Approach to Rigorous Collaborative Development of SCM Applications

Our approach to the model-driven collaborative design of IKEA’s P3 Delivery Management Process uses the jABC [9] for model-driven mediation and choreography to complement a RUP-based (Rational Unified Process) development process. jABC is a framework for service development based on Lightweight Process Coordination. Users (product developers and system/software designers) easily develop services and applications by composing reusable building blocks into (flow-)graph structures that can be animated, analyzed, simulated, verified, executed, and compiled. This way of handling the collaborative design of complex embedded systems has proven to be effective and adequate for the cooperation of non-programmers and non-technical people, which is the focus of this contribution, and it is now being rolled out in operative practice.

Martina Hörmann, Tiziana Margaria, Thomas Mender, Ralf Nagel, Bernhard Steffen, Hong Trinh
Gesper: Support to Capitalize on Experience in a Network of SMEs

Small and medium enterprises (SMEs) are the most affected by the exponentially increasing complexity of the average software system: keeping a grip on new technologies may turn out to be an unsustainable drain on productive effort, as their developers need to devote a substantial part of their time to learning instead of producing.

In order to support continuous education and gathering/reusing solutions, SMEs need a tool supported management of experience and knowledge, providing problem-driven searches.

We describe the experience gained in designing and developing Gesper, a tool for knowledge sharing that provides semantic searches based on ontologies. This tool has been tailored to the needs of SMEs, and a prototype implementation has been built using free and open source tools.

Maura Cerioli, Giovanni Lagorio, Enrico Morten, Gianna Reggio

Regular Papers

Directed Generation of Test Data for Static Semantics Checker

We present an automatic method, named SemaTESK, for the generation of test sets for a translator front end. We focus on the validation and verification of the static semantics checker. Most of the known methods for semantics test generation produce test suites by filtering a pre-generated set of random texts in the target language. In contrast, SemaTESK generates tests for context conditions directly. This significantly reduces generation time and allows reaching the completeness criteria defined in the paper. The presented method for specifying static semantics makes it possible to formalize informal requirements described in normative documents (e.g., standards). The method includes the SRL notation for compact formal specification of context conditions and the STG tool for efficient generation of test suites from SRL specifications.

The SemaTESK method has been used in a number of projects, including testing static semantics checkers of C and Java.

M. V. Arkhipova, S. V. Zelenov
Event-Based Approach to Modelling Dynamic Architecture: Application to Mobile Ad-Hoc Network

We describe an event-based approach to specify systems with dynamically evolving architecture; the study is illustrated with structuring and routing in a Mobile Ad-hoc Network. The resulting specification is augmented with desired properties and then analysed using theorem proving and model checking tools.

Christian Attiogbé
Trusted Theorem Proving: A Case Study in SLD-Resolution

Prolog’s implementation of SLD-resolution furnishes an efficient theorem-proving technique for the Horn-clause subset of first-order logic, and makes for a powerful addition to any automatic or semi-automatic verification system. However, due to the complexity of SLD-resolution, a naive incorporation of a Prolog engine into such a system would inordinately increase the overall trusted base. In this paper we show how to integrate this procedure in a disciplined, trusted manner, by making the Prolog engine justify its results with very simple natural deduction reasoning. In effect, instead of taking SLD-resolution as a primitive inference rule, we express it as a derived inference rule in terms of much simpler rules such as conditional elimination.

This reduction is an example of a general methodology for building highly reliable software systems called certified computation, whereby a program not only produces a result r for a given input x but also proves that r is correct for x. Such a proof can be viewed as a certificate for the result r, and can significantly enhance the latter’s credibility: if we trust the axioms and inference rules used in the proof, we can trust the result. We present a complete implementation of a certifying Prolog interpreter that relies only on three exceptionally simple inference rules: conditional elimination, universal specialization, and conjunction introduction.

Konstantine Arkoudas, Olin Shivers
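The shape of such a certificate is easy to demonstrate. The toy below (ours, not the paper’s interpreter) does backward chaining over ground, variable-free Horn clauses and returns, with every answer, a justification built only from conditional elimination; real SLD-resolution additionally needs unification and, in the paper, universal specialization and conjunction introduction.

```python
# Toy certifying interpreter: every proved goal comes with a
# natural-deduction-style justification.

rules = [                       # (head, [body atoms]); empty body = fact
    ("mortal(socrates)", ["man(socrates)"]),
    ("man(socrates)", []),
]

def prove(goal):
    for head, body in rules:
        if head != goal:
            continue
        subproofs = [prove(b) for b in body]
        if all(subproofs):
            if not body:
                return f"axiom {head}"
            return (f"by conditional elimination on "
                    f"'{' & '.join(body)} -> {head}': [{'; '.join(subproofs)}]")
    return None                 # goal not derivable: no certificate

print(prove("mortal(socrates)"))
# by conditional elimination on 'man(socrates) -> mortal(socrates)':
#   [axiom man(socrates)]
```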
High Level Analysis, Design and Validation of Distributed Mobile Systems with CoreASM

System design is a creative activity calling for abstract models that facilitate reasoning about the key system attributes (desired requirements and resulting properties) so as to ensure these attributes are properly established prior to actually building a system. We explore here the practical side of using the abstract state machine (ASM) formalism in combination with the CoreASM open source tool environment for high-level design and experimental validation of complex distributed systems. Emphasizing the early phases of the design process, a guiding principle is to support freedom of experimentation by minimizing the need for encoding. CoreASM has been developed and tested building on a broad scope of applications, spanning computational criminology, maritime surveillance and situation analysis. We critically reexamine here the CoreASM project in light of three different application scenarios.

R. Farahbod, U. Glässer, P. J. Jackson, M. Vajihollahi
Optimizing the System Observability Level for Diagnosability

A system model is diagnosable when every fault can be unambiguously detected from its observable events. Diagnosability is a desirable system property, enabling large and complex systems to be designed with automatic fault detection and isolation mechanisms.

In this paper we study the relation between a system’s level of observability and its diagnosability. We provide both necessary and sufficient conditions on the observable events maintained by the system in order for it to be diagnosable. We concentrate on two problems: first, we show how to transform a diagnosable system into another one which is still diagnosable but has a minimal level of observability; second, we show how to transform a non-diagnosable system into a diagnosable one by successively increasing the level of observability.

Finally, we expand our framework with several extensions, dealing with distinguishability, predictability and extended fault models.

Laura Brandán Briones, Alexander Lazovik, Philippe Dague
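As background, the classic way to test diagnosability is the “twin plant”: run two copies of the system in lockstep on identical observable behaviour and look for a pair of runs, one faulty and one fault-free, that observations cannot tell apart. The sketch below is our illustration under invented event-naming conventions; it checks only reachability of an ambiguous pair (a necessary condition), not the full persistence test a complete diagnosability check requires.

```python
# Twin-plant sketch: configurations are (state, fault_has_occurred); event
# names starting with "f_" are unobservable faults, "u_" other unobservable
# events, anything else is observable and must be matched by both copies.

def ambiguous_pairs(auto, init):
    unobs = lambda e: e.startswith(("f_", "u_"))
    def step(cfg):
        s, f = cfg
        for e, s2 in auto[s]:
            yield e, (s2, f or e.startswith("f_"))
    start = ((init, False), (init, False))
    seen, todo = {start}, [start]
    while todo:
        a, b = todo.pop()
        succs = []
        for e, a2 in step(a):
            if unobs(e):
                succs.append((a2, b))    # unobservable: one copy moves alone
            else:                        # observable: both copies must agree
                succs += [(a2, b2) for e2, b2 in step(b) if e2 == e]
        succs += [(a, b2) for e, b2 in step(b) if unobs(e)]
        for p in succs:
            if p not in seen:
                seen.add(p); todo.append(p)
    return [(a, b) for a, b in seen if a[1] != b[1]]

# A fault that is never followed by a distinguishing observation:
auto = {"s0": [("f_err", "s1"), ("go", "s2")], "s1": [("go", "s2")],
        "s2": [("tick", "s2")]}
print(ambiguous_pairs(auto, "s0"))   # non-empty: observations cannot reveal f_err
```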
Weaving Authentication and Authorization Requirements into the Functional Model of a System Using Z Promotion

The use of Z in software development has focused on specifying the functionality of a system. However, when developing secure systems, it is important to address fundamental security aspects, such as authentication, authorization, and auditing. In this paper, we show an approach for building systems from generic and modular security components, focusing on weaving the security components into the functionality of a system using the promotion technique in Z. For each component, Z notation is used to construct its state-based model and the relevant operations. Once a component is introduced, the defined local operations are promoted to work on the global state. We illustrate this approach on the development of a “secure” model for a conference management system. With this approach, it is possible to specify the core functionalities of a system independently from the security mechanisms. Authentication and authorization are viewed as components which are carefully integrated with the functional system.

Ali Nasrat Haidar, Ali E. Abdallah
Simple Gedanken Experiments in Leveraging Applications of Formal Methods

As experience in established engineering disciplines shows, the most (maybe only) effective way for leveraging formal methods (FM) into daily practice is by developing mathematical modeling abilities. Laying a solid theoretical basis early is best assisted by simple example problems with minimal technical content. It is shown how simplicity still allows covering all practical aspects of FM and even finding new insights because, as in basic science, simple problems lead to a variety of gedanken experiments. Of the wide realm of opportunities, three are illustrated: (a) microsemantics in algorithmic problem solving and reasoning about invariants, (b) experimenting with data abstractions to capture informal statements faithfully, (c) expressing puzzles involving procedures, possibly with nondeterminism and multiple loops, by simple mathematics. The proper rôle of software tools in leveraging FM is discussed alongside.

Raymond Boute
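One of the simple gedanken experiments the paper advocates can even be executed: breaking an m-square chocolate bar always takes exactly m − 1 breaks, because each break increases the number of pieces by one. The sketch below (our example, in the spirit of the paper’s point (a) on reasoning about invariants) checks the invariant pieces = breaks + 1 at every step, for an arbitrary randomized breaking strategy.

```python
# Executable gedanken experiment: the invariant pieces == breaks + 1
# forces the total number of breaks to be m - 1, whatever the strategy.
import random

def break_bar(m):
    pieces, breaks = [m], 0
    while any(p > 1 for p in pieces):
        i = random.choice([i for i, p in enumerate(pieces) if p > 1])
        p = pieces.pop(i)
        cut = random.randrange(1, p)     # break one piece in two
        pieces += [cut, p - cut]
        breaks += 1
        assert len(pieces) == breaks + 1  # the invariant
    return breaks

print([break_bar(12) for _ in range(5)])  # always 11, whatever the strategy
```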
Composition of Web Services Using Wrappers

Web service (WS) compositions deal with specifying how to assemble a complex WS system from elementary services. These services can be provided on the Web by third parties as WSs, COTS, or bespoke components. Wrappers are becoming the norm for customising existing components in order to integrate them into larger WS systems. In many cases, using a component “as-is” is very unlikely: a component has to be customized because of, for example, incompatibilities between the interfaces of components that need to communicate with one another, the need for extra security features, or the blocking of unneeded functionality. This paper presents an approach, based on Hoare’s CSP process algebra, for modeling several wrapping techniques that can be used for composing WS applications.

Ali Nasrat Haidar, Ali E. Abdallah
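The wrapping idea the paper models in CSP can be rendered in plain code as stacked adapters: each wrapper intercepts the request/response behaviour of a service without changing the service itself. The Service protocol and the two concrete wrappers below are invented examples, not the paper’s CSP models.

```python
# Wrappers as composable adapters around an unmodified component.

class Service:
    def call(self, request: str) -> str:
        return f"result({request})"

class AuthWrapper:
    # Adds a security feature: refuse requests lacking a token.
    def __init__(self, inner): self.inner = inner
    def call(self, request):
        if not request.startswith("token:"):
            return "denied"
        return self.inner.call(request.removeprefix("token:"))

class RestrictWrapper:
    # Blocks unneeded functionality: only 'query' operations pass through.
    def __init__(self, inner): self.inner = inner
    def call(self, request):
        return self.inner.call(request) if "query" in request else "blocked"

# Composition: customize the component "as-is" by stacking wrappers.
svc = AuthWrapper(RestrictWrapper(Service()))
print(svc.call("token:query(users)"))   # result(query(users))
print(svc.call("query(users)"))         # denied   (no token)
print(svc.call("token:delete(users)"))  # blocked  (functionality disabled)
```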
Backmatter
Metadata
Title
Leveraging Applications of Formal Methods, Verification and Validation
Edited by
Tiziana Margaria
Bernhard Steffen
Copyright year
2008
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-88479-8
Print ISBN
978-3-540-88478-1
DOI
https://doi.org/10.1007/978-3-540-88479-8