2010 | Book

Leveraging Applications of Formal Methods, Verification, and Validation

4th International Symposium on Leveraging Applications, ISoLA 2010, Heraklion, Crete, Greece, October 18-21, 2010, Proceedings, Part I

Edited by: Tiziana Margaria, Bernhard Steffen

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science

Table of Contents

Frontmatter

New Challenges in the Development of Critical Embedded Systems – An “aeromotive” Perspective

New Challenges in the Development of Critical Embedded Systems—An “aeromotive” Perspective

During the last decades, embedded systems have become increasingly important in highly safety-critical areas such as power plants, medical equipment, cars, and aeroplanes. The automotive and avionics domains are prominent examples of classical engineering disciplines where conflicts between costs, short product cycles and legal requirements concerning dependability, robustness, security, carbon footprint and spatial demands have become a pressing problem.

Visar Januzaj, Stefan Kugele, Boris Langer, Christian Schallhart, Helmut Veith
Certification of Embedded Software – Impact of ISO DIS 26262 in the Automotive Domain

The publication of ISO 26262 ("Road vehicles – Functional safety") as a Draft International Standard (DIS) and its expected release as an international standard in 2011 have a substantial impact on the development of automotive software. Because the standard defines the current state of the art for the development of safe automotive software, the lack of, or inadequate use of, these techniques has severe legal consequences.

Like its ancestor IEC 61508, ISO DIS 26262 is a process standard that defines artifacts and activities of the development process; consequently, Part 6 of the ISO standard ("Product development: Software Level") defines the artifacts and activities for requirements specification, architectural design, unit implementation and testing, as well as system integration and verification. Depending on the hazard analysis and risk assessment, and on the resulting Automotive Safety Integrity Level (ASIL) of the function under development, the standard prescribes, e.g., the use of (semi-)formal methods for the verification of requirements, (semi-)formal notations for software design, the use of control and data flow analysis techniques, static and semantic code analysis, the use of test case generation, or in-the-loop verification mechanisms. Furthermore, the standard specifically acknowledges the application of model-based development in automotive software engineering.

Currently, several of these rather advanced techniques are only required for higher safety integrity levels. Consequently, even though embedded software has become the leading innovation factor in automotive applications, many highly safety-critical automotive functionalities are only reluctantly implemented with software-based solutions. Here, by advancing the applicability and scalability of these advanced technologies and providing support in the form of qualified tool chains, a substantial change in the development of automotive software can be achieved, making it possible not only to virtualize and thus substitute physical solutions of automotive functions (e.g., X-by-wire solutions), but also to implement a new range of functionalities (e.g., autonomous driving).

Bernhard Schätz
Enforcing Applicability of Real-Time Scheduling Theory Feasibility Tests with the Use of Design-Patterns

This article deals with performance verification of architecture models of real-time embedded systems. We focus on models verified with real-time scheduling theory. To perform verifications with real-time scheduling theory, architecture designers must check that their models are compliant with the assumptions of this theory. Unfortunately, this task is difficult since it requires that designers have a deep understanding of real-time scheduling theory. In this article, we investigate how to help designers check that an architecture model is compliant with this theory. We focus on feasibility tests, the analytical methods proposed by real-time scheduling theory. We show how to explicitly model the relationships between an architectural model and feasibility tests. From these models, we apply a model-based engineering process to generate a decision tool that is able to detect, from an architecture model, which feasibility tests the designer can apply.
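As a concrete illustration of what a feasibility test is, the sketch below implements the classic Liu and Layland utilization bound for rate-monotonic scheduling; it is an editorial example under the usual assumptions of that test (independent periodic tasks, deadlines equal to periods), not necessarily one of the tests modeled in the paper.

    #include <math.h>
    #include <stddef.h>

    /* One periodic task: worst-case execution time and period. */
    struct task { double wcet; double period; };

    /* Liu & Layland feasibility test for rate-monotonic scheduling:
       the task set is schedulable if its total utilization does not
       exceed n * (2^(1/n) - 1).  The test is sufficient but not
       necessary, and it only applies when its assumptions hold --
       exactly the kind of applicability condition the paper's
       design patterns make explicit. */
    int rm_utilization_test(const struct task *ts, size_t n)
    {
        double u = 0.0;
        for (size_t i = 0; i < n; i++)
            u += ts[i].wcet / ts[i].period;
        double bound = (double)n * (pow(2.0, 1.0 / (double)n) - 1.0);
        return u <= bound;   /* 1 = schedulable, 0 = inconclusive */
    }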

Alain Plantec, Frank Singhoff, Pierre Dissaux, Jérôme Legrand
Seamless Model-Driven Development Put into Practice

Model-driven development (MDD) today is the most promising approach to handle the complexity of software development for distributed embedded systems. Still, no single tool-chain exists that meets all needs of companies employing MDD. Moving back and forth between the tools in today's iterative development processes thus requires manual integration steps, which are error-prone and hamper reuse and refinement of models. A possible workaround is developing adapters for each pair of tools. Despite its large overhead, industry started pursuing this approach because of a lack of better alternatives. A proper solution is a tool-chain building on an integrated modeling language. We have realized this in cooperation with BMW Research and Technology. To increase the degree of automation during development, the modeling language builds upon a core featuring a rigorous semantics. This enables automatic analysis, facilitating an automatic transition from model-based designs to a distributed system running on the target platform.

Wolfgang Haberl, Markus Herrmannsdoerfer, Stefan Kugele, Michael Tautschnig, Martin Wechs
Timely Time Estimates

Estimations of execution time are essential for design and development of safety critical embedded real-time systems, such as avionics, automotive and aerospace systems. In such systems, execution time is part of the functional specification, hence correct behaviour requires sufficiently powerful target hardware to meet deadlines or achieve required polling rates, etc. Yet, grossly overestimated resource usage results in excessive cost per unit. For a proper choice of the target platform, qualitatively good execution time estimates are required at an early stage of the development process.

In this paper we propose a framework which provides software engineers with execution time estimates of the software under development in a demand-driven manner, i.e., the engineers ask for timing information at program or function level with respect to different target hardware platforms. In a platform-independent manner we extract the necessary information from the code and combine it with platform-specific information, resulting in the time estimate. We implemented our framework on top of the test input generator FShell and its query language FQL. Preliminary experiments on C code show the viability of our approach.

Andreas Holzer, Visar Januzaj, Stefan Kugele, Michael Tautschnig
Compiler-Support for Robust Multi-core Computing

Embedded computing is characterised by the limited availability of computing resources. Further, embedded systems are often used in safety-critical applications with real-time constraints. Thus, the software development has to follow rigorous procedures to minimise the risk of system failures. However, besides the inherent application complexities, there is also an increased technology-based complexity due to the shift to concurrent programming of multi-core systems. Under these constraints it is quite challenging to develop safe and resource-efficient systems.

In this paper we make a plea for better software development tools to cope with this challenge. For example, we outline how compilers can help to simplify the writing of fault-tolerant and robust software, which keeps the application code more compact, comprehensible, and maintainable. We take a rather extreme stand by promoting a functional programming approach. This functional programming paradigm reduces the complexity of program analysis and thus allows for more efficient and powerful techniques. We will implement an almost transparent support for robustness within the SaC research compiler, which accepts a C-like functional program as input. Compared to conventional approaches in the field of automatic software-controlled resilience, our functional setting will allow for lower overhead, making the approach interesting for embedded computing as well as for high-performance computing.

Raimund Kirner, Stephan Herhut, Sven-Bodo Scholz

Formal Languages and Methods for Designing and Verifying Complex Embedded Systems

Thematic Track: Formal Languages and Methods for Designing and Verifying Complex Embedded Systems

Nowadays, it is well accepted that the development of critical systems involves the use of formal methods. Among the major fields where these methods have made significant progress are avionics, aerospace, transport, and telecommunications. These systems are responsible for various functions, such as navigation, guidance, stability, power management, board/ground communications, and passenger entertainment. Moreover, their complexity is continuously growing.

Due to safety constraints, these systems often have to go through certification. This requires testing, and a design process based on a set of tight rules. However, due to the increasing complexity of the systems being described, there is clearly no guarantee that such tight rules and rigorous testing will lead to error-free systems. An alternative approach for helping system designers is formal methods, i.e., fundamental languages, techniques and tools for the design, analysis, validation or transformation of systems in a provably correct way.

Yamine Ait Ameur, Frédéric Boniol, Dominique Mery, Virginie Wiels
Analyzing the Security in the GSM Radio Network Using Attack Jungles

In this paper we introduce the concept of attack jungles, a formalism for the systematic representation of the vulnerabilities of systems. An attack jungle is a graph representation of all ways in which an attacker can successfully achieve his goal. Attack jungles are an extension of attack trees [13] that allows multiple roots, cycles and reusability of resources. We have implemented a prototype tool for constructing and analyzing attack jungles. The tool was used to analyze the security of the GSM (radio) access network.

Parosh Aziz Abdulla, Jonathan Cederberg, Lisa Kaati
Formal Modeling and Verification of Sensor Network Encryption Protocol in the OTS/CafeOBJ Method

Sensor Network Encryption Protocol (SNEP) is one of the secure building blocks of the SPINS Protocol Suite and provides data confidentiality, two-party data authentication and evidence of data freshness in a wireless sensor network. We have formally analyzed SNEP and a node-to-node key agreement protocol based on it, using the OTS/CafeOBJ method. Among other invariants, we have verified that the protocols possess the important security properties of authenticity and confidentiality of relevant message components. To our knowledge, we are the first to formally analyze SNEP using algebraic specification techniques.

Iakovos Ouranos, Petros Stefaneas, Kazuhiro Ogata
Model-Driven Design-Space Exploration for Embedded Systems: The Octopus Toolset

The complexity of today's embedded systems and their development trajectories requires a systematic, model-driven design approach, supported by tooling wherever possible. Only then do development trajectories become manageable, with high-quality, cost-effective results.

This paper introduces the Octopus Design-Space Exploration (DSE) toolset that aims to leverage existing modeling, analysis, and DSE tools to support model-driven DSE for embedded systems. The current toolset integrates Uppaal and CPN Tools, and is centered around the DSE Intermediate Representation (DSEIR) that is specifically designed to support DSE. The toolset architecture allows: (i) easy reuse of models between different tools, while providing model consistency, and the combined use of these tools in DSE; (ii) domain-specific abstractions to support different application domains and easy reuse of tools across domains.

Twan Basten, Emiel van Benthum, Marc Geilen, Martijn Hendriks, Fred Houben, Georgeta Igna, Frans Reckers, Sebastian de Smet, Lou Somers, Egbert Teeselink, Nikola Trčka, Frits Vaandrager, Jacques Verriet, Marc Voorhoeve, Yang Yang
Contract-Based Slicing

In recent years, concern with the correctness of programs has been leading programmers to enrich their programs with annotations following the principles of design-by-contract, in order to be able to guarantee their correct behaviour and to facilitate reuse of verified components without having to reconstruct proofs of correctness.

In this paper we adapt the idea of specification-based slicing to the scope of (contract-based) program verification systems and behaviour specification languages. In this direction, we introduce the notion of a contract-based slice of a program and show how any specification-based slicing algorithm can be used as the basis for a contract-based slicing algorithm.

Daniela da Cruz, Pedro Rangel Henriques, Jorge Sousa Pinto

Worst-Case Traversal Time (WCTT)

Special Track on Worst Case Traversal Time (WCTT)

Real-time systems are increasingly becoming communication intensive, where different functions are implemented using distributed real-time tasks mapped onto different physical systems (sensors, processors, and actuators). To ensure global correctness, one has to ensure the correctness of each task and the schedulability of the tasks on each system, and finally also bound the communication time, i.e., the worst-case (network) traversal time (WCTT).

Anne Bouillard, Marc Boyer, Samarjit Chakraborty, Steven Martin, Jean-Luc Scharbarg, Giovanni Stea, Eric Thierry
The PEGASE Project: Precise and Scalable Temporal Analysis for Aerospace Communication Systems with Network Calculus

With the increase of critical data exchanges in embedded real-time systems, the computation of tight upper bounds on network traversal times is becoming a crucial industrial need, especially in safety-critical systems. To address this need, the French project PEGASE, grouping academic and industrial partners from the aerospace field, has been undertaken to improve some key aspects of Network Calculus and its implementation.

Marc Boyer, Nicolas Navet, Xavier Olive, Eric Thierry
NC-Maude: A Rewriting Tool to Play with Network Calculus

Embedded real-time systems are increasingly distributed communicating systems. To ensure the correctness of an application, task deadlines must be respected, but communication delays must also be bounded. Network calculus is a theory designed to compute such bounds (it has been successfully applied to the A380 AFDX backbone). In order to disseminate and to experiment with new results, a tool is needed. Unlike other tools, its purposes are to be open and to allow the user to see the classes of functions manipulated (sub-additive, star-shaped, concave), the theorems used to obtain results, etc. To keep the code as close as possible to the mathematical context, we chose to use a rewriting language, Maude.

Marc Boyer
DEBORAH: A Tool for Worst-Case Analysis of FIFO Tandems

Recent results on Network Calculus applied to FIFO networks show that, in the general case, end-to-end delay bounds cannot be computed in a closed form, whereas solving non-linear programming problems is instead required. Furthermore, it has been shown that these bounds may be larger than the actual worst-case delay, which also calls for computing lower bounds on the latter. This paper presents DEBORAH (Delay Bound Rating AlgoritHm), a tool for computing upper and lower bounds to the worst-case delay in FIFO tandem networks. DEBORAH can analyze tandems of up to several tens of nodes in reasonable time on off-the-shelf hardware. We overview the various algorithms used by DEBORAH to perform the various steps of the computations, and describe its usage.

Luca Bisti, Luciano Lenzini, Enzo Mingozzi, Giovanni Stea
A Self-adversarial Approach to Delay Analysis under Arbitrary Scheduling

Non-FIFO processing of flows by network nodes is a frequent phenomenon. Unfortunately, the state-of-the-art analytical tool for the computation of performance bounds in packet-switched networks, network calculus, cannot deal well with non-FIFO systems. The problem lies in its conventional service curve definitions. Either the definition is too strict to allow for a concatenation and consequent beneficial end-to-end analysis, or it is too loose and results in infinite delay bounds. Hence, in this paper, we propose a new approach to derive tight bounds in tandems of non-FIFO nodes, the so-called self-adversarial approach. The self-adversarial approach is based on a previously proposed method for calculating performance bounds in feedforward networks [30]. By numerical examples we demonstrate the superiority of the self-adversarial approach over existing methods for the analysis of non-FIFO tandems, and show that for low to medium utilizations it even stays close to the corresponding FIFO performance bounds.

Jens B. Schmitt, Hao Wang, Ivan Martinovic
Flow Control with (Min,+) Algebra

Thanks to the theory of Network Calculus, based on the (min,+) algebra, the analysis and measurement of worst-case performance in communication networks can be carried out easily. In this context, this paper deals with traffic regulation and performance guarantees of a network, i.e., with flow control. At first, assuming that a minimum service provided by a network is known, we aim at finding the constraint over the input flow in order to respect a maximal delay or backlog. Then, we deal with the window flow control problem in the following manner: the data stream (from the source to the destination) and the acknowledgment stream (from the destination to the source) are assumed to be different, and the service provided by the network is assumed to be known only with uncertainty; more precisely, it is assumed to lie in an interval. The results are obtained using residuation theory, which allows functions defined over an idempotent semiring to be inverted.
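As background for the (min,+) setting used here and in the other Network Calculus papers of this track, the following are standard textbook facts rather than results of this paper. Flows and services are described by curves combined with the (min,+) convolution

    (f \otimes g)(t) = \inf_{0 \le s \le t} \{ f(t-s) + g(s) \},

and for an arrival curve \alpha and a minimum service curve \beta the worst-case delay is bounded by the maximal horizontal deviation h(\alpha, \beta) between the two curves. For the common case of a token-bucket arrival curve \alpha(t) = b + rt and a rate-latency service curve \beta(t) = R(t - T)^{+} with r \le R, this yields the classic bounds: delay \le T + b/R and backlog \le b + rT.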

Euriell Le Corronc, Bertrand Cottenceau, Laurent Hardouin
An Interface Algebra for Estimating Worst-Case Traversal Times in Component Networks

Interface-based design relies on the idea that different components of a system may be developed independently and that a system designer can connect them together only if their interfaces match, without knowing the details of their internals. In this paper we propose an interface algebra for analyzing networks of embedded systems components. The goal is to be able to compute worst-case traversal times and verify their compliance with provided deadlines in such component networks in an incremental manner, i.e., as and when new components are added to or removed from the network. We lay the basic groundwork for this algebra and show its utility through an illustrative example.

Nikolay Stoimenov, Samarjit Chakraborty, Lothar Thiele
Towards Resource-Optimal Routing Plans for Real-Time Traffic

We discuss the issue of computing resource-optimal routing plans in a network domain. Given a number of known traffic demands, with associated required delays, we discuss how to route them and allocate resources for them at each node so that the demands are satisfied. While a globally optimal routing plan requires joint computation of the paths and of the associated resources (which was claimed to be NP-hard), in this paper we stick to existing approaches for path computation, and use mathematical programming to model resource allocation once the paths are computed. We show that the problem is either convex or non-convex, depending on the scheduling algorithms adopted at the nodes. Our results show that, by computing resources per-path, instead of globally, the available capacity can be exceeded even at surprisingly low utilizations.

Alessandro Lori, Giovanni Stea, Gigliola Vaglini
Partially Synchronizing Periodic Flows with Offsets Improves Worst-Case End-to-End Delay Analysis of Switched Ethernet

A switched Ethernet network needs a worst-case delay analysis to prove that real-time applications can be correctly implemented on it. Existing approaches for upper-bounding end-to-end delays assume that there is either no synchronization or a global synchronization of flows, which requires a global clock. Conversely, flows emitted by the same end-system can easily be synchronized using the existing local clock. This paper explains how the partial synchronization of periodic flows can be integrated into the worst-case delay analysis provided by the Network Calculus approach. End-to-end delays computed on switched Ethernet configurations show that the proposed method clearly reduces the upper bounds on end-to-end delay.

Xiaoting Li, Jean-Luc Scharbarg, Christian Fraboul
Analyzing End-to-End Functional Delays on an IMA Platform

The Integrated Modular Avionics (IMA) platform is the latest generation of embedded architecture, in which functions share both the execution and communication resources. Functions execute in predefined time slots and communicate through an AFDX network. The purpose of the analysis presented is the verification of freshness requirements on the data exchanged between IMA applications.

The two contributions of this paper are: (1) a modeling approach for IMA platforms based on networks of timed automata; for small models, it is possible to compute an exact evaluation of temporal properties using symbolic reachability analysis; (2) the collaborative use of efficient methods for worst-case traversal time (WCTT) computation on the AFDX network, whose results are injected into the timed automata model to help the functional analysis.

Michaël Lauer, Jérôme Ermont, Claire Pagetti, Frédéric Boniol

Tools in Scientific Workflow Composition

Tools in Scientific Workflow Composition

Scientific workflows are combinations of activities and computations in order to solve scientific problems. In contrast to, for instance, business workflows that implement business processes involving different persons and information systems, scientific workflows are used to carry out computational experiments, possibly confirming or invalidating scientific hypotheses [1]. Scientific workflow systems [2][3] support and automate the execution of error-prone, repetitive tasks such as data access, transformation, and analysis. Several systems for different purposes and following different approaches have been developed in the last decade, and research in this comparatively new field is currently going in many different directions.

This ISoLA 2010 special track is devoted to "Tools in Scientific Workflow Composition". Its papers cover subjects such as tools and frameworks for workflow composition, semantically aware workflow development, and automatic workflow composition, as well as some case studies, examples, and experiences. The contributions are primarily from the bioinformatics domain, but also contain examples from other (scientific) application domains.

Joost N. Kok, Anna-Lena Lamprecht, Mark D. Wilkinson
Workflows for Metabolic Flux Analysis: Data Integration and Human Interaction

Software frameworks implementing scientific workflow applications have become ubiquitous in many research fields. The most beneficial advantages of workflow-enabled applications involve automation of routine operations and distributed computing on heterogeneous systems. Particular challenges in scientific applications include grid-scale orchestration of complex tasks with interactive workflows and data management allowing for integration of heterogeneous data sets.

We present a workflow for the 13C isotope-based Metabolic Flux Analysis (13C-MFA). The core of any 13C-MFA study is the metabolic network modeling workflow. It consists of sub-tasks involving model set-up and acquisition of measurement data sets within a graphical environment, the evaluation of the model equations and, finally, the visualization of data and simulation results. Human intervention and the integration of various knowledge and data sources are crucial in each step of the modeling workflow. A scientific workflow framework is presented that serves for the organization and automation of the complex analysis processes involved in 13C-MFA applications. By encapsulating technical details and avoiding recurrent issues, sources of errors are minimized, and the evaluation procedure for 13C labeling experiments is accelerated and, moreover, becomes documentable.

Tolga Dalman, Peter Droste, Michael Weitzel, Wolfgang Wiechert, Katharina Nöh
Intelligent Document Routing as a First Step towards Workflow Automation: A Case Study Implemented in SQL

In large and complex organizations, the development of workflow automation projects is hard. In some cases, a first important step in that direction is the automation of the routing of incoming documents. In this paper, we describe a project to develop a system for the first routing of incoming letters to the right department within a large, public Portuguese institution. We followed a data mining approach, where data representing previous routings were analyzed to obtain a model that can be used to route future documents. The approach followed was strongly influenced by some of the limitations imposed by the customer: the budget available was small and the solution should be developed in SQL to facilitate integration with the existing system. The system developed was able to obtain satisfactory results. However, as in any data mining project, most of the effort was dedicated to activities other than modelling (e.g., data preparation), which means that there is still plenty of room for improvement.

Carlos Soares, Miguel Calejo
Combining Subgroup Discovery and Permutation Testing to Reduce Redundancy

Scientific workflows are becoming more popular in the research community, due to their ease of creation and use, and because of the benefits of repeatability of such workflows. In this paper we investigate the benefits of workflows in a genomics experiment which requires intensive computing as well as parallelization, and show that substantial optimizations in rule redundancy reduction can be achieved by simple workflow parallelization.

Jeroen S. de Bruin, Joost N. Kok
Semantically-Guided Workflow Construction in Taverna: The SADI and BioMoby Plug-Ins

In the Taverna workflow design and enactment tool, users often find it difficult to both manually discover a service or workflow fragment that executes a desired operation on a piece of data (both semantically and syntactically), and correctly connect that service into the workflow such that appropriate connections are made between input and output data elements. The BioMoby project, and its successor the SADI project, embed semantics into their data-structures in an attempt to make the purpose and functionality of a Web Service more computable, and thereby facilitate service discovery during workflow construction. In this article, we compare and contrast the functionality of the BioMoby and SADI plug-ins to Taverna, with a particular focus on how they attempt to simplify workflow synthesis by end-users. We then compare these functionalities with other workflow-like clients we (and others) have created for the BioMoby and SADI systems, discuss the limitations to manual workflow synthesis, and contrast these with the opportunities we have found for fully automated workflow synthesis using the semantics of SADI.

David Withers, Edward Kawas, Luke McCarthy, Benjamin Vandervalk, Mark Wilkinson
Workflow Construction for Service-Oriented Knowledge Discovery

The paper proposes a Service-oriented Knowledge Discovery (SoKD) framework and a prototype implementation named Orange4WS. To provide the proposed framework with semantics, we use the Knowledge Discovery Ontology, which defines relationships among the ingredients of knowledge discovery scenarios. It makes it possible to reason about which algorithms can be used to produce the results required by a specified knowledge discovery task, and to query the results of the knowledge discovery tasks. In addition, the ontology can also be used for the automatic annotation of manually created workflows, facilitating their reuse. Thus, the proposed framework provides an approach to third-generation data mining: the integration of distributed, heterogeneous data and knowledge resources and software into a coherent and effective knowledge discovery process. The abilities of the prototype implementation have been demonstrated on a text mining use case featuring publicly available data repositories, specialized algorithms, and third-party data analysis tools.

Vid Podpečan, Monika Žakova, Nada Lavrač
Workflow Composition and Enactment Using jORCA

Support for automated service discovery and workflow composition is increasingly important as the number of Web Services and data types in bioinformatics and biomedicine grows. jORCA is a desktop client able to discover and invoke Web Services published in different types of service metadata repositories. In this paper, we report that jORCA is now able to discover, compose, edit, store, export and enact workflows. As proof of concept, we present a case study which re-creates a published workflow to demonstrate the new functionality in jORCA, from service discovery, workflow generation and refinement to enactment, monitoring and visualization of results. The system has been exhaustively tested and documented and is freely available at http://www.bitlab-es.com/jorca.

Johan Karlsson, Victoria Martín-Requena, Javier Ríos, Oswaldo Trelles
A Linked Data Approach to Sharing Workflows and Workflow Results

A bioinformatics analysis pipeline is often highly elaborate, due to the inherent complexity of biological systems and the variety and size of datasets. A digital equivalent of the ‘Materials and Methods’ section in wet laboratory publications would be highly beneficial to bioinformatics, for evaluating evidence and examining data across related experiments, while introducing the potential to find associated resources and integrate them as data and services. We present initial steps towards preserving bioinformatics ‘materials and methods’ by exploiting the workflow paradigm for capturing the design of a data analysis pipeline, and RDF to link the workflow, its component services, run-time provenance, and a personalized biological interpretation of the results. An example shows the reproduction of the unique graph of an analysis procedure, its results, provenance, and personal interpretation of a text mining experiment. It links data from Taverna, myExperiment.org, BioCatalogue.org, and ConceptWiki.org. The approach is relatively ‘light-weight’ and unobtrusive to bioinformatics users.

Marco Roos, Sean Bechhofer, Jun Zhao, Paolo Missier, David R. Newman, David De Roure, M. Scott Marshall

Emerging Services and Technologies for a Converging Telecommunications / Web World in Smart Environments of the Internet of Things

Towards More Adaptive Voice Applications

With the Internet designed to provide best-effort packet transmission, applications are expected to adapt dynamically to the operating conditions observed in the network. For this purpose, congestion control mechanisms have been devised for various transport and (partly) application protocols, and application programs may present, e.g., data rate information to the user. While these mechanisms work well for elastic applications (such as file transfer), the perceived performance of real-time applications may degrade quickly if a minimum required quality of service cannot be achieved. We argue that the current interpretation of adaptation specifically of real-time applications is too narrow and present a framework for expanding the scope of end-to-end adaptation, using the case study of voice communications. Our approach is general in nature, but should especially support communication in mobile environments.

Jörg Ott
Telco Service Delivery Platforms in the Last Decade - A R&D Perspective

This overview discusses the technological and architectural evolution of telco service delivery platforms in the last decade. We argue that not only the telco business model but also the system environment contributed to the decrease in revenues from value-added services, a business that continues to be threatened by the success of the OEMs and their AppStore model. Finally, we review a number of areas in which network operators continue to play a crucial role in the service value chain.

Sandford Bessler
Ontology-Driven Pervasive Service Composition for Everyday Life

Incorporating service composition and pervasive computing into the management of users' everyday activities gives rise to the paradigm of Pervasive Service Composition for everyday life. This paper presents a novel generic model for services supporting everyday activities. The resulting service composition consists of local services within service peers, and services are executed as specified in peer coordination and service collaboration. We suggest a task-based, pervasive, semantic, and P2P-based approach to service composition for everyday life. We first address these fundamental characteristics and give the terminology related to service composition, pervasive computing, ontology, and Pervasive Service Composition. Secondly, we analyze requirements for describing everyday activities. To meet the requirements we design an initial ontology model for capturing users' everyday activities and accommodating peer coordination and service collaboration in Pervasive Service Composition. Finally, we classify existing approaches to Web service composition.

Jiehan Zhou, Ekaterina Gilman, Jukka Riekki, Mika Rautiainen, Mika Ylianttila
Navigating the Web of Things: Visualizing and Interacting with Web-Enabled Objects

The Web of Things vision aims to turn real-world objects into resources of the Web. The creation of accessible and addressable Virtual Objects, i.e. services that expose the status and capabilities of connected real-world objects using REST APIs, allows for new machine-to-machine interactions to be designed but also for a new user experience to be shaped. Indeed, the change brought about by the connectivity of objects and their ability to share information raises design issues that have to be considered by manufacturers and service providers alike. In this paper, we present an approach to the Web of Things based on both technical and user-centered points of view. We argue that new user interfaces have to be designed to allow users to visualize and interact with virtual objects and environments that host them. We illustrate our vision with an interaction mockup that allows the user to navigate the Web of Things through three devices (a smart phone, a web tablet and a desktop web browser).

Mathieu Boussard, Pierrick Thébault
Shaping Future Service Environments with the Cloud and Internet of Things: Networking Challenges and Service Evolution

To address the new paradigm of future services, cloud computing will be essential for integrating storage and computing functions with the network. As many new types of devices will be connected to networks in the future, it is very important to provide ubiquitous networking capabilities for "connecting to anything" between humans and objects in order to realize the Internet of Things (IoT). This paper introduces several challenges for cloud computing from a telecom perspective and for the ubiquitous networking capabilities needed to support the IoT. To this end, we introduce the basic concepts and present our vision related to this topic. In addition, we clearly identify characteristics and additional capabilities to support key technologies to be used for the IoT. For various services using the ubiquitous networking of the IoT, we propose the cloud-based IoT, which aims to efficiently support various services using cloud technology from different kinds of objects (e.g., devices, machines, etc.). We also emphasize the necessity of virtualization for service evolution using the smart environment of the cloud and the IoT.

Gyu Myoung Lee, Noel Crespi
Relay Placement Problem in Smart Grid Deployment

In this paper, we give a step-by-step overview of the power grid, the smart grid, the Advanced Metering Infrastructure (AMI), and the deployment cost analysis. The relationship between the Relay Placement Problem (RPP) and the deployment cost in an AMI system is highlighted. Additionally, a decision support system in a pilot AMI system funded by the National Science Council (NSC) is briefly described, in which the RPP is solved by an approximation algorithm.

Wei-Lun Wang, Quincy Wu

Web Science

Towards a Research Agenda for Enterprise Crowdsourcing

Over the past few years the crowdsourcing paradigm has evolved from its humble beginnings as isolated purpose-built initiatives, such as Wikipedia, Elance and Mechanical Turk, to a growth industry employing over 2 million knowledge workers, contributing over half a billion dollars to the digital economy. Web 2.0 provides the technological foundations upon which the crowdsourcing paradigm evolves and operates, enabling networked experts to work collaboratively to complete a specific task. Enterprise crowdsourcing poses interesting challenges for both academic and industrial research along the social, legal, and technological dimensions.

In this paper we describe the challenges that researchers and practitioners face when thinking about various aspects of enterprise crowdsourcing. First, to establish technological foundations, what are the interaction models and protocols between the Enterprise and the crowd? Second, how is crowdsourcing going to face the challenges in quality assurance, enabling Enterprises to optimally leverage the scalable workforce? Third, what are the novel (Web) applications enabled by Enterprise crowdsourcing?

Maja Vukovic, Claudio Bartolini
Analyzing Collaboration in Software Development Processes through Social Networks

Plan-driven, agile, and free/open source are software development models that, although effective, cannot fully address all the variability of projects and organizations alone. In this work, it is argued that two distinct characteristics of these models – collaboration and discipline – can be the drivers to tailor software development processes to meet project and organization needs. In particular, this article focuses on the aspect of collaboration and argues that it can be analyzed through social networks. In this sense, we studied several tools and identified the requirements necessary to explore collaboration, through social networks, in software development processes. These requirements motivated the construction of the EvolTrack-SocialNetwork tool.

Andréa Magalhães Magdaleno, Cláudia Maria Lima Werner, Renata Mendes de Araujo
A Web-Based Framework for Collaborative Innovation

Twenty-first century global change in most sectors is challenging our use and management of resources. For a community to adapt to this systemic change, while maintaining and even enhancing its economy and quality of life, the World Economic Forum has recognized the need for new approaches to enable collaborative innovation (CI) and related action among both the leadership and concerned members of the community. Many see the web as an approach to CI and as a new form of creativity machine that can augment our intelligence.

This paper outlines the concepts of an approach to CI based on asset mapping and how it has been supported through a web-based technological framework that requires both communication and operations on the asset map. Based on the experience using the framework in designing and building over 50 systems that incorporate asset-mapping CI, it is clear that CI takes many forms. We illustrate some of these forms through specific examples in environment, cultural heritage, socio-economic development and planning. We conclude that it is not possible to build a single set of tools to support CI and that the users need access to meta-tools and frameworks to implement tailored systems supporting CI directly rather than relying on people with in-depth knowledge of the technologies. WIDE is an example of the type of meta-tools that are needed. Lessons learned from WIDE are being applied in the creation of WIDE 2.0.

Donald Cowan, Paulo Alencar, Fred McGarry, Carlos Lucena, Ingrid Nunes
A Distributed Dynamics for WebGraph Decontamination

In order to increase the visibility of a target page T, web spammers create hyperlink structures called web bubbles, or link farms. As a countermeasure, special mobile agents, called web marshals, are deployed for the detection and disassembling of link farms. Interestingly, the process of minimizing the number of web marshals and the number of hops needed to dismantle a web bubble is analogous to the graph decontamination problem. A novel distributed algorithm for graph decontamination, which can be used to define the behavior of web marshals, is introduced in this work. The new algorithm is asynchronous and topology independent. Moreover, it presents equal or better performance and needs smaller numbers of web marshals when compared to recent related works targeting only circulant graphs, a typical structure of link farms.

Vanessa C. F. Gonçalves, Priscila M. V. Lima, Nelson Maculan, Felipe M. G. França
Increasing Users’ Trust on Personal Assistance Software Using a Domain-Neutral High-Level User Model

People delegate tasks only if they trust the one that is going to execute them, who can be a person or a system. Current approaches mostly focus on creating methods (elicitation approaches or learning algorithms) that aim at increasing the accuracy of (internal) user models. However, the chance that a method gives a wrong answer decreases users' trust in software systems, thus preventing task delegation. We aim at increasing users' trust in agent-based personal assistance software by exposing a high-level user model to users, which brings two main advantages: (i) users are able to understand and verify how the system is modeling them (transparency); and (ii) it empowers users to control and make adjustments to their agents. This paper focuses on describing a domain-neutral user metamodel, which allows instantiating high-level user models with configurations and preferences. In addition, we present a two-level software architecture that supports the development of systems with high-level user models, and a mechanism that keeps this model consistent with the underlying implementation.

Ingrid Nunes, Simone D. J. Barbosa, Carlos J. P. de Lucena
Understanding IT Organizations

Understanding the IT organization is essential for ensuring a successful transformation phase in IT outsourcing deals. We present a model of the IT organization, a methodology for deriving it – based both on ethnography and on data mining – and a suite of tools for representing and visualizing the model and for helping design changes that bring the organization from its current (AS-IS) state to a desired (TO-BE) state, along with tools for comparing models of organizations based on qualities and characteristics that are expected to have a bearing on the success of the IT transformation step.

Claudio Bartolini, Karin Breitman, Simone Diniz Junqueira Barbosa, Mathias Salle, Rita Berardi, Glaucia Melissa Campos, Erik Eidt
On the 2-Categorical View of Proofs

The relationship between logic and Category Theory was initiated by Lambek when viewing deductive systems as free categories with deductions as morphisms, formulas as objects and the cut-rule as composition. MacLane's coherence theorems on monoidal categories showed how equality between morphisms in a category resembles equality between proofs in a system with a kind of cut-elimination theorem. This raised what is nowadays known as categorical logic. Intuitionistic Natural Deduction systems are mapped into suitable categories according to the formulas-as-objects and proofs-as-morphisms notions of interpretation, extended to include functors as the categorical counterpart of the logical connectives. On the proof-theoretical side, Prawitz's reductionistic conjecture plays a main role in defining an identity criterion between logical derivations. Reductions between proofs are worth knowing and representing whenever a deeper understanding of equality is present. From the 1-categorical point of view, morphisms are compared only by means of equations. This brings asymmetries into the proof-theoretical and categorical relationship. In the 70s Seely considered a 2-categorical interpretation as a solution to this problem. This article details Seely's proposal and shows how, even under this broader interpretation, Prawitz-based identity criteria cannot be completely supported. The article also considers the recent use of structural reductions, a kind of global reduction between proofs, as a help for supporting Prawitz-based identity criteria.

Cecilia Englander, Edward Hermann Haeusler

Model Transformation and Analysis for Industrial Scale Validation

WOMM: A Weak Operational Memory Model

Memory models of shared-memory concurrent programs define the values a read of a shared memory location is allowed to see. Such memory models are typically weaker than the intuitive sequential consistency semantics to allow efficient execution. In this paper, we present WOMM (an abbreviation for Weak Operational Memory Model), which formally unifies two sources of weak behavior in hardware memory models: reordering of instructions and weakly consistent memory. We show that a large number of optimizations are allowed by WOMM. We also show that WOMM is weaker than a number of hardware memory models. Consequently, if a program behaves correctly under WOMM, it will be correct with respect to those hardware memory models. Hence, WOMM can be used as a formally specified abstraction of the hardware memory models. Moreover, unlike most weak memory models, WOMM is described using operational semantics, making it easy to integrate into a model checker for concurrent programs. We further show that WOMM has an important property: it has sequential consistency semantics for data-race-free programs.
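The kind of weak behavior such a model must capture can be seen in the standard store-buffering litmus test below; this is a textbook example added for illustration, not code from the paper.

    /* Shared variables; both start at 0. r1 and r2 record the reads. */
    int x = 0, y = 0;
    int r1, r2;

    void thread1(void) { x = 1; r1 = y; }   /* runs concurrently with  */
    void thread2(void) { y = 1; r2 = x; }   /* thread1 on another core */

    /* Under sequential consistency at least one of r1, r2 ends up 1.
       Weak memory models -- and real hardware with store buffers --
       additionally allow r1 == 0 && r2 == 0.  The two threads race on
       x and y, so the "sequential consistency for data-race-free
       programs" guarantee does not apply here; adding synchronization
       would rule the weak outcome out. */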

Arnab De, Abhik Roychoudhury, Deepak D’Souza
A Memory Model for Static Analysis of C Programs

Automatic bug finding with static analysis requires precise tracking of different memory object values. This paper describes a memory modeling method for the static analysis of C programs. It is particularly suitable for precise path-sensitive analyses, e.g., symbolic execution. It can handle almost all kinds of C expressions, including arbitrary levels of pointer dereferences, pointer arithmetic, composite array and struct data types, arbitrary type casts, dynamic memory allocation, etc. It maps aliased lvalue expressions to the identical object without an extra alias analysis. The model has been implemented in the Clang static analyzer and has substantially enhanced the analyzer by giving it precise value-tracking ability.
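To see what "maps aliased lvalue expressions to the identical object" means in practice, consider the small illustrative C fragment below (an editorial example, not taken from the paper or from the Clang implementation):

    #include <stdlib.h>

    struct buf { int len; int data[4]; };

    int main(void)
    {
        struct buf *b = malloc(sizeof *b);   /* a single heap object     */
        int *p = &b->data[0];                /* pointer into that object */

        b->data[1] = 7;                      /* write through one lvalue */
        int v = *(p + 1);                    /* read through an alias    */

        /* b->data[1] and *(p + 1) are syntactically different lvalue
           expressions denoting the same element of the same heap object;
           a precise memory model must map both to the identical region
           so that v is known to be 7, without a separate alias analysis. */
        free(b);
        return v;
    }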

Zhongxing Xu, Ted Kremenek, Jian Zhang
Analysing Message Sequence Graph Specifications

We give a detailed construction of a finite-state transition system for a com-connected Message Sequence Graph. Though this result is well-known in the literature and forms the basis for the solution to several analysis and verification problems concerning MSG specifications, the constructions given in the literature are either not amenable to implementation, or imprecise, or simply incorrect. In contrast we give a detailed construction along with a proof of its correctness. Our transition system is amenable to implementation, and can also be used for a bounded analysis of general (not necessarily com-connected) MSG specifications.

Joy Chakraborty, Deepak D’Souza, K. Narayan Kumar
Optimize Context-Sensitive Andersen-Style Points-To Analysis by Method Summarization and Cycle-Elimination

This paper presents an efficient context-sensitive, field-based Andersen-style points-to analysis algorithm for Java programs. The algorithm first summarizes the methods of the program under analysis using directed graphs. It then performs local cycle elimination on these summary graphs to reduce their sizes. The main analysis algorithm uses these graphs to construct the main points-to graph. Topological sorting and cycle elimination are performed on the nodes of both the main points-to graph and the summary graphs to speed up the transitive closure computation on the main points-to graph. A suite of Java program benchmarks is used to demonstrate the efficiency of our algorithm.

Li Qian, Zhao Jianhua, Li Xuandong
A Formal Analysis of the Web Services Atomic Transaction Protocol with UPPAAL

We present a formal analysis of the Web Services Atomic Transaction (WS-AT) protocol. WS-AT is a part of the WS-Coordination framework and describes an algorithm for reaching agreement on the outcome of a distributed transaction. The protocol is modelled and verified using the model checker Uppaal. Our model is based on an already available formalization using the mathematical language TLA+, where the protocol was verified using the model checker TLC. We discuss the key aspects of these two approaches, including the characteristics of the specification languages, the performances of the tools, and the robustness of the specifications with respect to extensions.

Anders P. Ravn, Jiří Srba, Saleem Vighio
SPARDL: A Requirement Modeling Language for Periodic Control System

This paper develops a requirement modeling language called SPARDL for modeling and analyzing periodic control systems. Such a system consists of periodic behaviors together with a mode transition mechanism for different behavioral patterns, and is widely used in the development of control systems for spacecraft and automobiles. SPARDL can specify features such as periodic driven behaviors, procedure invocations, timed guards, and mode transitions. Each mode in SPARDL can also contain complex activities such as controlling behaviors and data processing. To understand system behaviors precisely, a structural operational semantics is proposed for SPARDL. To analyze periodic control systems in SPARDL, a requirement prototype generation algorithm is proposed to simulate and test the requirements. In addition, a case study is presented to illustrate our approach to requirement modeling and simulation in the development of control systems.

Zheng Wang, Jianwen Li, Yongxin Zhao, Yanxia Qi, Geguang Pu, Jifeng He, Bin Gu
AutoPA: Automatic Prototyping from Requirements

We present AutoPA, a tool to analyze and validate the consistency and functional correctness of use case designs. The tool directly generates an executable prototype from the requirements. The requirements are captured from different views of the application, each constructed as a UML diagram annotated with OCL specifications. Based on a formal semantics, the tool is implemented so that both syntactic and semantic consistency among the provided views can be guaranteed. Afterwards the requirements are analyzed and translated into an executable prototype, allowing the user to interactively validate the functional properties of the requirements model. We illustrate the benefits of the tool using an example of realistic size.

Xiaoshan Li, Zhiming Liu, Martin Schäf, Ling Yin
Systematic Model-Based Safety Assessment Via Probabilistic Model Checking

Safety assessment is a well-established process for assuring the safety and reliability of critical (aeronautical) systems. It uses probabilistic (quantitative) analysis to provide precise measures about the safety requirements of a system. Traditionally, quantitative safety assessment uses fault-tree analysis, but certification authorities also allow the use of Markov models. In this paper we propose a strategy for quantitative safety assessment based on the Prism model-checker. Prism models are extracted systematically from a high-level model via the application of translation rules. We illustrate our strategy with a representative system design from the airborne industry.

Adriano Gomes, Alexandre Mota, Augusto Sampaio, Felipe Ferri, Julio Buzzi

Learning Techniques for Software Verification and Validation

Learning Techniques for Software Verification and Validation – Special Track at ISoLA 2010

Learning techniques are used increasingly to improve software verification and validation activities. For example, automata learning techniques have been used for extracting behavioral models of software systems. Such models can serve as formal documentation of the software and they can be further verified using automated tools or used for model-based test case generation. Automata learning techniques have also been used for automating compositional verification and for building abstractions of software behavior in the context of symbolic or parameterized model checking. Furthermore, various machine-learning techniques have been used in fine-tuning heuristics used in constraint solving, in coming up with new abstraction techniques in the context of bounded model checking or shape analysis, in inferring invariants of parameterized systems, or in classifying data in black box testing.

Dimitra Giannakopoulou, Corina S. Păsăreanu
Comparing Learning Algorithms in Automated Assume-Guarantee Reasoning

We compare two learning algorithms for generating contextual assumptions in automated assume-guarantee reasoning. The CDNF algorithm implicitly represents contextual assumptions by a conjunction of DNF formulae, while the OBDD learning algorithm uses ordered binary decision diagrams as its representation. Using these learning algorithms, the performance of assume-guarantee reasoning is compared with monolithic interpolation-based Model Checking in parametrized hardware test cases.

Yu-Fang Chen, Edmund M. Clarke, Azadeh Farzan, Fei He, Ming-Hsien Tsai, Yih-Kuen Tsay, Bow-Yaw Wang, Lei Zhu
Inferring Compact Models of Communication Protocol Entities

Our overall goal is to support model-based approaches to the verification and validation of communication protocols by techniques that automatically generate models of communication protocol entities from observations of their external behavior, using techniques based on regular inference (aka automata learning). In this paper, we address the problem that existing regular inference techniques produce "flat" state machines, whereas practically useful protocol models structure the internal state in terms of control locations and state variables, and describe dynamic behavior in a suitable (abstract) programming notation. We present a technique for introducing structure into an unstructured finite-state machine by introducing state variables and program-like descriptions of dynamic behavior, given a certain amount of user guidance. Our technique groups states with "similar control behavior" into control locations, and obtains program-like descriptions by means of decision tree generation. We have applied parts of our approach to an executable state machine specification of the Mobile Arts Advanced Mobile Location Center (A-MLC) protocol and evaluated the results by comparing them to the original specification.

Therese Bohlin, Bengt Jonsson, Siavash Soleimanifard
Inference and Abstraction of the Biometric Passport

Model-based testing is a promising software testing technique for the automation of test generation and test execution. One obstacle to its adoption is the difficulty of developing models. Learning techniques provide tools to automatically derive automata-based models. Automation is obtained at the cost of time and of the unreadability of the models. We propose an abstraction technique to reduce the alphabet and large data sets. Our idea is to extract a priori knowledge about the teacher and use this knowledge to define equivalence classes, which are then used to define a new and reduced alphabet. The a priori knowledge can be obtained from informal documentation or requirements. We formally prove the soundness of our approach. We demonstrate the practical feasibility of our technique by learning a model of the new biometric passport. Our automatically learned model is of comparable size and complexity to a previous model manually developed in the context of testing a passport implementation. Our model can be learned within one hour and slightly refines the previous model.

Fides Aarts, Julien Schmaltz, Frits Vaandrager
From ZULU to RERS
Lessons Learned in the ZULU Challenge

This paper summarizes our experience with the ZULU challenge on active learning without equivalence queries, presents our winning solution, investigates the character of ZULU's rating approach, and discusses how this approach can be taken further to establish a framework for the systematic investigation of domain-specific, scalable learning solutions for practically relevant applications. In particular, it discusses the RERS initiative, which provides a community platform together with a learning framework that allows users to interactively compose complex learning solutions on the basis of libraries for various learning components, system connectors, and other auxiliary functionality. This framework will be the backbone for an extended challenge on learning in 2011.

Falk Howar, Bernhard Steffen, Maik Merten
Backmatter
Metadata
Title
Leveraging Applications of Formal Methods, Verification, and Validation
Edited by
Tiziana Margaria
Bernhard Steffen
Copyright year
2010
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-16558-0
Print ISBN
978-3-642-16557-3
DOI
https://doi.org/10.1007/978-3-642-16558-0