
2017 | Book

Software Engineering for Self-Adaptive Systems III. Assurances

International Seminar, Dagstuhl Castle, Germany, December 15-19, 2013, Revised Selected and Invited Papers


About this book

A major challenge for modern software systems is to become more cost-effective, while being versatile, flexible, resilient, energy-efficient, customizable, and configurable when reacting to run-time changes that may occur within the system itself, its environment or requirements. One of the most promising approaches to achieving such properties is to equip the software system with self-adaptation capabilities. Despite recent advances in this area, one key aspect that remains to be tackled in depth is the provision of assurances.

Originating from a Dagstuhl seminar held in December 2013, this book constitutes the third volume in the series “Software Engineering for Self-Adaptive Systems”, and looks specifically into the provision of assurances. Opening with an overview chapter on Research Challenges, the book presents 13 further chapters written and carefully reviewed by internationally leading researchers in the field. The book is divided into topical sections on research challenges, evaluation, integration and coordination, and reference architectures and platforms.

Table of Contents

Frontmatter

Research Challenges

Software Engineering for Self-Adaptive Systems: Research Challenges in the Provision of Assurances
Abstract
An important concern for modern software systems is to become more cost-effective, while being versatile, flexible, resilient, dependable, energy-efficient, customisable, configurable and self-optimising when reacting to run-time changes that may occur within the system itself, its environment or its requirements. One of the most promising approaches to achieving such properties is to equip software systems with self-managing capabilities using self-adaptation mechanisms. Despite recent advances in this area, one key aspect of self-adaptive systems that remains to be tackled in depth is the provision of assurances, i.e., the collection, analysis and synthesis of evidence that the system satisfies its stated functional and non-functional requirements during its operation in the presence of self-adaptation. The provision of assurances for self-adaptive systems is challenging since run-time changes introduce a high degree of uncertainty. This paper on research challenges complements previous roadmap papers on software engineering for self-adaptive systems, covering a different set of topics related to assurances, namely perpetual assurances, composition and decomposition of assurances, and assurances obtained from control theory. This research challenges paper is one of the many results of Dagstuhl Seminar 13511 on Software Engineering for Self-Adaptive Systems: Assurances, which took place in December 2013.
Rogério de Lemos, David Garlan, Carlo Ghezzi, Holger Giese, Jesper Andersson, Marin Litoiu, Bradley Schmerl, Danny Weyns, Luciano Baresi, Nelly Bencomo, Yuriy Brun, Javier Camara, Radu Calinescu, Myra B. Cohen, Alessandra Gorla, Vincenzo Grassi, Lars Grunske, Paola Inverardi, Jean-Marc Jezequel, Sam Malek, Raffaela Mirandola, Marco Mori, Hausi A. Müller, Romain Rouvoy, Cecília M. F. Rubira, Eric Rutten, Mary Shaw, Giordano Tamburrelli, Gabriel Tamura, Norha M. Villegas, Thomas Vogel, Franco Zambonelli
Perpetual Assurances for Self-Adaptive Systems
Abstract
Providing assurances for self-adaptive systems is challenging. A primary underlying problem is uncertainty that may stem from a variety of different sources, ranging from incomplete knowledge to sensor noise and uncertain behavior of humans in the loop. Providing assurances that the self-adaptive system complies with its requirements calls for an enduring process spanning the whole lifetime of the system. In this process, humans and the system jointly derive and integrate new evidence and arguments, which we coined perpetual assurances for self-adaptive systems. In this paper, we provide a background framework and the foundation for perpetual assurances for self-adaptive systems. We elaborate on the concrete challenges of offering perpetual assurances, requirements for solutions, realization techniques and mechanisms to make solutions suitable. We also present benchmark criteria to compare solutions. We then present a concrete exemplar that researchers can use to assess and compare approaches for perpetual assurances for self-adaptation.
Danny Weyns, Nelly Bencomo, Radu Calinescu, Javier Camara, Carlo Ghezzi, Vincenzo Grassi, Lars Grunske, Paola Inverardi, Jean-Marc Jezequel, Sam Malek, Raffaela Mirandola, Marco Mori, Giordano Tamburrelli
Challenges in Composing and Decomposing Assurances for Self-Adaptive Systems
Abstract
Self-adaptive software systems adapt to changes in the environment, in the system itself, in their requirements, or in their business objectives. Typically, these systems attempt to maintain system goals at run time and often provide assurance that they will meet their goals under dynamic and uncertain circumstances. While significant research has focused on ways to engineer self-adaptive capabilities into both new and legacy software systems, less work has been conducted on how to assure that self-adaptation maintains system goals. For traditional, especially safety-critical software systems, assurance techniques decompose assurances into sub-goals and evidence that can be provided by parts of the system. Approaches also exist for composing assurances, in terms of composing multiple goals and composing assurances in systems of systems. While some of these techniques may be applied to self-adaptive systems, we argue in this chapter that several significant challenges remain in applying them. We discuss how existing assurance techniques can be applied to composing and decomposing assurances for self-adaptive systems, highlight the challenges in applying them, summarize existing research that addresses some of these challenges, and identify gaps and opportunities to be addressed by future research.
Bradley Schmerl, Jesper Andersson, Thomas Vogel, Myra B. Cohen, Cecilia M. F. Rubira, Yuriy Brun, Alessandra Gorla, Franco Zambonelli, Luciano Baresi
What Can Control Theory Teach Us About Assurances in Self-Adaptive Software Systems?
Abstract
Self-adaptive software (SAS) systems monitor their own behavior and autonomously make dynamic adjustments to maintain desired properties in response to changes in the systems’ operational contexts. Control theory provides verifiable feedback models to realize this kind of autonomous control for a broad class of systems for which precise quantitative or logical discrete models can be defined. Recent MAPE-K models, along with variants such as the hierarchical ACRA, address a broader range of tasks. However, they do not provide the inherent assurance mechanisms that control theory does, as they do not explicitly identify and establish the properties that reliable controllers should have. These properties, in general, result not from the abstract models, but from the specifics of control strategies, which are precisely what these models fail to analyze. We show that, even for systems too complex for direct application of classical control theory, the abstractions of control theory provide design guidance that identifies important control characteristics and raises critical design issues about the details of the strategy that determine the controllability of the resulting systems. This in turn enables careful reasoning about whether the control characteristics are in fact achieved. In this chapter we examine the control theory approach, explain several control strategies illustrated with examples from both domains, classical control theory and SAS, and show how the issues addressed by these strategies can and should be seriously considered for the assurance of self-adaptive software systems. From this examination we distill challenges for developing principles that may serve as the basis of a control theory for the assurance of self-adaptive software systems.
Marin Litoiu, Mary Shaw, Gabriel Tamura, Norha M. Villegas, Hausi A. Müller, Holger Giese, Romain Rouvoy, Eric Rutten
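The design guidance the chapter draws from control theory can be made concrete with a minimal sketch (not taken from the chapter; the plant model, gains, and setpoint are all illustrative assumptions): a discrete-time PI controller steering a toy first-order system toward a target value, the kind of verifiable feedback structure whose stability and accuracy properties classical control theory analyzes.

```python
# Illustrative sketch (hypothetical plant and gains): a discrete-time PI
# controller regulating a toy system toward a setpoint, as classical control
# theory would analyze for stability and steady-state accuracy.

def make_pi_controller(kp, ki, setpoint):
    """Return a stateful PI controller: each call maps a measurement to an actuation."""
    state = {"integral": 0.0}

    def control(measurement):
        error = setpoint - measurement
        state["integral"] += error  # integral action removes steady-state error
        return kp * error + ki * state["integral"]

    return control

def simulate(steps=200):
    """Toy plant: output drifts toward the actuation signal (first-order response)."""
    controller = make_pi_controller(kp=0.5, ki=0.1, setpoint=10.0)
    output = 0.0
    for _ in range(steps):
        u = controller(output)
        output += 0.3 * (u - output)  # plant dynamics
    return output

print(round(simulate(), 2))  # → 10.0 (settles at the setpoint)
```

For these gains the closed loop is stable (both eigenvalues of the combined system lie inside the unit circle), which is exactly the kind of property the chapter argues should be reasoned about explicitly for self-adaptive software.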

Evaluation

MCaaS: Model Checking in the Cloud for Assurances of Adaptive Systems
Abstract
Due to the uncertainty of what actual adaptations will be performed at run time, verifying adaptive systems at design time may lead to limited results or may even be infeasible. Run-time verification techniques have been proposed to cope with this uncertainty. Recently, there has been an increasing interest in using model checking (an important verification technique) at run time in order to verify the expected properties of adaptive systems. Given a system specification and expected system properties, a model checker determines whether or not the specification satisfies its properties in the presence of self-adaptation. One key concern is the generally high resource needs of model checking, which may prohibit its use on resource- and power-constrained devices, such as smartphones or Internet-of-Things devices. To address this challenge, we introduce a cloud-based framework that delivers model checking as a service (MCaaS). MCaaS offloads computationally intensive model checking tasks to the cloud, thereby offering verification capabilities on demand. Adaptive systems running on any kind of connected device may take advantage of model checking at run time by invoking the MCaaS service. To dynamically allocate the required cloud resources (CPU and memory), we employ machine learning to estimate the resource usage of an actual model checking task at run time. As proof of concept, we implement and validate the approach for the case of probabilistic model checking, which facilitates verifying typical properties such as reliability.
Amir Molzam Sharifloo, Andreas Metzger
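The resource-estimation step that MCaaS performs before allocating cloud resources can be sketched in miniature (the training data, feature, and function names below are hypothetical, and MCaaS uses its own learning pipeline, not necessarily this one): fit past (model size, peak memory) measurements and predict the needs of a new task.

```python
# Illustrative sketch (hypothetical data and names): estimating the resource
# needs of a model checking task from past runs, in the spirit of the machine
# learning step MCaaS performs before allocating cloud resources.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Hypothetical history: (model size in states, peak memory in MB) of past tasks.
history = [(1_000, 120), (5_000, 310), (10_000, 560), (20_000, 1050)]
a, b = fit_line([s for s, _ in history], [m for _, m in history])

def estimate_memory_mb(model_states):
    """Predict peak memory for a new task; a cloud tier is chosen from this."""
    return a + b * model_states

print(round(estimate_memory_mb(15_000)))  # → 804
```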
Analyzing Self-Adaptation Via Model Checking of Stochastic Games
Abstract
Design decisions made during early development stages of self-adaptive systems tend to have a significant impact upon system properties at run time (e.g., safety, QoS). However, understanding the implications of these decisions a priori is difficult due to the different types and degrees of uncertainty that affect such systems (e.g., simplifying assumptions, human-in-the-loop). To provide some assurances about self-adaptive system designs, evidence can be gathered from activities such as simulations and prototyping, but these demand a significant effort and do not provide a systematic way of dealing with uncertainty. In this chapter, we describe an approach based on model checking of stochastic multiplayer games (SMGs) that enables developers to approximate the behavioral envelope of a self-adaptive system by analyzing best- and worst-case scenarios of alternative designs for self-adaptation mechanisms. Compared to other sources of evidence, such as simulations or prototypes, our approach is purely declarative and hence has the potential of providing developers with a preliminary understanding of adaptation behavior with less effort, and without the need to have any specific adaptation algorithms or infrastructure in place. We illustrate our approach by showing how it can be used to mitigate different types of uncertainty in contexts such as self-protecting systems, proactive latency-aware adaptation, and human-in-the-loop adaptation.
Javier Cámara, David Garlan, Gabriel A. Moreno, Bradley Schmerl
An Approach for Isolated Testing of Self-Organization Algorithms
Abstract
We provide a systematic approach for testing self-organization (SO) algorithms. The main challenges for such a testing domain are the strongly ramified state space, the possible error masking, the interleaving of mechanisms, and the oracle problem resulting from the main characteristics of SO algorithms: their inherent non-deterministic behavior on the one hand, and their dynamic environment on the other. A key to success for our SO algorithm testing framework is automation, since it is rarely possible to cope with the ramified state space manually. The test automation is based on a model-based testing approach where probabilistic environment profiles are used to derive test cases that are performed and evaluated on isolated SO algorithms. Besides isolation, we are able to achieve representative test results with respect to a specific application. For illustration purposes, we apply the concepts of our framework to partitioning-based SO algorithms and provide an evaluation in the context of an existing smart-grid application.
Benedikt Eberhardinger, Gerrit Anders, Hella Seebach, Florian Siefert, Alexander Knapp, Wolfgang Reif
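The framework's combination of probabilistic environment profiles, isolated execution, and an automated oracle can be sketched as follows (the profile, algorithm, and oracle below are all hypothetical stand-ins, far simpler than the smart-grid setting the chapter evaluates): seeded traces are sampled from a profile, fed to the algorithm under test in isolation, and checked against an oracle property.

```python
# Illustrative sketch (hypothetical algorithm and oracle): model-based test
# generation from a probabilistic environment profile, run against an isolated
# SO algorithm with an automated oracle.

import random

def environment_profile(rng, length=20):
    """Sample an environment trace: per-step load drawn from a skewed profile."""
    return [rng.choices([1, 5, 10], weights=[0.6, 0.3, 0.1])[0] for _ in range(length)]

def so_partition(loads, capacity=12):
    """Algorithm under test: greedily partition loads so no bin exceeds capacity."""
    bins = [[]]
    for load in loads:
        if sum(bins[-1]) + load > capacity:
            bins.append([])
        bins[-1].append(load)
    return bins

def oracle_holds(bins, capacity=12):
    """Test oracle: every partition respects the capacity bound."""
    return all(sum(b) <= capacity for b in bins)

rng = random.Random(42)  # seeded, so generated test cases are reproducible
failures = sum(not oracle_holds(so_partition(environment_profile(rng)))
               for _ in range(100))
print(failures)  # → 0: the oracle holds on all generated traces
```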
Using Runtime Quantitative Verification to Provide Assurance Evidence for Self-Adaptive Software
Advances, Applications and Research Challenges
Abstract
Providing assurance that self-adaptive software meets its dependability, performance and other quality-of-service (QoS) requirements is a great challenge. Recent approaches to addressing it use formal methods at runtime, to drive the reconfiguration of self-adaptive software in provably correct ways. One approach that shows promise is runtime quantitative verification (RQV), which uses quantitative model checking to reverify the QoS properties of self-adaptive software after environmental, requirement and system changes. This reverification identifies QoS requirement violations and supports the dynamic reconfiguration of the software for recovery from such violations. More importantly, it provides irrefutable assurance evidence that adaptation decisions are correct. In this paper, we survey recent advances in the development of efficient RQV techniques, the application of these techniques within multiple domains and the remaining research challenges.
Radu Calinescu, Simos Gerasimou, Kenneth Johnson, Colin Paterson
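The RQV cycle can be shown in miniature (a deliberately tiny model, not one of the paper's case studies; the retry chain, threshold, and names are illustrative assumptions): whenever monitoring updates a model parameter, the quantitative property is re-verified against the QoS requirement, and a violation triggers reconfiguration.

```python
# Illustrative sketch (toy model): runtime quantitative verification re-checks
# a reliability property of a small stochastic model whenever monitoring
# updates a transition probability.

def prob_reach_success(p_fail, retries=3):
    """P(request eventually succeeds) with at most `retries` attempts,
    each failing independently with probability p_fail."""
    return 1.0 - p_fail ** retries

REQUIREMENT = 0.999  # QoS requirement: reliability at least 99.9%

for observed_p_fail in (0.05, 0.2):  # monitored failure rates at two instants
    reliability = prob_reach_success(observed_p_fail)
    verdict = "ok" if reliability >= REQUIREMENT else "VIOLATION -> reconfigure"
    print(f"p_fail={observed_p_fail}: reliability={reliability:.6f} {verdict}")
```

In a real RQV deployment the re-verification is done by a quantitative model checker over a full probabilistic model; the point here is only the loop: monitor, update the model, re-verify, and reconfigure on violation.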

Integration and Coordination

Contracts-Based Control Integration into Software Systems
Abstract
Among the different techniques that are used to design self-adaptive software systems, control theory allows one to design an adaptation policy whose properties, such as stability and accuracy, can be formally guaranteed under certain assumptions. However, in the case of software systems, the integration of these controllers to build complete feedback control loops remains manual. More importantly, it requires an extensive handcrafting of non-trivial implementation code. This may lead to inconsistencies and instabilities as no systematic and automated assurance can be obtained on the fact that the initial assumptions for the designed controller still hold in the resulting system.
In this chapter, we rely on the principles of design-by-contract to ensure the correctness and robustness of a self-adaptive software system built using feedback control loops. Our solution raises the level of abstraction upon which the loops are specified by allowing one to define and automatically verify system-level properties organized in contracts. They cover behavioral, structural and temporal architectural constraints as well as explicit interaction. These contracts are complemented by a first-class support for systematic fault handling. As a result, assumptions about the system operation conditions become more explicit and verifiable in a systematic way.
Filip Křikava, Philippe Collet, Romain Rouvoy, Lionel Seinturier
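The contract idea can be sketched in a few lines (a minimal illustration of design-by-contract around a control loop, not the chapter's contract language; the controller, bounds, and fallback are invented): a guard checks the controller's assumptions on input and its guarantees on output, and falls back to explicit fault handling on violation instead of letting a broken assumption corrupt the loop silently.

```python
# Illustrative sketch (hypothetical contract): a design-by-contract guard
# around a controller step, so violated assumptions are detected and handled
# systematically rather than silently propagated.

def with_contract(controller, pre, post, fallback):
    """Wrap a controller step: check the precondition on its input and the
    postcondition on its output; fall back to a safe action on violation."""
    def guarded(measurement):
        if not pre(measurement):
            return fallback
        action = controller(measurement)
        return action if post(action) else fallback
    return guarded

raw = lambda m: 2.0 * (50.0 - m)            # naive proportional law
safe = with_contract(
    raw,
    pre=lambda m: 0.0 <= m <= 100.0,        # assumption on sensor readings
    post=lambda u: 0.0 <= u <= 100.0,       # guarantee on actuation range
    fallback=0.0,                           # explicit fault handling
)
print(safe(30.0), safe(90.0), safe(-5.0))   # → 40.0 0.0 0.0
```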
Synthesis of Distributed and Adaptable Coordinators to Enable Choreography Evolution
Abstract
Software systems are often built by composing together software services distributed over the Internet. Choreographies are a form of decentralized composition that models the external interaction of the participant services by specifying peer-to-peer message exchanges from a global perspective. Nowadays, very few approaches address the problem of actually realizing choreographies in an automatic way. Most current approaches are rather static and are poorly suited to the needs of the Future Internet. In this chapter, we propose a method for the automatic synthesis of evolving choreographies. Coordination software entities are synthesized in order to proxify and control the participant services’ interaction. When interposed among the services, coordination entities enforce the collaboration specified by the choreography. The ability to evolve the coordination logic in a modular way enables choreography evolution in response to possible changes. We illustrate our method at work on a running example in the domain of Intelligent Transportation Systems (ITS).
Marco Autili, Paola Inverardi, Alexander Perucci, Massimo Tivoli
Models for the Consistent Interaction of Adaptations in Self-Adaptive Systems
Abstract
Self-adaptive systems enable the run-time modification, or dynamic adaptation, of a software system in order to offer the most appropriate behavior of the system according to its context of execution and the situations of its surrounding environment. Depending on the situations currently at hand, multiple and varied adaptations may affect the original behavior of a software system simultaneously. This may lead to accidental behavioral inconsistencies if not all possible interactions with other adaptations were anticipated. The behavioral inconsistencies problem becomes even more acute if adaptations are unknown beforehand, for example, when new adaptations are incorporated into the system on the fly. Self-adaptive systems must therefore provide a means to arbitrate interactions between adaptations at run time, to ensure that there will be no inconsistencies in the system’s behavior as adaptations are dynamically composed into or withdrawn from the system. This chapter presents existing approaches that allow the development of self-adaptive systems and management of the behavioral inconsistencies that may appear due to the interaction of adaptations at run time. The approaches are classified into four categories: formal, architectural modeling, rule-based, and transition system approaches. Each of these approaches is evaluated with respect to the assurances they provide for the run-time consistency of the system, in the light of dynamic behavior adaptations.
Nicolás Cardozo, Kim Mens, Siobhán Clarke
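A minimal form of the arbitration problem can be sketched as follows (a toy check, far weaker than the formal and transition-system approaches the chapter surveys; the adaptation names and parameters are invented): if each dynamically composed adaptation declares what it writes, any parameter claimed by more than one active adaptation is a candidate for a behavioral conflict that must be arbitrated.

```python
# Illustrative sketch (hypothetical adaptations): flagging potential behavioral
# conflicts when dynamically composed adaptations touch the same parameter.

def find_conflicts(adaptations):
    """Each adaptation declares the parameters it writes; any parameter
    claimed by more than one active adaptation is a potential conflict."""
    writers = {}
    for name, params in adaptations.items():
        for p in params:
            writers.setdefault(p, []).append(name)
    return {p: names for p, names in writers.items() if len(names) > 1}

active = {
    "low_battery_mode": {"cpu_freq", "screen_brightness"},
    "performance_boost": {"cpu_freq"},
    "night_mode": {"screen_color"},
}
print(find_conflicts(active))  # cpu_freq is written by two adaptations
```

Real approaches go well beyond this write-set check, reasoning about the composed behavior itself; the sketch only locates where arbitration is needed.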
Feedback Control as MAPE-K Loop in Autonomic Computing
Abstract
Computing systems are becoming more and more dynamically reconfigurable or adaptive, both to be flexible with respect to their environment and to automate their administration. Autonomic computing proposes a general feedback-loop structure to take this into account. In this paper, we are particularly interested in approaches where this feedback loop is considered as a case of a control loop, where techniques stemming from control theory can be used to design efficient, safe, and predictable controllers. This approach is emerging, with separate and dispersed efforts, in different areas of the field of reconfigurable or adaptive computing, at the software or architecture level. This paper surveys these approaches from the point of view of control theory techniques, continuous and discrete (supervisory), in their application to the feedback control of computing systems, and proposes detailed interpretations of feedback control loops as MAPE-K loops, illustrated with case studies.
Eric Rutten, Nicolas Marchand, Daniel Simon
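The MAPE-K structure the paper interprets in control-theoretic terms can be written out as plain code (a deliberately tiny sketch with an invented system: a server pool scaled against observed load; real MAPE-K deployments are far richer): Monitor senses, Analyze diagnoses, Plan chooses an adaptation, Execute actuates, and all four share a Knowledge base.

```python
# Illustrative sketch (hypothetical system): the MAPE-K loop as plain code --
# Monitor, Analyze, Plan, Execute over shared Knowledge -- scaling a toy
# server pool against observed load.

knowledge = {"servers": 1, "target_per_server": 10}

def monitor(load):                       # M: sense the environment
    return {"load": load, "servers": knowledge["servers"]}

def analyze(symptoms):                   # A: detect requirement violations
    per_server = symptoms["load"] / symptoms["servers"]
    if per_server > knowledge["target_per_server"]:
        return "overloaded"
    if per_server < knowledge["target_per_server"] / 2 and symptoms["servers"] > 1:
        return "underloaded"
    return None

def plan(diagnosis):                     # P: choose an adaptation
    return {"overloaded": +1, "underloaded": -1}.get(diagnosis, 0)

def execute(delta):                      # E: actuate, updating the knowledge base
    knowledge["servers"] += delta

for load in [8, 25, 25, 25, 4, 4]:      # one MAPE iteration per observation
    execute(plan(analyze(monitor(load))))
print(knowledge["servers"])              # → 1: scaled up under load, back down after
```

Viewed through control theory, this is a feedback loop with a setpoint (target load per server) and a quantized actuator, which is exactly the reading the paper develops in detail.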

Reference Architectures and Platforms

An Extended Description of MORPH: A Reference Architecture for Configuration and Behaviour Self-Adaptation
Abstract
An architectural approach to self-adaptive systems involves runtime change of system configuration (i.e., the system’s components, their bindings and operational parameters) and behaviour update (i.e., component orchestration). The architecture should allow for both configuration and behaviour changes selected from pre-computed change strategies and for change strategies synthesised at run time to satisfy changes in the environment, changes in the specified goals of the system, or failures or degradation in quality attributes, such as performance, of the system itself. Although controlling configuration and behaviour at runtime has been discussed and applied to architectural adaptation, architectures for self-adaptive systems often compound these two aspects, reducing the potential for adaptability. In this work we provide an extended description of our proposal for a reference architecture that allows for coordinated yet transparent and independent adaptation of system configuration and behaviour.
Victor Braberman, Nicolas D’Ippolito, Jeff Kramer, Daniel Sykes, Sebastian Uchitel
MOSES: A Platform for Experimenting with QoS-Driven Self-Adaptation Policies for Service Oriented Systems
Abstract
Architecting software systems according to the service-oriented paradigm and designing runtime self-adaptable systems are two relevant research areas in today’s software engineering. In this chapter we present MOSES, a software platform supporting QoS-driven adaptation of service-oriented systems. It has been conceived for service-oriented systems architected as composite services that receive requests generated by different classes of users. MOSES integrates different adaptation mechanisms within a unified framework. In this way it achieves a greater flexibility in facing various operating environments and the possibly conflicting QoS requirements of several concurrent users. Besides providing its own self-adaptation functionalities, MOSES lends itself to the experimentation of alternative approaches to QoS-driven adaptation of service-oriented systems thanks to its modular architecture.
Valeria Cardellini, Emiliano Casalicchio, Vincenzo Grassi, Stefano Iannucci, Francesco Lo Presti, Raffaela Mirandola
Backmatter
Metadata
Title
Software Engineering for Self-Adaptive Systems III. Assurances
Edited by
Rogério de Lemos
David Garlan
Carlo Ghezzi
Holger Giese
Copyright Year
2017
Electronic ISBN
978-3-319-74183-3
Print ISBN
978-3-319-74182-6
DOI
https://doi.org/10.1007/978-3-319-74183-3