
From Software Engineering to Formal Methods and Tools, and Back

Essays Dedicated to Stefania Gnesi on the Occasion of Her 65th Birthday

  • 2019
  • Book

About this Book

This volume was published in honor of Stefania Gnesi’s 65th birthday. The Festschrift contains 32 papers written by close collaborators and friends of Stefania, and was presented to her at a one-day colloquium held in Porto, Portugal, on October 8, 2019.

The Festschrift consists of eight sections, seven of which reflect the main research areas to which Stefania has contributed. Following a survey of Stefania's legacy in research and a homage by her thesis supervisor, these seven sections are ordered according to Stefania's life cycle in research, from software engineering to formal methods and tools, and back: Software Engineering; Formal Methods and Tools; Requirements Engineering; Natural Language Processing; Software Product Lines; Formal Verification; and Applications.

Table of Contents

  1. Frontmatter

  2. The Legacy of Stefania Gnesi

    From Software Engineering to Formal Methods and Tools, and Back Maurice H. ter Beek, Alessandro Fantechi, Laura Semini
    Abstract
    Stefania Gnesi was born in Livorno in 1954. She studied Computer Science at the University of Pisa, where she graduated summa cum laude in 1978.
  3. From Dynamic Programming to Programming Science

    Some Recollections in Honour of Stefania Gnesi Ugo Montanari
    Abstract
    Stefania Gnesi graduated summa cum laude in Scienze dell’Informazione at the University of Pisa in June 1978.
  4. Software Engineering

    1. Frontmatter

    2. Ten Years of Self-adaptive Systems: From Dynamic Ensembles to Collective Adaptive Systems

      Antonio Bucchiarone, Marina Mongiello
      Abstract
      Self-adaptive systems have been introduced to manage situations where software systems operate under continuous perturbations due to the unpredicted behaviors of their clients and the occurrence of exogenous changes in the environment in which they operate. Adaptation is triggered by the run-time occurrence of an extraordinary circumstance, and it is handled by an adaptation process that involves the components affected by the issue and is able to handle the run-time modification of the structure and behavior of a running system. In this paper we report our experience gained in the last 10 years on models, techniques and applications in the field of self-adaptation. We present the various steps taken by means of a formal framework introduced to characterize the different aspects of an ensemble-based software engineering approach. We present (i) how to model dynamic ensembles using typed graph grammars, (ii) how to specialize and re-configure ensembles, and (iii) how to manage collective adaptations in an ensemble. All these aspects have been part of our research on self-adaptation and have been used to specify and deploy concrete solutions in different application domains.
    3. Multi-modelling and Co-simulation in the Engineering of Cyber-Physical Systems: Towards the Digital Twin

      John Fitzgerald, Peter Gorm Larsen, Ken Pierce
      Abstract
      Ensuring the dependability of Cyber-Physical Systems (CPSs) poses challenges for model-based engineering, stemming from the semantic heterogeneity of the models of computational, physical and human processes, and from the range of stakeholders involved. We argue that delivering such dependability requires a marriage of multi-disciplinary models developed during design with models derived from real operational data. Assets developed during design thus become the basis of a learning digital twin, able to support decision making both in redesign and in responsive operation. Starting from an open integrated toolchain leveraging formal models for CPS design, we consider the extension of this concept towards digital twins. A small example inspired by agricultural robotics illustrates some of the opportunities for research and innovation in delivering digital twins that contribute to dependability.
    4. Changing Software in a Changing World: How to Test in Presence of Variability, Adaptation and Evolution?

      Antonia Bertolino, Paola Inverardi
      Abstract
      Modern software-intensive and pervasive systems need to be able to manage different requirements of variability, adaptation and evolution. These are certainly related properties, all bringing uncertainty, but they cover different aspects and require different approaches. Testing of such systems introduces many challenges: variability would require the testing of too many configurations and variants, well beyond feasibility; adaptation should be based on context-aware testing over many predictable or even unpredictable scenarios; evolution would entail testing a system for which the reference model has become out-of-date. It is evident that current testing approaches are not adequate for such types of systems. We make a brief overview of testing challenges for changing software in a changing world, and hint at some promising approaches, arguing how these would need to be part of a holistic validation approach that can handle uncertainty.
    5. Improving Software Engineering Research Through Experimentation Workbenches

      Klaus Schmid, Sascha El-Sharkawy, Christian Kröher
      Abstract
      Experimentation with software prototypes plays a fundamental role in software engineering research. In contrast to many other scientific disciplines, however, explicit support for this key activity in software engineering is relatively limited. While some approaches to improve this situation have been proposed by the software engineering community, experiments are still very difficult and sometimes impossible to replicate.
      In this paper, we propose the concept of an experimentation workbench as a means of explicit support for experimentation in software engineering research. In particular, we discuss core requirements that an experimentation workbench should satisfy in order to qualify as such and to offer a real benefit for researchers. Beyond their core benefits for experimentation, we stipulate that experimentation workbenches will also have benefits in regard to reproducibility and repeatability of software engineering research. Further, we illustrate this concept with a scenario and a case study, and describe relevant challenges as well as our experience with experimentation workbenches.
  5. Formal Methods and Tools

    1. Frontmatter

    2. Innovating Medical Image Analysis via Spatial Logics

      Gina Belmonte, Vincenzo Ciancia, Diego Latella, Mieke Massink
      Abstract
      Current computer-assisted medical imaging for the planning of radiotherapy requires high-level mathematical and computational skills. These are often paired with the case-by-case integration of highly specialised technologies. The lack of modularity at the right level of abstraction in this field hinders research, collaboration and transfer of expertise among medical physicists, engineers and technicians. The longer term aim of the introduction of spatial logics and spatial model checking in medical imaging is to provide an open platform introducing declarative medical image analysis. This will provide domain experts with a convenient and very concise way to specify contouring and segmentation operations, grounded on the solid mathematical foundations of Topological Spatial Logics. We show preliminary results, obtained using the spatial model checker VoxLogicA, for the automatic identification of specific brain tissues in a healthy brain and we discuss a selection of challenges for spatial model checking for medical imaging.
    3. Formal Methods in Designing Critical Cyber-Physical Systems

      Mehrnoosh Askarpour, Carlo Ghezzi, Dino Mandrioli, Matteo Rossi, Christos Tsigkanos
      Abstract
      Cyber-Physical Systems (CPS) are increasingly applied in critical contexts, where they have to support safe and secure operations, often subject to stringent timing requirements. Typical examples are scenarios involving automated living or working spaces in which humans operate, or human-robot collaborations (HRC) in modern manufacturing. Formal methods have been traditionally investigated to support modeling and verification of critical systems. In this paper, we review some of the main new challenges arising in the application of formal methods to modeling and verification of CPS. We do that by presenting two case studies (emergency response in a smart city and a smart manufacturing system), reflecting past work of the authors, from which some general lessons are distilled.
    4. Automata-Based Behavioural Contracts with Action Correlation

      Davide Basile, Rosario Pugliese, Francesco Tiezzi, Pierpaolo Degano, Gian-Luigi Ferrari
      Abstract
      The rigorous design of Service-Oriented Computing (SOC) applications has been identified as one of the primary research challenges for the next 10 years. Many foundational theories for SOC have been defined, but they often rely on mechanisms different from real-world SOC technologies, hindering actual service modelling and verification. In this paper, we propose a novel automata-based formalism of service contracts equipped with a mechanism, inspired by current web service technologies, exploiting correlation data to drive service interactions and with formal foundations enabling reasoning about service correctness.
    5. Logical Support for Bike-Sharing System Design

      Ionuţ Ţuţu, Claudia Elena Chiriţă, Antónia Lopes, José Luiz Fiadeiro
      Abstract
      Automated bicycle-sharing systems (bss) are a prominent example of reconfigurable cyber-physical systems for which the locality and connectivity of their elements are central to the way in which they operate. These features motivate us to study bss from the perspective of Actor-Network Theory – a framework for modelling cyber-physical-system protocols in which systems are networks of actors that are no longer limited to programs but can also include humans and physical artefacts. In order to support logical reasoning about information-flow properties that occur in bss, we use a logical framework that we have recently developed for actor networks, which results from a two-stage hybridization process. The first stage corresponds to a logic that captures the locality and connectivity of actors in a given configuration of the network; the second stage corresponds to a logic of possible interactions between actors, which captures the dynamics of the system in terms of network reconfigurations. To illustrate the properties that can be checked using this framework, we provide an actor-network specification of a particular bss, and use a recently developed tool for hybridized logics to highlight and correct an information-flow vulnerability of the system.
    6. A Generic Dynamic Logic with Applications to Interaction-Based Systems

      Rolf Hennicker, Martin Wirsing
      Abstract
      We propose a generic dynamic logic with the usual diamond and box modalities over structured actions. Instead of using regular expressions of actions our logic is parameterised by the form of the actions which can be given by an arbitrary language for complex, structured actions. In particular, our logic can be instantiated by languages that describe complex interactions between system components. We study two instantiations of our logic for specifying global behaviours of interaction-based systems: one on the basis of global session types and the other one using UML sequence diagrams. Moreover, we show that our proposed generic logic, and hence all its instantiations, satisfy bisimulation invariance and a Hennessy-Milner theorem.
  6. Requirements Engineering

    1. Frontmatter

    2. Ambiguity in Requirements Engineering: Towards a Unifying Framework

      Vincenzo Gervasi, Alessio Ferrari, Didar Zowghi, Paola Spoletini
      Abstract
      A long stream of research in RE has been devoted to analyzing the occurrences and consequences of ambiguity in requirements documents. Ambiguity often occurs in documents, most often in natural language (NL) ones, but occasionally also in formal specifications, be it because of abstraction, or of imprecise designation of which real-world entities are denoted by certain expressions. In many of those studies, ambiguity has been considered a defect to be avoided. In this paper, we investigate the nature of ambiguity, and advocate that the simplistic view of ambiguity as merely a defect in the document does not do justice to the complexity of this phenomenon. We offer a more extensive analysis, based on the multiple linguistic sources of ambiguity, and present a list of real-world cases, both in written matter and in oral interviews, that we analyze based on our framework. We hope that a better understanding of the phenomenon can help in the analysis of practical experiences and in the design of more effective methods to detect, mark and handle ambiguity.
    3. QuARS: A Pioneer Tool for NL Requirement Analysis

      Giuseppe Lami, Mario Fusani, Gianluca Trentanni
      Abstract
      This paper summarizes the achievements of Stefania Gnesi’s research activity in the area of natural language requirements analysis and quality evaluation. The development of the QuARS tool has been the pivotal step of this research stream, led by Stefania Gnesi at ISTI–CNR. A functional description of the QuARS tool is provided, as well as a short report of its evolution over a decade. The wide use of QuARS in several research and industrial contexts demonstrates the validity and originality of Stefania’s contribution in this area of software engineering.
    4. Detecting Feature Interactions in FORML Models

      Sandy Beidu, Joanne M. Atlee
      Abstract
      Requirements engineers must know how features (units of functionality) interact, in order to resolve undesired interactions. Model checking has been proposed as an effective method for detecting feature interactions. We propose a method for (1) modelling features as distinct modules (explicating intended interactions with other features), (2) composing feature modules into a system model that preserves intended interactions, (3) translating this rich model into the input language of a model checker, and (4) automatically generating correctness properties whose violations reveal unintended feature interactions.
  7. Natural Language Processing

    1. Frontmatter

    2. Comparing Results of Natural Language Disambiguation Tools with Reports of Manual Reviews of Safety-Related Standards

      Isabella Biscoglio, Attilio Ciancabilla, Mario Fusani, Giuseppe Lami, Gianluca Trentanni
      Abstract
      Methods and tools for detecting and measuring ambiguity in texts have been proposed for years, yet their efficacy is still under study for improvement, encouraged by results in various application fields (requirements, legal documents, interviews, ...). The paper presents a recently started process aimed at validating such methods and tools by applying some of them to a semi-structured data corpus. This corpus represents the results of manual reviews, done by international experts, along with their source texts. The purpose is to check how consistent the results of automated analysis are with the reviewers’ reports. The application domain is that of safety-related system/software standards in the railway sector. Thus, if we increase confidence in the tools, then we also increase confidence in standard correctness, which in turn impacts on conforming products. Care is taken to use, for scientific purposes only, sensitive, unpublished source data (the comments, protected by NDAs), which are kept reviewer-anonymous before statistical results are produced, while the standards are publicly available texts. The results will also be used to improve the tools themselves, even if much elaboration is still to be carried out: the research is still at its beginning, so a metric for tool evaluation remains a goal, whose characteristics are only sketched and discussed in the paper.
    3. Looking Inside the Black Box: Core Semantics Towards Accountability of Artificial Intelligence

      Roberto Garigliano, Luisa Mich
      Abstract
      Recent advances in artificial intelligence raise a number of concerns. Among the challenges to be addressed by researchers, accountability of artificial intelligence solutions is one of the most critical. This paper focuses on artificial intelligence applications using natural language to investigate whether the core semantics defined for a large-scale natural language processing system could assist in addressing accountability issues. Core semantics aims to obtain a full interpretation of the content of natural language texts, representing both implicit and explicit knowledge, using only ‘subj-action-(obj)’ structures and causal, temporal, spatial and personal-world links. The first part of the paper offers a summary of the difficulties to be addressed and of the reasons why representing the meaning of a natural language text is relevant for artificial intelligence accountability. In the second part, a proof of concept for the application of such a knowledge representation to support accountability is presented, together with a detailed example of the analysis obtained with a prototype system named CoreSystem. While only preliminary, these results give some new insights and indicate that the provided knowledge representation can be used to support accountability, looking inside the box.
    4. QuOD: An NLP Tool to Improve the Quality of Business Process Descriptions

      Alessio Ferrari, Giorgio O. Spagnolo, Antonella Fiscella, Guido Parente
      Abstract
      [Context and Motivation] In real-world organisations, business processes (BPs) are often described by means of natural language (NL) documents. Indeed, although semi-formal graphical notations exist to model BPs, most of the legacy process knowledge—when not tacit—is still conveyed through textual procedures or operational manuals, in which the BPs are specified. This is particularly true for public administrations (PAs), in which a large variety of BPs exist (e.g., definition of tenders, front-desk support) that have to be understood and put into practice by civil servants. [Question/problem] Incorrect understanding of the BP descriptions in PAs may cause delays in the delivery of services to citizens, or, in some cases, incorrect execution of the BPs. [Principal idea/results] In this paper, we present the development of an NLP-based tool named QuOD (Quality Analyser for Official Documents), oriented to detect linguistic defects in BP descriptions and to provide recommendations for improvements. [Contribution] QuOD is the first tool that addresses the problem of identifying NL defects in BP descriptions of PAs. The tool is available online at http://narwhal.it/quod/index.html.
Title
From Software Engineering to Formal Methods and Tools, and Back
Edited by
Maurice H. ter Beek
Alessandro Fantechi
Laura Semini
Copyright Year
2019
Electronic ISBN
978-3-030-30985-5
Print ISBN
978-3-030-30984-8
DOI
https://doi.org/10.1007/978-3-030-30985-5

