
2018 | Book

Software Technologies: Applications and Foundations

STAF 2018 Collocated Workshops, Toulouse, France, June 25-29, 2018, Revised Selected Papers


About this book

This book contains the thoroughly refereed technical papers presented in eight workshops collocated with the International Conference on Software Technologies: Applications and Foundations, STAF 2018, held in Toulouse, France, in June 2018.

The 65 full papers presented were carefully reviewed and selected from 120 submissions.

The events whose papers are included in this volume are:

CoSim-CPS 2018: 2nd International Workshop on Formal Co-Simulation of Cyber-Physical Systems

DataMod 2018: 7th International Symposium From Data to Models and Back

FMIS 2018: 7th International Workshop on Formal Methods for Interactive Systems

FOCLASA 2018: 16th International Workshop on Foundations of Coordination Languages and Self-Adaptive Systems

GCM 2018: 9th International Workshop on Graph Computation Models

MDE@DeRun 2018: 1st International Workshop on Model-Driven Engineering for Design-Runtime Interaction in Complex Systems

MSE 2018: 3rd International Workshop on Microservices: Science and Engineering

SecureMDE 2018: 1st International Workshop on Security for and by Model-Driven Engineering

Table of Contents

Frontmatter

Formal Co-Simulation of Cyber-Physical Systems (CoSim-CPS)

Frontmatter
Towards the Verification of Hybrid Co-simulation Algorithms

Engineering modern systems is becoming increasingly difficult due to the heterogeneity between different subsystems. Modelling and simulation techniques have traditionally been used to tackle complexity, but with increasing heterogeneity of the subsystems, it becomes impossible to find appropriate modelling languages and tools to specify and analyse the system as a whole. Co-simulation is a technique to combine multiple models and their simulators in order to analyse the behaviour of the whole system over time. Past research, however, has shown that the naïve combination of simulators can easily lead to incorrect simulation results, especially when co-simulating hybrid systems. This paper shows: (i) how co-simulation of a family of hybrid systems can fail to reproduce the order of events that should have occurred (event ordering); (ii) how to prove that a co-simulation algorithm is correct (w.r.t. event ordering), and if it is incorrect, how to obtain a counterexample; and (iii) how to correct an incorrect co-simulation algorithm. We apply the above method to two well-known co-simulation algorithms used with the FMI Standard, and we show that one of them is incorrect for the family of hybrid systems under study, under the restrictions of the standard. The conclusion is that either the standard needs to be revised, or one of the algorithms should be avoided.

Casper Thule, Cláudio Gomes, Julien Deantoni, Peter Gorm Larsen, Jörg Brauer, Hans Vangheluwe
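As an illustration of why naïvely combining simulators can break event ordering (this sketch is ours, not the algorithm analysed in the paper; the "FMU" callables are invented stand-ins, not the real FMI API), consider a fixed-step Jacobi-style master that exchanges data only at communication points:

```python
def make_fmu(event_time):
    """A stand-in 'FMU': fires one internal event at event_time."""
    fired = {"done": False}
    def do_step(t, h):
        if not fired["done"] and t <= event_time < t + h:
            fired["done"] = True
            return ["event"]
        return []
    return do_step

def jacobi_master(fmus, t_end, h):
    """Naive fixed-step Jacobi master: every simulator advances by h,
    and internal events become visible only at the next communication point."""
    t, trace = 0.0, []
    while t < t_end:
        for name, do_step in fmus.items():
            for ev in do_step(t, h):
                trace.append((t + h, name, ev))
        t += h
    return trace

# B's event (at 0.3) causally precedes A's (at 0.7), yet both surface
# at the same communication point t = 1.0, with A listed first:
trace = jacobi_master({"A": make_fmu(0.7), "B": make_fmu(0.3)}, t_end=1.0, h=1.0)
```

With a communication step larger than the gap between events, the trace records both events at the same timestamp and the causal order is irrecoverable, which is precisely the kind of failure the paper's verification method targets.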
A Flexible Framework for FMI-Based Co-Simulation of Human-Centred Cyber-Physical Systems

This paper presents our on-going work on developing a flexible framework for formal co-simulation of human-centred cyber-physical systems. The framework builds on and extends an existing prototyping toolkit, adding novel functionalities for automatic generation of user interface prototypes equipped with a standard FMI-2 co-simulation interface. The framework is developed in JavaScript, and uses a flexible templating mechanism for converting stand-alone device prototypes into Functional Mockup Units (FMUs) capable of exchanging commands and data with any FMI-compliant co-simulation engine. Two concrete examples are presented to demonstrate the capabilities of the framework.

Maurizio Palmieri, Cinzia Bernardeschi, Paolo Masci
Towards Stochastic FMI Co-Simulations: Implementation of an FMU for a Stochastic Activity Networks Simulator

The advantage of co-simulation with respect to traditional single-paradigm simulation lies mainly in the modeling flexibility it affords in composing large models out of submodels, each expressed in the most appropriate formalism. One aspect of this flexibility is the modularity of the co-simulation framework, which allows developers to replace each sub-model with a new version, possibly based on a different formalism or a different simulator, without changing the rest of the co-simulation. This paper reports on the replacement of a sub-model in a co-simulation built on the INTO-CPS framework. Namely, an existing co-simulation of a water tank, available in the INTO-CPS distribution, has been modified by replacing the tank sub-model with a sub-model built as a Stochastic Activity Network simulated on Möbius, a tool used to perform statistical analyses of systems with stochastic behavior. This work discusses aspects of this redesign, including the necessary modifications to the Möbius sub-model. In this still preliminary work, the Stochastic Activity Network features related to stochastic models have not been used, but a simple deterministic model has proved useful in indicating an approach to the integration of Stochastic Activity Networks into a co-simulation framework.

Cinzia Bernardeschi, Andrea Domenici, Maurizio Palmieri
Demo: Stabilization Technique in INTO-CPS

Despite the large number of applications and growing interest in the challenges that co-simulation poses, the field is fragmented into multiple application domains, with limited sharing of knowledge. This demo promotes a deeper understanding of a well-known stabilization feature in co-simulation, which is used in the INTO-CPS tool chain. We develop the techniques that explain the empirical results of instability of the double mass-spring-damper system, and how the stabilization feature improves the results. Moreover, we show how the restrictions of the Functional Mock-up Interface Standard impact stability.

Cláudio Gomes, Casper Thule, Kenneth Lausdahl, Peter Gorm Larsen, Hans Vangheluwe
Demo: Co-simulation of UAVs with INTO-CPS and PVSio-web

This demo shows our ongoing work on the co-simulation of co-operative Unmanned Aerial Vehicles (UAVs). The work is based on the INTO-CPS co-simulation engine, which adopts the widely accepted Functional Mockup Interface (FMI) standard for co-simulation, and the PVSio-web prototyping tool, which extends a system simulator based on the PVS logic language with a web-based graphical interface. Simple scenarios of quadcopters assigned different tasks, such as rendez-vous and space coverage, are shown. We assumed a linearized dynamic model for the quadcopters, formalized in OpenModelica, and a linearized set of equations for the flight control module, written in the C language. The coordination algorithm is modeled in PVS, while PVSio-web is used for graphical rendering of the co-simulation.

Maurizio Palmieri, Cinzia Bernardeschi, Andrea Domenici, Adriano Fagiolini
Towards a Co-simulation Based Model Assessment Process for System Architecture

Model Based System Engineering and early Validation & Verification are now key enablers for the development of complex systems. However, the current state of the art is not sufficient to achieve a seamless use in an Extended Enterprise (EE) context. Indeed, the various stakeholders must protect their Intellectual Property (IP) while conducting system-wide design exploration that relies on each part of the system. Co-simulation standards such as the Functional Mock-up Interface provide technological assets to deal with IP management issues for an EE organization. However, this standard is not meant to provide reference processes to support such organizations. We target the development of such a common process based on both the design models of the system of interest and the EE architecture. The purpose is to build a Simulation Reference Model as a requirement model for the whole co-simulation, the derived IP-protected co-simulation components, the co-simulation platform architecture, and the method for the validation of system models. We propose to extend the work done for the Model Identity Card and to rely on detailed domain-specific engineering ontologies and quantitative quality properties for models to express the requirements for the co-simulation components and to reduce the simulation quality loss induced by the co-simulation technologies.

Benjamin Bossa, Benjamin Boulbene, Sébastien Dubé, Marc Pantel
Co-simulation of Physical Model and Self-Adaptive Predictive Controller Using Hybrid Automata

Self-adaptive predictive control (SAP) systems adjust their behavior in response to the changing physical system in order to achieve improved control. As such, models of self-adaptive control systems result in time variance of parameters. This significantly increases the complexity of model checking verification and reachability analysis techniques. In this paper, we explore recent studies on co-simulation of SAP controllers and propose a novel co-simulation platform that can be used to analyze the effectiveness of verification and reachability analysis techniques developed for SAP controllers.

Imane Lamrani, Ayan Banerjee, Sandeep K. S. Gupta

From Data to Models and Back (DataMod)

Frontmatter
Formalizing a Notion of Concentration Robustness for Biochemical Networks

The main goal of systems biology is to understand the dynamical properties of biological systems by investigating the interactions among the components of a biological system. In this work, we focus on the robustness property, a behaviour observed in several biological systems that allows them to preserve their functions despite external and internal perturbations. We first propose a new formal definition of robustness using the formalism of continuous Petri nets. In particular, we focus on robustness against perturbations to the initial concentrations of species. Then, we demonstrate the validity of our definition by applying it to the models of three different robust biochemical networks.

Lucia Nasti, Roberta Gori, Paolo Milazzo
Explaining Successful Docker Images Using Pattern Mining Analysis

Docker is on the rise in today’s enterprise IT. It permits shipping applications inside portable containers, which run from so-called Docker images. Docker images are distributed in public registries, which also monitor their popularity. The popularity of an image directly impacts on its usage, and hence on the potential revenues of its developers. In this paper, we present a frequent pattern mining-based approach for understanding how to improve an image to increase its popularity. The results in this work can provide valuable insights to Docker image providers, helping them to design more competitive software products.

Riccardo Guidotti, Jacopo Soldani, Davide Neri, Antonio Brogi
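The paper builds on frequent pattern mining. As a rough illustration of the underlying primitive only (the image features, the transactions and the support threshold below are invented, and this naive counter is not the paper's mining algorithm), a minimal support counter over image feature sets could look like:

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support, max_size=2):
    """Naive frequent-itemset miner: count every itemset up to max_size
    and keep those whose relative support reaches min_support."""
    n = len(transactions)
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))           # deduplicate and canonicalise order
        for k in range(1, max_size + 1):
            for combo in combinations(items, k):
                counts[combo] += 1
    return {iset: c / n for iset, c in counts.items() if c / n >= min_support}

# Hypothetical "feature sets" of four popular images:
patterns = frequent_itemsets(
    [["alpine", "nginx"], ["alpine", "redis"],
     ["alpine", "nginx"], ["ubuntu", "nginx"]],
    min_support=0.5)
```

Patterns that co-occur in at least half of the (toy) popular images survive the threshold; a real analysis would mine attributes such as image size, number of layers, or exposed ports against popularity.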
Analyzing Privacy Risk in Human Mobility Data

Mobility data are of fundamental importance for understanding the patterns of human movements, developing analytical services and modeling human dynamics. Unfortunately, mobility data also contain individual sensitive information, making it necessary an accurate privacy risk assessment for the individuals involved. In this paper, we propose a methodology for assessing privacy risk in human mobility data. Given a set of individual and collective mobility features, we define the minimum data format necessary for the computation of each feature and we define a set of possible attacks on these data formats. We perform experiments computing the empirical risk in a real-world mobility dataset, and show how the distributions of the considered mobility features are affected by the removal of individuals with different levels of privacy risk.

Roberto Pellungrini, Luca Pappalardo, Francesca Pratesi, Anna Monreale
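A minimal sketch of the kind of re-identification risk computation the abstract refers to (the attack model here, an adversary who knows a subset of an individual's visited locations, is a simplification for illustration, not the paper's set of attack definitions):

```python
def reidentification_risk(dataset, background):
    """Privacy risk of the individual matching `background`:
    1 / (number of individuals consistent with the adversary's knowledge),
    following the usual k-anonymity-style worst-case reasoning."""
    matches = [uid for uid, visits in dataset.items()
               if background <= set(visits)]
    return 1.0 / len(matches) if matches else 0.0

# Toy mobility data: user id -> visited locations
data = {"u1": ["home", "work", "gym"],
        "u2": ["home", "work"],
        "u3": ["home", "mall"]}
```

Knowing {"home", "work"} narrows the candidates to two users (risk 0.5), while a rarer location such as "gym" pinpoints a single user (risk 1.0), which mirrors the paper's observation that feature distributions shift when high-risk individuals are removed.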
Generating Synthetic Data for Real World Detection of DoS Attacks in the IoT

Denial of service attacks are especially pertinent to the internet of things, as devices have less computing power, memory and security mechanisms to defend against them. The task of mitigating these attacks must therefore be redirected from the device onto a network monitor. Network intrusion detection systems can be used as an effective and efficient technique in internet of things systems to offload computation from the devices and detect denial of service attacks before they can cause harm. However, the solution of implementing a network intrusion detection system for internet of things networks is not without challenges, due to the variability of these systems and specifically the difficulty in collecting data. We propose a model-hybrid approach to model the scale of the internet of things system and effectively train network intrusion detection systems. Through bespoke datasets generated by the model, the IDS is able to predict a wide spectrum of real-world attacks and, as demonstrated by an experiment, construct more predictive datasets at a fraction of the time of other more standard techniques.

Luca Arnaboldi, Charles Morisset
Annotated BPMN Models for Optimised Healthcare Resource Planning

There is an unquestionable need to improve healthcare processes across all levels of care in order to optimise the use of resources whilst guaranteeing high quality care to patients. However, healthcare processes are generally very complex and have to be fully understood before enhancement suggestions can be made. Modelling with widely used notation such as BPMN (Business Process Modelling and Notation) can help gain a shared understanding of a process, but is not sufficient to understand the needs and demands of resources. We propose an approach to enrich BPMN models with structured annotations which enables us to attach further information to individual elements within the process model. We then use performance analysis (e.g., throughput and utilisation) to reason about resources across a model and propose optimisations. We show the usefulness of our approach for an A&E department of a sizeable hospital in the south of Brazil and how different stakeholders may profit from a richer annotated BPMN-based model.

Juliana Bowles, Ricardo M. Czekster, Thais Webber
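As a toy illustration of the kind of resource reasoning such annotations enable (the task names, rates and resource counts below are invented, and this simple offered-load arithmetic stands in for, rather than reproduces, the paper's performance analysis):

```python
def analyse(tasks, arrival_rate):
    """Per-task utilisation for annotated process tasks.

    tasks: {name: (mean_service_time_in_hours, num_resources)}
    arrival_rate: patients per hour entering the process.
    Returns {name: (utilisation rho, overloaded?)}, where
    rho = arrival_rate * service_time / num_resources.
    """
    report = {}
    for name, (svc, servers) in tasks.items():
        rho = arrival_rate * svc / servers
        report[name] = (round(rho, 2), rho >= 1.0)  # rho >= 1 means the queue grows unboundedly
    return report

# Hypothetical A&E fragment: 4 patients/hour, two annotated tasks.
report = analyse({"triage": (0.1, 1), "treatment": (0.5, 3)}, arrival_rate=4.0)
```

A task whose utilisation approaches 1 is a bottleneck candidate, which is the kind of insight a planner would use to reallocate staff between tasks.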
Using Formal Methods to Validate Research Hypotheses: The Duolingo Case Study

In this paper we present a methodology that combines formal methods and informal research methods to validate research hypotheses. We use the CSP (Communicating Sequential Processes) process algebra to model the system as well as some aspects of the user, and PAT (Process Analysis Toolkit) to perform formal verification. We illustrate our methodology on Duolingo, a very popular application for language learning. Two kinds of data are considered: a log of the interaction of the user with the application, and the assessment of the user's level of proficiency in the language to be learned (subject profile). The goal is to validate research hypotheses that relate the subject profile to the user's cognitive approach during interaction (cognitive profile). To this end, two CSP processes, one modelling the cognitive profile that the considered research hypothesis associates with the subject profile and one modelling the interaction log, are composed in parallel with the system model. Thus, for each user with the given learner profile and specific interaction log, verifying the functional correctness of the overall system validates the correlation between cognitive profile and subject profile.

Antonio Cerone, Aiym Zhexenbayeva
Personality Gaze Patterns Unveiled via Automatic Relevance Determination

Understanding human gaze behaviour in a social context, such as a face-to-face interaction, remains an open research issue that is strictly related to personality traits. In the effort to bridge the gap between available data and models, typical approaches focus on the analysis of spatial and temporal preferences of gaze deployment over specific regions of the observed face, while adopting classic statistical methods. In this note we propose a different analysis perspective based on novel data-mining techniques and a probabilistic classification method that relies on Gaussian Processes exploiting an Automatic Relevance Determination (ARD) kernel. Preliminary results obtained on a publicly available dataset are provided.

Vittorio Cuculo, Alessandro D’Amelio, Raffaella Lanzarotti, Giuseppe Boccignone
FormalMiner: A Formal Framework for Refinement Mining

Refinement mining has been inspired by process mining techniques and aims to refine an abstract non-deterministic model by sifting it using event logs as a sieve until a reasonably concise model is achieved. FormalMiner is a formal framework that implements model mining using Maude, a modelling language based on rewriting logic. Once the final formal model is attained, it can be used, within the same rewriting-logic framework, to predict the future evolution of the behaviour through simulation, to carry out further validation or to analyse properties through model checking. In this paper we focus on the refinement mining capability of FormalMiner and we illustrate it using a case study from ecology.

Antonio Cerone

Formal Methods for Interactive Systems (FMIS)

Frontmatter
Exploring Applications of Formal Methods in the INSPEX Project

As formal methods become increasingly practical, there is a need to explore their use in a variety of domains. Wearable sensing is a rapidly developing area in which formal methods can provide tangible benefits to end users, facilitating the advance of cutting-edge technology where consumer trust is critical. The INSPEX project aims to develop a miniaturized spatial exploration system incorporating multiple sensors and state of the art processing, initially focused on a navigation tool for visually impaired people. It is thus a useful test-case for formal methods in this domain. Applying formal methods in the INSPEX development process entailed adapting to realistic external pressures. The impact of these on the modelling process is described, attending in particular to the relationship between human and tool-supported reasoning.

Joseph Razavi, Richard Banach, Olivier Debicki, Nicolas Mareau, Suzanne Lesecq, Julie Foucault
Towards a Cognitive Architecture for the Formal Analysis of Human Behaviour and Learning

In this paper we propose a cognitive architecture for the modelling of automatic and deliberate human behaviour as it occurs and evolves in a living environment or in interaction with machine interfaces. Such a cognitive architecture supports the timed modelling of an environment featuring a spatial topology and consisting of an arbitrary number of systems, interfaces and human components. Alternative models of short-term memory can be considered and explored, and long-term memory evolves over time by exploiting experiences and mimicking the creation of expectations as part of mental modelling.

Antonio Cerone
Towards Handling Latency in Interactive Software

The usability of interactive software can be heavily impacted by delays in the propagation of data and events and by their variations, i.e. latency and jitter. The problem is striking for applications involving tactile interaction or augmented reality, where shifts between interaction and representation can make the system unusable. Nevertheless, latency is often taken into account only during the validation phase of the software, by means of a value which constitutes an acceptable limit. In this short paper, we present and discuss an alternative approach: we want to handle latency at all phases of the life cycle of interactive software, from specification to runtime adaptation and formal validation for certification purposes. We plan to integrate and validate these ideas in Smala, our language dedicated to the development of highly interactive and visual user interfaces.

Sébastien Leriche, Stéphane Conversy, Celia Picard, Daniel Prun, Mathieu Magnaudet
Refinement Based Formal Development of Human-Machine Interface

Human factors are considered among the most common causes of accidents, particularly in interaction with complex critical systems in the avionics, railway, nuclear and medical domains. Mostly, a human-machine interface (HMI) is developed independently, and the correctness of possible interactions is heavily dependent on testing, which cannot guarantee the absence of run-time errors. The use of formal methods in HMI development may provide such a guarantee. This paper presents a methodology for developing an HMI using a correct-by-construction approach, which allows us to introduce the HMI components, functional behaviour and the required safety properties progressively. The proposed methodology, a generic refinement strategy, supports a development of the model-view-controller (MVC) architecture. The whole approach is formalized using Event-B and relies on the Rodin tools to check the internal consistency with respect to the given safety properties, invariants and events. Finally, an industrial case study is used to illustrate the effectiveness of our proposed approach for developing an HMI.

Romain Geniet, Neeraj Kumar Singh
Using Abstraction with Interaction Sequences for Interactive System Modelling

Interaction sequences can be used as an abstraction of an interactive system. We can use such models to consider or verify properties of a system for testing purposes. However, interaction sequences have the potential to become unfeasibly long, leading to models which are intractable. We propose a method of reducing the state space of such sequences using the self-containment property. This allows us to hide (and subsequently expand) some of the model describing parts of the system not currently under consideration. Interaction sequences and their models can therefore be used to control the state space size of the models we create as an abstraction of an interactive system.

Jessica Turner, Judy Bowen, Steve Reeves
Formal Modelling as a Component of User Centred Design

User centred design approaches typically focus on understanding context and producing sketch designs. These sketches are often non-functional (e.g., paper) prototypes. They provide a means of exploring candidate design possibilities using techniques such as cooperative evaluation. This paper describes a further step in the process using formal analysis techniques. The sketch design of a device is enhanced into a specification that is then analysed using formal techniques, thus providing a systematic approach to checking plausibility and consistency during early design stages. Once analysed, a further prototype is constructed using an executable form of the specification, providing the next candidate for evaluation with potential users. The technique is illustrated through an example based on a pill dispenser.

Michael D. Harrison, Paolo Masci, José Creissac Campos

Foundations of Coordination Languages and Self-Adaptive Systems (FOCLASA)

Frontmatter
Coordination of Complex Socio-Technical Systems: Challenges and Opportunities

The issue of coordination in Socio-Technical Systems (STS) mostly stems from "humans-in-the-loop": besides software-software interactions, we have software-human interactions to handle, too. Also, a number of peculiarities and related engineering challenges make a socio-technical gap likely to arise, in the form of a gap between what the computational platform provides and what the users expect to have. In this paper we try to shed some light on the issue of engineering coordination mechanisms and policies in STS. Accordingly, we highlight the main challenges, the opportunities we have to deal with them, and a few selected approaches for specific STS application domains.

Stefano Mariani
Reo Coordination Model for Simulation of Quantum Internet Software

The novel field of quantum technology is being promoted by academia, governments and industry. Quantum technologies offer new means for carrying out fast computation as well as secure communication, using primitives that exploit peculiar characteristics of quantum physics. While building quantum computing devices remains a challenge, the area of quantum communication and cryptography has been successful in reaching industrial applications. In particular, plans for building a quantum internet have recently been put into action, with a launch expected by 2020 in the Netherlands. The quantum internet uses quantum communication and quantum entanglement along with classical communication. This makes the design of software platforms for quantum networks a daunting task. Seamless design and testing of quantum software platforms thus requires complex simulators for this kind of network. In this short paper, we argue that using coordination models such as Reo can significantly simplify the development of software applications for the quantum internet. Moreover, formal verification of such quantum software becomes possible, thanks to the separation of concerns, compositionality, and reusability of Reo models. This paper introduces an extension of a recently developed simulator for the quantum internet (SimulaQron) that incorporates Reo models extended with quantum data and operations, along with classical data. We explain the main concepts and our plan for implementing this extension as a new tool: SimulaQ(reo)n.

Ebrahim Ardeshir-Larijani, Farhad Arbab
Computing the Parallelism Degree of Timed BPMN Processes

A business process is a combination of structured and related activities that aim at fulfilling a specific organizational goal for a customer or market. An important measure when developing a business process is the degree of parallelism, namely, the maximum number of tasks that are executable in parallel at any given time in a process. This measure determines the peak demand on tasks and thus can provide valuable insight on the problem of resource allocation in business processes. This paper considers timed business processes modeled in BPMN, a workflow-based graphical notation for processes, where execution times can be associated to several BPMN constructs such as tasks and flows. An encoding of timed business processes into Maude’s rewriting logic system is presented, enabling the automatic computation of timed degrees of parallelism for business processes. The approach is illustrated with a simple yet realistic case study in which the degree of parallelism is used to improve the business process design with the ultimate goal of optimizing resources and, therefore, with the potential for reducing operating costs.

Francisco Durán, Camilo Rocha, Gwen Salaün
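The measure itself is easy to state: given start and end times for each task occurrence, the degree of parallelism is the maximum number of overlapping intervals. A sweep-line sketch of that computation (ours, for illustration only; the paper computes it via an encoding into Maude's rewriting logic system):

```python
def parallelism_degree(intervals):
    """Maximum number of tasks executing simultaneously,
    given (start, end) execution times."""
    events = []
    for s, e in intervals:
        events.append((s, +1))   # task starts
        events.append((e, -1))   # task ends
    # At equal instants, process ends before starts: a task finishing
    # at time t does not overlap with one starting at t.
    events.sort(key=lambda ev: (ev[0], ev[1]))
    best = cur = 0
    for _, delta in events:
        cur += delta
        best = max(best, cur)
    return best
```

For instance, tasks running over (0, 3), (1, 4) and (2, 5) all overlap at time 2, so the peak demand, and hence the minimum resource allocation for that design, is 3.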
ReoLive: Analysing Connectors in Your Browser

Connectors describe how to combine independent components by restricting the possible interactions between their interfaces. In this work, connectors are specified using an existing calculus for Reo connectors. Currently there are no tools to automatically analyse these connectors, other than a type-checker for an embedded domain-specific language. A collection of tools for different variations of Reo connectors exists, but most use a heavy Eclipse-based framework that is not actively supported. We propose a set of web-based tools for analysing connectors, named ReoLive, requiring only a web browser with JavaScript support (even offline), with a client-server architecture for more complex operations. We also show that the analyses included in ReoLive are correct, by formalising the encoding of the connector calculus into Port Automata and into mCRL2 programs. We include extensions that generate such automata, mCRL2 processes, and graphical representations of instances of connectors, developed in the Scala language and compiled into JavaScript. The resulting framework is publicly available and can easily be experimented with, without any installation or a running server.

Rúben Cruz, José Proença
Multi-agent Systems with Virtual Stigmergy

We introduce a simple language for multi-agent systems that lends itself to intuitive design of local specifications. Agents operate on (parts of) a decentralized data structure, the stigmergy, that contains their (partial) knowledge. Such knowledge is asynchronously propagated across local stigmergies. In this way, local changes may influence global behaviour. The main novelty is that our interaction mechanism combines stigmergic interaction with attribute-based communication. Specific conditions for interaction can be expressed in the form of predicates over exposed features of the agents. Additionally, agents may access a global environment. After presenting the language, we show its expressiveness on some illustrative case studies. We also include some preliminary results towards automated verification by relying on a mechanizable symbolic encoding that allows us to exploit verification tools for mainstream languages.

Rocco De Nicola, Luca Di Stefano, Omar Inverso
Towards a Hybrid Verification Approach

Verification methods have limitations rooted in their methodological approach. Different methods can be more appropriate for verifying some types of properties than others. We propose a "Hybrid Verification" scheme that verifies different properties using different verification methods and supports a unified specification interface, based on a suitable coordination model. Identifying appropriate verification methods for each property to be verified is a necessary prerequisite for this approach. This work introduces a categorization of properties to be verified and a corresponding mapping to suitable verification methods, in accordance with and discussing the existing literature. A unified modeling methodology for various assertions, based on a coordination model, is presented. A generic use case from the railway domain is used to show the applicability of the proposed Hybrid Verification scheme.

Nahla Elaraby, Eva Kühn, Anita Messinger, Sophie Therese Radschek
Using Reinforcement Learning to Handle the Runtime Uncertainties in Self-adaptive Software

The growth in scale and complexity of software, together with complex and highly dynamic environments, leads to uncertainties in self-adaptive software's decision making at run time. Self-adaptive software needs the ability to avoid negative effects of uncertainties on the quality attributes of the software. However, existing planning methods cannot handle the two types of runtime uncertainties caused by the complexity of the system and of the running environment. This paper proposes a planning method based on reinforcement learning to handle these two types of runtime uncertainties. To handle uncertainty from the system, the proposed method can replace ineffective self-adaptive strategies with effective ones according to iterated observations of execution effects at run time. It can plan dynamically to handle uncertainty from the environment by learning the relationship between system states and actions. The method can also generate new strategies to deal with unknown situations. Finally, we use a complex distributed e-commerce system, Bookstore, to validate the ability of the proposed method.

Tong Wu, Qingshan Li, Lu Wang, Liu He, Yujie Li
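A minimal sketch of the underlying mechanism, tabular Q-learning on a two-state toy adaptation problem (the states, actions, rewards and sweep schedule are invented for illustration and do not model the Bookstore system or the paper's planner):

```python
ACTIONS = ("scale_up", "noop")

def transition(state, action):
    """Toy deterministic environment: scaling up repairs degradation,
    while doing nothing in a degraded state is penalised."""
    if state == "degraded" and action == "scale_up":
        return "normal", 1.0
    if state == "normal" and action == "noop":
        return "normal", 0.5
    return "degraded", -1.0

def q_update(Q, s, a, r, s2, alpha=0.5, gamma=0.9):
    """One tabular Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_b Q(s2,b) - Q(s,a))."""
    best_next = max(Q.get((s2, b), 0.0) for b in ACTIONS)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best_next - Q.get((s, a), 0.0))

def learn(sweeps=100):
    """Repeatedly sweep all (state, action) pairs; on this deterministic
    toy model the estimates approach the optimal action values."""
    Q = {}
    for _ in range(sweeps):
        for s in ("degraded", "normal"):
            for a in ACTIONS:
                s2, r = transition(s, a)
                q_update(Q, s, a, r, s2)
    return Q
```

After learning, the greedy policy picks "scale_up" when degraded and "noop" when normal; in the same spirit, a self-adaptive planner can demote strategies whose observed execution effects keep earning low reward.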

Graph Computation Models (GCM)

Frontmatter
Model Based Development of Data Integration in Graph Databases Using Triple Graph Grammars

Graph databases such as neo4j are designed to handle and integrate big data from heterogeneous sources. For flexibility and performance they do not ensure data quality through schemata but leave it to the application level. In this paper, we present a model-driven approach for data integration through graph databases with data sources in relational databases. We model query and update operations in neo4j by triple graph grammars and map these to Gremlin code for execution. In this way we provide a model-based approach to data integration that is both visual and formal while providing the data quality assurances of a schema-based solution.

Abdullah Alqahtani, Reiko Heckel
Short-Cut Rules
Sequential Composition of Rules Avoiding Unnecessary Deletions

Sequences of rule applications in high-level replacement systems are difficult to adapt. Often, replacing a rule application at the beginning of a sequence, i.e., reverting a rule and applying another one instead, is prevented by structure created via rule applications later on in the sequence. A trivial solution would be to roll back all applications and reapply them in a proper way. This, however, has the disadvantage of being computationally expensive and, furthermore, may cause the loss of information in the process. Moreover, using existing constructions to compose the reversal of a rule with the application of another one, in particular the concurrent and amalgamated rule constructions, does not prevent the loss of information in case the first rule deletes elements that the second one recreates. To cope with both problems, we introduce a new kind of rule composition through ‘short-cut rules’. We present our new kind of rule composition for monotonic rules in adhesive HLR systems, as they provide a well-established generalization of graph-based transformation systems, and motivate it with the example of Triple Graph Grammars, a declarative and rule-based bidirectional transformation approach.

Lars Fritsche, Jens Kosiol, Andy Schürr, Gabriele Taentzer
Graph Repair by Graph Programs

Model repair is an essential topic in model-driven engineering. We consider the problem of graph repair: Given a graph constraint, we try to construct a graph program such that its application to any graph yields a graph satisfying the constraint. We show the existence of terminating repair programs for a number of satisfiable constraints.

Annegret Habel, Christian Sandmann
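To make the notion of a repair program concrete, here is a toy illustration (not from the paper): for the constraint "no self-loop edges", a terminating repair program repeatedly applies the rule "delete a loop edge" until the constraint holds. The edge-set representation is a hypothetical simplification.

```python
def repair_no_loops(edges):
    """Toy repair program for the constraint 'no self-loops':
    repeatedly apply the rule 'delete a loop edge' as long as it matches.
    Terminates because every application removes one edge."""
    edges = set(edges)
    changed = True
    while changed:
        changed = False
        for (s, t) in list(edges):
            if s == t:                 # match of the forbidden pattern
                edges.remove((s, t))   # rule application: delete the loop
                changed = True
    return edges

repaired = repair_no_loops([(1, 1), (1, 2), (2, 2)])
```

The resulting graph `{(1, 2)}` satisfies the constraint, whatever the input graph was — the property a repair program must guarantee.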
Double-Pushout Rewriting in Context

Double-pushout rewriting (DPO) is the most popular algebraic approach to graph transformation. Most of its theory has been developed for linear rules, which allow deletion, preservation, and addition of vertices and edges only. Deletion takes place in a careful and circumspect way: a double-pushout derivation never deletes vertices or edges which are not in the image of the applied match. Due to these restrictions, every DPO-rewrite is invertible. In this paper, we extend the DPO-approach to non-linear and still invertible rules. Some model transformation examples show that the extension is worthwhile from the practical point of view. And there is a good chance for the extension of the existing theory. In this paper, we investigate parallel independence.

Michael Löwe
From Hyperedge Replacement Grammars to Decidable Hyperedge Replacement Games

We consider correctness of hyperedge replacement grammars under adverse conditions. In contrast to existing approaches, the influence of an adverse environment is considered in addition to system behaviour. To this end, we construct a hyperedge replacement game where rules represent the moves available to players and a temporal condition specifies the desired properties of the system. In particular, the construction of parity pushdown games from hyperedge replacement grammars results in a decidable class of games.

Christoph Peuser
Verifying a Copying Garbage Collector in GP 2

Cheney’s copying garbage collector is regarded as a challenging test case for formal approaches to the verification of imperative programs with pointers. The algorithm works for possibly cyclic data structures with unrestricted sharing which cannot be handled by standard separation logics. In addition, the algorithm relocates data and requires establishing an isomorphism between the initial and the final data structure of a program run. We present an implementation of Cheney’s garbage collector in the graph programming language GP 2 and a proof that it is totally correct. Our proof is shorter and less complicated than comparable proofs in the literature. This is partly due to the fact that the GP 2 program abstracts from details of memory management such as address arithmetic. We use sound proof rules previously employed in the verification of GP 2 programs but treat assertions semantically because current assertion languages for graph transformation cannot express the existence of an isomorphism between initial and final graphs.

Gia S. Wulandari, Detlef Plump
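For readers unfamiliar with the verified algorithm, here is a minimal Python sketch of Cheney's two-space copying collector over a toy heap of cells; the heap representation is a hypothetical simplification and is not the GP 2 program from the paper. It shows the two key ingredients: forwarding pointers and the breadth-first scan, which together handle cycles and sharing.

```python
# Toy heap: each cell is a list of fields; a field is either a payload value
# or ("ref", index) pointing to another cell in the same space.

def cheney_collect(from_space, roots):
    to_space = []
    forward = {}  # from-space index -> to-space index (forwarding pointers)

    def copy(idx):
        # Copy a cell at most once; later references reuse the forwarding entry.
        if idx not in forward:
            forward[idx] = len(to_space)
            to_space.append(list(from_space[idx]))
        return forward[idx]

    new_roots = [copy(r) for r in roots]

    scan = 0
    while scan < len(to_space):  # breadth-first scan of copied cells
        cell = to_space[scan]
        for i, f in enumerate(cell):
            if isinstance(f, tuple) and f[0] == "ref":
                cell[i] = ("ref", copy(f[1]))  # redirect field into to-space
        scan += 1
    return to_space, new_roots

# Example: cell 0 -> cell 2 -> cell 0 (a cycle); cell 1 is garbage.
heap = [[("ref", 2), 10], [99], [("ref", 0), 20]]
to, roots = cheney_collect(heap, [0])
```

The unreachable cell is not copied, and the cycle survives relocation — the copied graph is isomorphic to the reachable part of the original, which is exactly the property the paper's proof establishes.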

Model-Driven Engineering for Design-Runtime Interaction in Complex Systems (MDE@DeRun)

Frontmatter
From Modeling to Test Case Generation in the Industrial Embedded System Domain

Model-based testing (MBT) is the process of generating test cases from specification models representing system requirements and the desired functionality. The generated test cases are then executed on the system under test in an attempt to obtain a pass or fail verdict. While different MBT techniques have been developed, only a few target the real-world industrial embedded system domain and provide evidence of their applicability. As a consequence, there is a serious need to investigate the use of MBT and to gather evidence on how modeling and test generation can improve the current practice of manually creating test cases from natural language requirements. In this paper, we describe an on-going investigation being carried out to improve the current testing processes by using the MBT approach within an industrial context. Our results suggest that activity and structure diagrams, developed under MBT, are useful for describing the test specification of an accelerator pedal control function. The use of MBT results in fewer test cases compared to manual testing performed by industrial engineers.

Aliya Hussain, Saurabh Tiwari, Jagadish Suryadevara, Eduard Enoiu
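Test generation from a behavioral model can be illustrated with a toy sketch (not the authors' tool chain): a state machine for a hypothetical pedal function, from which one test case per transition is derived — a simple all-transitions coverage criterion. All state and event names are invented for illustration.

```python
from collections import deque

# Toy specification model: (state, event) -> next state,
# for a hypothetical accelerator pedal function.
transitions = {
    ("idle", "press"): "accelerating",
    ("accelerating", "release"): "idle",
    ("accelerating", "press_full"): "full_throttle",
    ("full_throttle", "release"): "idle",
}

def generate_tests(transitions, start):
    """Generate one test case (event sequence) per transition, using BFS to
    find the shortest event path from the start state to each source state."""
    paths = {start: []}          # state -> shortest event path reaching it
    queue = deque([start])
    while queue:
        s = queue.popleft()
        for (src, ev), dst in transitions.items():
            if src == s and dst not in paths:
                paths[dst] = paths[s] + [ev]
                queue.append(dst)
    # One test per transition: reach its source state, then fire its event.
    return [paths[src] + [ev]
            for (src, ev) in transitions
            if src in paths]

tests = generate_tests(transitions, "idle")
```

Each generated sequence would then be executed against the system under test and checked for a pass/fail verdict.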
Automated Consistency Preservation in Electronics Development of Cyber-Physical Systems

Computer-aided development of complex cyber-physical systems usually takes place in engineering teams with several different expert roles using a range of various software tools. This results in numerous artifacts created during this process. However, these artifacts commonly contain plenty of overlapping information. Therefore, the editing of one model by a developer may lead to inconsistencies with other models. Keeping these artifacts manually consistent is time-consuming and error-prone. In this paper, we present an automated strategy to ensure consistency between two widely used categories of software tools in electrical engineering: an electronic design automation application for designing printed circuit boards (PCBs) and an electronic circuit simulator tool to predict system behavior at runtime. Coupling these two types of tools provides the developers with the ability to efficiently test and optimize the behavior of the electric circuit during the PCB design process. Proper preservation of consistency requires correctly assigning model elements between the different tools. To avoid the disadvantages of ambiguous heuristic matching methods, we present a strategy based on annotated identifiers in order to ensure a reliable assignment of these model elements. We have implemented the described approach by using Eagle CAD as PCB software and Matlab/Simulink with the Simscape extension as the simulation tool.

Daniel Zimmermann, Ralf H. Reussner
A System Modeling Approach to Enhance Functional and Software Development

This paper presents a SysML-based approach to enhance the functional and software development process within an industrial context. Recent changes in technology such as electromobility and increased automation in heavy construction machinery lead to increased complexity of embedded software. Hence there emerges a need for new development methodologies to support flexible functional development, enhance communication among development teams, and maintain traceability from design concepts to software artifacts. The discussed approach has been experimented with in the context of developing a new, partially electrified transmission system and its functionality. While the modeling approach is a work in progress, some initial successes, as well as existing gaps pointing to future work, are highlighted.

Saurabh Tiwari, Emina Smajlovic, Amina Krekic, Jagadish Suryadevara
Embedded UML Model Execution to Bridge the Gap Between Design and Runtime

The number and complexity of embedded systems are rising. Consequently, their development requires increased productivity as well as means to ensure quality. Model-based techniques can help achieve both. With classical model-driven development techniques, developers start by building design models before producing actual code. Although various approaches can be used to validate models and code separately, models and code are however separated by a semantic gap. This gap typically makes it hard to link runtime measures (e.g., execution traces) to design models. The approach presented in this paper avoids this semantic gap by making it possible to execute UML design models directly on embedded microcontrollers. Therefore, any runtime measure is directly expressed in terms of the design model. The paper introduces our UML bare-metal (i.e., not requiring an operating system) interpreter. Its use is illustrated on a motivating example, which can be simulated, or debugged, and for which message sequence charts can be generated.

Valentin Besnard, Matthias Brun, Frédéric Jouault, Ciprian Teodorov, Philippe Dhaussy
Sketching a Model-Based Technique for Integrated Design and Run Time Description
Short Paper - Tool Demonstration

The paper sketches a UML- and OCL-based technique for the coherent description of design time and run time aspects of models. The basic idea is to connect a design model and a run time model with a correspondence model. We show two simple examples, one for structural modeling and one for behavioral modeling, that explain the underlying principles. As all three models are formulated in the same language, UML and OCL, one can reason about the single models and their relationships in a comprehensive way.

Andreas Kästner, Martin Gogolla, Khanh-Hoang Doan, Nisha Desai
Model-Driven Engineering for Design-Runtime Interaction in Complex Systems: Scientific Challenges and Roadmap
Report on the MDE@DeRun 2018 Workshop

This paper reports on the first Workshop on Model-Driven Engineering for Design-Runtime Interaction in Complex Systems (also called MDE@DeRun 2018) that took place during the STAF 2018 week. It explains the main objectives, content and results of the event. Based on these, the paper also proposes initial directions to explore for further research in the workshop area.

Hugo Bruneliere, Romina Eramo, Abel Gómez, Valentin Besnard, Jean Michel Bruel, Martin Gogolla, Andreas Kästner, Adrian Rutle

Microservices: Science and Engineering (MSE)

Frontmatter
Design and Implementation of a Remote Care Application Based on Microservice Architecture

Microservice Architecture (MSA) is an architectural style for service-based software systems. MSA puts a strong emphasis on high cohesion and loose coupling of the services that provide systems’ functionalities. As a result of this, MSA-based software architectures exhibit increased scalability and extensibility, and facilitate the application of continuous integration techniques. This paper presents a case study of an MSA-based Remote Care Application (RCA) that allows caregivers to remotely access smart home devices. The goal of the RCA is to assist persons being cared for in Activities of Daily Living. Employing MSA for the realization of the RCA yielded several lessons learned, e.g., (i) direct transferability of domain models based on Domain-driven Design; (ii) more efficient integration of features; (iii) speedup of feature delivery due to MSA facilitating automated deployment.

Philip Nils Wizenty, Florian Rademacher, Jonas Sorgalla, Sabine Sachweh
Ambient Intelligence Users in the Loop: Towards a Model-Driven Approach

Ambient and mobile systems consist of networked devices and software components surrounding human users and providing services. From the services present in the environment, other services can be composed opportunistically and automatically by an intelligent system, then proposed to the user. The latter must not only be aware of existing services but also be kept in the loop in order to both actively control the services and influence the automated decisions. This paper first explores the requirements for placing the user in the ambient intelligence loop. It then describes our approach to answering these requirements, whose originality lies in the use of the model-driven engineering paradigm. It reports on the prototype that has been developed and analyzes the current status of our work with respect to the different research questions we have identified.

Maroun Koussaifi, Sylvie Trouilhet, Jean-Paul Arcangeli, Jean-Michel Bruel
Integrity Protection Against Insiders in Microservice-Based Infrastructures: From Threats to a Security Framework

Building microservices involves continuous modifications at design, deployment, and run times. The DevOps notion together with the “you built it, you run it” paradigm often result in a much larger number of developers with direct access to the production pipeline than in the case of monolithic systems. Reproducible builds and continuous delivery entail practices that further worsen this situation as they grant insiders indirect access (via scripted processes) to production machines. Moreover, managing microservices is heavily aided by governance tools (such as Kubernetes) that are configured and controlled by insiders. In this setting, accounting for malicious insiders quickly becomes a major concern. In this paper, we identify representative integrity threats to microservice-based systems in the broader context of a development process by analyzing real-world microservice-based systems. We show that even end-to-end encryption may fall short without adequate integrity protections. From the identified threats, we then derive a set of security requirements for holistic protection. Finally, we propose a framework that serves as a blueprint for insider-resistant integrity protection in microservices.

Mohsen Ahmadvand, Alexander Pretschner, Keith Ball, Daniel Eyring
The Aspect of Resilience in Microservices-Based Software Design

This paper discusses two approaches to microservices-based software design from the perspective of failure possibility. The first approach accepts the fact that complex distributed software systems with many communicating components, such as microservices-based software, can fail (it is not important when), and focuses on resilient software design. Resilient software design provides strategies and mechanisms for dealing with failures. While a robust system simply continues functioning in the presence of a failure, a resilient system is prepared to adapt itself while continuing to function. The second approach tries to build ideal software that will never fail. Much of the theory behind behavioral type systems is devoted to this, choreographic programming for example. Choreographic programming relies on choreographies as global descriptions of system implementations: the behavior of all entities (e.g., microservices) in a system is given in a single program. The first approach is more closely tied to real software systems, while the second has a stronger theoretical background. In this paper the authors discuss the pros and cons of the aforementioned methods and present ideas for their fusion (e.g., to use patterns for microservices).

Vaidas Giedrimas, Samir Omanovic, Dino Alic
On Collaborative Model-Driven Development of Microservices

Microservice Architecture (MSA) denotes an emerging architectural style for distributed and service-based systems whereby each microservice is highly cohesive and implements a single business capability. A microservice system consists of multiple, loosely coupled microservices. It provides complex capabilities through services interacting in choreographies. A single dedicated team, typically practicing DevOps, is responsible for each microservice, i.e., it “owns” the service. However, while systems relying on MSA have several architectural advantages especially for cloud applications, their realization is characterized by an increased accidental complexity due to redundant handcrafting of implementation, e.g., to make each service standalone runnable. A promising way to cope with such complexity is the usage of Model-driven Development (MDD) whereby models are used as first-class entities in the software development process. Although first steps have already been taken toward applying MDD within a single team to implement its microservices, the question of how MDD can be adapted to MSA’s development distribution across multiple teams remains open. In this paper we envision the application of Collaborative Model-driven Software Engineering (CMDSE) to MDD of MSA by surveying relevant characteristics of CMDSE and identifying challenges for its application to MSA. The present paper takes a first step towards enabling holistic MDD of MSA across microservice teams.

Jonas Sorgalla, Florian Rademacher, Sabine Sachweh, Albert Zündorf

Security for and by Model-Driven Engineering (SecureMDE)

Frontmatter
A UML Profile for Privacy Enforcement

Nowadays most software applications have to deal with personal data, especially with the emergence of Web-based applications, where user profile information has become one of their main assets. Due to regulation laws and to protect the privacy of users, customers, and companies, most of this information is considered private, and therefore convenient ways to gather, process and store it have to be proposed. A common problem when modeling software systems is the lack of support to specify how to enforce privacy concerns in data models. Current approaches for modeling privacy cover high-level privacy aspects to describe what should be done with the data (e.g., elements to be private) instead of how to do it (e.g., which privacy enhancing technology to use); or propose access control policies, which may cover privacy only partially. In this paper we propose a profile to define and enforce privacy concerns in UML class diagrams. Models annotated with our profile can be used in model-driven methodologies to generate privacy-aware applications.

Javier Luis Cánovas Izquierdo, Julián Salas
Specification of Information Flow Security Policies in Model-Based Systems Engineering

Model-based systems engineering provides a multi-disciplinary approach to developing cyber-physical systems. Due to their high degree of interconnection, security is a key factor for cyber-physical systems and needs to be front-loaded to the beginning of the development. However, there is a lack of model-based systems engineering approaches that enable the early specification of security policies. As a consequence, security requirements frequently remain unspecified and therefore are hard to satisfy in the downstream development phases. In this paper, we propose to integrate model-based systems engineering with the theory of information flow security. We extend systems engineering models with information flow policies, enabling systems engineers to specify the information flow security requirements of a system under development. On refinement of the resulting models, our approach allows deriving security requirements for individual software components. We illustrate our approach using a model-based design of an autonomous car.

Christopher Gerking
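The core idea of an information flow policy can be sketched with a toy two-level lattice check (a generic illustration, not the paper's formalism): information may flow from low to high, never the reverse, and a model checker flags every modeled flow that violates this. The component names are hypothetical, loosely inspired by the autonomous-car example.

```python
# Hypothetical two-level security lattice: flows are allowed only upward.
LEVELS = {"low": 0, "high": 1}

def flow_allowed(src_level, dst_level):
    # A flow is permitted iff the source level is at most the target level.
    return LEVELS[src_level] <= LEVELS[dst_level]

def check_model(components, flows):
    """components: name -> security level; flows: (src, dst) pairs taken
    from the systems engineering model. Returns the violating flows."""
    return [(s, d) for (s, d) in flows
            if not flow_allowed(components[s], components[d])]

components = {"gps": "low", "navigation": "high", "telemetry_out": "low"}
flows = [("gps", "navigation"), ("navigation", "telemetry_out")]
violations = check_model(components, flows)
```

Here the flow from the confidential `navigation` component to the public `telemetry_out` channel is reported as a violation, i.e., a security requirement to be addressed downstream.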
Towards Scenario-Based Security Requirements Engineering for Cyber-Physical Systems

Cyber-physical systems are characterized among others by strong interconnection with each other, but also with their environment. This interconnection enables on the one hand new functionality with a high complexity and leads on the other hand to a high demand on the security of the systems. Both aspects require tailored development processes with a rigorous requirements engineering. However, current requirements engineering approaches focus either on the functional or on the security aspects but lack an integrated view on modeling and analysing both aspects. Therefore, we present in this paper ongoing research for a formal, model- and scenario-based requirements engineering approach for cyber-physical systems. Our approach enables the requirements engineer in an early stage of the development whether the modeled security requirements are sufficient to mitigate attacks and whether the security requirements influence the functional behavior. We illustrate the approach by means of an advanced driver assistance system from the automotive domain.

Thorsten Koch
Towards Model-Based Communication Control for the Internet of Things

Most existing Model-Driven Engineering (MDE) approaches for the Internet of Things (IoT) focus on means of modeling the behavior of end devices. Little attention has been paid to network-related abstractions and communication control. The paper introduces an approach towards enabling model-based communication control in a network of things. First, we suggest a Domain Specific Language (DSL) to abstract basic network features. Second, we propose a policy language to control the communications within the network. Finally, as a proof-of-concept, we present a code generation process to enforce the expressed policy at runtime.

Imad Berrouyne, Mehdi Adda, Jean-Marie Mottu, Jean-Claude Royer, Massimo Tisi
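A communication-control policy of the kind the abstract describes can be illustrated with a minimal first-match rule evaluator (a hypothetical sketch, not the authors' policy language): each rule allows or denies traffic between a source and a target node, with a default-deny fallback.

```python
# Hypothetical policy model: ordered rules (effect, source, target),
# where "*" is a wildcard; first matching rule wins, default is deny.
def allowed(policy, source, target):
    for effect, src, dst in policy:
        if src in (source, "*") and dst in (target, "*"):
            return effect == "allow"
    return False  # default-deny

policy = [
    ("allow", "sensor1", "gateway"),  # sensor may talk to the gateway
    ("deny", "*", "cloud"),           # nothing may talk to the cloud directly
]
```

A code generator in the spirit of the paper would translate such rules into runtime checks on each device or broker; here the evaluation itself is enough to show the semantics.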
Backmatter
Metadata
Title
Software Technologies: Applications and Foundations
Edited by
Prof. Manuel Mazzara
Iulian Ober
Gwen Salaün
Copyright Year
2018
Electronic ISBN
978-3-030-04771-9
Print ISBN
978-3-030-04770-2
DOI
https://doi.org/10.1007/978-3-030-04771-9