
About this book

This book constitutes thoroughly revised and selected papers from the 6th International Conference on Model-Driven Engineering and Software Development, MODELSWARD 2018, held in Funchal, Madeira, Portugal, in January 2018. The 22 thoroughly revised and extended papers presented in this volume were carefully reviewed and selected from 101 submissions. They contribute to the development of highly relevant research trends in model-driven engineering and software development such as innovative methods for MDD-based development and testing of web-based applications and user interfaces, support for development of Domain-Specific Languages (DSLs), MDD-based application development on multiprocessor platforms, advances in MDD tooling, formal semantics and behaviour modelling, and MDD-based product-line engineering.

Table of Contents


Executable Modeling for Reactive Programming

After thirty years, it is reasonable to take a critical look at Model Driven Software Development (MDSD). Who can nowadays claim that MDSD has been massively adopted by the software industry? Who can show numbers demonstrating that MDSD allowed/allows massive cost savings in daily software development and, above all, in software maintenance? This paper investigates executable modeling as a balanced articulation between programming and modeling. Models at run-time embody a promising generation of executable models, provided that their usage is designed for cost-effective software development. To envisage this yet-to-come success, this paper emphasizes the software industry’s expectations about “reactive programming”. In practice, executable modeling standards like the W3C SCXML standard or the OMG BPMN standard are relevant supports for reactive programming, but their success conditions still need to be defined.
Franck Barbier, Eric Cariou
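The flavor of executable, reactive model alluded to above can be illustrated with a minimal sketch: an SCXML-style transition table that is directly executed by dispatching events to it. The `StateMachine` class and the toggle example are invented for illustration; they are not taken from the paper or from any SCXML implementation.

```python
# Minimal sketch of an executable, reactive state machine in the spirit of
# SCXML-style models: the transition table *is* the model, and it is executed
# directly by dispatching events to it. All names are illustrative.

class StateMachine:
    def __init__(self, initial, transitions):
        # transitions maps (state, event) -> next state
        self.state = initial
        self.transitions = transitions

    def react(self, event):
        # Events with no matching transition are simply discarded here;
        # a full SCXML processor would queue and process them per the spec.
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

# A toggle switch modeled as a reactive executable model.
toggle = StateMachine("off", {("off", "press"): "on", ("on", "press"): "off"})
```

Here the model itself reacts to an event stream, which is the essence of the "reactive programming" usage the abstract envisages.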

A Model-Driven Method for Fast Building Consistent Web Services from OpenAPI-Compatible Models

Lots of software companies rely on web technologies to test market hypotheses in order to develop viable businesses. They often need to quickly build web services that are at the core of their Minimum Viable Products (MVPs). MVPs must be reliable even though they are based on specifications and hypotheses that are likely to change. Web services need to be well documented, to make it easy to develop applications that consume them. Model Driven Engineering approaches have been proposed and used to develop and evolve web services on one hand, and to document them on the other hand. However, these approaches fail to simultaneously support (i) rapid prototyping, (ii) model verification, (iii) compatibility with common programming languages and (iv) alignment between documentation and implementation. Here we propose a meta-model to express web services, a related tool to verify model consistency, and an integration of this approach into the OpenAPI Specification. We adopt a shallow verification process to allow rapid prototyping by developers who are not formal methods experts, while still offering design-time guarantees that improve product quality and development efficiency. Web services are defined using parametric components, which make it possible to express and formally verify web service patterns and to safely reuse them in other contexts. We built a tool to check the consistency of extended OpenAPI 3.0 models and associated component implementations in order to generate the corresponding web services. This allows us to give flexibility and verification support to developers, even in the context of an incremental development, as illustrated by a case study.
David Sferruzza, Jérôme Rocheteau, Christian Attiogbé, Arnaud Lanoix

Reuse and Customization for Code Generators: Synergy by Transformations and Templates

Engineering languages for model-driven development (MDD) rely heavily on code generators that systematically and efficiently generate source code from abstract models. Although code generation is an essential technique, many ad hoc mechanisms are still in use that prevent the efficient and reliable use, and especially reuse, of code generators.
The first part of the paper focuses on general mechanisms necessary to truly enable reuse of flexible code generators. Based on these general considerations, we present a code generator infrastructure that makes it easy to develop a generator and, in particular, to adapt existing generators to different technology stacks, thus widely supporting reusability, customizability, and flexibility.
In the second part of the paper, we present an integrated template- and transformation-based code generation approach. It enables efficient generation of object-oriented code and retains the benefits of both approaches; moreover, their synergetic use improves usability beyond what either approach achieves alone. Internally, an intermediate representation (IR) is used and the code generation process is separated into three main steps. First, the input model is processed and transformed into the IR. Second, elements of the IR are manipulated by source- and target-language-independent transformations; target-language-specific implementations are added by templates attached to IR elements. Third, the resulting IR is used by a template engine to generate code with a predefined set of default templates for a particular target language. The overall goal of this paper is to show how to address necessary code generator considerations in order to use engineering languages effectively and efficiently in MDD.
Robert Eikermann, Katrin Hölldobler, Alexander Roth, Bernhard Rumpe
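The three-step pipeline the abstract describes (model-to-IR, IR transformations, template-based printing) can be sketched in miniature. Everything below is a toy illustration under assumed names (`model_to_ir`, `add_getters`, `render`); it is not the authors' actual generator infrastructure.

```python
# Toy sketch of the three-step generation pipeline:
# (1) input model -> intermediate representation (IR),
# (2) language-independent transformations on the IR,
# (3) template-like printing for one target language (here: Python).

def model_to_ir(model):
    # Step 1: normalize the input model into a simple IR (list of class dicts).
    return [{"name": cls, "fields": list(fields)} for cls, fields in model.items()]

def add_getters(ir):
    # Step 2: a target-language-independent transformation that derives
    # getter entries for every field.
    for cls in ir:
        cls["getters"] = ["get_" + f for f in cls["fields"]]
    return ir

def render(ir):
    # Step 3: a default "template" emitting code for the target language.
    out = []
    for cls in ir:
        out.append(f"class {cls['name']}:")
        for f, g in zip(cls["fields"], cls["getters"]):
            out.append(f"    def {g}(self): return self.{f}")
    return "\n".join(out)

code = render(add_getters(model_to_ir({"Point": ["x", "y"]})))
```

The design point the paper makes is visible even here: step 2 knows nothing about the target language, so the same transformation could feed templates for C++ or Java.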

Model-Based Programming for Multi-processor Platforms with TTool/DIPLODOCUS and OMC

The complexity of today’s multi-processor architectures raises the need to increase the level of abstraction of software development paradigms above third-generation programming languages (e.g., C/C++). Code generation from model-based specifications is considered a promising approach to increase the productivity and quality of software development, with respect to traditional paradigms where code is the main artifact used to develop software. In this context, powerful and robust tools are needed in order to accomplish the transition from code-based programming to model-based programming. In this paper we propose a novel approach and tools where system-level models are compiled into standard C code while optimizing the system’s memory footprint. We show the effectiveness of our approach with the model-based programming of UML/SysML diagrams for a 5G decoder. From the compiled C code, we generate both a software implementation for a Digital Signal Processor platform and a hardware-software implementation for a platform based on hardware Intellectual Property (IP) blocks. Our optimizations achieve memory footprint reductions of 80.07% and 88.93%, respectively.
Andrea Enrici, Julien Lallet, Renaud Pacalet, Ludovic Apvrille, Karol Desnos, Imran Latif

Evaluating Multi-variant Model-To-Text Transformations Realized by Generic Aspects

The discipline of model-driven product line engineering (MDPLE) aims at increasing productivity when realizing a family of related products. It relies on model-driven software engineering (MDSE), which raises the level of abstraction by using models. In MDSE, model transformations are the key technology for transforming between different (model) representations. By now, model transformations are mature and successfully applied in many use cases. In annotative approaches to MDPLE, model elements are typically augmented with variability annotations controlling in which products the elements are visible. For delivering products, source code is generated from the configured models in model-to-text (M2T) transformations. Applying a state-of-the-art model transformation to an annotated model, however, disregards the annotations, since such single-variant model transformations (SVMTs) are unaware of annotations and unable to transfer them to the output. In the present work we evaluate our solution, which reuses the already existing SVMT support and propagates annotations orthogonally. In particular, a generic aspect, supporting any kind of input metamodel, augments the outcome of SVMTs with annotations. Comparing the transformation with the state-of-the-art approach of manually adding the annotations to the target model reveals not only \(100\%\) accuracy regarding the similarity of the derived products but also a significant reduction of the user effort compared to the laborious task of manually annotating the target source code. In this way our approach helps to substantially increase productivity in MDPLE.
Sandra Greiner, Bernhard Westfechtel

Definition and Visualization of Virtual Meta-model Extensions with a Facet Framework

Model-driven engineering (MDE) provides beneficial solutions for software development, maintenance and evolution. Model-based standards and techniques have emerged and proved useful not only for developing new applications but also for re-engineering legacy systems. Unfortunately, MDE is subject to frequent new standards and upgraded releases, which force stakeholders into costly maintenance operations on the software modules they depend on. Existing techniques suffer from limitations and require new solutions to ensure better adaptability and flexibility of the tooling. We propose a solution, with graphical facilities, for modifying meta-models already in use without completely rebuilding the software product. This solution includes an improved technique for virtual extension of meta-models with Facets and end-user facilities to handle the model mappings. An open-source implementation, including graphical user interactions, is available on-line.
Jonathan Pepin, Pascal André, Christian Attiogbé, Erwan Breton

Automated Recommendation of Related Model Elements for Domain Models

Domain modeling is an important activity in the early stages of software projects to achieve a common understanding of the problem area among project participants. Domain models describe concepts and relationships of respective application fields using a modeling language and domain-specific terms. Creating these models requires software engineers to have detailed domain knowledge and expertise in model-driven development. Collecting domain knowledge is a time-consuming manual process that is rarely supported in current modeling environments. In this paper, we describe an approach that supports domain modeling through formalized knowledge sources and information extraction from text. On the one hand, domain-specific terms and their relationships are automatically queried from existing knowledge bases. On the other hand, as these knowledge bases are not extensive enough, we have constructed a large network of semantically related terms from natural language data sets containing millions of one-word and multi-word terms and their quantified relationships. Both approaches are integrated into a domain model recommender system that provides context-aware suggestions of model elements for virtually every possible domain. We report on the experience of using the recommendations in various industrial and research environments.
Henning Agt-Rickauer, Ralf-Detlef Kutsche, Harald Sack
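The recommendation step the abstract outlines, ranking candidate model elements by their relatedness to terms already in the domain model, can be sketched as follows. The `recommend` function, the toy network and its weights are all invented for illustration; the actual system draws on large knowledge bases and corpus-derived term networks.

```python
# Illustrative sketch: rank candidate model elements by their summed relation
# weight to the terms already present in a partial domain model. The network
# and weights below are toy data, not the paper's knowledge sources.

def recommend(network, model_terms, top_k=3):
    # network: {term: {related_term: weight}}, weights in [0, 1]
    scores = {}
    for term in model_terms:
        for candidate, weight in network.get(term, {}).items():
            if candidate not in model_terms:  # suggest only new elements
                scores[candidate] = scores.get(candidate, 0.0) + weight
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

network = {
    "customer": {"order": 0.9, "invoice": 0.7, "address": 0.6},
    "order": {"product": 0.8, "invoice": 0.5, "customer": 0.9},
}
```

With "customer" and "order" already modeled, "invoice" ranks first because it is related to both existing terms, which mirrors the context-aware behavior the abstract describes.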

An Integrated Framework to Develop Domain-Specific Languages: Extended Case Study

In this paper, we propose an integrated framework to formally specify the syntax and the semantics of domain-specific languages. We build this framework by integrating the Microsoft DSL Tools, a framework for developing graphical domain-specific languages, with an extension of ForSpec, a logic-based specification language. The motivation for proposing this framework is the lack of a formal and rigorous approach in DSL Tools for specifying semantics. We combine the aforementioned technologies under the umbrella of the Microsoft Visual Studio IDE to facilitate the development of graphical DSLs within a single development environment: the Microsoft DSL Tools are used to specify the metamodel and graphical notations of DSLs, and our extension of ForSpec is used to specify their semantics, offering better support for semantic specifications. As a case study, we develop a modeling language to design domain-specific flow-based languages.
Bahram Zarrin, Hubert Baumeister, Hessam Sarjoughian

Technology Enhanced Support for Learning Interactive Software Systems

The development of useful and usable interactive software systems depends on both User Interface (UI) design and software engineering in a complementary way. Today, however, application development and UI design are largely separated activities and fields of knowledge. This separation is also present in education, as witnessed by the common practice of teaching the two subjects independently. Although the development of better interactive software systems could benefit significantly from an integrative teaching approach, there is a lack of concrete and proven approaches for such a way of teaching. This paper presents technology-enhanced support for filling this gap. The proposed tool supports and improves learning achievements for the development of interactive software systems. The learning support includes feedback for conceptual modeling integrated with UI design. The tool applies Model Driven Engineering principles that allow the automatic generation of a working prototype from specification models. This capability allows learners to try out the final application while validating the requirements. An experimental evaluation with novice developers demonstrates the advantages of this didactic tool.
Jenny Ruiz, Estefanía Serral, Monique Snoeck

Interactive Measures for Mining Understandable State Machines from Embedded Software: Experiments and Case Studies

State machines are a commonly used formalism for specifying the behavior of a software component, which could also be helpful for program comprehension. Therefore, it is desirable to extract state machine models from code and also from legacy models. The main drawback of fully-automatic state machine mining approaches is that the mined models are too detailed and not understandable. In our previous work [1], we presented different measures for the interaction with the state machine extraction process, such as selecting a subset of state variables, reducing the state variable range and providing additional user constraints. These measures aimed to reduce the complexity of the mined state machines to an understandable degree. In this article, which is an extended version of [1], we evaluate the approach through a case study with twelve professional developers from the automotive supplier company Bosch. The study shows that adding our interactive measures to the model mining process leads to understandable state machines, which can be very helpful in a rich set of use cases in addition to program comprehension, such as debugging, validation and verification. Furthermore, we conduct an experiment to evaluate the required computation time of the interactive measures against the fully-automatic mining. The experiment shows that the interactive approach can drastically reduce the computation time.
Wasim Said, Jochen Quante, Rainer Koschke
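One of the interactive measures the abstract mentions, restricting mining to a user-selected subset of state variables, can be sketched in a few lines: projecting observed valuations onto fewer variables collapses many concrete states into fewer, more understandable abstract ones. The `mine_states` function, variable names and trace below are invented for illustration; they are not the authors' implementation.

```python
# Hypothetical sketch of variable-subset selection during state machine
# mining: distinct projected valuations become the mined states, and
# consecutive differing states yield transitions.

def mine_states(trace, selected_vars):
    # trace: list of dicts mapping variable name -> value at each step.
    states, transitions = set(), set()
    prev = None
    for snapshot in trace:
        # Project the snapshot onto the user-selected variables only.
        state = tuple(sorted((v, snapshot[v]) for v in selected_vars))
        states.add(state)
        if prev is not None and prev != state:
            transitions.add((prev, state))
        prev = state
    return states, transitions

# Projecting onto "mode" alone yields 2 states instead of 3.
trace = [{"mode": 0, "cnt": 1}, {"mode": 0, "cnt": 2}, {"mode": 1, "cnt": 3}]
```

The reduction grows quickly on real traces: dropping a counter variable with a wide range can shrink the mined machine by orders of magnitude, which is the understandability effect the study evaluates.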

Adaptation and Implementation of the ISO42010 Standard to Software Design and Modeling Tools

The adoption of model-centered software development practices remains limited to small niche domains; mainstream development practices remain code-centric. Modeling tool complexity is often cited as a significant factor limiting adoption and negatively affecting user experience. The complexity of modeling and design tools is due to multiple factors, including the complexity of the underlying language, weak support for methodologies, and insensitivity to users’ concerns. This results in modeling and design tools that expose all or most of their capabilities and elements at once, often overwhelming users and degrading user experience. The problem is further exacerbated when a tool supports multiple domain-specific modeling languages defined on top of a base language such as UML. In this case, the tool customizations and visual elements necessary to support each language often interfere with each other and add to the modeling tool’s complexity. In this paper, we present a novel and systematic approach to reduce the complexity of design and modeling tools by introducing an interpretation and adaptation of the ISO42010 standard on architecture description specific to the software domain. We demonstrate this approach with a working implementation as part of the Papyrus open-source modeling framework. In this approach, we leverage the notions of Architecture Contexts and Architecture Viewpoints to enable heterogeneous UML-based languages to be independently supported and to help contextualize the exposed tool capabilities. This paper presents the ISO42010 interpretation and adaptation to software design and architecture, along with a case study featuring several definitions of architecture contexts. The implementation of this novel approach demonstrates the ability of multiple modeling languages and notations to coexist without interference and provides a significant reduction in the capabilities exposed in the UI.
Reducing design and modeling tool complexity has the potential to significantly broaden the adoption of modeling and design practices in software engineering.
Maged Elaasar, Florian Noyrit, Omar Badreddin, Sébastien Gérard

Generation and Validation of Frame Conditions in Formal Models

Operation contracts are a popular description means in behavioral system modeling. Pre- and postconditions are used to describe the effects on model elements (such as attributes, links, etc.) that are enforced by an operation. However, it is usually not clearly stated what model elements may be affected by an operation call and what shall remain unchanged—although this information is essential in order to obtain a comprehensive description. A promising solution to this so-called frame problem is to define additional frame conditions. However, properly defining frame conditions which complete the model description in the intended way can be a non-trivial, tedious, and error-prone task. While in general there are several tools and methods for obtaining formal model descriptions and also a broad variety of approaches for the validation and verification of the generated models, corresponding methods for frame conditions have not received significant attention so far. In this work, we provide a comprehensive overview of recently proposed approaches that close this gap and support the designer in generating and validating frame conditions.
Philipp Niemann, Nils Przigoda, Robert Wille, Rolf Drechsler

Analysis and Evaluation of Conformance Preserving Graph Transformation Rules

Model transformation is a formal approach for modeling the behavior of software systems. Over the past few years, graph-based modeling of software systems has gained significant attention, as numerous techniques are available to formally specify constraints and the dynamics of systems. Graph transformation rules are used to model the behavior of software systems, a core element of model-driven software engineering. In general, however, the application of graph transformation rules cannot guarantee the correctness of model transformations. In this paper, we propose a graph transformation technique that guarantees the correctness of transformations by checking required and forbidden graph patterns. The proposed technique is based on the application of conformance-preserving transformation rules, which guarantee that produced output models conform to their underlying metamodel. To determine whether a rule is conformance preserving, we present a new algorithm for checking conformance-preserving rules with respect to a set of graph constraints, together with a formal proof of the algorithm’s soundness. We apply our technique to homogeneous model transformations, where input and output models must conform to the same metamodel. The algorithm relies on the locality of a constrained graph to reduce the computational cost.
Fazle Rabbi, Yngve Lamo, Lars Michael Kristensen

Generation of Inductive Types from Ecore Metamodels

When one wants to design a language and related supporting tools, two distinct technical spaces can be considered. On the one hand, model-driven tools like Xtext or MPS automatically provide a compilation infrastructure and a full-featured integrated development environment. On the other hand, a formal workbench like a proof assistant helps in the design and verification of the language specification. However, these two technical spaces can hardly be used in conjunction. In this paper, we propose an automatic transformation that takes an input Ecore metamodel and generates a set of inductive types in Gallina and Vernacular, the language of the Coq proof assistant. By doing so, it is guaranteed that the same abstract syntax as the one described by the Ecore metamodel is used, e.g., to formally define the language’s semantics or type system, or to set up a proof-carrying code infrastructure. Improving on the previous state of the art, our transformation supports the structural elements of Ecore with no restriction, although it is not injective. A benchmark evaluation shows that our transformation is effective, including in the case of real-world metamodels like UML and OCL. We also validate our transformation in the context of an ad-hoc proof-carrying code infrastructure.
Jérémy Buisson, Seidali Rehab

Towards Automated Defect Analysis Using Execution Traces of Scenario-Based Models

Modern software systems are so complex that at times engineers find it difficult to understand why a system behaves as it does under certain conditions, and, in particular, which conditions trigger specific behavior. This adds a significant burden to tasks like debugging or maintenance. Scenario-based specifications can mitigate some of the problems engineers face thanks to the scenarios’ intuitiveness, executability and amenability to formal methods such as verification and synthesis. However, as a specification grows it becomes increasingly difficult to fully comprehend the interplay of all scenarios, thus again making it difficult to understand the final system’s behavior. Therefore, we propose a (semi-)automatic trace analysis method. It incorporates automatic techniques for identifying interesting traces, or subsequences within traces, from large sets of long execution traces. Developers are interested in knowing whether certain specification properties hold: if a property holds, what possible executions provide evidence of this? If a property does not hold, what examples violate it? Scenario-based specifications are well suited to generating meaningful execution traces because they are based on events that are meaningful in the domain of the system under design. A key observation we made is that interesting properties of a trace are often encoded in just one or very few scenarios. These concise scenarios, describing desired or forbidden behavior, are often already part of the specification or should be added to it, as they encode implicitly made assumptions.
This paper incorporates and substantially extends the material of the paper published in MODELSWARD 2018 “Towards Systematic and Automatic Handling of Execution Traces Associated with Scenario-based Models” [1].
Joel Greenyer, Daniel Gritzner, David Harel, Assaf Marron

A Textual Notation for Modeling and Generating Code for Composite Structure

Models of the composite structure of a software system describe its components, how they are connected or contain each other, and how they communicate using ports and connectors. Although composite structure is one of the UML diagram types, it tends to be complex to use, requires particular library support, or suffers from weak code generation, particularly in open source tools. Our previous work has shown that software modelers can benefit from a textual notation for UML concepts as well as from high-quality code generation, both of which we have implemented in Umple. This paper explains our extensions to Umple in order to create a simple textual notation and comprehensive code generation for composite structure. A particular feature of our approach is that developers do not always need to explicitly encode protocols, as they can in many cases be inferred. We present case studies of the composite structure of several systems designed using Umple, and demonstrate how the volume of code and cyclomatic complexity faced by developers is far lower than if they tried to program such systems directly in C++.
Mahmoud Husseini Orabi, Ahmed Husseini Orabi, Timothy C. Lethbridge

Application of a Process-Oriented Build Tool for Flight Controller Development Along a DO-178C/DO-331 Process

Growing software size and complexity, paired with its application in increasingly safety-critical environments, requires strict software development processes to be followed. These demand extensively documented development and verification activities as well as the creation and management of a huge number of artefacts. This paper presents a monolithic, process-oriented build tool for model-based development in MATLAB, Simulink, and Stateflow, as well as its application and adaptation for the implementation of a flight control algorithm in the light of RTCA DO-178C/DO-331, the accepted standard for airborne software certification. Beyond classical build automation functionality, the tool accelerates achieving a software design compliant with the standards and evaluates the completeness, consistency, and correctness of process artefacts in a central place.
Markus Hochstrasser, Stephan Myschik, Florian Holzapfel

A Methodology for Generating Tests for Evaluating User-Centric Performance of Mobile Streaming Applications

Compared to other platforms, quality assurance for mobile apps is more challenging, since their functionality is affected by the surrounding environment. In the literature, a considerable volume of research has been devoted to developing frameworks that facilitate performance analysis during the development life cycle. However, less attention has been given to test generation and test selection criteria for performance evaluation. In this work, a model-based test generation methodology is proposed to evaluate the impact of the interaction of the environment, the wireless network, and the app configurations on the performance of a mobile streaming app, and thereby on the experience of the end user. The methodology’s steps, inputs, and outputs are explained using an example app. The methodology assumes that the app has network access through a WiFi access point. We evaluate the effectiveness of the methodology by comparing the time cost of designing a test suite against random testing. The obtained results are very promising.
Mustafa Al-tekreeti, Kshirasagar Naik, Atef Abdrabou, Marzia Zaman, Pradeep Srivastava

Combining Model-Driven Architecture and Software Product Line Engineering: Reuse of Platform-Specific Assets

Reuse automation is a major concern of software engineering, aiming to produce high-quality applications faster and cheaper. Some approaches define cross-platform model-driven software product lines to systematically and automatically reuse generic assets in software development. They improve the reusability of product line assets by designing them according to the Model-Driven Architecture specifications. However, their reuse of platform-specific assets is limited due to inefficient platform-specific variability management. This issue interferes with the productivity gains provided by reuse.
In this paper, we define platform-specific variability by identifying variation points in different software concerns based on the well-known “4+1” view model categorization. Then, we fully manage platform-specific variability by structuring the Platform-Specific Model using two sub-models: the Cross-Cutting Model, obtained by transformation of the Platform-Independent Model; and the Application Structure Model, obtained by reuse of variable platform-specific assets. This structure is supported by a framework, based on a Domain-Specific Modeling Language, that helps developers build an application model. Experiments on three concrete applications confirmed that our approach significantly improves product line productivity.
Frédéric Verdier, Abdelhak-Djamel Seriai, Raoul Taffo Tiam

A Test Specification Language for Information Systems Based on Data Entities, Use Cases and State Machines

Testing is one of the most important activities to ensure the quality of a software system. This paper proposes and discusses TSL (Test Specification Language), which adopts a model-based testing approach for both human-readable and computer-executable specifications of test cases. TSL is strongly inspired by the grammar, nomenclature and writing style defined by RSLingo RSL, a rigorous requirements specification language. Both RSL and TSL are controlled natural languages that share common concepts such as data entities, use cases and state machines. However, by applying black-box functional testing design techniques, TSL includes and supports four complementary testing strategies, namely: domain analysis testing, use case testing, state machine testing, and acceptance criteria. This paper focuses on the first three testing strategies of TSL. Finally, a simple but effective case study illustrates the overall approach and supports the discussion.
Alberto Rodrigues da Silva, Ana C. R. Paiva, Valter E. R. da Silva

Synchronizing Heuristics for Weakly Connected Automata with Various Topologies

Since the problem of finding a shortest synchronizing sequence for an automaton is known to be NP-hard, heuristic algorithms are used to find synchronizing sequences, and several such algorithms exist in the literature. However, even the most efficient heuristic algorithm in the literature has quadratic complexity in the number of states of the automaton, and therefore can only scale up to a couple of thousand states. It was also shown before that if an automaton is not strongly connected, these heuristic algorithms can be applied to each strongly connected component separately. This approach speeds up the heuristic algorithms and allows them to scale easily to much larger numbers of states. In this paper, we investigate the effect of the topology of the automaton on the performance increase obtained by these heuristic algorithms. To this end, we consider various topologies and provide an extensive experimental study of the performance increase obtained with the existing heuristic algorithms. Depending on the size and the number of components, we obtain speed-up values as high as 10,000x and more.
Berk Cirisci, Barış Sevilmiş, Emre Yasin Sivri, Poyraz Kıvanç Karaçam, Kamer Kaya, Hüsnü Yenigün
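The kind of heuristic being accelerated can be sketched with a minimal greedy algorithm in the spirit of classic synchronizing heuristics (e.g., Eppstein's Greedy): repeatedly find a shortest word merging some pair of current states until a single state remains. This sketch is illustrative only; it is not the authors' implementation, and it omits the strongly-connected-component decomposition the paper studies.

```python
# Greedy synchronizing-sequence sketch for a complete DFA given as
# delta[state][letter] -> next state.

from collections import deque
from itertools import combinations

def merge_word(delta, alphabet, s, t):
    # BFS over sets of states for a shortest word mapping {s, t} to a singleton.
    start = frozenset({s, t})
    seen, queue = {start}, deque([(start, [])])
    while queue:
        current, word = queue.popleft()
        if len(current) == 1:
            return word
        for a in alphabet:
            nxt = frozenset(delta[q][a] for q in current)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, word + [a]))
    return None  # {s, t} cannot be merged: the automaton is not synchronizing

def run(delta, q, word):
    # Apply a word to a single state.
    for a in word:
        q = delta[q][a]
    return q

def greedy_synchronizing_sequence(delta, alphabet):
    active, sequence = set(delta), []
    while len(active) > 1:
        # Greedy choice: the cheapest pair merge among all active states.
        best = None
        for s, t in combinations(active, 2):
            w = merge_word(delta, alphabet, s, t)
            if w is not None and (best is None or len(w) < len(best)):
                best = w
        if best is None:
            return None  # automaton is not synchronizing
        sequence += best
        active = {run(delta, q, best) for q in active}
    return sequence

# A small synchronizing automaton over the alphabet {"a", "b"}.
delta = {0: {"a": 0, "b": 1}, 1: {"a": 0, "b": 2}, 2: {"a": 2, "b": 0}}
```

The pairwise BFS inside `merge_word` is the source of the quadratic cost the abstract mentions; applying the procedure per strongly connected component is what yields the reported speed-ups.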

