
2014 | Book

Modelling Foundations and Applications

10th European Conference, ECMFA 2014, Held as Part of STAF 2014, York, UK, July 21-25, 2014. Proceedings

Editors: Jordi Cabot, Julia Rubin

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes the proceedings of the 10th European Conference on Modelling Foundations and Applications, ECMFA 2014, held as part of STAF 2014 in York, UK, in July 2014. The 14 foundations track papers and the 3 applications track papers presented in this volume were carefully reviewed and selected from 58 submissions. They cover all aspects of MDE, including topics such as model provenance; model transformations and code generation; model synthesis; model-driven testing; formal modeling approaches; business modeling; and usability of models.

Table of Contents

Frontmatter

Foundations

Efficient Model Synchronization with View Triple Graph Grammars
Abstract
Model synchronization is a crucial task in the context of Model Driven Engineering. Especially when creating and maintaining multiple suitable abstractions or views of a complex system, a bidirectional transformation is required to keep all views and the corresponding system synchronized by automatically propagating changes in both directions. Triple Graph Grammars (TGGs) are a declarative, rule-based bidirectional transformation language, which can be used to support model synchronization. In practice, most TGG tools restrict the supported class of TGGs for efficiency reasons. These restrictions are, however, seldom intuitive and are often difficult to understand and adhere to, especially for non-experts. View Triple Graph Grammars (VTGGs) are a restricted form of TGGs, which can be highly optimized for efficient view update propagation. We argue that the restrictions posed by VTGGs are explicit and intuitive for users, as they can be adequately motivated based on the main application scenarios for VTGGs. In this paper, we present for the first time a formalization of VTGGs, stating precisely the advantages and limitations of VTGGs as compared to TGGs, and backing our claims with initial runtime measurements from a practical case study.
Anthony Anjorin, Sebastian Rose, Frederik Deckwerth, Andy Schürr
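A minimal plain-Java sketch may help picture the synchronization principle behind (V)TGGs: view and source elements are kept consistent through explicit correspondence links, with changes propagated in either direction. All class and method names below are illustrative assumptions, not part of any TGG tool's API; actual TGG tools derive such propagation code from declarative rules.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of correspondence-based synchronization, loosely inspired by
// the TGG idea of source/correspondence/target triples. All names are
// illustrative; real TGG tools generate this logic from rules.
public class ViewSyncSketch {
    static class SourceElement { String name; }
    static class ViewElement   { String label; }

    // A correspondence link records which view element abstracts
    // which source element, as a TGG correspondence node would.
    record Corr(SourceElement src, ViewElement view) {}

    static final List<Corr> corrs = new ArrayList<>();

    // Forward propagation: a change in the source is pushed to the view.
    static void propagateSourceChange(SourceElement changed) {
        for (Corr c : corrs)
            if (c.src() == changed) c.view().label = changed.name;
    }

    // Backward propagation: an edit on the view is pushed to the source.
    static void propagateViewChange(ViewElement changed) {
        for (Corr c : corrs)
            if (c.view() == changed) c.src().name = changed.label;
    }

    public static void main(String[] args) {
        SourceElement s = new SourceElement(); s.name = "Account";
        ViewElement v = new ViewElement();     v.label = "Account";
        corrs.add(new Corr(s, v));

        v.label = "Customer";         // user edits the view...
        propagateViewChange(v);       // ...and the source follows
        System.out.println(s.name);  // prints "Customer"
    }
}
```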
Level-Agnostic Designation of Model Elements
Abstract
A large proportion of the domain information conveyed in models is contained in the model element “designators” — the characterizing and identifying textual expressions appearing in the headers of model element visualizations. However, the notational support for representing such designators is usually non-uniform, incomplete and sensitive to the classification level at which a model element resides. Moreover, the relationship between the “names” in a model element’s designator and the values of its linguistic and ontological attributes is often unclear. In this paper we present a simple but powerful Element Designation Notation (EDN) which allows the key information characterizing model elements to be expressed in a compact, uniform and level-agnostic way for the purposes of deep modeling. This not only simplifies and enriches the designation possibilities in traditional modeling scenarios, it also paves the way for more expressive models of big data in which the location of data elements within the three key hierarchies — classification, containment and specialization — can be clearly and concisely expressed.
Colin Atkinson, Ralph Gerbig
Towards Scalable Querying of Large-Scale Models
Abstract
Hawk is a modular and scalable framework that supports monitoring and indexing large collections of models stored in diverse version control repositories. Due to the aggregate size of indexed models, providing a reliable, usable, and fast mechanism for querying Hawk’s index is essential. This paper presents the integration of Hawk with an existing model querying language, discusses the efficiency challenges faced, and presents an approach based on the use of derived features and indexes as a means of improving the performance of particular classes of queries. The paper also reports on the evaluation of a prototype that implements the proposed approach against the Grabats benchmark query, focusing on the observed efficiency benefits in terms of query execution time. It also compares the size and resource use of the model index against one created without using such optimizations.
Konstantinos Barmpis, Dimitrios S. Kolovos
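The performance idea behind derived features and indexes can be pictured independently of Hawk's actual API: a derived value is computed once, when an element enters the index, so queries become lookups instead of full scans over the model. The following is a generic plain-Java sketch; all names are hypothetical.

```java
import java.util.*;

// Generic illustration of index-based querying: a derived value is
// computed once at indexing time, so queries become map lookups
// instead of full scans. Not Hawk's actual API.
public class DerivedIndexSketch {
    record MethodDecl(String name, int bodySize, boolean isStatic) {}

    // Index keyed on a derived feature, e.g. "is a static method".
    private final Map<Boolean, List<MethodDecl>> byStatic = new HashMap<>();

    void index(MethodDecl m) {
        // Derived feature evaluated once, at indexing time.
        byStatic.computeIfAbsent(m.isStatic(), k -> new ArrayList<>()).add(m);
    }

    // Query answered by lookup, independent of total model size.
    List<MethodDecl> staticMethods() {
        return byStatic.getOrDefault(true, List.of());
    }

    public static void main(String[] args) {
        DerivedIndexSketch idx = new DerivedIndexSketch();
        idx.index(new MethodDecl("main", 12, true));
        idx.index(new MethodDecl("toString", 3, false));
        System.out.println(idx.staticMethods()); // only "main"
    }
}
```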
OCLR: A More Expressive, Pattern-Based Temporal Extension of OCL
Abstract
Modern enterprise information systems often require their functional and non-functional (e.g., Quality of Service) requirements to be specified using expressions that contain temporal constraints. Specification approaches based on temporal logics demand a certain knowledge of mathematical logic, which is difficult to find among practitioners; moreover, tool support for temporal logics is limited. On the other hand, a standard language such as the Object Constraint Language (OCL), which benefits from the availability of several industrial-strength tools, does not support temporal expressions.
In this paper we propose OCLR, an extension of OCL with support for temporal constraints based on well-known property specification patterns. Compared with previous extensions, we add support for referring to a specific occurrence of an event as well as for indicating a time distance between events and/or from scope boundaries. The proposed extension defines a new syntax, very close to natural language, paving the way for rapid adoption by practitioners. We show the application of the language in a case study in the domain of eGovernment, developed in collaboration with a public service partner.
Wei Dou, Domenico Bianculli, Lionel Briand
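Since OCLR's own syntax is not reproduced here, a plain-Java sketch of one underlying property specification pattern may help: the classic "response" pattern requires that every occurrence of an event A be followed by an event B within a bounded distance. The method name and trace representation are assumptions for illustration only; OCLR expresses such patterns declaratively.

```java
import java.util.List;

// Plain-Java illustration of the kind of temporal pattern OCLR targets:
// "whenever event A occurs, event B must follow within k steps".
// This only illustrates the pattern semantics, not OCLR's syntax.
public class ResponsePatternSketch {
    static boolean respondsWithin(List<String> trace, String a, String b, int k) {
        for (int i = 0; i < trace.size(); i++) {
            if (!trace.get(i).equals(a)) continue;
            boolean answered = false;
            for (int j = i + 1; j <= Math.min(i + k, trace.size() - 1); j++) {
                if (trace.get(j).equals(b)) { answered = true; break; }
            }
            if (!answered) return false; // some A was never answered in time
        }
        return true;
    }

    public static void main(String[] args) {
        List<String> trace = List.of("req", "log", "ack", "req", "ack");
        System.out.println(respondsWithin(trace, "req", "ack", 2)); // true
        System.out.println(respondsWithin(trace, "req", "ack", 1)); // false
    }
}
```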
Interpretation of Linguistic Architecture
Abstract
The megamodeling language MegaL is designed to model the linguistic architecture of software systems: the relationships between software artifacts (e.g., files), software languages (e.g., programming languages), and software technologies (e.g., code generators) used in a system. The present paper delivers a form of interpretation for such megamodels: resolution of megamodel elements to resources (e.g., system artifacts) and evaluation of relationships, subject to designated programs (such as pluggable ‘tools’ for checking). Interpretation reduces concerns about the adequacy and meaning of megamodels, as it helps to apply the megamodels to actual systems. We leverage Linked Data principles for surfacing resolved megamodels by linking, for example, artifacts to GitHub repositories or concepts to DBpedia resources. We provide an executable specification (i.e., semantics) of interpreted megamodels and we discuss an implementation in terms of an object-oriented framework with dynamically loaded plugins.
Ralf Lämmel, Andrei Varanovich
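The resolution step can be pictured as a pluggable mapping from megamodel element names to resource URIs. The sketch below uses hypothetical names and a static lookup table; MegaL's actual interpreter resolves elements through dynamically loaded plugins and evaluates relationships as well.

```java
import java.net.URI;
import java.util.Map;

// Sketch of megamodel-element resolution as a pluggable lookup from
// element names to Linked-Data resources. All names are hypothetical;
// MegaL's real interpreter and plugin mechanism are richer than this.
public class ResolutionSketch {
    interface Resolver { URI resolve(String element); }

    public static void main(String[] args) {
        // One possible resolver: a static table linking an artifact to a
        // (hypothetical) GitHub repository and a concept to DBpedia.
        Map<String, URI> table = Map.of(
            "myParser", URI.create("https://github.com/example/my-parser"),
            "Java",     URI.create("http://dbpedia.org/resource/Java_(programming_language)"));
        Resolver r = table::get;
        System.out.println(r.resolve("Java"));
    }
}
```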
Alloy4SPV: A Formal Framework for Software Process Verification
Abstract
In this paper we present a framework for software process verification called Alloy4SPV, which uses a subset of UML2 Activity Diagrams as a process modeling language. To achieve software process verification, we i) define a formal model of our process modeling language using first-order logic, ii) give it a formal semantics based on the fUML standard, and iii) implement this formalization using the Alloy language [1]. To ease its adoption by process modelers, our framework comes with a graphical tool and a ready-to-use, customizable set of software process properties. We categorize these properties into two categories, syntactical and behavioral. We extend the set of behavioral properties identified in the literature with two new categories: organizational properties, which relate to resource management and planning during process execution, and business properties, which are project- or process-specific.
Yoann Laurent, Reda Bendraou, Souheib Baarir, Marie-Pierre Gervais
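To give a flavor of the "syntactical" property category, the sketch below checks one such property, that every node of a process graph is reachable from the initial node, with a plain breadth-first search. This only illustrates what the property means; Alloy4SPV expresses such properties in Alloy and checks them with its SAT-based analyzer.

```java
import java.util.*;

// A plain-Java flavor of one "syntactical" process property: every
// node must be reachable from the initial node. Alloy4SPV checks such
// properties with Alloy; this sketch only illustrates their meaning.
public class ReachabilitySketch {
    static boolean allReachable(Map<String, List<String>> edges, String initial) {
        Set<String> seen = new HashSet<>();
        Deque<String> todo = new ArrayDeque<>(List.of(initial));
        while (!todo.isEmpty()) {
            String n = todo.pop();
            if (seen.add(n))
                todo.addAll(edges.getOrDefault(n, List.of()));
        }
        // Every node mentioned as a source or target must have been seen.
        Set<String> all = new HashSet<>(edges.keySet());
        edges.values().forEach(all::addAll);
        return seen.containsAll(all);
    }

    public static void main(String[] args) {
        Map<String, List<String>> proc = Map.of(
            "init",   List.of("review"),
            "review", List.of("final"),
            "orphan", List.of("final"));  // "orphan" is unreachable
        System.out.println(allReachable(proc, "init")); // false
    }
}
```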
Sensor Data Visualisation: A Composition-Based Approach to Support Domain Variability
Abstract
In the context of the Internet of Things, sensors surround our environment. These small pieces of electronics are embedded in everyday objects (e.g., cars, doors, radiators, smartphones) and continuously collect information about their environment. One of the biggest challenges is to support the development of accurate monitoring dashboards to visualise such data. The one-size-fits-all paradigm does not apply in this context, as users' roles vary and affect the way data should be visualised: a building manager does not need to work on the same data as regular users. This paper presents an approach based on model composition techniques to support the development of such monitoring dashboards, taking into account domain variability. This variability is supported at both the implementation and modelling levels. The results are validated on a case study named SmartCampus, involving sensors deployed in a real academic campus.
Ivan Logre, Sébastien Mosser, Philippe Collet, Michel Riveill
Identifying and Visualising Commonality and Variability in Model Variants
Abstract
Models, like any other software artifact, evolve over time during the development life-cycle, so different versions of the same model exist at different times. Model comparison across versions has received a lot of attention in recent years, but existing techniques focus on comparing only two model versions at a time to identify model differences. Independently of the model versioning context, another dimension of variation, called variation in space, appears in models. Contrary to variation in time, variation in space means that a set of model variants exists and must be maintained. Comparing all these model variants to identify common and variable elements thus becomes a major challenge. Current approaches to model variant comparison lack flexibility and an appropriate visualisation paradigm. The contribution of this paper is the Model Variants Comparison approach (MoVaC). This approach compares a set of model variants and identifies both commonality and variability in the form of what are referred to as features, each consisting of a set of atomic model elements. MoVaC also visualises the identified features using a graphical representation in which common and variable features are explicitly presented to users. We validate the approach on two use cases, demonstrating the flexibility of MoVaC in being applied to any kind of EMF-based model variants.
Jabier Martinez, Tewfik Ziadi, Jacques Klein, Yves le Traon
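The core of commonality/variability identification can be pictured as set operations over the atomic elements of each variant: elements present in every variant are common, and the rest are variable. The sketch below illustrates this idea on plain string sets; MoVaC itself operates on EMF model elements and adds the feature visualisation on top.

```java
import java.util.*;

// Commonality/variability identification pictured as set operations
// over the atomic elements of each variant. MoVaC works on EMF model
// elements and visualises the result; this is only the core idea.
public class CommonalitySketch {
    public static void main(String[] args) {
        List<Set<String>> variants = List.of(
            Set.of("Engine", "Wheels", "Radio"),
            Set.of("Engine", "Wheels", "GPS"),
            Set.of("Engine", "Wheels", "Radio", "GPS"));

        // Common feature: elements present in every variant.
        Set<String> common = new HashSet<>(variants.get(0));
        variants.forEach(common::retainAll);

        // Variable features: all remaining elements.
        Set<String> variable = new HashSet<>();
        variants.forEach(variable::addAll);
        variable.removeAll(common);

        System.out.println("common   = " + common);   // Engine, Wheels
        System.out.println("variable = " + variable); // Radio, GPS
    }
}
```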
Modular DSLs for Flexible Analysis: An e-Motions Reimplementation of Palladio
Abstract
We address some of the limitations in extending and validating MDE-based implementations of non-functional property (NFP) analysis tools by presenting a modular, model-based partial reimplementation of one well-known analysis framework, namely the Palladio Architecture Simulator. We specify the key DSLs from Palladio in the e-Motions system, describing the basic simulation semantics as a set of graph transformation rules. Different properties to be analysed are then encoded as separate, parametrised DSLs, independent of the definition of Palladio. These can then be composed with the base Palladio DSL to generate specific simulation environments. Models created in the Palladio IDE can be fed directly into this simulation environment for analysis. We demonstrate two main benefits of our approach: 1) the semantics of the simulation and the non-functional properties to be analysed are made explicit in the respective DSL specifications, and 2) because of the compositional definition, we can add definitions of new non-functional properties and their analyses.
Antonio Moreno-Delgado, Francisco Durán, Steffen Zschaler, Javier Troya
Language-Independent Traceability with Lässig
Abstract
Typical programming languages, including model transformation languages, do not support traceability. Applications requiring inter-object traceability implement traceability support repeatedly for different domains. In this paper we introduce a solution for generic traceability which enables the generation of trace models for all programming languages compiling to Virtual Machine (VM) bytecode by leveraging automatically generated observer aspects.
We implement our solution in a tool called Lässig adding traceability support to all programming languages compiling to the Java Virtual Machine (JVM). We evaluate and discuss general feasibility, correctness, and the performance overhead of our solution by applying it to three model-to-model transformations.
Our generic traceability solution is capable of automatically establishing complete sets of trace links for transformation programs in various languages at minimal cost. Lässig is available as an open-source project for integration into modeling frameworks.
Rolf-Helge Pfeiffer, Jan Reimann, Andrzej Wąsowski
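The essence of observer-based tracing can be sketched as wrapping a transformation step so that every produced target element is recorded against its source in a trace model. The sketch below does this with an explicit higher-order wrapper; Lässig achieves the same effect transparently via generated aspects woven at the bytecode level, and its actual API differs from these illustrative names.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Essence of observer-based tracing: wrap a transformation step so
// every produced target element is recorded against its source.
// Lässig does this transparently via aspects woven into JVM bytecode;
// the names below are illustrative only.
public class TraceSketch {
    record TraceLink(Object source, Object target) {}
    static final List<TraceLink> traceModel = new ArrayList<>();

    // Wraps any source -> target mapping with trace recording.
    static <S, T> Function<S, T> traced(Function<S, T> step) {
        return s -> {
            T t = step.apply(s);
            traceModel.add(new TraceLink(s, t));
            return t;
        };
    }

    public static void main(String[] args) {
        Function<String, String> classToTable = traced(n -> "TBL_" + n.toUpperCase());
        classToTable.apply("Customer");
        System.out.println(traceModel); // [TraceLink[source=Customer, target=TBL_CUSTOMER]]
    }
}
```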
A Family-Based Framework for i-DSML Adaptation
Abstract
One of the main goals of Model-Driven Engineering (MDE) is the manipulation of models as software artifacts. Model execution is in particular a means to substitute models for code. Precisely, if models of a dedicated Domain-Specific Modeling Language (DSML) are interpreted through an execution engine, then this DSML is called an interpreted DSML (i-DSML for short). The possibility of extending i-DSMLs to adapt models directly during their execution allows the building of adaptable i-DSMLs. In this article, we demonstrate that specializing adaptable i-DSMLs enables the definition of more accurate adaptation policies: domain specificities are the key factors in identifying adaptations that really make sense. To this end, we introduce the concept of family as a means to encapsulate adaptation operations attached to a particular domain. Families can be specialized for the purpose of defining a hierarchy of adaptation contexts.
Samson Pierre, Eric Cariou, Olivier Le Goaer, Franck Barbier
Normalizing Heterogeneous Service Description Models with Generated QVT Transformations
Abstract
Service-Oriented Architectures (SOAs) enable the reuse and substitution of software services to develop highly flexible software systems. To benefit from the growing plethora of available services, sophisticated service discovery approaches are needed that bring service requests and offers together. Such approaches rely on rich service descriptions, which also specify the behavior of provided/requested services, e.g., by pre- and postconditions of operations. As a basis for the specification, a data schema is used, which defines the data types used and their relations. However, data schemas are typically heterogeneous with respect to their structure and terminology, since they are created individually in their diverse application contexts. As a consequence, the behavioral models typed over the heterogeneous data schemas cannot be compared directly. In this paper, we present a holistic approach to normalizing rich service description models to enable behavior-aware service discovery. The approach consists of a matching algorithm that helps to resolve structural and terminological heterogeneity in data schemas by exploiting domain-specific background ontologies. The resulting data schema mappings are represented in terms of Query/View/Transformation (QVT) relations that can reflect even complex n:m correspondences. By executing the transformation, behavioral models are automatically normalized, which is a prerequisite for behavior-aware operation matching.
Simon Schwichtenberg, Christian Gerth, Zille Huma, Gregor Engels
Towards the Systematic Construction of Domain-Specific Transformation Languages
Abstract
General-purpose transformation languages, like ATL or QVT, are the basis for model manipulation in Model-Driven Engineering (MDE). However, as MDE moves to more complex scenarios, there is the need for specialized transformation languages for activities like model merging, migration or aspect weaving, or for specific domains of wide use like UML. Such domain-specific transformation languages (DSTLs) encapsulate transformation knowledge within a language, enabling the reuse of recurrent solutions to transformation problems.
Nowadays, many DSTLs are built in an ad-hoc manner, which incurs a high development cost to achieve a full-featured implementation. Alternatively, they are realised by embedding them into general-purpose transformation or programming languages like ATL or Java.
In this paper, we propose a framework for the systematic creation of DSTLs. First, we look into the characteristics of domain-specific transformation tools, deriving a categorization which is the basis of our framework. Then, we propose a domain-specific language to describe DSTLs, from which we derive a ready-to-run workbench which includes the abstract syntax, concrete syntax and translational semantics of the DSTL.
Jesús Sánchez Cuadrado, Esther Guerra, Juan de Lara
A MOF-Based Framework for Defining Metrics to Measure the Quality of Models
Abstract
Controlled experiments in model-based software engineering, especially those involving human subjects performing modeling tasks, often require comparing models produced by experiment subjects with reference models, which are considered to be correct and complete. The purpose of such comparison is to assess the quality of models produced by experiment subjects so that experiment hypotheses can be accepted or rejected. The quality of models is typically measured quantitatively based on metrics. Manually defining such metrics for a rich modeling language is often cumbersome and error-prone. It can also result in metrics that do not systematically consider relevant details and in turn may produce biased results. In this paper, we present a framework to automatically generate quality metrics for MOF-based metamodels, which in turn can be used to measure the quality of models (instances of the MOF-based metamodels). This framework was evaluated by comparing its results with manually derived quality metrics for UML class and sequence diagrams and it has been used to derive metrics for measuring the quality of UML state machine diagrams. Results show that it is more efficient and systematic to define quality metrics with the framework than doing it manually.
Tao Yue, Shaukat Ali
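To illustrate what a generically derived metric looks like, the sketch below uses EMF's Ecore, a MOF-like metamodeling API, to count model elements per metaclass reflectively, with no language-specific code. This is a deliberately simplified assumption-based example: it presumes EMF on the classpath, and the paper's framework derives a considerably richer metric set in a comparably generic way.

```java
import java.util.HashMap;
import java.util.Map;
import org.eclipse.emf.ecore.EObject;

// A reflective, metamodel-driven metric using EMF's Ecore as a
// MOF-like stand-in: count model elements per metaclass without
// hand-written code for any particular modeling language. Assumes
// EMF is on the classpath; the paper's framework generates far
// richer metrics in a similarly generic way.
public class MetamodelMetricsSketch {
    static Map<String, Integer> instancesPerMetaclass(EObject root) {
        Map<String, Integer> counts = new HashMap<>();
        counts.merge(root.eClass().getName(), 1, Integer::sum);
        root.eAllContents().forEachRemaining(
            e -> counts.merge(e.eClass().getName(), 1, Integer::sum));
        return counts;
    }
}
```

Applied to a subject model and a reference model, such count maps could then be compared as one ingredient of a quality score.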

Applications

Neo4EMF, A Scalable Persistence Layer for EMF Models
Abstract
Several industrial contexts require software engineering methods and tools able to handle large-scale artifacts. The central idea of abstraction makes model-driven engineering (MDE) a promising approach in such contexts, but current tools do not scale to very large models (VLMs): even the basic task of storing and accessing VLMs from a persistent store is currently inefficient. In this paper we propose a scalable persistence layer for the de-facto standard MDE framework, EMF. The layer exploits the efficiency of graph databases in storing and accessing graph structures, which EMF models essentially are. A preliminary experimentation shows that typical reverse-engineering queries over EMF models perform well on such a persistence layer compared to file-based backends.
Amine Benelallam, Abel Gómez, Gerson Sunyé, Massimo Tisi, David Launay
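The essence of an EMF-to-graph mapping can be sketched with EMF's reflective API: every EObject becomes a node labelled with its metaclass, attributes become node properties, and references become edges. Plain Java records stand in for a live graph database below; Neo4EMF's actual layer persists into Neo4j and loads lazily, and all names here are illustrative.

```java
import java.util.*;
import org.eclipse.emf.ecore.EAttribute;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.EReference;

// Essence of an EMF-to-graph mapping: every EObject becomes a node
// labelled with its metaclass, attributes become node properties and
// references become edges. Plain records stand in for a live graph
// database; Neo4EMF itself persists into Neo4j lazily.
public class GraphMappingSketch {
    record Node(String label, Map<String, Object> props) {}
    record Edge(Node from, String type, Node to) {}

    static Node toNode(EObject o, Map<EObject, Node> done, List<Edge> edges) {
        if (done.containsKey(o)) return done.get(o);
        Map<String, Object> props = new HashMap<>();
        for (EAttribute a : o.eClass().getEAllAttributes())
            props.put(a.getName(), o.eGet(a));
        Node n = new Node(o.eClass().getName(), props);
        done.put(o, n);
        // Many-valued references elided for brevity.
        for (EReference r : o.eClass().getEAllReferences())
            if (!r.isMany() && o.eGet(r) instanceof EObject t)
                edges.add(new Edge(n, r.getName(), toNode(t, done, edges)));
        return n;
    }
}
```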
Towards an Infrastructure for Domain-Specific Languages in a Multi-domain Cloud Platform
Abstract
Recently, cloud computing has gained more and more traction, not only in fast-moving domains such as private and enterprise software, but also in more traditional domains like industrial automation. However, for rolling out automation Software-as-a-Service (SaaS) solutions to low-end, long-tail markets with thousands of small customers, important prerequisites for cloud scalability, such as easy self-service for the customer, are still missing. There exists a large gap between the engineering effort required to configure an automation system and the effort automation companies and their customers can afford. At the same time, tools for implementing Domain-Specific Languages (DSLs) have recently become more efficient and easier to use. Tailored DSLs that use abstractions for particular (sub-)domains and omit other complexities would allow customers to handle their applications in a SaaS-oriented, self-service manner. In this paper, we present an approach towards a model-based infrastructure for engineering languages for a multi-domain automation cloud platform that makes use of modern DSL frameworks. This allows automation SaaS providers to rapidly design sub-domain-specific engineering tools based on a common platform. End customers can then use these tailored languages to engineer their specific applications in an efficient manner.
Thomas Goldschmidt
Experiences with Business Process Model and Notation for Modeling Integration Patterns
Abstract
Enterprise Integration Patterns (EIPs) are a collection of widely used best practices for integrating enterprise applications. However, the integration domain lacks a formal model comparable to the Business Process Model and Notation (BPMN) in the workflow domain, where BPMN is the de-facto standard for modeling business process semantics and their runtime behavior.
In this work we present a mapping of the integration semantics represented by EIPs onto BPMN syntax and execution semantics. We show that the resulting runtime-independent, BPMN-based integration model can be applied to a real-world integration scenario through compilation to an open-source middleware system. Based on that system, we report on our practical experiences with BPMN applied to the integration domain.
Daniel Ritter
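As a concrete reference point for the integration domain, the sketch below shows a Content-Based Router, one of the classic EIPs, in the Java DSL of Apache Camel, an open-source middleware in which the patterns are directly executable. The endpoint URIs are illustrative assumptions, and no claim is made here that Camel is the specific middleware targeted by the paper.

```java
import org.apache.camel.builder.RouteBuilder;

// A Content-Based Router, one of the classic EIPs, expressed in the
// Java DSL of Apache Camel. Endpoint URIs are illustrative; the
// paper's contribution is modeling such patterns in BPMN and
// compiling them to executable routes of this kind.
public class ContentBasedRouterSketch extends RouteBuilder {
    @Override
    public void configure() {
        from("file:orders/in")                     // message source
            .choice()                              // the router itself
                .when(xpath("/order/@type = 'gold'"))
                    .to("jms:queue:gold")          // premium handling
                .otherwise()
                    .to("jms:queue:standard");     // default handling
    }
}
```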
Backmatter
Metadata
Title
Modelling Foundations and Applications
Editors
Jordi Cabot
Julia Rubin
Copyright Year
2014
Publisher
Springer International Publishing
Electronic ISBN
978-3-319-09195-2
Print ISBN
978-3-319-09194-5
DOI
https://doi.org/10.1007/978-3-319-09195-2
