About This Book

The MODELS series of conferences is the premier venue for the exchange of innovative technical ideas and experiences focusing on a very important new technical discipline: model-driven software and systems engineering. The expansion of this discipline is a direct consequence of the increasing significance and success of model-based methods in practice. Numerous efforts resulted in the invention of concepts, languages and tools for the definition, analysis, transformation, and verification of domain-specific modeling languages and general-purpose modeling language standards, as well as their use for software and systems engineering.

MODELS 2010, the 13th edition of the conference series, took place in Oslo, Norway, October 3-8, 2010, along with numerous satellite workshops, symposia and tutorials. The conference was fortunate to have three prominent keynote speakers: Ole Lehrmann Madsen (Aarhus University, Denmark), Edward A. Lee (UC Berkeley, USA) and Pamela Zave (AT&T Laboratories, USA).

To provide a broader forum for reporting on scientific progress as well as on experience stemming from practical applications of model-based methods, the 2010 conference accepted submissions in two distinct tracks: Foundations and Applications. The primary objective of the first track is to present new research results dedicated to advancing the state-of-the-art of the discipline, whereas the second aims to provide a realistic and verifiable picture of the current state-of-the-practice of model-based engineering, so that the broader community could be better informed of the capabilities and successes of this relatively young discipline. This volume contains the final version of the papers accepted for presentation at the conference from both tracks.

Table of Contents

Frontmatter

Keynote 2

Session 4a: Distributed/Embedded Software Development

Transformation-Based Parallelization of Request-Processing Applications

Abstract
Multicore, multithreaded processors are rapidly becoming the platform of choice for high-throughput request-processing applications (RPAs). We refer to this class of modern parallel platforms as multi-⋆ systems. In this paper, we describe the design and implementation of Lagniappe, a translator that simplifies RPA development by transforming portable models of RPAs to high-throughput multi-⋆ executables. We demonstrate Lagniappe’s effectiveness with a pair of stateful RPA case studies.
Taylor L. Riché, Harrick M. Vin, Don Batory
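
As a purely illustrative aside (none of this is Lagniappe code, and all names are hypothetical), the following minimal Java sketch shows the kind of target such a translator might emit: a request-processing loop distributed over a pool of worker threads sized to the available cores. The stateful aspects that make the transformation in the paper non-trivial are deliberately left out.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    /** Hypothetical, hand-written stand-in for generated multi-core RPA code. */
    public class ParallelRequestProcessor {

        /** A single request-processing stage (assumed stateless for simplicity). */
        interface Stage {
            String process(String request);
        }

        public static void main(String[] args) {
            Stage stage = request -> "processed:" + request; // toy stage

            // One worker per hardware thread, as a translator targeting a
            // multi-core platform might configure.
            int workers = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(workers);

            for (int i = 0; i < 100; i++) {
                final String request = "req-" + i;
                pool.submit(() -> System.out.println(stage.process(request)));
            }
            pool.shutdown();
        }
    }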

Model Driven Orchestration: Design for Service Compatibility

Abstract
Service composition is a recent field that has seen a flurry of different approaches proposed towards the goal of flexible distributed heterogeneous interoperation of software systems, usually based on the expectation that such systems must be derived from higher-level models rather than be coded at a low level. In practice, achieving service interoperability nonetheless continues to require a significant modelling effort at multiple levels, and existing formal approaches typically require the analysis of the global space of joint executions of interacting services. Based on our earlier work on providing locally checkable consistency rules for guaranteeing the behavioral consistency of inheritance hierarchies, we propose a model-driven approach for creating consistent service orchestrations. We represent service execution and interaction with a high-level model in terms of Petri-net based Behavior diagrams, provide formal criteria for service consistency that can be checked in terms of local model properties, and give a design methodology for developing services that are guaranteed to be interoperable.
Georg Grossmann, Michael Schrefl, Markus Stumptner
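
Since the approach rests on Petri-net based Behavior diagrams, a tiny, self-contained Java sketch of the underlying token game may help readers unfamiliar with the formalism; it is not taken from the paper, and the class and place names are invented.

    import java.util.*;

    /** Minimal Petri-net skeleton: a transition is enabled when all of its
     *  input places hold a token, and firing moves tokens from inputs to outputs. */
    public class PetriNet {
        private final Map<String, Integer> marking = new HashMap<>();      // place -> tokens
        private final Map<String, List<String>> inputs = new HashMap<>();  // transition -> input places
        private final Map<String, List<String>> outputs = new HashMap<>(); // transition -> output places

        public void addPlace(String place, int tokens) { marking.put(place, tokens); }

        public void addTransition(String t, List<String> in, List<String> out) {
            inputs.put(t, in);
            outputs.put(t, out);
        }

        public boolean enabled(String t) {
            return inputs.get(t).stream().allMatch(p -> marking.getOrDefault(p, 0) > 0);
        }

        public void fire(String t) {
            if (!enabled(t)) throw new IllegalStateException(t + " is not enabled");
            inputs.get(t).forEach(p -> marking.merge(p, -1, Integer::sum));
            outputs.get(t).forEach(p -> marking.merge(p, 1, Integer::sum));
        }

        public static void main(String[] args) {
            PetriNet net = new PetriNet();
            net.addPlace("orderReceived", 1);
            net.addPlace("orderShipped", 0);
            net.addTransition("ship", List.of("orderReceived"), List.of("orderShipped"));
            net.fire("ship");
            System.out.println(net.marking); // e.g. {orderShipped=1, orderReceived=0}
        }
    }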

Embedded Software Development with Projectional Language Workbenches

Abstract
This paper describes a novel approach to embedded software development. Instead of using a combination of C code and modeling tools, we propose an approach where modeling and programming is unified using projectional language workbenches. These allow the incremental, domain-specific extension of C and a seamless integration between the various concerns of an embedded system. The paper does not propose specific extensions to C in the hope that everybody will use them; rather, the paper illustrates the benefits of domain specific extension using projectional editors. In the paper we describe the problems with the traditional approach to embedded software development and how the proposed approach can solve them. The main part of the paper describes our modular embedded language, a proof-of-concept implementation of the approach based on JetBrains MPS. We implemented a set of language extensions for embedded programming, such as state machines, tasks, type system extensions as well as a domain specific language (DSL) for robot control. The language modules are seamlessly integrated, leading to a very efficient way for implementing embedded software.
Markus Voelter
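
The paper’s extensions are made to C inside the MPS projectional editor; the short Java sketch below (illustrative only, with hypothetical names) merely shows the kind of hand-written state-machine boilerplate that such a first-class state-machine construct is meant to replace.

    /** Hand-coded equivalent of what a state-machine language extension abstracts away. */
    public class BlinkStateMachine {
        enum State { OFF, ON }
        enum Event { TOGGLE }

        private State state = State.OFF;

        public void handle(Event event) {
            if (event != Event.TOGGLE) {
                return; // ignore events this machine does not react to
            }
            state = (state == State.OFF) ? State.ON : State.OFF;
        }

        public static void main(String[] args) {
            BlinkStateMachine sm = new BlinkStateMachine();
            sm.handle(Event.TOGGLE);
            System.out.println(sm.state); // ON
        }
    }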

Session 4b: (De)Composition and Refactoring

Concern-Based (de)composition of Model-Driven Software Development Processes

Abstract
An MDSD process is often organised as a transformation chain. This can threaten the Separation of Concerns (SoC) principle, because information is replicated in, scattered over, and tangled in different models. Aspect-Oriented Software Development (AOSD) supports SoC to avoid such scattering and tangling of information. Although there are integrations of MDSD and AOSD, there is no approach that uses concern separation for all artifacts (documents, models, code) involved in an MDSD process as the primary (de)composition method for the complete process. In this paper, we propose such an approach, called ModelSoC. It extends the hyperspace model for multi-dimensional SoC to deal with information that is replicated in different models. We present a ModelSoC implementation, based on our Reuseware framework, that organises all information provided in arbitrary models during development in a concern space and composes integrated views as well as the final system from that. This is shown on the development of a demonstrator system.
Jendrik Johannes, Uwe Aßmann

Flexible Model Element Introduction Policies for Aspect-Oriented Modeling

Abstract
Aspect-Oriented Modeling techniques make it possible to use model transformation to achieve advanced separation of concerns within models. Applying aspects that introduce model elements into a base model in the context of large, potentially composite models is nevertheless tricky: when a pointcut model matches several join points within the base model, it is not clear whether the introduced element should be instantiated once for each match, once within each composite, once for the whole model, or based on more elaborate criteria. This paper argues that in order to enable a modeler to write semantically correct aspects for large, composite models, an aspect weaver must support a flexible instantiation policy for model element introduction. Example models highlighting the need for such a mechanism are shown, and details of how such policies can be implemented are presented.
Brice Morin, Jacques Klein, Jörg Kienzle, Jean-Marc Jézéquel
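
The following toy Java fragment (not the authors’ weaver; all names are hypothetical) illustrates how such an instantiation policy changes the outcome when one pointcut matches several join points spread over two composites: one introduction per match, per composite, or per model.

    import java.util.*;
    import java.util.stream.Collectors;

    /** Toy weaver fragment showing the effect of an instantiation policy. */
    public class IntroductionPolicyDemo {

        enum Policy { PER_MATCH, PER_COMPOSITE, PER_MODEL }

        /** A join-point match, remembering which composite (sub-model) it occurred in. */
        record Match(String composite, String element) {}

        static List<String> introduce(List<Match> matches, Policy policy) {
            switch (policy) {
                case PER_MATCH:
                    return matches.stream()
                            .map(m -> "Logger@" + m.composite() + "/" + m.element())
                            .collect(Collectors.toList());
                case PER_COMPOSITE:
                    return matches.stream()
                            .map(Match::composite).distinct()
                            .map(c -> "Logger@" + c)
                            .collect(Collectors.toList());
                default: // PER_MODEL
                    return List.of("Logger@model");
            }
        }

        public static void main(String[] args) {
            List<Match> matches = List.of(
                    new Match("billing", "Invoice"),
                    new Match("billing", "Payment"),
                    new Match("shipping", "Parcel"));
            System.out.println(introduce(matches, Policy.PER_MATCH));     // 3 introductions
            System.out.println(introduce(matches, Policy.PER_COMPOSITE)); // 2 introductions
            System.out.println(introduce(matches, Policy.PER_MODEL));     // 1 introduction
        }
    }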

Role-Based Generic Model Refactoring

Abstract
Refactorings can be used to improve the structure of software artifacts while preserving the semantics of the encapsulated information. Various types of refactorings have been proposed and implemented for programming languages such as Java or C#. With the advent of Model-Driven Software Development (MDSD), the need for restructuring models similar to programs has emerged. Previous work in this field [1,2] indicates that refactorings can be specified generically to foster their reuse. However, existing approaches can handle only certain types of modelling languages and reuse refactorings only once per language.
In this paper a novel approach based on role models to specify generic refactorings is presented. We discuss how this resolves the limitations of previous works, as well as how specific refactorings can be defined as extensions to generic ones. The approach was implemented based on the Eclipse Modeling Framework (EMF) [3] and evaluated using multiple modelling languages and refactorings.
Jan Reimann, Mirko Seifert, Uwe Aßmann
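
To make the idea of a role-based generic refactoring concrete, here is a minimal Java sketch (independent of the paper’s EMF-based implementation, with invented names): the refactoring is written once against a role, and any metaclass of any modelling language that is bound to that role can reuse it.

    /** A generic rename refactoring defined against a role rather than a concrete metaclass. */
    public class GenericRenameDemo {

        /** Role expected by the generic refactoring. */
        interface NamedElement {
            String getName();
            void setName(String name);
        }

        /** Generic refactoring, reusable across languages that bind the role. */
        static void rename(NamedElement element, String newName) {
            if (newName == null || newName.isBlank()) {
                throw new IllegalArgumentException("new name must not be empty");
            }
            element.setName(newName);
        }

        /** One possible binding: a UML-like class plays the NamedElement role. */
        static class UmlClass implements NamedElement {
            private String name;
            UmlClass(String name) { this.name = name; }
            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
        }

        public static void main(String[] args) {
            UmlClass c = new UmlClass("Acount");
            rename(c, "Account");
            System.out.println(c.getName()); // Account
        }
    }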

Session 4c: Model Change

Precise Detection of Conflicting Change Operations Using Process Model Terms

Abstract
Version management of process models requires that changes can be resolved by applying change operations. Conflict detection is an important part of version management, and minimizing the number of detected conflicts also reduces the overhead when resolving changes. As not every syntactic conflict leads to a conflict when taking into account model semantics, a computation of conflicts based solely on the syntax leads to an unnecessarily high number of conflicts. In this paper, we introduce the notion of syntactic and semantic conflicts for change operations of process models. We provide a method to efficiently compute conflicts, using a term formalization of process models. Using this approach, we can significantly reduce the number of overall conflicts and thereby reduce the amount of work for the user when resolving conflicts.
Christian Gerth, Jochen M. Küster, Markus Luckey, Gregor Engels

Capturing the Intention of Model Changes

Abstract
Model differences calculated by differencing algorithms contain the atomic changes made to a model. However, they do not capture the user’s intention of the modification. We present concepts and a framework for abstracting from atomic changes to produce semantic changes, for example, “move all classes from package A to B” instead of “move classes X, Y, and Z from package A to B”. Semantic changes abstracted this way are closer to the user’s intention and are applicable to other models much like customizable refactorings.
Patrick Könemann
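
A toy Java sketch of the abstraction step (hypothetical names, not the authors’ framework): atomic “move class” changes that share the same source and target package are lifted to one coarser, intention-level change description.

    import java.util.*;
    import java.util.stream.Collectors;

    /** Toy illustration of lifting atomic changes to an intention-level description. */
    public class SemanticChangeDemo {

        record MoveClass(String clazz, String from, String to) {}

        /** Group atomic moves that share source and target package. */
        static List<String> summarize(List<MoveClass> atomicChanges) {
            Map<String, List<MoveClass>> grouped = atomicChanges.stream()
                    .collect(Collectors.groupingBy(m -> m.from() + " -> " + m.to(),
                                                   LinkedHashMap::new,
                                                   Collectors.toList()));
            return grouped.entrySet().stream()
                    .map(e -> "move " + e.getValue().size()
                            + " classes from " + e.getValue().get(0).from()
                            + " to " + e.getValue().get(0).to())
                    .collect(Collectors.toList());
        }

        public static void main(String[] args) {
            List<MoveClass> atomic = List.of(
                    new MoveClass("X", "A", "B"),
                    new MoveClass("Y", "A", "B"),
                    new MoveClass("Z", "A", "B"));
            System.out.println(summarize(atomic)); // [move 3 classes from A to B]
        }
    }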

Selective and Consistent Undoing of Model Changes

Abstract
There are many reasons why modeling tools support the undoing of model changes. However, sequential undoing is no longer useful for interrelated, multi-diagrammatic modeling languages where model changes in one diagram may also affect other diagrams. This paper introduces selective undoing of model changes, where the designer decides which model elements to undo and our approach automatically suggests related changes in other diagrams that should also be undone. Our approach identifies dependencies among model changes through standard consistency and well-formedness constraints. It then investigates whether an undo causes inconsistencies and uses the dependencies to explore which other model changes to undo to preserve consistency. Our approach is fully automated and correct with respect to the constraints provided. It is also applicable to legacy models, provided that the models were version controlled. We demonstrate our approach’s scalability and correctness based on empirical evidence for a range of large, third-party models. The undoing is as complete and correct as the constraints are complete and correct.
Iris Groher, Alexander Egyed

Session 5a: (Meta)Models at Runtime

Modeling Features at Runtime

Abstract
A feature represents a functional requirement fulfilled by a system. Since many maintenance tasks are expressed in terms of features, it is important to establish the correspondence between a feature and its implementation in source code. Traditional approaches to establish this correspondence exercise features to generate a trace of runtime events, which is then processed by post-mortem analysis. These approaches typically generate large amounts of data to analyze. Due to their static nature, these approaches do not support incremental and interactive analysis of features. We propose a radically different approach called live feature analysis, which provides a model at runtime of features. Our approach analyzes features on a running system and also makes it possible to “grow” feature representations by exercising different scenarios of the same feature, and identifies execution elements even to the sub-method level.
We describe how live feature analysis is implemented effectively by annotating structural representations of code based on abstract syntax trees. We illustrate our live analysis with a case study where we achieve a more complete feature representation by exercising and merging variants of feature behavior, and demonstrate the efficiency of our technique with benchmarks.
Marcus Denker, Jorge Ressia, Orla Greevy, Oscar Nierstrasz

Metamodel-Based Information Integration at Industrial Scale

Abstract
Flexible data integration has been an important IT research goal for decades. About ten years ago, a significant step was taken with the introduction of declarative methods (e.g., Clio). Since this work, mostly based on classic dependency analysis, extensions have been developed that express more powerful semantic relationships. However, much of this work has remained focused at the relational database (i.e., relatively low) level, and many of the extensions revert to specific algorithms and function specifications. At the same time, models have evolved to incorporate more powerful semantics (object or ontology-based methods). The work presented in the paper uses flexible metamodel-based mapping definitions that enable a model-driven engineering approach to integration, allowing declarative mapping specifications to be automatically executed at runtime within a single-formalism and single-tool framework. The paper reports how to create executable mappings for large-scale data integration scenarios with an interactive graphical tool.
Stefan Berger, Georg Grossmann, Markus Stumptner, Michael Schrefl

Inferring Meta-models for Runtime System Data from the Clients of Management APIs

Abstract
A new trend in runtime system monitoring is to utilize MOF-based techniques in analyzing the runtime system data. Approaches and tools have been proposed to automatically reflect the system data as MOF compliant models, but they all require users to manually build the meta-models that define the types and relations of the system data. To do this, users have to understand the different management APIs provided by different systems, and find out what kinds of data can be obtained from them. In this paper, we present an automated approach to inferring such meta-models by analyzing client code that accesses management APIs. A set of experiments show that the approach is useful for realizing runtime models and applicable to a wide range of systems, and the inferred meta-models are close to the reference ones.
Hui Song, Gang Huang, Yingfei Xiong, Franck Chauvel, Yanchun Sun, Hong Mei

Session 5b: Requirements Engineering

A Meta Model for Artefact-Orientation: Fundamentals and Lessons Learned in Requirements Engineering

Abstract
Requirements Engineering (RE) processes are highly volatile due to dependencies on customers’ capabilities or on the process models used, both of which complicate a standardised RE process. A promising solution is given by artefact-orientation, which emphasises the results rather than dictating a strict development process. On such a basis, one is able to incorporate domain-specific methods for producing artefacts without having to take into account the variability of process definitions. Although artefacts are known to support customisable development processes, there is still no common agreement about the structure and semantics of artefact-based methodologies. In this paper we discuss different interpretations of the term artefact, considering aspects like process integration capabilities and necessities within individual project environments. We contribute a meta model for artefact-orientation that is inferred from two RE models elaborated within industrial cooperation projects of our research group. We conclude with a discussion of performed case studies and ongoing work.
Daniel Méndez Fernández, Birgit Penzenstadler, Marco Kuhrmann, Manfred Broy

A Common Framework for Synchronization in Requirements Modelling Languages

Abstract
The ability to describe synchronization between the components of a model is a fundamental primitive in modelling languages. After studying existing modelling languages, we discovered that many synchronization mechanisms can be organized into a common abstract framework. Our framework is based on a notion of synchronization between transitions of complementary roles. It is parameterized by the number of interactions a transition can take part in, i.e., one vs. many, and the arity of the interaction mechanism, i.e., exclusive vs. shared, which, considered independently for the two complementary roles, result in 16 synchronization types. We describe how many modelling constructs, such as multi-source, multi-destination transitions, many composition operators, and many workflow patterns, are forms of synchronization. By generalizing and classifying synchronization types independently of a particular language, our goal is to enable language designers to adopt an appropriate synchronization type for a domain effectively.
Shahram Esmaeilsabzali, Nancy A. Day, Joanne M. Atlee
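
The count of 16 follows from two binary parameters chosen independently for each of the two complementary roles, i.e. (2 × 2)² = 16; the short Java enumeration below (purely illustrative, names invented) spells this arithmetic out.

    /** Enumerates the 16 synchronization types: two binary parameters,
     *  chosen independently for each of the two complementary roles. */
    public class SynchronizationTypes {
        enum Cardinality { ONE, MANY }   // number of interactions a transition takes part in
        enum Arity { EXCLUSIVE, SHARED } // arity of the interaction mechanism

        public static void main(String[] args) {
            int count = 0;
            for (Cardinality c1 : Cardinality.values())
                for (Arity a1 : Arity.values())
                    for (Cardinality c2 : Cardinality.values())
                        for (Arity a2 : Arity.values()) {
                            count++;
                            System.out.printf("role1=(%s,%s)  role2=(%s,%s)%n", c1, a1, c2, a2);
                        }
            System.out.println("total = " + count); // 16
        }
    }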

A Systematic Review of the Use of Requirements Engineering Techniques in Model-Driven Development

Abstract
Model-Driven Development (MDD) emphasizes the use of models at a higher abstraction level in the software development process and argues in favor of automation via model execution, transformation, and code generation. However, one current challenge is how to manage requirements during this process whilst simultaneously stressing the benefits of automation. This paper presents a systematic review of the current use of requirements engineering techniques in MDD processes and their actual automation level. 72 papers from the last decade have been reviewed from an initial set of 884 papers. The results show that although MDD techniques are used to a great extent in platform-independent models, platform-specific models, and at code level, at the requirements level most MDD approaches use only partially defined requirements models or even natural language. We additionally identify several research gaps such as a need for more efforts to explicitly deal with requirements traceability and the provision of better tool support.
Grzegorz Loniewski, Emilio Insfran, Silvia Abrahão

Session 5c: Slicing and Model Transformations

Slicing of UML Models Using Model Transformations

Abstract
This paper defines techniques for the slicing of UML models, that is, for the restriction of models to those parts which specify the properties of a subset of the elements within them. The purpose of this restriction is to produce a smaller model which permits more effective analysis and comprehension than the complete model, and also to form a step in the factoring of a model. We consider class diagrams, individual state machines, and communicating sets of state machines.
Kevin Lano, Shekoufeh Kolahdouz-Rahimi
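
The paper performs slicing via model transformations; as a language-neutral illustration of the underlying idea only (hypothetical names, plain Java instead of a transformation language), the sketch below restricts a toy class diagram to the elements reachable from a chosen subset along dependency edges.

    import java.util.*;

    /** Minimal slicing sketch: keep a chosen subset of elements plus everything it depends on. */
    public class ModelSlicer {

        static Set<String> slice(Map<String, List<String>> dependsOn, Set<String> seeds) {
            Set<String> kept = new LinkedHashSet<>(seeds);
            Deque<String> work = new ArrayDeque<>(seeds);
            while (!work.isEmpty()) {
                String current = work.poll();
                for (String dep : dependsOn.getOrDefault(current, List.of())) {
                    if (kept.add(dep)) {
                        work.push(dep);
                    }
                }
            }
            return kept;
        }

        public static void main(String[] args) {
            // Toy class diagram: Order -> Customer -> Address, Invoice -> Order
            Map<String, List<String>> dependsOn = Map.of(
                    "Order", List.of("Customer"),
                    "Customer", List.of("Address"),
                    "Invoice", List.of("Order"));
            System.out.println(slice(dependsOn, Set.of("Order")));
            // [Order, Customer, Address] -- Invoice is not part of the slice
        }
    }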

An Adjustable Transformation from OWL to Ecore

Abstract
Although there are sufficient similarities between the W3C Web Ontology Language OWL and the software modeling language Ecore, little research has been conducted into approaches which allow software engineers to incorporate existing Web ontologies into their familiar Ecore-based software engineering environments. This is becoming important since the number of significant Web ontologies is growing and software engineers are increasingly challenged to build software relying on such ontologies. Therefore, we propose an automatic transformation between OWL and Ecore that is adjustable between the two extremes of a result that is simple to understand and a result that preserves as much of the source ontology as possible. The transformation is realized as an Eclipse plug-in and thus integrates seamlessly with a software developer’s familiar environment.
Tirdad Rahmani, Daniel Oberle, Marco Dahms
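
For readers unfamiliar with the Ecore side of such a transformation, the fragment below hand-codes one small piece of the OWL-class-to-EClass direction using the standard EMF API. It assumes org.eclipse.emf.ecore is on the classpath, represents the OWL input as plain strings, and is in no way the authors’ adjustable transformation.

    import org.eclipse.emf.ecore.EClass;
    import org.eclipse.emf.ecore.EPackage;
    import org.eclipse.emf.ecore.EcoreFactory;

    /** Hand-rolled sketch: two OWL classes and a subclass axiom mapped to Ecore. */
    public class OwlToEcoreSketch {

        public static void main(String[] args) {
            EcoreFactory factory = EcoreFactory.eINSTANCE;

            EPackage pkg = factory.createEPackage();
            pkg.setName("onto");
            pkg.setNsPrefix("onto");
            pkg.setNsURI("http://example.org/onto");

            // owl:Class Person; owl:Class Student rdfs:subClassOf Person
            EClass person = factory.createEClass();
            person.setName("Person");

            EClass student = factory.createEClass();
            student.setName("Student");
            student.getESuperTypes().add(person); // subClassOf -> eSuperTypes

            pkg.getEClassifiers().add(person);
            pkg.getEClassifiers().add(student);

            System.out.println(pkg.getEClassifiers());
        }
    }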

Transforming Process Models: Executable Rewrite Rules versus a Formalized Java Program

Abstract
In the business process management community, transformations for process models are usually programmed using imperative languages (such as Java). The underlying mapping rules tend to be documented using informal visual rules, whereas they tend to be formalized using mathematical set constructs. In the Graph and Model Transformation communities, special-purpose languages and tools (such as GrGen) are being developed to support the direct execution of such mapping rules. As part of our ongoing effort to bridge these two communities, we have implemented a transformation from Petri nets to statecharts (PN2SC) using both approaches. By relying on technical comparison criteria and by making the solutions available for online replay, we illustrate that rule-based approaches require less specification effort due to their more declarative specification style and automatic performance optimizations. From a tool perspective, GrGen has better visualization and debugging support, whereas Java tools support evolution better.
Pieter Van Gorp, Rik Eshuis

Modeling the Internet

Abstract
The Internet has changed the world. Its astounding success has led to explosive growth in users, traffic, and applications, which has made its original architecture and protocols obsolete. Currently the networking community is questioning all aspects of Internet technology, as researchers and stakeholders try to understand how to meet new requirements for functionality, quality of service, availability, and security.
In this technological crisis, one of our most powerful technical tools, namely functional modeling (as opposed to performance modeling), is being completely ignored. In this talk I explain how modeling can be put to good use in the Internet context, how the culture of the Internet Engineering Task Force and the networking research community resist such efforts, and what might be done to bring about a cultural change.
The talk will be illustrated with examples and results from several projects using different modeling languages and techniques. These include:
  • A project to understand, formalize, and partially verify SIP, the dominant protocol for IP-based multimedia applications. The official specification of SIP consists of many thousands of pages of English text.
  • A project revealing many unknown defects in the Chord routing protocol, which is the most-cited protocol for maintenance of peer-to-peer networks.
  • A project to generate modular telecommunication services from models in a high-level, domain-specific language. This long-term project, based on the DFC architecture, has already produced two large-scale deployed systems.
  • A project to discover how to meet complex requirements for application sessions using software composition.
The descriptions of these projects will emphasize topics such as the search for the right modeling language and the search for principles of networking.
Pamela Zave

Keynote 3

Session 6a: Incorporating Quality Concerns in MDD

Design Guidelines for the Development of Quality-Driven Model Transformations

Abstract
The design of model transformations is a key aspect in model-driven development processes and is even more important when alternative transformations exist. The transformation designer must be able to identify which alternative transformations produce models with the desired quality attributes. This paper presents design guidelines for defining model transformations to deal with alternative transformations in which the selection of the alternatives is done based on quality attributes. The design guidelines are defined in the context of a multimodeling approach which, unlike conventional transformation processes that only use source models as input to apply transformations, also uses two additional models: a quality model for representing quality attributes, and a transformation model for representing the relationships among quality attributes and the alternative transformations in a particular domain.
Emilio Insfran, Javier Gonzalez-Huerta, Silvia Abrahão

Early Deviation Detection in Modeling Activities of MDE Processes

Abstract
Software Process Models (SPMs) are used to communicate about processes and to analyze them. They represent the entry point to PSEEs (Process-centered Software Engineering Environments), which use them to coordinate process agents in their tasks. When the process execution does not match the model, the common option in PSEEs is to ignore the model. If the actions of the agents are not tracked during deviations, it is impossible to evaluate the effect of these deviations on the success or failure of the process. In this paper we propose to deal with agent deviations during process execution. The originality of our approach comes from the fact that it is Process Modeling Language (PML) independent and that it offers early deviation detection. We validate our approach by implementing a prototype of a process definition and execution environment and evaluating its effectiveness with a group of developers enacting a process defined in it.
Marcos Aurélio Almeida da Silva, Reda Bendraou, Xavier Blanc, Marie-Pierre Gervais

Artifact or Process Guidance, an Empirical Study

Abstract
CASE tools provide artifact guidance and process guidance to enhance model quality and reduce their development time. These two types of guidance seem complementary, since artifact guidance supports defect detection after each iterative development step, while process guidance supports defect prevention during each such step. But can this intuition be empirically confirmed? We investigated this question by observing developers refactoring a UML model. This study attempted to assess how general were the observations made by Cass and Osterweil on the benefits of guidance to build such a model from scratch. It turns out that they do not generalize well: while their experiment observed a benefit on quality and speed with process guidance (but none with artifact guidance), we, in contrast, observed a benefit on quality at the expense of speed with artifact guidance (but none with process guidance).
Marcos Aurélio Almeida da Silva, Alix Mougenot, Reda Bendraou, Jacques Robin, Xavier Blanc

Session 6b: Model-Driven Engineering in Practice

Scaling Up Model Driven Engineering – Experience and Lessons Learnt

Abstract
Model driven engineering (MDE) aims to shift the focus of software development from coding to modeling. Models, being at a higher level of abstraction, are easy to understand and analyze for desired properties, leading to better control over the software development life cycle. Models are also used to automate the generation of implementation artefacts, resulting in greater productivity and uniform quality. The focus of the MDE community is largely on exploring modeling languages and model transformation techniques; not much attention is paid to issues of scale. Large business applications are typically developed over multiple geographical locations and have a lifecycle running into decades. This puts several additional demands on MDE infrastructure: a multi-user, multi-site model repository, versioning and configuration management support, change-driven incremental processes, etc. We describe our MDE infrastructure, our experience of using it to deliver several large business applications over the past 15 years, and the lessons learnt.
Vinay Kulkarni, Sreedhar Reddy, Asha Rajbhoj

Mod4J: A Qualitative Case Study of Model-Driven Software Development

Abstract
Model-driven software development (MDSD) has been on the rise over the past few years and is becoming more and more mature. However, evaluation in real-life industrial context is still scarce.
In this paper, we present a case study evaluating the applicability of a state-of-the-art MDSD tool, Mod4J, a suite of domain-specific languages (DSLs) for developing administrative enterprise applications. Mod4J was used to partially rebuild an industrially representative application. This implementation was then compared to a base implementation based on elicited success criteria. Our evaluation leads to a number of recommendations to improve Mod4J.
We conclude that having extension points for hand-written code is a good feature for a model driven software development environment.
Vincent Lussenburg, Tijs van der Storm, Jurgen Vinju, Jos Warmer

Modeling Issues: a Survival Guide for a Non-expert Modeler

Abstract
While developing an integral security model to be used in a Service Oriented Architecture (SOA) context, we found many ambiguities and inaccuracies in how authors speak of models, metamodels, profiles and so on. This led us to study a great number of references in search of precise definitions to help us address our research. Our study and discussions were so extensive that we are convinced they will be a valuable contribution to the community. In particular, in this paper we present several Reference Concept Maps that graphically depict a large number of definitions together with their associated bibliographical references. Nevertheless, we truly believe that there are still many concepts to be clarified, and that this clarification is essential so that basic modeling concepts can be best used by non-expert modelers.
Emilio Rodriguez-Priego, Francisco J. García-Izquierdo, Ángel Luis Rubio

Session 6c: Modeling Architecture

Monarch: Model-Based Development of Software Architectures

Abstract
In recent work we showed that it is possible to separate, and combine formal representations of, application properties and architectural styles, respectively. We do this by defining style-specific mappings from style-independent application models to architectural models in given styles. This paper shows that this separation of concerns supports a model-based development and tools approach to architectural-style-independent application modeling, and architecture synthesis with style as a separate design variable. In support of these claims, we present a proof-of-concept tool, Monarch, and illustrate its use.
Hamid Bagheri, Kevin Sullivan

Model-to-Metamodel Transformation for the Development of Component-Based Systems

Abstract
Embedded systems are a potential application area for component-based development approaches. They can be assembled from multiple generic components, which can either be application components used to realize the application logic or hardware components providing low-level hardware access. The glue code to connect these components is typically implemented using middleware or run-time systems. Nowadays, large parts of the system are automatically generated and configured according to application needs using model-driven software development tools. In a model-driven development process, three different kinds of developers can be identified: run-time system experts, component developers and application developers. This paper presents a multi-phase approach that supports all of these experts in an optimal way. The key idea is a multi-phase development process based on model-to-metamodel transformations connecting the different phases. The advantages of this approach are demonstrated in the context of distributed sensor/actuator systems.
Gerd Kainz, Christian Buckl, Stephan Sommer, Alois Knoll

Architectural Descriptions as Boundary Objects in System and Design Work

Abstract
Lean and agile processes have resolved longstanding problems in engineering communication by replacing document-based communication with face-to-face collaboration, but they do not yet scale to very large and heterogeneous projects. This paper proposes a compatible extension to lean and agile processes that addresses this limitation. The core idea is to adopt the view of documentation as boundary objects: shared artefacts that maintain integrity across a project’s intersecting social worlds. The paper presents a case study in which interviews with system engineers and designers were analysed to obtain requirements on an architectural description serving as a boundary object in a telecommunications project. The main result is a list of 18 empirically grounded, elementary requirements worth considering when implementing lean and agile processes in the large.
Lars Pareto, Peter Eriksson, Staffan Ehnebom

Disciplined Heterogeneous Modeling

Invited Paper
Abstract
Complex systems demand diversity in the modeling mechanisms. One way to deal with a diversity of requirements is to create flexible modeling frameworks that can be adapted to cover the field of interest. The downside of this approach is a weakening of the semantics of the modeling frameworks that compromises interoperability, understandability, and analyzability of the models. An alternative approach is to embrace heterogeneity and to provide mechanisms for a diversity of models to interact. This paper reviews an approach that achieves such interaction between diverse models using an abstract semantics, which is a deliberately incomplete semantics that cannot by itself define a useful modeling framework. It instead focuses on the interactions between diverse models, reducing the nature of those interactions to a minimum that achieves a well-defined composition. An example of such an abstract semantics is the actor semantics, which can handle many heterogeneous models that are built today, and some that are not common today. The actor abstract semantics and many concrete semantics have been implemented in Ptolemy II, an open-source software framework distributed under a BSD-style license.
Edward A. Lee
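
As a very small illustration of the actor idea referred to above (hypothetical names, unrelated to the Ptolemy II code base): components exchange tokens only through channels attached to ports, and a director, here reduced to a fixed schedule, decides when each actor fires.

    import java.util.ArrayDeque;
    import java.util.Deque;

    /** Tiny actor sketch: actors interact only via token-carrying channels. */
    public class ActorSketch {

        interface Actor {
            void fire();
        }

        /** A FIFO channel connecting an output port to an input port. */
        static class Channel {
            private final Deque<Integer> tokens = new ArrayDeque<>();
            void put(int token) { tokens.addLast(token); }
            boolean hasToken() { return !tokens.isEmpty(); }
            int get() { return tokens.removeFirst(); }
        }

        public static void main(String[] args) {
            Channel channel = new Channel();

            Actor source = () -> channel.put(42);  // produces a token
            Actor sink = () -> {                   // consumes a token if one is present
                if (channel.hasToken()) System.out.println("got " + channel.get());
            };

            // A trivial "director": a fixed firing schedule.
            source.fire();
            sink.fire();
        }
    }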

Backmatter
