
2015 | Book

SDL 2015: Model-Driven Engineering for Smart Cities

17th International SDL Forum, Berlin, Germany, October 12-14, 2015, Proceedings


About this book

This book constitutes the proceedings of the 17th International System Design Language Forum, SDL 2015, held in Berlin, Germany, in October 2015.

The 15 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 26 submissions. They are organized in topical sections named: smart cities and distributed systems; specification and description language; domain specific languages; goal modeling; use-case modeling; and model-based testing.

Table of Contents

Frontmatter

Smart Cities and Distributed Systems

Insertion Modeling and Symbolic Verification of Large Systems
Abstract
Insertion modeling has been developed over the last decade as an approach to a general theory of interaction between agents and an environment in complex distributed multiagent systems. The original work in this direction proposed a model of interaction between agents and environments based on an insertion function and an algebra of behaviors (similar to process algebra). In recent years, insertion modeling has been applied to the verification of requirement specifications of distributed interacting systems and to the generation of tests from such requirements. Our system, VRS (Verification of Requirements Specifications), has successfully verified specifications in the fields of telecommunication, embedded, and real-time systems. Formal requirements in VRS are expressed by means of local descriptions with a succession relation, in a formalism that combines logical specifications with control descriptions provided by the graphical syntax of UCM (Use Case Map) diagrams. This paper overviews the main concepts of insertion modeling, presents new algorithms developed for symbolic verification, in particular a new predicate transformer for local descriptions, and provides a formal description of the method for generating traces from such specifications (the key technology used to verify requirements and derive test suites).
Alexander Letichevsky, Oleksandr Letychevskyi, Volodymyr Peschanenko, Thomas Weigert
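
The abstract above refers to a predicate transformer over local descriptions and to symbolic trace generation. As a rough illustration of the general idea only, and not of the VRS algorithm itself, the following Python sketch propagates a symbolic state through assignments and guards, accumulating a path condition; the variables, expressions, and naive textual substitution are illustrative assumptions.

# Hypothetical sketch of a forward symbolic predicate transformer:
# the symbolic state maps variables to expressions, and guards accumulate
# into a path condition. This is NOT the VRS algorithm from the paper.
class SymbolicState:
    def __init__(self):
        self.env = {}      # variable -> symbolic expression (string)
        self.path = []     # guard expressions accumulated along the trace

    def assign(self, var, expr):
        # substitute current symbolic values into the right-hand side
        # (naive textual substitution; clashes with overlapping names ignored)
        for v, e in self.env.items():
            expr = expr.replace(v, f"({e})")
        self.env[var] = expr

    def assume(self, guard):
        for v, e in self.env.items():
            guard = guard.replace(v, f"({e})")
        self.path.append(guard)

    def condition(self):
        return " and ".join(self.path) if self.path else "true"

s = SymbolicState()
s.assume("x > 0")        # guard of the first local description
s.assign("y", "x + 1")   # effect of its action
s.assume("y < 10")       # guard of the successor description
print(s.env)             # {'y': 'x + 1'}
print(s.condition())     # x > 0 and (x + 1) < 10
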
A Model-Based Framework for SLA Management and Dynamic Reconfiguration
Abstract
A Service Level Agreement (SLA) is a contract between a service provider and a customer that defines the expected quality of the provided services, the responsibilities of each party, and the penalties in case of violations. In the cloud environment, where elasticity is an inherent characteristic, a service provider can cater for workload changes and adapt its service provisioning capacity dynamically. Using this feature, a provider can allocate only as many resources as required to satisfy the current workload and SLAs; the system can shrink and expand as the workload changes. In this paper, we introduce a model-based SLA monitoring framework, which aims at avoiding SLA violations on the service provider side while using only the necessary resources. We use UML models to describe all the artifacts in the monitoring framework. The UML models not only raise the level of abstraction, they are also reused from the system design/generation phase. For this purpose, we develop metamodels for SLAs and for monitoring. In the monitoring framework, all abstract SLA models are transformed into an SLA compliance model, which is used for checking compliance with SLAs. To avoid SLA violations as well as resource wastage, dynamic reconfigurations are triggered as appropriate, based on predefined Object Constraint Language (OCL) constraints using thresholds.
Mahin Abbasipour, Ferhat Khendek, Maria Toeroe
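
As a minimal illustration of threshold-driven reconfiguration decisions of the kind the abstract above describes, the following Python sketch compares measured metrics against assumed SLA thresholds and decides whether to scale out or scale in; the metric names and threshold values are hypothetical, and the sketch does not reflect the paper's UML/OCL-based framework.

# Hypothetical sketch of threshold-based SLA monitoring: compare measured
# metrics against thresholds derived from an SLA and decide whether to
# scale out (avoid a violation) or scale in (avoid wasting resources).
SLA_THRESHOLDS = {
    "response_time_ms": {"scale_out_above": 800, "scale_in_below": 200},
    "cpu_utilization":  {"scale_out_above": 0.85, "scale_in_below": 0.30},
}

def reconfiguration_action(metrics):
    """Return 'scale_out', 'scale_in', or 'none' for the measured metrics."""
    if any(metrics[m] > t["scale_out_above"] for m, t in SLA_THRESHOLDS.items()):
        return "scale_out"
    if all(metrics[m] < t["scale_in_below"] for m, t in SLA_THRESHOLDS.items()):
        return "scale_in"
    return "none"

print(reconfiguration_action({"response_time_ms": 950, "cpu_utilization": 0.70}))  # scale_out
print(reconfiguration_action({"response_time_ms": 150, "cpu_utilization": 0.20}))  # scale_in
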
SDL - The IoT Language
Abstract
Interconnected smart devices constitute a large and rapidly growing element of the contemporary Internet. A smart thing can be as simple as a web-enabled device that collects and transmits sensor data to a repository for analysis, or as complex as a web-enabled system to monitor and manage a smart home. Smart things present marvellous opportunities, but when they participate in complex systems, they challenge our ability to manage risk and ensure reliability.
SDL, the ITU Standard Specification and Description Language, provides many advantages for modelling and simulating communicating agents – such as smart things – before they are deployed. The potential for SDL to enhance reliability and safety is explored with respect to existing smart things below.
But SDL must advance if it is to become the language of choice for developing the next generation of smart things. In particular, it must target emerging IoT platforms, it must support simulation of interactions between pre-existing smart things and new smart things, and it must facilitate deployment of large numbers of similar things. Moreover, awareness of the potential benefits of SDL must be raised if those benefits are to be realized in the current and future Internet of Things.
Edel Sherratt, Ileana Ober, Emmanuel Gaudin, Pau Fonseca i Casas, Finn Kristoffersen
Event Pattern Mining for Smart Environments
Abstract
Complex Event Processing (CEP) systems find matches of a pattern in a stream of events. Patterns specify constraints on matching events and therefore describe situations of interest. Formulating patterns is not always trivial, especially in smart environments where a large number of events are continuously created. Consider, for example, a smart home whose sensors can tell us that someone flips a light switch in the morning to indicate how they would like their breakfast prepared. Our work deals with the problem of mining such patterns from historical traces of events.
Lars George
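
To make the notion of a CEP pattern concrete, the following Python sketch matches an ordered sequence of event types within a time window over a small event trace. It only illustrates what a mined pattern might express; it is not the mining approach of the paper, and the event names and window are invented for the example.

# Hypothetical sketch of a CEP pattern: find occurrences of an ordered
# sequence of event types within a time window over an event stream.
def find_matches(events, pattern, window):
    """events: list of (timestamp, event_type), ordered by timestamp.
    pattern: list of event types that must occur in order within `window`."""
    matches = []
    for i, (t0, e0) in enumerate(events):
        if e0 != pattern[0]:
            continue
        match, pos = [(t0, e0)], 1
        for t, e in events[i + 1:]:
            if t - t0 > window or pos == len(pattern):
                break
            if e == pattern[pos]:
                match.append((t, e))
                pos += 1
        if pos == len(pattern):
            matches.append(match)
    return matches

trace = [(1, "light_switch_on"), (2, "motion_kitchen"), (9, "coffee_machine_on")]
print(find_matches(trace, ["light_switch_on", "coffee_machine_on"], window=10))
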

Specification and Description Language

Simulating Distributed Systems with SDL and Hardware-in-the-Loop
Abstract
The Specification and Description Language (SDL) is a widespread language for the development of distributed real-time systems. One of its major advantages is its tool support, which enables the automatic generation of SDL implementations and the simulative evaluation of SDL systems in early development phases. However, SDL simulations often suffer from low accuracy, since they cannot consider relevant non-functional aspects such as execution delays on the target platform. In this paper, we present a novel approach that improves the accuracy of simulations with SDL. It is based on the simulator framework FERAL and the simulation of SDL implementations on Hardware-in-the-Loop (HiL), thereby enabling both purely functional and performance evaluations of SDL systems. Besides providing a survey of SDL simulations with FERAL, this paper proposes a development process based on virtual prototyping that supports stepwise system integration and testing of SDL systems by gradually reducing the abstraction level of the simulations. To demonstrate this process and the significance of accurate simulations, results of a case study with an inverted pendulum are presented.
Tobias Braun, Dennis Christmann
Name Resolution of SDL Revisited: Drawbacks and Possible Enhancements
Abstract
The Specification and Description Language (SDL) is a formally specified and standardized modeling language that is mainly used to specify protocols as well as distributed systems. Two algorithms, 'Resolution by Container' and 'Resolution by Context', are specified for the name resolution of identifiers in SDL specifications. In this paper, problems that were identified during an implementation of the 'Resolution by Context' algorithm are discussed. In addition, possible enhancements to remedy the identified problems are presented.
Alexander Kraas
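
The standardized SDL name-resolution algorithms are considerably more involved than any short example can convey, but the following Python sketch illustrates the basic intuition behind resolution "by container": an identifier is looked up in the innermost enclosing scope unit and then outwards. The scope classes and entities are assumptions made for illustration, not the algorithm from the standard or the paper.

# Simplified, hypothetical illustration of name resolution "by container":
# look for a definition in the enclosing scope units, innermost first.
class Scope:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.definitions = {}   # identifier -> entity kind

    def define(self, identifier, kind):
        self.definitions[identifier] = kind

    def resolve(self, identifier):
        scope = self
        while scope is not None:
            if identifier in scope.definitions:
                return scope.name, scope.definitions[identifier]
            scope = scope.parent
        raise LookupError(f"'{identifier}' is not visible in {self.name}")

system = Scope("system S")
block = Scope("block B", parent=system)
process = Scope("process P", parent=block)
system.define("MySignal", "signal")
process.define("counter", "variable")

print(process.resolve("counter"))    # found in 'process P'
print(process.resolve("MySignal"))   # found via the enclosing containers
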
An Experiment to Introduce Interrupts in SDL
Abstract
Typical modelling technologies for digital hardware design are synthesizable, cycle-accurate register-transfer level descriptions (VHDL or Verilog RTL) or bit-accurate transaction-level models (SystemC TLM). Given the complexity of today's circuits, such as Systems-on-a-Chip (SoC) for multimedia embedded systems, and of the embedded software interacting with the SoC, there is a need for a higher abstraction level that eases mastering this interaction, starting from the initial conceptual stages of product development. The Specification and Description Language (SDL) makes it possible to describe functional models independently of their implementation. This paper describes work done by STMicroelectronics and PragmaDev to experiment with the use of high-level SDL functional descriptions in a typical, simple hardware/software interaction scenario involving interrupt handling.
Emmanuel Gaudin, Alain Clouard

Domain Specific Languages

LanguageLab - A Meta-modelling Environment
Abstract
In the LanguageLab language workbench, we build on a component-based approach to language specification that facilitates the specification of all aspects of a computer language in a consistent manner, taking into account best practices in meta-modelling and language design. The workbench allows operation on a suitable abstraction level and focuses on user-friendliness and a low threshold to getting started, in order to make it useful for teaching meta-modelling and language design and specification. The platform is open to third-party language modules and facilitates rapid prototyping of DSLs, reuse of language modules, and experiments with multiple concrete syntaxes; interested parties can develop LanguageLab modules that further extend the features and capabilities of the platform.
Terje Gjøsæter, Andreas Prinz
Consistency of Task Trees Generated from Website Usage Traces
Abstract
Task trees are an established method for modeling the usage of a website as required to accomplish user tasks. They define the necessary actions and the order in which users need to perform them to reach a certain goal. Modeling task trees manually can be laborious, especially if a website is rather complex. In previous work, we presented a methodology for automatically generating task trees based on recorded user actions on a website. We did not verify whether the approach generates similar results for different recordings of the same website. Only if this is the case can the task trees serve as the basis for a subsequent analysis of the usage of a website, e.g., a usability analysis. In this paper, we evaluate our approach in this respect. For this, we generated task trees for different sets of recorded user actions on the same website and compared the resulting task trees. Our results show that the generated task trees are consistent, but that the level of consistency depends on the type of website or the ratio of possible to recorded actions on a website.
Patrick Harms, Jens Grabowski
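
As an illustration of what a generated task tree and a consistency comparison might look like, the following Python sketch flattens small trees of sequence and iteration nodes into the action lists they describe and computes a naive positional similarity. The tree encoding and similarity measure are assumptions for the example and are not the methodology evaluated in the paper.

# Hypothetical sketch: task trees as sequence/iteration nodes, flattened
# into the ordered action lists they describe, plus a naive similarity.
def flatten(node):
    """node is either an action name (str) or ('seq', children) /
    ('iter', child, repetitions)."""
    if isinstance(node, str):
        return [node]
    kind = node[0]
    if kind == "seq":
        actions = []
        for child in node[1]:
            actions.extend(flatten(child))
        return actions
    if kind == "iter":
        return flatten(node[1]) * node[2]
    raise ValueError(kind)

def similarity(tree_a, tree_b):
    """Share of positions on which the flattened action lists agree."""
    a, b = flatten(tree_a), flatten(tree_b)
    agree = sum(1 for x, y in zip(a, b) if x == y)
    return agree / max(len(a), len(b))

login = ("seq", ["open_page", "enter_user", "enter_password", "click_login"])
login_retry = ("seq", ["open_page", ("iter", ("seq", ["enter_user", "enter_password"]), 1), "click_login"])
print(similarity(login, login_retry))   # 1.0 -- both trees describe the same actions
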
On the Semantic Transparency of Visual Notations: Experiments with UML
Abstract
Graphical notations designed by committees in the context of standardization bodies, such as the Object Management Group (OMG), are widely used in industry and academia. Naive users of these notations have a limited background in the visualization, documentation, and specification of workflows, data, or software systems. Several studies have pointed out that these notations do not convey any particular semantics and that their understanding is not perceptually immediate. As reported in these studies, this lack of semantic transparency increases the cognitive load needed to differentiate between concepts and slows down the learning and comprehension of the language constructs. This paper reports on a set of experiments that confirm the lack of semantic transparency of the Unified Modeling Language (UML) as designed by the OMG, and compares this standard to alternative solutions in which naive users are involved in the design of the notations to speed up the learning of these languages by new users.
Amine El Kouhen, Abdelouahed Gherbi, Cédric Dumoulin, Ferhat Khendek

Goal Modeling

On the Reuse of Goal Models
Abstract
The reuse of goal models has received only limited attention in the goal modeling community and is mostly related to the use of goal catalogues, which may be imported into the goal model of an application under development. Two important factors need to be considered when reusing goal models. First, a key purpose of a goal model is its evaluation for trade-off analysis, which is often based on propagating the contributions of low-level tasks (representing considered solutions) to high-level goals as specified in the goal model. Second, goal models are rarely used in isolation, but are combined with other models that impose additional constraints on goal model elements, in particular on tasks. For example, workflow models describe causal relationships of tasks in goal models. Similarly, feature models describe further constraints on tasks, in terms of which tasks may be selected at the same time. This paper (i) argues that reusable goal models must be specified either with real-life measurements (if available) or with relative contributions, (ii) presents a novel evaluation mechanism that enables the reuse of goal models with relative contributions, while taking into account additional constraints on tasks in the goal model expressed with feature models, and (iii) discusses a proof-of-concept implementation of the novel evaluation mechanism.
Mustafa Berk Duran, Gunter Mussbacher, Nishanth Thimmegowda, Jörg Kienzle
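
To make the idea of propagating contributions from tasks to high-level goals concrete, the following Python sketch performs a simple bottom-up weighted propagation over a tiny goal model. The goals, tasks, weights, and scale are invented, and the sketch does not implement the paper's novel evaluation mechanism or its feature-model constraints.

# Minimal, hypothetical sketch of bottom-up contribution propagation:
# each goal's satisfaction is a weighted sum of its children's values,
# and leaf tasks carry the evaluation of a candidate solution.
goal_model = {
    "Usability":   [("Task: wizard UI", 0.7), ("Task: tooltips", 0.3)],
    "Performance": [("Task: caching", 1.0)],
    "User Satisfaction": [("Usability", 0.6), ("Performance", 0.4)],
}

task_values = {  # evaluation of the selected tasks on a 0..100 scale
    "Task: wizard UI": 80,
    "Task: tooltips": 0,     # not selected in this configuration
    "Task: caching": 90,
}

def satisfaction(element):
    if element in task_values:
        return task_values[element]
    return sum(weight * satisfaction(child)
               for child, weight in goal_model[element])

print(satisfaction("User Satisfaction"))   # approximately 69.6
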
Adding a Textual Syntax to an Existing Graphical Modeling Language: Experience Report with GRL
Abstract
A modelling language usually has an abstract syntax (e.g., expressed with a metamodel) separate from its concrete syntax. The question explored in this paper is: how easy is it to add a textual concrete syntax to an existing language that offers only a concrete graphical syntax? To answer this question, this paper reports on lessons learned during the creation of a textual syntax (supported by an editor and transformation tool) for the Goal-oriented Requirement Language (GRL), which is part of the User Requirements Notation standard. Our experiment shows that although current technologies help create textual modelling languages efficiently with feature-rich editors, there are important conflicts between the reuse of existing metamodels and the usability of the resulting textual syntax that require attention.
Vahdat Abdelzad, Daniel Amyot, Timothy C. Lethbridge

Use-Case Modeling

Generating Software Documentation in Use Case Maps from Filtered Execution Traces
Abstract
One of the main issues in software maintenance is the time and effort needed to understand software. Software documentation and models are often incomplete, outdated, or non-existent, in part because of the cost and effort involved in creating and continually updating them. In this paper, we describe an innovative technique for automatically extracting and visualizing software behavioral models from execution traces. Lengthy traces are summarized by filtering out low-level software components via algorithms that utilize static and dynamic data. Eight such algorithms are compared in this paper. The traces are visualized using the Use Case Map (UCM) scenario notation. The resulting UCM diagrams depict the behavioral model of software traces and can be used to document the software. The tool-supported technique is customizable through different filtering algorithms and parameters, enabling the generation of documentation and models at different levels of abstraction.
Edna Braun, Daniel Amyot, Timothy C. Lethbridge
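
As a simple illustration of trace filtering, the following Python sketch drops events of components that dominate a trace by call frequency, on the assumption that such components are low-level utilities. This is only one plausible heuristic; the eight algorithms compared in the paper are not reproduced here, and the component names are invented.

# Hypothetical sketch of one trace-filtering heuristic: drop calls into
# components that occur so frequently that they are likely low-level
# utilities, keeping the events that reveal higher-level behaviour.
from collections import Counter

def filter_trace(trace, max_share=0.3):
    """trace: list of (component, method) call events in execution order.
    Remove events of components that account for more than `max_share`
    of all calls."""
    counts = Counter(component for component, _ in trace)
    total = len(trace)
    noisy = {c for c, n in counts.items() if n / total > max_share}
    return [event for event in trace if event[0] not in noisy]

trace = [("StringUtils", "trim")] * 6 + [("OrderService", "placeOrder"),
                                         ("PaymentService", "charge")]
print(filter_trace(trace))   # only the two business-level calls remain
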
Towards the Generation of Tests in the Test Description Language from Use Case Map Models
Abstract
The Test Description Language (TDL) is an emerging standard from the European Telecommunications Standards Institute (ETSI) that targets the abstract description of tests for communicating systems and other application domains. TDL is meant to be used as an intermediate format between requirements and executable test cases. This paper explores the automated generation of TDL test descriptions from requirements expressed as Use Case Map (UCM) models. One generation mechanism, which exploits UCM scenario definitions, is prototyped in the jUCMNav tool and illustrated through an example. This transformation enables the exploration of model-based testing where the use of TDL models simplifies the generation of tests in various languages (including the Testing and Test Control Notation – TTCN-3) from UCM requirements. Remaining challenges are also discussed in the paper.
Patrice Boulet, Daniel Amyot, Bernard Stepien
Describing Early Security Requirements Using Use Case Maps
Abstract
Non-functional requirements (NFRs), such as availability, usability, performance, and security, are often crucial in producing a satisfactory software product. Therefore, these non-functional requirements should be addressed as early as possible in the software development life cycle. Contrary to other non-functional requirements, such as usability and performance, security concerns are often postponed to the very end of the design process. As a result, security requirements have to be retrofitted into an existing design, leading to serious design challenges that usually translate into software vulnerabilities. In this paper, we present a novel approach to describing high-level security requirements using the Use Case Maps (UCM) language of the ITU-T User Requirements Notation (URN) standard. The proposed approach is based on a mapping to UCM models of a set of security architectural tactics that describe security design measures in a very general, abstract, and implementation-independent way. The resulting security extensions are described using a metamodel and implemented within the jUCMNav tool. We illustrate our approach using a UCM scenario describing the modification of consultants' pay rates.
Jameleddine Hassine, Abdelwahab Hamou-Lhadj

Model-Based Testing

Generating Configurations for System Testing with Common Variability Language
Abstract
Modern systems are composed of many subsystems, so it is necessary to understand how to combine them into complete functional systems. When testing a system that includes hardware, it is important that each selected test configuration delivers maximum information for covering many test cases. We have developed a method and a tool for creating a small set of effective test configurations, based on a systematic approach to describing and formalizing the functionality of the whole system as well as of its subsystems, using feature models and relational notations between them. We applied our approach to an example point-of-sale checkout system consisting of one server and multiple registers.
Daisuke Shimbara, Øystein Haugen
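
As a rough illustration of deriving a small set of test configurations, the following Python sketch enumerates the valid configurations of a tiny, invented feature model and greedily picks configurations until every valid feature pair is covered. It is a generic sampling sketch, not the CVL-based method or tool described in the paper.

# Hypothetical sketch of greedy test-configuration sampling: enumerate the
# valid configurations of a small feature model and repeatedly pick the
# configuration that covers the most not-yet-covered feature pairs.
from itertools import combinations, product

FEATURES = ["barcode_scanner", "card_reader", "receipt_printer"]

def valid(config):
    # example constraint: a card reader requires a receipt printer
    return not (config["card_reader"] and not config["receipt_printer"])

def pairs(config):
    return {((a, config[a]), (b, config[b])) for a, b in combinations(FEATURES, 2)}

configs = [dict(zip(FEATURES, values))
           for values in product([True, False], repeat=len(FEATURES))
           if valid(dict(zip(FEATURES, values)))]

uncovered = set().union(*(pairs(c) for c in configs))
selected = []
while uncovered:
    best = max(configs, key=lambda c: len(pairs(c) & uncovered))
    selected.append(best)
    uncovered -= pairs(best)

print(f"{len(selected)} configurations cover all valid feature pairs")
for config in selected:
    print(config)
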
Model-Based Product Line Testing: Sampling Configurations for Optimal Fault Detection
Abstract
Product line (PL) engineering is an emerging methodology for the development of variant-rich systems. While product lines are well suited for this purpose, testing them is more complicated than testing non-variable systems, since the number of possible products grows with the number of features. The question of which products should be chosen for testing remains an open challenge.
We present coverage criteria for sampling configurations from reusable test cases; one such criterion is, for example, to choose as many different products as possible so that each of the test cases can be executed once. The main contribution is an analysis of the fault detection potential of the presented criteria. The analysis is supported by an example product line and a mutation system for assessing fault detection capability. From the results of this example, we draw conclusions about the different coverage criteria.
Hartmut Lackner
Testing Business Processes Using TTCN-3
Abstract
Business Process Management (BPM) applications in the medical domain pose challenging testing problems that result from the parallel execution of test behaviors performed by different actors. Hospitals nowadays operate on the principle of pools of personnel: each pool addresses a specific functionality, and each member of the pool can pick any task that is proposed to the pool. The challenge for BPM testing lies in the dependencies between actors and the corresponding test descriptions, where stimuli sent to the BPM system under test (SUT) by one actor produce responses that affect a selected number of other actors belonging to a pool. Unit testing of such systems has proven to be of limited efficiency in detecting faults that can only be detected during the parallel execution of test components representing actors. We propose an architecture based on the TTCN-3 model of separation of concerns and its intensive use of the parallel test component (PTC) concept, which provides solutions beyond traditional telecommunication systems testing and which has revealed opportunities for improving TTCN-3.
Bernard Stepien, Kavya Mallur, Liam Peyton
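
To illustrate why parallel test behaviors are needed, the following Python sketch uses threads as stand-ins for parallel test components: each actor pulls tasks proposed to a shared pool and records the (simulated) responses concurrently. TTCN-3 PTCs provide this natively; the task names, pool, and responses here are invented for illustration.

# Hypothetical sketch of the parallel-test-component idea in plain Python:
# each "actor" runs as its own thread, picks tasks proposed to its pool,
# and records the responses it observes, so cross-actor dependencies are
# exercised concurrently.
import threading, queue

task_pool = queue.Queue()
results = []
results_lock = threading.Lock()

def actor(name):
    while True:
        try:
            task = task_pool.get(timeout=0.5)
        except queue.Empty:
            return
        response = f"done:{task}"          # stand-in for the SUT's response
        with results_lock:
            results.append((name, task, response))
        task_pool.task_done()

for task in ["admit_patient", "order_lab_test", "schedule_surgery"]:
    task_pool.put(task)

threads = [threading.Thread(target=actor, args=(f"nurse_{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)   # order depends on thread scheduling
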
Generating Performance Test Model from Conformance Test Logs
Abstract
In this paper, we present a method that learns a deterministic finite state machine from the conformance test logs of a telecommunication protocol; that machine is then used as a test model for performance testing. In contrast to most theoretical methods, the learning process is automatic: it applies a sequential pattern mining algorithm to the test logs and uses a recently proposed metric for finding frequent and significant transition sequences. The method aims to assist and speed up test model design; at the same time, it may not provide an exact solution, as the equivalence of some states may not be proven. In the paper, we show the results of experiments on random machines, as well as issues and considerations that arose when the method was applied to 3GPP Telephony Application Server test logs.
Gusztáv Adamis, Gábor Kovács, György Réthy
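
As a very coarse illustration of going from logs to a state-machine test model, the following Python sketch abstracts each state as the last observed message and counts transition frequencies over a pair of invented message sequences. The paper's approach, based on sequential pattern mining and a significance metric, is substantially more refined; this only conveys the flavour.

# Hypothetical sketch: derive a coarse state machine from test logs by
# treating the last observed message as the state and counting how often
# each transition occurs.
from collections import defaultdict

def learn_model(logs):
    """logs: list of message sequences, each a list of message names."""
    transitions = defaultdict(int)     # (state, message) -> frequency
    for sequence in logs:
        state = "INIT"
        for message in sequence:
            transitions[(state, message)] += 1
            state = message            # 1-message history as state abstraction
    return transitions

logs = [
    ["INVITE", "100_Trying", "180_Ringing", "200_OK", "ACK", "BYE"],
    ["INVITE", "100_Trying", "486_Busy", "ACK"],
]
for (state, message), count in sorted(learn_model(logs).items()):
    print(f"{state} --{message}/{count}--> {message}")
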
Backmatter
Metadata
Title: SDL 2015: Model-Driven Engineering for Smart Cities
Editors: Joachim Fischer, Markus Scheidgen, Ina Schieferdecker, Rick Reed
Copyright Year: 2015
Electronic ISBN: 978-3-319-24912-4
Print ISBN: 978-3-319-24911-7
DOI: https://doi.org/10.1007/978-3-319-24912-4
