
About this book

This book constitutes the refereed proceedings of the First International Conference on Systems Modelling and Management, ICSMM 2020, planned to be held in Bergen, Norway, in June 2020. Due to the COVID-19 pandemic, the conference did not take place, either physically or virtually.
The 10 full papers and 3 short papers were thoroughly reviewed and selected from 19 qualified submissions. The papers are organized in the following topical sections: verification and validation; applications; methods, techniques and tools.

Table of Contents


Verification and Validation


Applying Dynamic Programming to Test Case Scheduling for Automated Production Systems

In today’s practice, the engineering lifecycle of manufacturing systems is getting shorter due to frequent requirement changes, while these systems are required to offer both high availability, from a productivity viewpoint, and reliability, from a safety viewpoint. To check and meet these requirements, quality assurance, typified by testing, is a significant engineering step. Although existing test cases can be reused during testing, a selection problem arises from the vast number of available test cases. This problem becomes especially important when time is extremely limited, e.g., during the mandatory commissioning and start-up process of manufacturing systems or during regression testing. In previous work, we presented approaches for defining and determining the utility of test cases. In this paper, we present an efficient test case scheduling approach based on the optimization technique of dynamic programming. By considering the physical setup times of the mechatronic system, the approach becomes more applicable in practice. Through numerical experiments, we also show the superiority and scalability of the approach in comparison to two straightforward scheduling approaches.
Kathrin Land, Birgit Vogel-Heuser, Suhyun Cha
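The core idea of utility-driven test selection under a time budget can be sketched as a 0/1 knapsack solved by dynamic programming. This is a minimal, invented illustration; the paper’s actual scheduling formulation, including how physical setup times are modeled, may differ (here they are assumed to be folded into each test’s duration).

```python
# Hypothetical sketch: select test cases maximizing total utility
# within a time budget via 0/1-knapsack dynamic programming.

def select_tests(tests, budget):
    """tests: list of (name, utility, duration), where duration already
    includes any physical setup time; budget: available time units."""
    best = [0.0] * (budget + 1)              # best[t] = max utility within time t
    chosen = [[] for _ in range(budget + 1)]  # chosen[t] = tests achieving best[t]
    for name, utility, duration in tests:
        # Iterate the budget backwards so each test is used at most once.
        for t in range(budget, duration - 1, -1):
            if best[t - duration] + utility > best[t]:
                best[t] = best[t - duration] + utility
                chosen[t] = chosen[t - duration] + [name]
    return best[budget], chosen[budget]

total, schedule = select_tests(
    [("t1", 5.0, 3), ("t2", 4.0, 2), ("t3", 3.0, 2)], budget=5)
# total = 9.0, schedule = ["t1", "t2"]
```

The backward iteration over the budget is the standard trick that prevents a test case from being scheduled twice within one pass.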

A Model-Driven Mutation Framework for Validation of Test Case Migration

Software testing is important in software migration, as it is used to validate the migration and ensure functional equivalence, which is a key requirement. Developing new test cases for the migrated system is costly; therefore, migrating existing test cases is an attractive alternative. As the migrated test cases validate the whole migration, their own migration must clearly be validated as well. Applying mutation analysis to validate test case migration is a promising candidate solution. However, due to the diversity of migration contexts, applying mutation analysis is quite challenging for conceptual and especially practical reasons. The different types of test cases, combined with the different technologies used, make the application of existing, mostly code-based and mutation-score-oriented, mutation tools and frameworks barely possible. In this paper, we present a flexible and extensible model-driven mutation framework applicable in different migration scenarios. We also present a case study in which our mutation framework was applied in an industrial context.
Ivan Jovanovikj, Nils Weidmann, Enes Yigitbas, Anthony Anjorin, Stefan Sauer, Gregor Engels
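The underlying principle of mutation analysis, injecting small faults into an artifact and checking whether the test suite detects them, can be illustrated on a toy state-machine model. All names here are invented; the paper’s framework is model-driven, tool-supported, and far more general than this sketch.

```python
# Invented illustration of model-level mutation analysis: a mutant is
# "killed" if at least one test distinguishes it from the original model.

def delete_transition(model, index):
    """Mutation operator: drop one transition from a state-machine model."""
    mutant = dict(model)
    mutant["transitions"] = [t for i, t in enumerate(model["transitions"])
                             if i != index]
    return mutant

def accepts(model, word):
    """Execute a word on the (deterministic) state machine."""
    state = model["initial"]
    for symbol in word:
        successors = [dst for (src, sym, dst) in model["transitions"]
                      if src == state and sym == symbol]
        if not successors:
            return False
        state = successors[0]
    return state in model["accepting"]

def mutation_score(model, mutants, tests):
    """Fraction of mutants killed by the test suite."""
    killed = sum(1 for m in mutants
                 if any(accepts(model, w) != accepts(m, w) for w in tests))
    return killed / len(mutants)

machine = {"initial": "s0", "accepting": {"s2"},
           "transitions": [("s0", "a", "s1"), ("s1", "b", "s2")]}
mutants = [delete_transition(machine, 0), delete_transition(machine, 1)]
mutation_score(machine, mutants, tests=["ab"])  # both mutants killed
```

A suite containing only the word "a" would kill neither mutant, which is exactly the kind of inadequacy mutation analysis exposes in a migrated test suite.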

Towards Consistency Checking Between a System Model and Its Implementation

In model-based systems engineering, a system model is the central development artifact containing architectural and design descriptions of core parts of the system. This abstract representation of the system is then partly realized in code. Throughout development, both system model and code evolve independently, incurring the risk of them drifting apart. Inconsistency between model and code can lead to errors in development, resulting in delayed or erroneous implementation. We present work in progress towards automated mechanisms for checking consistency between a system model and code, within an industrial model-based systems engineering setting. In particular, we focus on automatically establishing traceability links between elements of the system model and parts of the code. The paper describes the challenges of achieving this in industrial practice and outlines our envisioned approach to overcoming those challenges.
Robbert Jongeling, Johan Fredriksson, Federico Ciccozzi, Antonio Cicchetti, Jan Carlson
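A starting point for establishing traceability links is matching model-element names against code identifiers. The toy heuristic below is an invented illustration only; the industrial challenges the paper addresses (naming divergence, scale, partial realization) are exactly what such a naive heuristic ignores.

```python
# Naive name-based traceability linking between system-model elements
# and code units; unmatched elements hint at model/code drift.

def link_by_name(model_elements, code_units):
    def norm(name):
        # Normalize away casing and underscore conventions,
        # e.g. "FuelPump" vs. "fuel_pump".
        return name.replace("_", "").lower()

    index = {norm(c): c for c in code_units}
    links, unmatched = [], []
    for element in model_elements:
        code = index.get(norm(element))
        if code is not None:
            links.append((element, code))
        else:
            unmatched.append(element)
    return links, unmatched

link_by_name(["FuelPump", "Injector"], ["fuel_pump"])
# → ([("FuelPump", "fuel_pump")], ["Injector"])
```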


Applications

Towards Model-Driven Digital Twin Engineering: Current Opportunities and Future Challenges

Digital Twins have emerged since the beginning of this millennium to better support the management of systems based on (real-time) data collected from different parts of the systems in operation. Digital Twins have been successfully used in many application domains and are thus considered an important aspect of Model-Based Systems Engineering (MBSE). However, their development, maintenance, and evolution still face major challenges, in particular: (i) the management of heterogeneous models from different disciplines, (ii) the bi-directional synchronization of digital twins and the actual systems, and (iii) the support for collaborative development throughout the complete life-cycle. In recent decades, the Model-Driven Engineering (MDE) community has investigated these challenges in the context of software systems. Now the question arises as to which of these results may be applicable to digital twin engineering as well.
In this paper, we identify various MDE techniques and technologies that may contribute to tackling the three aforementioned digital twin challenges, and we outline a set of open MDE research challenges that need to be addressed in order to move towards a digital twin engineering discipline.
Francis Bordeleau, Benoit Combemale, Romina Eramo, Mark van den Brand, Manuel Wimmer

Reusable Data Visualization Patterns for Clinical Practice

Among clinical psychologists involved in guided internet-facilitated interventions, there is an overarching need to understand patients’ symptom development and learn about patients’ need for treatment support. Data visualization is a technique for managing enormous amounts of data and extracting useful information, and it is often used in developing digital tool support for decision-making. Although numerous data visualization and analytical reasoning techniques are available through interactive visual interfaces, it is a challenge to develop visualizations that are relevant and suitable in a healthcare context and can be used in clinical practice in a meaningful way. For this purpose, it is necessary to identify the actual needs of healthcare professionals and to develop reusable data visualization components according to these needs. In this paper, we present a study of the decision support needs of psychologists involved in online internet-facilitated cognitive behavioural therapy. Based on these needs, we provide a library of reusable visual components using a model-based approach. The visual components feature mechanisms for investigating data at various levels of abstraction and through causal analysis.
Fazle Rabbi, Jo Dugstad Wake, Tine Nordgreen

A Model Based Slicing Technique for Process Mining Healthcare Information

Process mining is a powerful technique that uses an organization’s event data to extract and analyse process flow information and develop useful process models. However, it is difficult to apply process mining techniques to healthcare information due to the complexity inherent in the healthcare domain and its associated information systems. There are also challenges in understanding and meaningfully presenting the results of process mining, as well as problems relating to technical barriers among users. We propose a model-based slicing approach, based on dimensional modeling and ontological hierarchies, that can be used to raise the level of abstraction during process mining, thereby dealing more effectively with this complexity and the other issues. We also present a structural property of the proposed slicing technique for process mining.
Fazle Rabbi, Yngve Lamo, Wendy MacCaull
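Raising the level of abstraction along an ontological hierarchy can be pictured as mapping each low-level event in a trace to its parent concept before mining. The hierarchy and event names below are invented; the paper’s slicing technique is defined on dimensional models and is considerably more elaborate.

```python
# Illustrative sketch: abstracting an event log along an ontological
# hierarchy so that process mining operates on coarser concepts.

HIERARCHY = {
    "blood_test": "lab_work",
    "urine_test": "lab_work",
    "x_ray": "imaging",
    "mri": "imaging",
}

def abstract_trace(trace, hierarchy):
    """Replace each low-level event with its parent concept and collapse
    consecutive duplicates, yielding a coarser trace for mining."""
    out = []
    for event in trace:
        concept = hierarchy.get(event, event)  # unmapped events stay as-is
        if not out or out[-1] != concept:
            out.append(concept)
    return out

abstract_trace(["blood_test", "urine_test", "x_ray", "admit"], HIERARCHY)
# → ["lab_work", "imaging", "admit"]
```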

Validity Frame Driven Computational Design Synthesis for Complex Cyber-Physical Systems

The increasing complexity and performance demands of cyber-physical systems (CPS) force engineers to switch from traditional, well-understood single-core embedded platforms to complex multi-core or even heterogeneous embedded platforms. The deployment of a control algorithm on such advanced embedded platforms can affect the control behavior even more than on a single-core platform. It is therefore key to reason about this deployment early in the design process. We propose the use of the Validity Frame concept as an enabling technique within the Computational Design Synthesis (CDS) process to automatically generate design alternatives and to prune nonsensical ones, narrowing the design space and thus increasing efficiency. For each valid control algorithm alternative, the control behavior under deployment is examined using a custom simulator, enabled by explicitly modeling the embedded platform and the application deployment. We demonstrate our approach in the context of a complex cyber-physical system: an advanced safety-critical control system for brushless DC motors.
Bert Van Acker, Yon Vanommeslaeghe, Paul De Meulenaere, Joachim Denil

Industrial Plant Topology Models to Facilitate Automation Engineering

Industrial plant topology models can potentially automate many automation engineering tasks that are today carried out manually. Information on plant topologies is today mostly available in informal CAD drawings, but not in formal models that transformations could easily process. The upcoming DEXPI/ISO 15926 standard may enable turning CAD drawings into such models, but has so far mainly been used for data exchange. This paper proposes extensions to the CAYENNE method for control logic and process graphics generation to utilize DEXPI models, and demonstrates the supported model transformation chain prototypically in two case studies involving industrial plants. The results indicate that the model expressiveness and mappings were adequate for the addressed use cases and that the model processing could be executed within minutes.
Heiko Koziolek, Julius Rückert, Andreas Berlet

Methods, Techniques and Tools


On the Replicability of Experimental Tool Evaluations in Model-Based Development

Lessons Learnt from a Systematic Literature Review Focusing on MATLAB/Simulink
Research on novel tools for model-based development differs from a mere engineering task by providing some form of evidence that a tool is effective. This is typically achieved by experimental evaluations. Following principles of good scientific practice, both the tool and the models used in the experiments should be made available along with a paper. We investigate to which degree these basic prerequisites for the replicability of experimental results are met by recent research reporting on novel methods, techniques, or algorithms supporting model-based development using MATLAB/Simulink. Our results from a systematic literature review are rather unsatisfactory. In a nutshell, we found that only 31% of the tools and 22% of the models used as experimental subjects are accessible. Given that both artifacts are needed for a replication study, only 9% of the tool evaluations presented in the examined papers can be classified as replicable in principle. Since tools are still listed among the major obstacles to a more widespread adoption of model-based principles in practice, we see this as an alarming signal. While we are convinced that improving this situation can only be achieved as a community effort, this paper is meant to serve as a starting point for discussion, based on the lessons learnt from our study.
Alexander Boll, Timo Kehrer

Exploring Validity Frames in Practice

Model-Based Systems Engineering (MBSE) provides workflows, methods, techniques and tools for optimal simulation-based design and realization of complex Software-Intensive, Cyber-Physical Systems. One of the key benefits of this approach is that the behavior of the realized system can be reasoned about and predicted in-silico, before any prototype has been developed. Design models are increasingly used after the system has been realized as well. For example, a (design) digital twin can be used for runtime monitoring to detect and diagnose discrepancies between the simulated and realized system. Inconsistencies may arise, however, because models were used at design time that are not valid within the operating context of the realized system. It is often left to the domain expert to ensure that the models used are valid with respect to their realized counterpart. Due to system complexity and automated Design-Space Exploration (DSE), it is increasingly difficult for a human to reason about model validity. We propose validity frames as an explicit model of the contexts in which a model is a valid representation of a system to rule out invalid designs at design time. We explain the essential and conceptual, yet practical, structure of validity frames and a process for building them using an electrical resistor in the optimal design of a high-pass filter as a running example. We indicate how validity frames can be used in a DSE process, as well as for runtime monitoring.
Simon Van Mierlo, Bentley James Oakes, Bert Van Acker, Raheleh Eslampanah, Joachim Denil, Hans Vangheluwe
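The essence of a validity frame, an explicit record of the context in which a model is a valid representation of a system, can be rendered as a small data structure. The attributes and numeric ranges below are invented examples loosely echoing the resistor running example; the paper’s frame structure is richer.

```python
# Toy rendering of a validity frame: operating ranges outside of which
# a model (here, an ideal-resistor law) should not be trusted, used to
# rule out invalid design alternatives at design time.

class ValidityFrame:
    def __init__(self, model, **ranges):
        self.model = model
        self.ranges = ranges            # attribute -> (low, high)

    def valid_for(self, **context):
        """True iff every supplied context attribute falls inside the
        frame's declared validity range for that attribute."""
        return all(lo <= context[k] <= hi
                   for k, (lo, hi) in self.ranges.items() if k in context)

# Invented ranges: an ideal resistor model trusted only at low
# frequencies and moderate temperatures.
ideal_resistor = ValidityFrame("R = V/I",
                               frequency=(0.0, 1e5),
                               temperature=(-40.0, 85.0))
ideal_resistor.valid_for(frequency=2e5, temperature=25.0)   # outside frame
ideal_resistor.valid_for(frequency=1e3, temperature=25.0)   # inside frame
```

During design-space exploration, a candidate design whose operating context falls outside every component frame can be pruned without simulation.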

Suitability of Optical Character Recognition (OCR) for Multi-domain Model Management

The development of systems following model-driven engineering can include models from different domains. For example, to develop a mechatronic component one might need to combine expertise about mechanics, electronics, and software. Although these models belong to different domains, changes in one model can affect other models, causing inconsistencies in the entire system. There is, however, only a limited number of tools that support the management of models from different domains. These models are created using different modeling notations, and it is not plausible to use a multitude of parsers geared towards each and every notation. Therefore, to ensure the maintenance of multi-domain systems, we need a uniform approach that is independent of the peculiarities of any notation. Such an approach can only be based on what is present in all those models, i.e., text, boxes, and lines. In this study, we investigate the suitability of optical character recognition (OCR) as a basis for such a uniform approach. We select graphical models from various domains that typically combine textual and graphical elements, and we focus on text recognition without looking for additional shapes. We analyzed the performance of Google Cloud Vision and Microsoft Cognitive Services, two off-the-shelf OCR services. Google Cloud Vision performed better than Microsoft Cognitive Services, detecting the text of 70% of model elements. The errors made by Google Cloud Vision are due to the absence of support for text common in engineering formulas, e.g., Greek letters, equations, and subscripts, as well as text typeset on multiple lines. We believe that once these shortcomings are addressed, OCR can become a crucial technology supporting multi-domain model management.
Weslley Torres, Mark G. J. van den Brand, Alexander Serebrenik
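The element-level detection rate implied by the abstract (e.g., "70% of model elements") can be computed once OCR output and ground-truth labels are available. The matching rule below, a case-insensitive substring test, is an assumption for illustration; the study’s actual evaluation criteria may differ.

```python
# Hedged sketch of an element-level OCR detection metric: the fraction
# of model-element labels whose text appears in the OCR output.

def detection_rate(element_labels, ocr_text):
    """element_labels: ground-truth labels of model elements;
    ocr_text: concatenated text returned by an OCR service."""
    haystack = ocr_text.lower()
    hits = sum(1 for label in element_labels if label.lower() in haystack)
    return hits / len(element_labels)

# "ΔT" is missed, mirroring the reported weakness on Greek letters.
detection_rate(["Sensor", "Actuator", "ΔT"], "sensor Actuator block")
```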

Simplified View Generation in a Deep View-Based Modeling Environment

Projective modeling environments offer a more efficient and scalable way of supporting multiple views of large software systems than traditional, synthesis-based approaches to view-based development. However, defining the view projection transformations needed to create views, on demand, from the single underlying model and to keep them synchronized is a complex and time-consuming process. In particular, to make views editable, the projection process involves the creation of “traces” to map view model elements to their sources in the single underlying model. While this is unavoidable for most view types, for a commonly occurring special case this level of complexity is not required. In this paper we therefore present a simpler approach, based on the Object Constraint Language (OCL), which reduces the complexity of the projection definitions for this kind of view. The approach is defined in the context of a deep, view-based modeling environment which combines support for views with multi-level modeling in order to seamlessly cover all phases of a system’s life cycle.
Arne Lange, Colin Atkinson, Christian Tunjic

GrapeL: Combining Graph Pattern Matching and Complex Event Processing

Incremental Graph Pattern Matching (IGPM) offers an elegant approach to find patterns in graph-based models, reporting newly added and recently removed pattern matches. However, analyzing these matches w.r.t. temporal and causal dependencies can in general only be done by extending not just the IGPM engine but also the underlying model, which often is impractical and sometimes even impossible. Therefore, we transform the stream of pattern matches to a stream of events and employ Complex Event Processing (CEP) to detect such dependencies and derive more complex events from them. For this purpose, we introduce GrapeL as a textual language to specify and generate integrated solutions using both IGPM and CEP to benefit from the synergy of both approaches, which we present in the context of a flight and booking scenario. Finally, we show that our solution can compete with an optimized hand-crafted version without GrapeL and CEP while offering a specification that yields a less tedious and error-prone design process.
Sebastian Ehmes, Lars Fritsche, Konrad Altenhofen
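The pipeline the abstract describes, turning incremental pattern-match notifications into an event stream and deriving complex events from temporal dependencies, can be sketched by hand. GrapeL itself is a textual specification language generating such integrations; the event shapes and the rebooking rule below are invented to echo the flight/booking scenario.

```python
# Conceptual sketch: deriving a complex "rebooking" event from a stream
# of (match_added / match_removed) notifications, CEP-style.

def derive_rebookings(events, window=60):
    """events: (timestamp, kind, passenger, flight) tuples with kind in
    {'match_added', 'match_removed'}. A booking-match removal followed by
    an addition for the same passenger within `window` time units is
    reported as a rebooking (passenger, old_flight, new_flight)."""
    removed = {}                     # passenger -> (timestamp, old flight)
    complex_events = []
    for ts, kind, passenger, flight in sorted(events):
        if kind == "match_removed":
            removed[passenger] = (ts, flight)
        elif kind == "match_added" and passenger in removed:
            t0, old_flight = removed.pop(passenger)
            if ts - t0 <= window:
                complex_events.append((passenger, old_flight, flight))
    return complex_events

derive_rebookings([
    (0,   "match_added",   "alice", "LH1"),
    (100, "match_removed", "alice", "LH1"),
    (130, "match_added",   "alice", "LH7"),
])
# → [("alice", "LH1", "LH7")]
```

Note that neither the match stream alone nor the model needs to be extended to express the temporal rule, which is the synergy the paper exploits.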

