
About this Book

This volume constitutes the refereed proceedings of the Confederated International Workshops held as part of OTM 2019 in October 2019 in Rhodes, Greece: the International Workshop on Enterprise Integration, Interoperability and Networking (EI2N), Fact Based Modeling (FBM), the Industry Case Studies Program (ICSP), the International Workshop on Methods, Evaluation, Tools and Applications for the Creation and Consumption of Structured Data for the e-Society (Meta4eS), and the 1st International Workshop on Security via Information Analytics and Applications (SIAnA 2019).
As the three main conferences and the associated workshops all share the distributed aspects of modern computing systems, they experience the application pull created by the Internet and by the so-called Semantic Web, in particular the developments of Big Data, the increased importance of security issues, and the globalization of mobile-based technologies.

Table of Contents

Frontmatter

OTM 2019 Keynote

Frontmatter

Choose for AI and for Explainability

Abstract
As an expert in decision support systems development, I have been promoting transparency and self-explanatory systems to close the plan-do-check-act cycle. AI adoption has tripled in 2018, moving AI towards the Gartner-hype-cycle peak. As AI is getting more mainstream, more conservative companies have good reasons to enter this arena. My impression is that the journey is starting all over again as organizations start using AI technology as black box systems. I think that eventually these companies will also start asking for methods that result in more reliable project outcomes and integrated business systems. The idea that explainable AI is at the expense of accuracy is deeply rooted in the AI community. Unfortunately, this has hampered research into good explainable models and indicates that the human factor in AI is underestimated. Driven by governments asking for transparency, a public asking for unbiased decision making and compliance requirements on business and industry, a new trend and research area is emerging. This talk will explain why explainable artificial intelligence is needed, what makes an explanation good and how ontologies, rule-based systems and knowledge representation may contribute to the research area named XAI.
Silvie Spreeuwenberg

14th OTM/IFAC/IFIP International Workshop on Enterprise Integration, Interoperability and Networking (EI2N 2019)

Frontmatter

Design Science as Methodological Approach to Interoperability Engineering in Digital Production

Abstract
Interoperability is considered crucial for the sustainable digitization of organizations. Interoperability Engineering captures organizational, semantic and technological aspects of production process components, and combines them for operation. In this paper, we present an adaptable methodological development framework stemming from Design Science. It can be used along structured value chains in digital production for aligning various production process components for operation. We demonstrate its applicability for Additive Manufacturing (AM) and its capability to settle organizational, semantic, and technological aspects in the course of digital production. AM starts with organizational goal setting and structuring requirements for an envisioned solution, which becomes part of an AM project contract. All pre- and post-fabrication steps are framed by design science stages. Their order helps structure interoperability aspects and enables addressing them stepwise along iterative development cycles. Due to its openness, the proposed framework can be adapted to various industrial settings.
Christian Stary, Georg Weichhart, Claudia Kaar

Towards Smart Assessment: A Metamodel Proposal

Abstract
Assessment initiatives in organisations focus on the evaluation of organisational aspects, aiming to obtain a critical view of their status. The assessment results are used to lead improvement programs or to serve as a basis for comparative purposes. Assessment approaches may comprise complex tasks demanding a large amount of time and resources. Moreover, assessment results are highly dependent on the assessment input, which may have a dynamic nature due to the constant evolution of organisations. The assessment results should be adaptable to these changes without much effort whilst being able to provide efficient and reliable results. Therefore, providing smart capabilities to the assessment process or to systems in charge of performing assessments represents a step forward in the search for more efficient appraisal processes. This work proposes a metamodel defining the elements of a Smart Assessment, which is guided by elements related to the smartness concept such as knowledge, learning and reasoning capabilities. The metamodel is further specialised considering an Enterprise Interoperability assessment scenario.
Marcelo Romero, Wided Guédria, Hervé Panetto, Béatrix Barafort
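
As a rough illustration of what such a metamodel could look like, the following Python sketch models smart assessment capabilities as data classes; all class and field names here are hypothetical and are not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical metamodel elements; the paper's actual metamodel may use
# different names and relationships.
@dataclass
class KnowledgeElement:
    name: str
    description: str

@dataclass
class SmartCapability:
    kind: str                      # e.g. "knowledge", "learning", "reasoning"
    elements: List[KnowledgeElement] = field(default_factory=list)

@dataclass
class SmartAssessment:
    scope: str                     # e.g. "Enterprise Interoperability"
    capabilities: List[SmartCapability] = field(default_factory=list)

    def capability_kinds(self) -> List[str]:
        return [c.kind for c in self.capabilities]

# Example specialisation for an Enterprise Interoperability scenario.
assessment = SmartAssessment(
    scope="Enterprise Interoperability",
    capabilities=[SmartCapability("knowledge"), SmartCapability("learning"),
                  SmartCapability("reasoning")],
)
print(assessment.capability_kinds())
```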

General Big Data Architecture and Methodology: An Analysis Focused Framework

Abstract
With the development of information technologies such as cloud computing, the Internet of Things, the mobile Internet, and wireless sensor networks, big data technologies are driving the transformation of information technology and business models. Based on big data technology, data-driven artificial intelligence technology represented by deep learning and reinforcement learning has also been rapidly developed and widely used. But big data technology also faces a number of challenges, and solving them requires the support of a general big data reference architecture and analytical methodology. Based on the General Architecture Framework (GAF) and the Federal Enterprise Architecture Framework 2.0 (FEAF 2.0), this paper proposes a general big data architecture focusing on big data analysis. Based on GAF and CRISP-DM (the cross-industry standard process for data mining), a general methodology and structural approach for big data analysis are proposed.
Qing Li, Zhiyong Xu, Hailong Wei, Chao Yu, ShuangShuang Wang
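
To make the methodology concrete, here is a minimal sketch of a CRISP-DM-style iteration loop; the six phase names are the standard CRISP-DM phases, while the stage function is a placeholder assumption, not the paper's GAF-based method.

```python
# Minimal sketch of a CRISP-DM-style analysis loop. The six phase names are
# standard CRISP-DM; the stage function is an illustrative placeholder.
PHASES = ["business understanding", "data understanding", "data preparation",
          "modeling", "evaluation", "deployment"]

def run_phase(phase: str, state: dict) -> dict:
    # Placeholder: each phase would read and enrich the shared project state.
    print(f"running phase: {phase}")
    state[phase] = "done"
    return state

def crisp_dm_iteration(state: dict) -> dict:
    for phase in PHASES:
        state = run_phase(phase, state)
        # CRISP-DM is iterative: an unsatisfactory evaluation sends the
        # project back to business understanding rather than to deployment.
        if phase == "evaluation" and not state.get("model_accepted", True):
            return crisp_dm_iteration({})
    return state

crisp_dm_iteration({})
```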

Ensure OPC-UA Interfaces for Digital Plug-and-Produce

Abstract
Experience in industry has illustrated that the “Open Platform Communications Unified Architecture” (OPC-UA), as the upcoming de-facto standard for Industry 4.0, requires interoperability tests to support digital plug-and-produce. Existing tools to validate OPC-UA implementations need to be applicable to such validations. Within the German national “Internet of Things Test” (IoT-T) project, we developed concepts and software for the validation of interoperability between different cyber-physical systems using OPC-UA. The paper focuses on this part of the work and provides insights into the results. The results consist of industrial use cases, requirements, concepts and open source software. It also includes a comparison of the developments in the IoT-T project with the Compliance Test Tools (CTT) provided by the OPC Foundation (OPCF), which check the conformity of OPC-UA servers and clients against the OPC-UA specification.
Frank-Walter Jaekel, Tobias Wolff, Vincent Happersberger, Thomas Knothe
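
As a flavour of what an OPC-UA interoperability check starts from, the following sketch uses the third-party python-opcua package to connect to a server and browse its address space; the endpoint URL is an assumption, and this is not the IoT-T or OPCF CTT tooling itself.

```python
# A minimal interoperability smoke test using the third-party python-opcua
# package (pip install opcua). This is just the kind of basic check such
# validations start from, not the project's validation software.
from opcua import Client

ENDPOINT = "opc.tcp://localhost:4840"   # assumed endpoint of the server under test

client = Client(ENDPOINT)
client.connect()
try:
    # Browsing the standard Objects node verifies basic address-space access.
    objects = client.get_objects_node()
    children = objects.get_children()
    print(f"server exposes {len(children)} top-level objects")
finally:
    client.disconnect()
```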

Predictive Maintenance Model with Dependent Stochastic Degradation Function Components

Abstract
The paper presents an Integrated Maintenance Decision Making Model (IMDMM) concept for cranes operating with dependent stochastic degradation functions in container-type terminals. The target is to improve cranes' operational efficiency by minimizing the risk of Gantry Crane Inefficiency (GCI), based on a copula model of the stochastic degradation dependency between cranes. In the present study, we investigate the influence of the dependent stochastic degradation of multiple cranes on the optimal maintenance decisions. We use a copula to model the dependent stochastic degradation of components and we formulate the optimal decision problem based on the minimum expected GCI. We illustrate the developed probabilistic analysis approach and the influence of the degradation dependency on the preferred decisions through numerical examples, and we discuss the close relationship of this approach with interoperability concepts. The crane operation risk is estimated with a sequential Markov Chain Monte Carlo (MCMC) simulation model, and the optimization model behind the IMDMM is supported by Particle Swarm Optimization (PSO) algorithms.
Janusz Szpytko, Yorlandys Salgado Duarte
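
A minimal sketch of the copula idea, assuming a Gaussian copula with gamma-distributed degradation increments (the paper's actual copula family, marginals and parameters are not specified here):

```python
# Sketch: sampling dependent degradation paths for two cranes via a
# Gaussian copula; gamma marginals, rho and step counts are assumptions.
import numpy as np
from scipy.stats import norm, gamma

rng = np.random.default_rng(42)
rho = 0.7                                # assumed degradation dependence
cov = np.array([[1.0, rho], [rho, 1.0]])

def dependent_degradation(n_steps: int) -> np.ndarray:
    z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n_steps)
    u = norm.cdf(z)                      # correlated uniforms (the copula)
    # Map uniforms through gamma marginals -> per-step degradation increments.
    increments = gamma.ppf(u, a=2.0, scale=0.05)
    return increments.cumsum(axis=0)     # cumulative degradation paths

paths = dependent_degradation(1000)
print("final degradation per crane:", paths[-1])
```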

5th International Workshop on Fact Based Modeling (FBM 2019)

Frontmatter

Word Meaning, Data Semantics, Crowdsourcing, and the BKM/A-Lex Approach

Abstract
The lexical definition of concepts is an integral part of Fact Based Modelling. More generally, the structured description of term meaning, in many forms and guises, has since the early days played a role in information systems (data dictionaries, data modelling), data management (business glossaries for data governance), knowledge engineering (applied logic, rule definition and management), and the Semantic Web (RDF). We observe that at the core of many different approaches to lexical meaning lies the combination of semantic networks and textual definitions, and propose to re-appreciate these relatively simple basics as the theoretical but also, and perhaps more so, the practical core of dealing with Data Semantics. We also explore some fundamental concepts from cybernetics, providing a theoretical basis for advocating crowdsourcing as a way of taking up continuous lexical definition in and across domain communities. We discuss and compare various combined aspects of lexical definition approaches from various relevant fields in view of the A-Lex tool, which supports a crowdsourcing approach to lexical definition in a data management context: Business Knowledge Mapping. We explain why this approach indeed applies most of the core concepts of “word meaning as a vehicle for dealing with data semantics in and across communities”.
Thomas Nobel, Stijn Hoppenbrouwers, Jan Mark Pleijsant, Mats Ouborg
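
The "semantic network plus textual definition" core the authors describe can be pictured with a toy data structure like the following; the terms, definitions and relation types are invented examples, not A-Lex content.

```python
# Toy lexicon combining textual definitions with a semantic network of
# typed relations; all entries here are invented examples.
lexicon = {
    "customer": {
        "definition": "A party that purchases goods or services.",
        "relations": {"is-a": ["party"], "related-to": ["order"]},
    },
    "order": {
        "definition": "A request by a customer to purchase goods.",
        "relations": {"is-a": ["request"], "related-to": ["customer"]},
    },
}

def neighbours(term: str) -> set:
    """Terms reachable in one hop through any relation."""
    rels = lexicon.get(term, {}).get("relations", {})
    return {t for targets in rels.values() for t in targets}

print(lexicon["customer"]["definition"])
print(neighbours("customer"))
```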

Leveraging Ontologies for Natural Language Processing in Enterprise Applications

Abstract
The recent advances in Artificial Intelligence and Deep Learning are widely used in real-world applications. Enterprises create multiple corpora and use them to train machine learning models for various applications. As adoption becomes more widespread, it raises further concerns in areas such as maintenance, governance and reusability. This paper explores ways to leverage ontologies for these tasks in Natural Language Processing. Specifically, we explore the usage of ontologies as a schema, configuration and output format. The approach described in the paper is based on our experience in a number of projects in the medical, enterprise and national security domains.
Tatiana Erekhinskaya, Matthew Morris, Dmitriy Strebkov, Dan Moldovan
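
One of the described usages, an ontology as a schema for NLP, can be sketched roughly as follows with rdflib, using ontology labels as a simple gazetteer; the ontology file and input text are assumptions.

```python
# Sketch: using an ontology's labels as an NLP gazetteer with rdflib
# (pip install rdflib). The ontology file and text are assumed examples.
from rdflib import Graph
from rdflib.namespace import RDFS

g = Graph()
g.parse("medical_ontology.ttl", format="turtle")   # assumed ontology file

# Collect rdfs:label strings as a simple term dictionary (ontology as schema).
gazetteer = {str(label).lower(): str(subject)
             for subject, label in g.subject_objects(RDFS.label)}

text = "the patient reported chronic migraine and fatigue"
matches = {term: uri for term, uri in gazetteer.items() if term in text}
print(matches)   # surface terms grounded to ontology concepts
```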

Verbalizing Decision Model and Notation

Abstract
The ability to effectively communicate decision rules becomes increasingly important as the role of multidisciplinary teams within organizations grows. With Decision Model and Notation (DMN) emerging as one of the international standards for modelling decision rules, the need for effective communication of decision rules modelled using the standard grows with it. In order to facilitate this communication, we present a structured verbalization for DMN decision tables, using principles of fact based thinking and modelling theory. This verbalization is designed to be fully interpretable by anyone who has knowledge of the described domain. The key to accommodating this interpretability is the use of a structured natural language as the foundation for the verbalization.
Tomas Cremers, Maurice Nijssen, John Bulles
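
A toy sketch of such a verbalization, assuming a simplified decision-table representation (not the actual DMN XML, nor the authors' verbalization rules):

```python
# Sketch of a structured-natural-language verbalization of a DMN-style
# decision table; the table content is an invented example.
decision_table = {
    "inputs": ["age", "income"],
    "output": "loan approval",
    "rules": [
        {"conditions": [">= 18", "> 30000"], "conclusion": "approved"},
        {"conditions": [">= 18", "<= 30000"], "conclusion": "manual review"},
    ],
}

def verbalize(table: dict) -> list:
    sentences = []
    for rule in table["rules"]:
        parts = [f"the {name} of the applicant is {cond}"
                 for name, cond in zip(table["inputs"], rule["conditions"])]
        sentences.append(
            f"If {' and '.join(parts)}, then the {table['output']} "
            f"is '{rule['conclusion']}'."
        )
    return sentences

for s in verbalize(decision_table):
    print(s)
```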

Fully Traceable Vertical Data Architecture

Abstract
Data architecture is composed of models, policies, rules or standards that govern which data is collected, and how it is stored, arranged, integrated, and put to use in data systems and organizations [1]. Organizations often use models to describe this data architecture for a domain. But what type of models are needed?
This vertical data architecture approach describes which different models are needed, how these models are interlinked, what the concerns of each representation are, and how an organization can deal with the challenges of keeping these models aligned. This involves the analysis of a Universe of Discourse (UoD), creating conceptual information models (in FBM), transforming these into logical data models, and transforming these logical models into one or more implementation models. Traceability back to the UoD, and tracing impact directly from the UoD to the implementation, are main issues that are often hard to tackle.
John Bulles, Rob Arntz, Martijn Evers

A Best Practice for the Analysis of Legal Documents

Abstract
Organisations deal with a large set of source documents, often concerning law and regulation, that they need to comply with or bring to execution. In the past, a team of legal specialists analysed these documents and wrote new documents containing informal preliminary specifications, which are hard to validate. Another group of experts then translated these documents into specifications for IT systems.
The best practice described in this paper takes another approach: a multidisciplinary team of experts combines the knowledge of legal specialists with the modelling competences of knowledge modelers. The paper describes how legal experts analyse legal documents and add interpretations and classifications (model elements), and how knowledge modelers assist in creating extended conceptual information models.
The result is a validated and extended conceptual information model in which all knowledge elements are traceable to the original legal text, achieved with fewer translation steps than in the approach mentioned above.
John Bulles, Hennie Bouwmeester, Anouschka Ausems

A Conceptual Model of the Blockchain

Abstract
Hyperledger Fabric is a very large project under the umbrella of the Linux Foundation, with hundreds of developers involved. In this paper we illustrate how the application of fact-based modeling helps us understand some basic features of the blockchain concept as it is used in Hyperledger Fabric (HLF), and how it can serve as a conceptual blueprint of HLF for all involved to use.
Peter Bollen

Creating a Space System Ontology Using “Fact Based Modeling” and “Model Driven Development” Principles

Abstract
In this practical paper we describe our ongoing project of building a candidate skeleton for the new Space System Ontology that is to be used by the space system community: starting from the vision (being able to achieve semantic interoperability instead of focusing on technical interoperability), through our approach (Fact Based Modeling (FBM) and Model Driven Development (MDD)), and finally ending with the results: an Object Role Model containing the semantic model of the Space System Ontology. This project is based on the already existing meta-model of Arcadia, a field-proven method for model based system engineering. By reverse engineering the UML-based meta-model of a tool supporting the method, we were able to remove the technical HOWs and restore the true conceptual meaning of the meta-model. We describe the algorithms we used for automatically reverse engineering UML-based meta-models into ORM models, discuss the value of connecting the conceptual model to real-life examples through visualization, and introduce the process of automatically generating editors in order to verify completeness and correctness by populating the model. We conclude with general findings from reverse engineering UML-based models and some tips on how to solve typical modeling problems that arise when transforming object oriented artifacts into their semantic equivalents.
Kaiton Buitendijk, Carla Arauco Flores
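
The flavour of the UML-to-ORM reverse-engineering step might be sketched as follows; the input model and mapping rules are heavily simplified assumptions, not the authors' algorithms.

```python
# Sketch: turning UML-style classes and associations into ORM-like fact
# types; the input model here is invented and the mapping is simplified.
uml_classes = {
    "SystemComponent": ["name", "mass"],
    "Function": ["label"],
}
uml_associations = [("SystemComponent", "allocates", "Function")]

def to_fact_types(classes: dict, associations: list) -> list:
    facts = []
    for cls, attributes in classes.items():
        for attr in attributes:
            # Attributes become binary fact types over the object type.
            facts.append(f"{cls} has {attr.capitalize()}")
    for source, verb, target in associations:
        facts.append(f"{source} {verb} {target}")
    return facts

for fact in to_fact_types(uml_classes, uml_associations):
    print(fact)
```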

Industry Case Studies Program 2019 – Industry Day (ICSP 2019)

Frontmatter

Translating a Legacy Stack to Microservices Using a Modernization Facade with Performance Optimization for Container Deployments

Abstract
We often find it challenging to translate a legacy system when the software is business critical. Adding to the misery of technical debt is the “Broken Window” concept, which adds more complexity to exercising dynamic context resolutions for independent services alongside governance and data management. This often leads to a maze of disoriented services with high interdependency. To seriously adopt the “Operate what you Build” philosophy, we need a granular facade approach to understand the business requirements and translate them into architectural operators. The paper provides an approach for establishing platform-independent interfaces and bounded domain contexts, eliminating non-critical legacy components, and applying incremental quality-aware methods to translate a legacy system to microservices. Along with the architectural objects, the paper also presents granular performance management of the translated application, considering factors like the system, container, network and the application service itself.
Prabal Mahanta, Suchin Chouta

A Hybrid Approach to Insightful Business Impacts

Abstract
Organizations often end up with wasted space when handling datasets generated as code-application logs. Every dataset, be it semi-structured or unstructured, is monitored, and insights are derived, be they predictive, prescriptive or descriptive.
Today we often replicate data to an application space for analysis, and these datasets often cause a critical problem: the replication is not cost effective. In this paper we evaluate cost-effective ways of doing decentralised in-situ and in-transit data analysis with the objective of providing business impact insights.
We also discuss techniques for queue management, scenario-based hypotheses for various business requirements, and the approach to achieve cost-effective analysis mechanisms. Based on the scenarios, we also try to bring out the importance of in-situ techniques, as data movement and storage is itself an energy-hungry problem when it comes to simulation and analytics.
Prabal Mahanta, Abdul-Gafoor Mohamed

Digital Transformation – A Call for Business User Experience Driven Development

Abstract
In order to transform organizations digitally, it is critical to understand and analyse the disruption effects from the point of view of the business model or a target consumer group. While organizations lay out their roadmap to achieve transformations, they often invest in techniques and methodologies without a vision for business analytics and intelligence, and end up creating a non-manageable platform and data graveyard. The illusion of achieving the silver lining in analytics and data insights for business is creating a critical roadblock on the path to optimized and scalable mechanisms for developing a well-oiled data management system.
With the growing practices in designing business data management and analytics systems, it is a complex problem to design a scalable and mature system. Here we discuss how, with the changing dimension of platform offerings, portability can be used as one of the key parameters to scale efficiently and effectively. Also, artificial intelligence (AI) has become a key component of many optimizations in specific software application scenarios, but when it comes to business data management we use it only for insights and predictions, and seldom tap the power of AI to manage the data itself effectively. We discuss various approaches to address the criticality of the business scenarios and how we can implement a focus on data at its core.
Prabal Mahanta, Abdul-Gafoor Mohamed

8th International Workshop on Methods, Evaluation, Tools and Applications Towards a Data-Driven e-Society (Meta4eS 2019)

Frontmatter

Defining a Master Data Management Approach for Increasing Open Data Understandability

Abstract
Reusing open data is an opportunity for the eSociety to create value through the development of novel data-intensive IT services and products. However, reusing open data is hampered by a lack of data understandability. Accessing open data requires additional information (i.e., metadata) that describes its content in order to make it understandable: if open data is misinterpreted, ambiguities and misunderstandings will discourage the eSociety from reusing it. In addition, services and products created using incomprehensible open data may not generate enough confidence in potential users, thus becoming unsuccessful. Unfortunately, in order to improve the comprehensibility of the data, current proposals focus on creating metadata when open data is being published, thus overlooking metadata coming from the data sources. In order to overcome this gap, our research proposes a framework that considers data source metadata within a Master Data Management approach in order to improve the understandability of the corresponding (subsequently published) open data.
Susana Cadena-Vela, Jose-Norberto Mazón, Andrés Fuster-Guilló

Human-Activity Recognition with Smartphone Sensors

Abstract
The aim of Human-Activity Recognition (HAR) is to identify the actions carried out by an individual given a data set of parameters recorded by sensors. Successful HAR research has focused on the recognition of relatively simple activities, such as sitting or walking, and its applications are mainly useful in the fields of healthcare, tele-immersion and fitness tracking. One of the most affordable ways to recognize human activities is to make use of smartphones. This paper compares several ways of processing and training on the data provided by smartphone sensors, in order to achieve an accurate score when recognizing the user's activity.
Dănuț Ilisei, Dan Mircea Suciu
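
A minimal sketch of a typical smartphone-sensor HAR pipeline, assuming synthetic stand-in accelerometer windows and simple statistical features (the paper's actual datasets and models may differ):

```python
# Sketch of a smartphone-sensor HAR pipeline with scikit-learn; the data
# here is synthetic stand-in accelerometer windows, not a real dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_windows, window_len = 600, 128

# Fake tri-axial accelerometer windows for two activities (0=sit, 1=walk):
# walking gets higher-variance signals than sitting.
labels = rng.integers(0, 2, n_windows)
raw = rng.normal(0, 1 + labels[:, None, None], (n_windows, window_len, 3))

# Classic window features: per-axis mean and standard deviation.
features = np.concatenate([raw.mean(axis=1), raw.std(axis=1)], axis=1)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```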

Chatbots as a Job Candidate Evaluation Tool

Short Paper
Abstract
Nowadays there is constant interest in solving the problem of recruiting new personnel in an ever-changing environment, while reducing the time invested in the process. We propose a solution that uses an intelligent chatbot to drive the screening interview. The users (job candidates) will feel like they are talking to a real person rather than just filling in a simple web form for another job interview. At the same time, the chatbot can evaluate the data provided by users and score them through a sentiment analysis algorithm based on the IBM Watson Personality Insights service. Our solution is meant to replace the first step in the interviewing process and to automatically elaborate a job candidate profile.
Andrei-Ionuț Carțiș, Dan Mircea Suciu

Big Data Management: A Case Study on Medical Data

Abstract
The paper introduces an approach for scalable data management in the context of Big Data. The main objective of the study is to design and implement a metadata model and a data catalog solution based on emerging Big Data technologies. The solution is scalable and integrates the following components: (1) the data sources; (2) a file scanner; (3) the metadata storage and processing component; and (4) a visualization component. The approach and its underlying metadata model are demonstrated with a toy use case from the medical domain, and can be easily adapted and extended to other use cases and requirements.
Vlad Sulea, Ioana Ciuciu
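
The file scanner component might look roughly like this sketch, which walks a data directory and emits metadata records; the catalog store is stubbed here as a JSON file, whereas the paper's solution builds on Big Data technologies.

```python
# Sketch of the "file scanner" component: walk a data directory and emit
# metadata records that a catalog could store (paths/fields are examples).
import json
import os
from datetime import datetime, timezone

def scan(root: str) -> list:
    records = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            stat = os.stat(path)
            records.append({
                "path": path,
                "extension": os.path.splitext(name)[1],
                "size_bytes": stat.st_size,
                "modified": datetime.fromtimestamp(
                    stat.st_mtime, tz=timezone.utc).isoformat(),
            })
    return records

# The metadata store is stubbed as a JSON file here.
with open("catalog.json", "w") as f:
    json.dump(scan("./data"), f, indent=2)
```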

Personalizing Smart Services Based on Data-Driven Personality of User

Abstract
The article presents a research method for creating a classification of users' needs based on their personality (Big 5), determined on the basis of available digital data. The research is work in progress and is based on a specific use case: a smart service (home environment) with a user interface on a mobile phone. This paper includes the results of preliminary research on the needs of users, formulates research problems, and discusses the assumptions and research methods. What distinguishes the proposed solution from others is that the profile will be available to the service just after installation, without the necessity of collecting data about user activity. The idea of data-based user classification can thus be applied at an early stage, which seems to be important in the adaptation process to any new smart service.
Izabella Krzeminska

Temporary User States Method to Support Home Habitants

Abstract
The major goal of the research introduced in this paper is the elaboration of a method delivering temporary user states to personalize data-based services for the user. The method, addressing the paradigm of ambient living, will be used to support people broadly in their everyday life activities: habitants in their home environments, persons needing assistance (in Ambient Assisted Living) and others. It can be applied in new data-based services on 5G platforms using intelligent ambient environments. A home system experimentally developed in Orange Labs is one of the solutions where the method can be implemented. This system is aligned with the idea of a sensitive home that discovers a habitant's affective characteristics, like personality and emotions, based on his or her data and reacts according to these characteristics. The paper presents another affective characteristic, based on discovering temporary user states. Important aspects considered are privacy, GDPR compliance and, moreover, ethics.
Ewelina Szczekocka

1st International Workshop on Security via Information Analytics and Applications (SIAnA 2019)

Frontmatter

SDN-GAN: Generative Adversarial Deep NNs for Synthesizing Cyber Attacks on Software Defined Networks

Abstract
The recent evolution in programmable networks such as SDN opens the possibility of controlling networks using software controllers. However, such networks are vulnerable to the attacks that occur in traditional networks. Several techniques have been proposed to handle the security vulnerabilities in SDNs. However, it is challenging to create attack signatures, scenarios, or even intrusion detection rules that are applicable to SDN dynamic environments. Generative Adversarial Deep Neural Networks automate the generation of realistic data in a semi-supervised manner. This paper describes an approach that generates synthetic attacks that can target SDNs. It can be used to train SDNs to detect different attack variations. It is based on the most recent OpenFlow models/algorithms and it utilizes similarity with known attack patterns to identify attacks. Such synthesized variations of attack signatures are shown to attack SDNs using adversarial approaches.
Ahmed AlEroud, George Karabatis
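
A minimal sketch of the underlying GAN pattern over flow-feature vectors, assuming placeholder data and illustrative network sizes (the paper's OpenFlow-based models are more elaborate):

```python
# Sketch of a GAN over flow-feature vectors, in the spirit of synthesizing
# attack variations; dimensions, layer sizes and data are all assumptions,
# and real SDN/OpenFlow features would replace the random placeholders.
import numpy as np
from tensorflow.keras import layers, models

FEATURES, LATENT = 20, 8

discriminator = models.Sequential([
    layers.Dense(32, activation="relu", input_shape=(FEATURES,)),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

generator = models.Sequential([
    layers.Dense(32, activation="relu", input_shape=(LATENT,)),
    layers.Dense(FEATURES, activation="linear"),
])

# Freeze the discriminator inside the stacked model so generator updates
# do not move the discriminator's weights.
discriminator.trainable = False
gan = models.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

real_flows = np.random.normal(1.0, 0.3, (512, FEATURES))  # placeholder data
for step in range(200):
    noise = np.random.normal(size=(64, LATENT))
    fake = generator.predict(noise, verbose=0)
    real = real_flows[np.random.randint(0, 512, 64)]
    # Train discriminator on real-vs-fake, then generator via the frozen stack.
    discriminator.train_on_batch(
        np.vstack([real, fake]),
        np.vstack([np.ones((64, 1)), np.zeros((64, 1))]))
    gan.train_on_batch(noise, np.ones((64, 1)))
```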

A Domain Adaptation Technique for Deep Learning in Cybersecurity

Abstract
In this paper we discuss an algorithm for transfer learning in cybersecurity. In particular, we develop a new image-based representation for the feature set in the source domain and train a convolutional neural network (CNN) using the training data. The CNN model is then augmented with one dense layer in the target domain before being applied to the target dataset. The data we have used for our experimental results are taken from the Canadian Institute for Cybersecurity. The results show that transfer learning is feasible in cybersecurity, which offers many potential applications, including resource-constrained environments such as edge computing.
Aryya Gangopadhyay, Iyanuoluwa Odebode, Yelena Yesha
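
The described transfer step, freezing a source-trained CNN and adding one dense layer for the target domain, might be sketched in Keras as follows; shapes, layer sizes and class counts are assumptions.

```python
# Sketch of the transfer step: a CNN trained on an image-like source
# representation is frozen and extended with one dense layer for the target
# domain. Shapes and layer sizes are illustrative assumptions.
from tensorflow.keras import layers, models

# Source-domain CNN over image-like feature maps (e.g. 8x8x1 "images").
base = models.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(8, 8, 1)),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
])
# ... base would be trained on source-domain data here ...

base.trainable = False                     # keep source features fixed
target_model = models.Sequential([
    base,
    layers.Dense(16, activation="relu"),   # the added dense layer
    layers.Dense(2, activation="softmax"), # e.g. benign vs. attack
])
target_model.compile(optimizer="adam",
                     loss="sparse_categorical_crossentropy",
                     metrics=["accuracy"])
# target_model.fit(X_target, y_target, epochs=5)  # fine-tune on target data
```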

DeepNet: A Deep Learning Architecture for Network-Based Anomaly Detection

Abstract
Anomaly detection has been one of the most interesting research areas in the field of cybersecurity. Supervised anomaly detection systems have not been practical and effective enough in real-world scenarios. As a result, different unsupervised anomaly detection pipelines have gained more attention due to their effectiveness. Autoencoders are one of the most powerful unsupervised approaches and can be used to analyze complex and large-scale datasets. This study proposes a method called DeepNet, which investigates the potential of an unsupervised deep learning approach, proposing an autoencoder architecture to detect network intrusions. An autoencoder approach is implemented on network-based data while taking different architectures into account, and we provide a comprehensive comparison of the effectiveness of the different schemes. Due to the unique methodology of autoencoders, specific methods are suggested to evaluate the performance of the proposed models. The results of this study can be used as a foundation for building a robust anomaly detection system with an unsupervised approach.
Javad Zabihi, Vandana Janeja
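
A minimal sketch of the autoencoder-based detection idea, assuming placeholder data, an illustrative architecture and a 95th-percentile error threshold (DeepNet's actual architectures and evaluation methods differ):

```python
# Sketch of an autoencoder anomaly detector: train on (mostly) normal
# traffic, flag records whose reconstruction error exceeds a threshold.
import numpy as np
from tensorflow.keras import layers, models

FEATURES = 30
normal = np.random.normal(0, 1, (2000, FEATURES))      # placeholder traffic

autoencoder = models.Sequential([
    layers.Dense(16, activation="relu", input_shape=(FEATURES,)),
    layers.Dense(8, activation="relu"),                # bottleneck
    layers.Dense(16, activation="relu"),
    layers.Dense(FEATURES, activation="linear"),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(normal, normal, epochs=10, batch_size=64, verbose=0)

def anomaly_scores(x: np.ndarray) -> np.ndarray:
    recon = autoencoder.predict(x, verbose=0)
    return np.mean((x - recon) ** 2, axis=1)

threshold = np.percentile(anomaly_scores(normal), 95)
suspect = np.random.normal(3, 1, (10, FEATURES))       # shifted "attack" data
print(anomaly_scores(suspect) > threshold)
```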

A Contextual Driven Approach to Risk Event Tagging

Abstract
Current methods of tagging events in a particular context, when deciding which ones pose a security risk to an enterprise network, are inadequate. For example, changes in an environment, such as a larger number of HIPAA violations by certain user roles, can pose a risk to specific organizational functions or cyber infrastructure. To compound the problem, different information owners typically specify different user contexts based on differing organizational or individual needs. To address this problem, we developed an approach that utilizes semantic annotations, a technique that can aid in understanding how an event may affect knowledge of information in a domain. In this approach, semantic annotations are used to enable the tagging of events in accordance with differing organizational goals and user preferences. This work can be used to flag possible security violations and assist in their prevention.
Shawn Johnson, George Karabatis
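
A toy sketch of context-dependent event tagging, where each information owner's annotations drive a different tagging of the same event types; the annotation vocabulary and events are invented examples.

```python
# Sketch of rule-based event tagging against per-owner context annotations;
# owners, event types and tags are all invented examples.
context_annotations = {
    "hospital_it": {"hipaa_violation": "high-risk",
                    "failed_login": "monitor"},
    "finance": {"failed_login": "high-risk"},
}

events = [
    {"owner": "hospital_it", "type": "hipaa_violation", "user_role": "nurse"},
    {"owner": "finance", "type": "failed_login", "user_role": "analyst"},
    {"owner": "hospital_it", "type": "port_scan", "user_role": "unknown"},
]

def tag(event: dict) -> str:
    # Each information owner's annotations yield a different tagging of
    # the same event types, per the paper's differing-context motivation.
    rules = context_annotations.get(event["owner"], {})
    return rules.get(event["type"], "untagged")

for e in events:
    print(e["type"], "->", tag(e))
```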

Backmatter
