
About this Book

This book constitutes extended, revised and selected papers from the 20th International Conference on Enterprise Information Systems, ICEIS 2018, held in Funchal, Madeira, Portugal, in March 2018.

The 19 papers presented in this volume were carefully reviewed and selected for inclusion in this book from a total of 242 submissions. They deal with topics such as data science and databases; ontologies; social networks; knowledge management; software development; human-computer interaction; and multimedia.



A Computer-Based Framework Supporting Education in STEM Subjects

Education is considered a key factor for competitiveness at the micro- as well as the macro-economic level, i.e., for a single person, a company, or a country. Several industries have been identified with significant current and/or future workforce shortages, in areas as diverse as the health and technology sectors. As a result, initiatives to foster STEM (science, technology, engineering and mathematics) education have emerged in many countries. In this paper, we report on a joint project of Australian Catholic University and Munich University of Applied Sciences, which have developed a framework to support STEM education. The framework is based on R-Project, a leading free software environment that has gained increasing attention, particularly in the field of data analysis (statistics, machine learning, etc.), over the past decade. The framework is intended to address three main challenges in STEM education: mathematics and, in the field of technology, algorithmic programming and dynamic webpage development.
Georg Peters, Tom Rückert, Jan Seruga

Online Content’s Popularity Metrics: RDF-Based Normalization

Social network websites are mainly built around user identities, established through profiles, and user-generated content such as texts, videos, and photos. While some profiles gain a prominent position in the network, others do not; similarly, some content attracts a great deal of attention from users, whereas other content is completely ignored. In this context, the popularity of profiles and online content has come to the forefront, and several studies focus on popularity analysis and prediction. Notably, however, popularity metrics vary from one social network to another. The present work addresses this challenge by proposing a unified representation of the popularity metrics associated with each social entity across several social networks. Accordingly, we suggest a hierarchical structure of popularity metrics, enhanced with a dedicated RDF representation, along with a brief survey of the wide range of methods used to analyze entity popularity.
Hiba Sebei, Mohamed Ali Hadj Taib, Mohamed Ben Aouicha

Supporting Gamified Experiences for Informal Collaborative Groups

Gamification is the application of game-design elements and game principles in non-game contexts. It is being successfully applied in areas as diverse as education, business, human resources, health, and entertainment. In general, gamified systems are designed and implemented for specific contexts; there is no multipurpose approach that can be applied across many of them. Informal groups are composed of members who come together online to perform work or social activities, fostering engagement, commitment, and participation. We propose a framework to support the design of gamified experiences in general scenarios for collaborative informal groups. We built a context-agnostic tool, called Gamefy, to evaluate the framework, and then evaluated the motivation and flexibility the tool enables. We also employed the framework to assess the perceived usefulness of gamification features in two particular contexts. We were thus able to identify key features of gamified experiences, as well as possible differences in the importance of features depending on context.
Samuel Timbó, Juliana de Melo Bezerra, Celso Massaki Hirata

Payment Authorization in Smart Environments: Security-Convenience Balance

One of the major roadblocks to mass adoption of smart environments and IoT services (and, in the future, IoE) is the lack of ubiquitous solutions for payment authorization that is both passive and secure at the physical locations where services are provided. The main research goal of this work is to comprehensively evaluate such a system, proposed by the authors. When customers approach a point of sale, it identifies them using face biometrics. After the order is completed, the system takes advantage of multimodal context-aware payment authorization to make a multi-criteria selection of the authorization method, ideally keeping the whole process fully passive. This enables a controlled balance between payment security and convenience for both the client and the seller. Empirical tests at an existing point of sale have been performed; the usage data have been collected, statistically analyzed, and tested against the formulated research hypotheses.
Adam Wójtowicz, Jacek Chmielewski

An Intelligent and Data-Driven Decision Support Solution for the Online Surgery Scheduling Problem

In operational business situations it is necessary to be aware of and to understand what is happening around you, and what will probably happen in the near future, in order to make optimal decisions. For example, online surgery scheduling is a planning and control task of operating room management that involves decisions which are difficult to handle due to the high cognitive and communicational effort required to gather the needed information. In addition, uncertainties such as complications, cancellations, and emergencies, as well as the need to monitor and control interventions during execution, distinguish operational decision tasks in surgery scheduling from tactical and strategic planning decisions. However, the emerging trend of connected devices, together with intelligent analytics methods, facilitates innovative approaches to decision support in this area. Building on these concepts, we propose a data-driven approach for a Decision Support System comprising components for monitoring, prediction, and optimization in online surgery scheduling.
Norman Spangenberg, Christoph Augenstein, Moritz Wilke, Bogdan Franczyk

A Fuzzy Reasoning Process for Conversational Agents in Cognitive Cities

Facing the challenges of a city that must be understood as a complex construct, this article presents an approach for the further development of existing conversational agents, to be used in cities, for instance, as a source of information. The proposed framework consists of a fuzzy analogical reasoning process (based on structure-mapping theory) and a network-like memory (i.e., fuzzy cognitive maps stored in graph databases) as additions to the general architecture of a chatbot. It thus represents a concept of a global fuzzy reasoning process that allows conversational agents to emulate human information processing by using cognitive computing (consisting of soft computing methods together with cognition and learning theories). The framework is already in the third iteration of its development. Three experiments were conducted to examine the stability of the theoretical foundation as well as the potential of the framework.
Sara D’Onofrio, Stefan Markus Müller, Edy Portmann

A Resource-Aware Model-Based Framework for Load Testing of WS-BPEL Compositions

Nowadays, Web service compositions play a major role in the implementation of different types of distributed architectures, and such applications usually serve hundreds of users simultaneously. Load testing is therefore an important activity for Web service compositions, as it helps discover problems that only appear under high loads. To this end, we propose a model-based, resource-aware test architecture for studying the behavior of WS-BPEL compositions under load conditions. The main contribution of this work consists of (a) adopting the timed automata formalism to model the system under test and to generate digital-clock test suites, (b) identifying the best node for hosting each tester instance, (c) running load tests and recording performance data, and finally (d) analyzing the obtained logs in order to detect problems under load. Our approach is illustrated by means of a case study from the healthcare domain.
Moez Krichen, Afef Jmal Maâlej, Mariam Lahami, Mohamed Jmaiel

Collectively Constructing the Business Ecosystem: Towards Crowd-Based Modeling for Platforms and Infrastructures

In this conceptual article, we highlight group modeling and crowd-based modeling as an approach for collectively constructing business ecosystem models. Based on case study examples, we showcase how engaging in the collective activity of crowd-based modeling supports the creation of value propositions in business ecosystems. Such collective activity creates shared, IS-embedded resources on the network level that align the diverse set of ecosystem stakeholders such as firms, public actors, citizens, and other types of organizations. Based on extant research, we describe the roles involved in building ecosystem models that inform and reconfigure shared infrastructures and platforms.
Anne Faber, Sven-Volker Rehm, Adrian Hernandez-Mendez, Florian Matthes

Survey of Vehicular Network Simulators: A Temporal Approach

Evaluating protocols and applications for Intelligent Transportation Systems is the first step before deploying them in the real world. Simulations provide scalable evaluations at low cost. However, to produce reliable results, simulators should implement models that represent real situations as closely as possible. In this survey, we study the main simulators focused on Intelligent Transportation Systems assessment. Additionally, we examine the temporal evolution of these simulators, providing an overview of how long the scientific community takes to absorb a new simulator proposal. The conclusions presented in this survey provide valuable insights that help researchers make better choices when selecting the appropriate simulator to evaluate new proposals.
Mauricio J. Silva, Genilson I. Silva, Celio M. S. Ferreira, Fernando A. Teixeira, Ricardo A. Oliveira

Graph Modeling for Topological Data Analysis

The importance of bringing relational data to other models and technologies has been widely debated. In particular, graph database management systems (DBMSs) have gained attention from industry and academia for their analytic potential. One of their advantages is incorporating facilities for topological analysis, such as link prediction, centrality measures, and recommendations. There are already initiatives to map a relational database to a graph representation; however, they do not take into account the different ways to generate such graphs. This work discusses how alternative graph models derived from data stored in relational datasets may lead to useful results. The main contribution of this paper is towards managing such alternatives, taking into account both the graph model chosen and the topological analysis to be performed by the user. Experiments are reported and show interesting results, including modeling heuristics to guide the user in the choice of graph model.
Silas P. Lima Filho, Maria Claudia Cavalcanti, Claudia Marcela Justel
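The modeling alternatives the abstract alludes to can be illustrated with a minimal sketch: the same relational data yields different graphs depending on whether a joining row becomes an edge or is folded away entirely. The tables and names below (authors, papers, an authorship join table) are hypothetical examples, not the paper's actual datasets.

```python
# Minimal sketch of two graph-modeling alternatives for the same
# relational data (hypothetical schema: authors, papers, authorship).
authorship = [("alice", "p1"), ("alice", "p2"), ("bob", "p1")]

# Alternative 1: each join-table row becomes an edge of a bipartite
# author-paper graph.
edges_bipartite = [(author, paper) for author, paper in authorship]

# Alternative 2: papers are folded away; co-authorship becomes the edge,
# producing an author-author graph suited to, e.g., centrality analysis.
papers = {}
for author, paper in authorship:
    papers.setdefault(paper, []).append(author)
edges_coauthor = sorted(
    {tuple(sorted((a, b)))
     for authors in papers.values()
     for a in authors for b in authors if a != b}
)

print(edges_bipartite)  # [('alice', 'p1'), ('alice', 'p2'), ('bob', 'p1')]
print(edges_coauthor)   # [('alice', 'bob')]
```

Which alternative is preferable depends on the intended topological analysis, which is exactly the choice the paper's heuristics aim to support.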

Automatic Mapping of Business Process Models for Ontologies with an Associated Query System

Business process models are employed in organizational environments to improve the understanding of the interactions between different sectors. They also help to visualize the interdependencies of the processes that compose these environments. However, the understanding of these models is often limited to the visual representation of their elements. In addition, as business process models become more complex, their readability and navigability suffer: it is difficult to represent and properly understand the implicit knowledge and interdependencies in these models. The use of ontologies as a representation of business process models opens a complementary perspective for providing semantics to business processes. Ontologies conceptualize and organize the information that is embedded, unstructured, in the business processes and that must be explored. They structure the implicit knowledge present in business processes, making that knowledge understandable by machines, and they facilitate the sharing and reuse of knowledge by various agents, human or artificial. In this context, this work presents a systematic process to automatically map a business process model in BPMN v2.0 to an ontology, allowing users to query information about the model. To automate this systematic process, the PM2ONTO tool was developed; it generates the ontology in OWL automatically and makes available predefined SPARQL queries for retrieving information about the business process models.
Lukas Riehl Figueiredo, Hilda Carvalho de Oliveira

DaVe: A Semantic Data Value Vocabulary to Enable Data Value Characterisation

While data value and value creation are highly relevant in today’s society, there is as yet no consensus on data value models, dynamics, or measurement techniques, or even on methods of categorising and comparing them. In this paper we analyse and categorise existing aspects of data that are used in the literature to characterise and/or quantify data value. Based on these data value dimensions, as well as a number of value assessment use cases, we define the Data Value Vocabulary (DaVe), which allows for a comprehensive representation of data value. The vocabulary can be extended to represent whatever data value dimensions are required in the context at hand. It allows users to monitor and assess data value throughout any value creation or data exploitation effort, thereby laying the basis for effective value management and efficient value exploitation. It also allows for the integration of diverse metrics that span many data value dimensions and that most likely pertain to a range of different tools in different formats. DaVe is evaluated using Gruber’s ontology design criteria and by instantiating it in a deployment case study. This paper is an extension of Attard and Brennan (2018) [3].
Judie Attard, Rob Brennan

Multi-view Navigation in a Personalized Scientometric Retrieval System

Given the large number of scientific publications, it is difficult for researchers to select those that meet their information needs and come from trusted sources. One of the challenges facing researchers is finding quality scientific information that meets their research needs; to guarantee quality results, a retrieval method based on scientific quality is required. The quality of scientific information is measured by scientometrics through a set of metrics and measures called scientometric indicators. In this paper we propose a new personalized information retrieval approach that takes the researcher’s quality requirements into account. The proposed approach includes a scientometric document annotator, a scientometric user model, a scientometric ranking approach, and different result visualization methods. We demonstrate the feasibility of this approach through experiments on its different parts. Incorporating scientometric indicators into the different parts of our approach significantly improved retrieval performance, reaching an F-measure of 41.66%. An important implication of this finding is the existence of a correlation between research paper quality and paper relevance, which in turn implies better retrieval performance.
Nedra Ibrahim, Anja Habacha Chaibi, Henda Ben Ghézala

Inferring Students’ Emotions Using a Hybrid Approach that Combines Cognitive and Physical Data

There is nowadays strong agreement in the research community that emotions directly impact learning. An important feature of software that claims to be useful for learning is therefore the ability to deal with students’ affective reactions: such software should adapt to users’ affective reactions, aiming at a more natural human-computer interaction. Some prior works achieved relative success in inferring students’ emotions; however, most of them rely on intrusive, expensive, or impractical sensors that track students’ physical reactions. The main contribution of this paper is a hybrid model for emotion inference that combines physical and cognitive elements, using cheaper and less intrusive methods to collect data. First experiments with students in a traditional classroom demonstrated the feasibility of this proposal and indicated promising results for the inference of learning-centered emotions. In these experiments, we achieved an accuracy rate near 65% and a Cohen’s kappa near 0.55 in the task of inferring five classes of learning-centered emotions. Even though these results are better than some related work, we believe they can be improved in the future by incorporating new data into the model.
Ernani Gottardo, Andrey Ricardo Pimentel
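The agreement measure reported in the abstract above, Cohen's kappa, corrects raw accuracy for agreement expected by chance, which is why it is reported alongside accuracy. A minimal computation from a confusion matrix might look as follows; the counts are illustrative, not the study's actual data.

```python
# Cohen's kappa from a confusion matrix (rows: predicted, cols: observed).
# The counts below are illustrative, not the study's actual data.

def cohen_kappa(matrix):
    n = sum(sum(row) for row in matrix)
    # Observed agreement: fraction of cases on the diagonal.
    observed = sum(matrix[i][i] for i in range(len(matrix))) / n
    # Chance agreement: product of marginal probabilities per class.
    expected = sum(
        (sum(matrix[i]) / n) * (sum(row[i] for row in matrix) / n)
        for i in range(len(matrix))
    )
    return (observed - expected) / (1 - expected)

confusion = [[20, 5],
             [10, 15]]
print(cohen_kappa(confusion))  # prints 0.4 (accuracy here would be 0.7)
```

The gap between the 0.7 accuracy and the 0.4 kappa in this toy matrix shows why a kappa near 0.55 over five classes, as reported above, is a meaningful result.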

Visual Filtering Tools and Analysis of Case Groups for Process Discovery

Dealing with even average-sized event logs is a challenging task in process mining, which aims to extract value from the event log data created by a wide variety of systems. An event log consists of a sequence of events for every case handled by the system. Discovery algorithms proposed in the literature work well in specific cases, but usually fail in generic ones, and there is no evidence that existing strategies can handle logs with a large number of variants. We lack a generic approach that allows experts to explore event log data and decompose the information into a series of smaller problems, identifying not only outliers but also relations between the analyzed cases. In this chapter we propose a visual approach for filtering processes based on a low-dimensionality representation of cases; a dissimilarity function based on both case attributes and case paths; and the use of entropy and silhouette to evaluate the uncertainty and quality, respectively, of each subset of cases. For each subset of cases, it is possible to reconstruct and evaluate a process model. These contributions are combined in an interactive tool to support process discovery, which we demonstrate using the event log from the BPI Challenge 2017.
Sonia Fiol González, Luiz Schirmer, Leonardo Quatrin Campagnolo, Ariane M. B. Rodrigues, Guilherme G. Schardong, Rafael França, Mauricio Lana, Gabriel D. J. Barbosa, Simone D. J. Barbosa, Marcus Poggi, Hélio Lopes
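A dissimilarity function over both case attributes and case paths, of the kind the chapter describes, can be sketched as a weighted sum of an attribute-mismatch term and a normalized edit distance over activity sequences. The 0.5/0.5 weighting and the specific distance choices here are assumptions for illustration, not the authors' exact formulation.

```python
# Sketch of a case dissimilarity combining attributes and paths.
# The 0.5/0.5 weighting and the specific distances are assumptions.

def edit_distance(a, b):
    # Classic Levenshtein distance over activity sequences.
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def dissimilarity(case_a, case_b, w_attr=0.5, w_path=0.5):
    # Attribute term: fraction of attributes with differing values.
    keys = set(case_a["attrs"]) | set(case_b["attrs"])
    mismatch = sum(case_a["attrs"].get(k) != case_b["attrs"].get(k)
                   for k in keys) / len(keys)
    # Path term: edit distance normalized by the longer path.
    pa, pb = case_a["path"], case_b["path"]
    path_dist = edit_distance(pa, pb) / max(len(pa), len(pb))
    return w_attr * mismatch + w_path * path_dist

a = {"attrs": {"type": "loan", "amount": "high"}, "path": ["A", "B", "C"]}
b = {"attrs": {"type": "loan", "amount": "low"},  "path": ["A", "C"]}
print(dissimilarity(a, b))  # 0.5 * 0.5 + 0.5 * (1/3) ≈ 0.4167
```

A pairwise matrix of such values is exactly what a low-dimensionality projection and silhouette evaluation, as mentioned above, can be computed from.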

Schema-Independent Querying and Manipulation for Heterogeneous Collections in NoSQL Document Stores

NoSQL document stores offer native support for efficiently storing documents with different schemas within the same collection. However, this flexibility makes it difficult and complex to formulate queries over, or manipulate, collections with multiple schemas: the user has to build complex queries, or to reformulate existing ones, whenever new schemas appear in the collection. In this paper, we propose a novel approach, grounded on formal foundations, for enabling schema-independent queries for querying and maintaining multi-structured documents. We introduce a query reformulation mechanism that consults a pre-constructed dictionary binding each possible path in the documents to all its corresponding absolute paths across the collection. We automate the process of query reformulation via a set of rules that cover most document store operators, such as select, project, and aggregate. In addition, we automate the reformulation of the classical manipulation operators (insert, delete, and update) so that the dictionary is updated according to the structural changes made to the collection. Both processes produce queries that are compatible with the native query engine of the underlying document store. To evaluate our approach, we conduct experiments on synthetic datasets. Our results show that the overhead induced when querying or updating is acceptable when compared to the effort of restructuring the data, or the time required to execute several queries corresponding to the different schemas inside the collection.
Hamdi Ben Hamadou, Faiza Ghozzi, André Péninou, Olivier Teste
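The dictionary-based reformulation described above can be sketched in a few lines: each user-facing field is bound to every absolute path under which it occurs across the collection's schemas, and a query on that field is rewritten into a disjunction over those paths. The field names and the MongoDB-style `$or` syntax here are illustrative assumptions, not the paper's exact rule set.

```python
# Sketch of schema-independent query reformulation via a path dictionary.
# Field names and the MongoDB-style $or syntax are illustrative.

def build_dictionary(documents):
    """Bind each leaf field name to all absolute paths it occurs under."""
    paths = {}
    def walk(node, prefix):
        for key, value in node.items():
            full = f"{prefix}.{key}" if prefix else key
            if isinstance(value, dict):
                walk(value, full)
            else:
                paths.setdefault(key, set()).add(full)
    for doc in documents:
        walk(doc, "")
    return paths

def reformulate(field, value, dictionary):
    """Rewrite a single-field query into a disjunction over absolute paths."""
    return {"$or": [{path: value} for path in sorted(dictionary[field])]}

docs = [
    {"title": "a", "venue": {"name": "ICEIS"}},
    {"meta": {"title": "b"}, "venue": "ICEIS"},
]
d = build_dictionary(docs)
print(reformulate("title", "a", d))
# {'$or': [{'meta.title': 'a'}, {'title': 'a'}]}
```

The reformulated query stays expressible in the store's native query language, which is the compatibility property the paper emphasizes.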

Recommending Semantic Concepts for Improving the Process of Semantic Modeling

Data lakes offer enterprises an easy-to-use approach for centralizing the collection of their data sets. However, by just filling the data lake with raw data sets, the probability of creating a data swamp increases. To overcome this drawback, the annotation of data sets with additional meta information is crucial. One way to provide data with such information is to use semantic models that enable the automatic interpretation and processing of data values and their context. However, creating semantic models for data sets containing hundreds of data attributes requires a lot of effort. To support this modeling process, external knowledge bases provide the background knowledge required to create sophisticated semantic models.
In order to benefit from this existing knowledge, we propose a novel modular recommendation framework for identifying the best-fitting semantic concepts for a set of data attribute labels. The framework, whose design is based on an intensive review of real-world data attribute labels, queries arbitrary pluggable knowledge bases and weights and aggregates their results. We evaluate our approach with different existing knowledge bases and compare it with existing state-of-the-art approaches. In addition, we integrate it into the semantic data platform ESKAPE and discuss how it simplifies the process of creating semantic models.
Alexander Paulus, André Pomp, Lucian Poth, Johannes Lipp, Tobias Meisen

Uncovering Hidden Links Between Images Through Their Textual Context

Using hyperlinks to enhance page ranking has been widely studied in the literature, the main motivation being that a hyperlink signals a page’s relevance. However, many hyperlinks on the web are used for navigation or marketing purposes. In addition, hyperlinks are created manually, so it is impossible to semantically link all similar pages. In our work, we propose to uncover hidden semantic links and create them automatically between all the images in a collection. To this end, we first map the textual context of images into topic distributions via the LDA technique, and then compute semantic similarities to create links. Experiments carried out on the Wikipedia Retrieval Task of ImageCLEF 2011 showed that the whole textual context of images is useful for uncovering hidden links and consequently enhancing retrieval accuracy.
Hatem Aouadi, Mouna Torjmen Khemakhem, Maher Ben Jemaa
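Once each image's textual context has been mapped to an LDA topic distribution, linking reduces to thresholding a pairwise similarity. A minimal sketch follows; the topic vectors, the 0.9 threshold, and the choice of cosine similarity are illustrative assumptions (the abstract does not fix the similarity measure).

```python
from math import sqrt

# Sketch: link images whose topic distributions are similar.
# Topic vectors, threshold, and cosine similarity are assumptions.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

topics = {
    "img1": [0.7, 0.2, 0.1],  # LDA topic distribution of img1's context
    "img2": [0.6, 0.3, 0.1],
    "img3": [0.1, 0.1, 0.8],
}

names = sorted(topics)
links = sorted(
    (a, b)
    for i, a in enumerate(names)
    for b in names[i + 1:]
    if cosine(topics[a], topics[b]) >= 0.9
)
print(links)  # [('img1', 'img2')]
```

In the toy data, img1 and img2 share a dominant topic and get linked, while img3's context is about something else entirely, mirroring how the approach links semantically similar images that no editor connected manually.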

Big Data Meets Process Science: Distributed Mining of MP-Declare Process Models

Process mining techniques allow the user to build a process model representing the process behavior recorded in the logs. Standard process discovery techniques produce a procedural process model as output. Recently, several approaches have been developed to extract declarative process models from logs; these have proven more suitable for analyzing flexible processes, which frequently depend on human decisions and are less predictable. However, when analyzing declarative constraints from perspectives other than control flow, such as data and resources, existing process mining techniques have turned out to be inefficient; computational performance thus remains a key challenge of declarative process discovery. In this paper, we present a high-performance approach for the discovery of multi-perspective declarative process models that is built upon the distributed big data processing paradigm MapReduce. Compared to recent work, we provide an in-depth analysis of an implementation based on Hadoop, a powerful big data framework, and give detailed information on the implemented prototype. We evaluated the effectiveness and efficiency of the approach on real-life event logs.
Christian Sturm, Stefan Schönig
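The map/reduce split underlying such an approach can be shown in miniature: mappers emit per-trace fulfillment counts for a candidate constraint (here, Declare's response(a, b): every occurrence of a is eventually followed by a b), and the reducer aggregates support across traces. This toy in-memory version only illustrates the decomposition; the paper's prototype distributes it via Hadoop, and the constraint and log below are made up.

```python
from functools import reduce

# Toy map/reduce decomposition for checking the Declare constraint
# response(a, b): every occurrence of a is eventually followed by a b.
# In-memory only; the paper's prototype distributes this via Hadoop.

def holds_response(trace, a, b):
    pending = False
    for event in trace:
        if event == a:
            pending = True
        elif event == b:
            pending = False
    return not pending

def mapper(trace):
    # Emit (satisfied, total) for one trace; runs independently per trace.
    return (1 if holds_response(trace, "a", "b") else 0, 1)

def reducer(acc, pair):
    # Aggregate per-trace counts into collection-wide support.
    return (acc[0] + pair[0], acc[1] + pair[1])

log = [["a", "c", "b"], ["a", "c"], ["b", "a", "b"], ["c"]]
satisfied, total = reduce(reducer, map(mapper, log), (0, 0))
print(f"support of response(a, b): {satisfied}/{total}")  # 3/4
```

Because each mapper touches only one trace, the per-trace checks parallelize trivially, which is the property the MapReduce formulation exploits for performance.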

