
2021 | Book

Research Challenges in Information Science

15th International Conference, RCIS 2021, Limassol, Cyprus, May 11–14, 2021, Proceedings


About this book

This book constitutes the proceedings of the 15th International Conference on Research Challenges in Information Science, RCIS 2021, which was planned to take place in Limassol, Cyprus, but had to be moved online due to the COVID-19 pandemic. The conference took place virtually during May 11-14, 2021. It focused on the special theme "Information Science and Global Crisis".

The scope of RCIS is summarized by the thematic areas of information systems and their engineering; user-oriented approaches; data and information management; business process management; domain-specific information systems engineering; data science; information infrastructures; and reflective research and practice.

The 29 full papers and 6 work-in-progress papers presented in this volume were carefully reviewed and selected from 99 submissions. They were organized in topical sections named: Business and Industrial Processes, Information Security and Risk Management, Data and Information Management, Domain-specific Information Systems Engineering, User-Centered Approaches, Data Science and Decision Support, and Information Systems and Their Engineering. The volume also contains 13 poster and demo papers, and 4 doctoral consortium papers. In addition, two-page summaries of tutorials and research project papers can be found in the back matter.

Table of Contents

Frontmatter

Business and Industrial Processes

Frontmatter
Robotic Process Automation in the Automotive Industry - Lessons Learned from an Exploratory Case Study

Robotic Process Automation (RPA) is the rule-based automation of business processes by software bots mimicking human interactions. The aims of this paper are to provide insights into three RPA use cases from the automotive domain as well as to derive the main challenges to be tackled when introducing RPA in this domain. By means of an exploratory case study, the three use cases are selected from real RPA projects. A systematic method for analyzing the cases is applied. The results are structured along the stages of the lifecycle model of software development. We provide information on every lifecycle stage and discuss the respective lessons learned. In detail, we derive five challenges that should be tackled for any successful RPA implementation in the automotive domain: (1) identifying the right process to automate, (2) understanding the factors influencing user acceptance, (3) explaining RPA to the users, (4) designing human-bot interaction, and (5) providing software development guidelines for RPA implementation.

Judith Wewerka, Manfred Reichert
A Framework for Comparative Analysis of Intention Mining Approaches

Intention Mining aims to manipulate large volumes of data, integrate information from different sources and formats, and extract useful insights from these data in order to discover users’ intentions. There is a large variety of intention mining techniques applied to different fields, such as information retrieval, security, robotics, network forensics, bioinformatics, learning, map visualization, and games. However, no systematic review has been conducted on this recent research domain. There is a need to understand what Intention Mining is, what its purpose is, and which techniques and tools exist to mine intentions. In this paper, we propose a comparison framework to structure and describe the domain of Intention Mining, as a step towards a complete systematic literature review of this field. We validate our comparison framework by applying it to five relevant approaches in the domain.

Rébecca Déneckère, Elena Kornyshova, Charlotte Hug
Exploring the Challenge of Automated Segmentation in Robotic Process Automation

Robotic Process Automation (RPA) is an emerging technology that allows organizations to automate intensive repetitive tasks (or simply routines) previously performed by a human user on the User Interface (UI) of web or desktop applications. RPA tools are able to capture in dedicated UI logs the execution of several routines and then emulate their enactment in place of the user by means of a software (SW) robot. A UI log can record information about many routines, whose actions are mixed in an order that reflects the particular order of their execution by the user, making their automated identification far from trivial. The issue of automatically understanding which user actions contribute to a specific routine inside the UI log is known as segmentation. In this paper, we leverage a concrete use case to explore the issue of segmentation of UI logs, identifying all its potential variants and presenting an up-to-date overview that discusses to what extent such variants are supported by existing literature approaches. Moreover, we offer points of reference for future research based on the findings of this paper.

Simone Agostinelli, Andrea Marrella, Massimo Mecella
Adapting the CRISP-DM Data Mining Process: A Case Study in the Financial Services Domain

Data mining techniques have gained widespread adoption over the past decades, particularly in the financial services domain. To achieve sustained benefits from these techniques, organizations have adopted standardized processes for managing data mining projects, most notably CRISP-DM. Research has shown that these standardized processes are often not used as prescribed, but instead are extended and adapted to address a variety of requirements. To improve the understanding of how standardized data mining processes are extended and adapted in practice, this paper reports on a case study in a financial services organization, aimed at identifying perceived gaps in the CRISP-DM process and characterizing how CRISP-DM is adapted to address these gaps. The case study was conducted based on documentation from a portfolio of data mining projects, complemented by semi-structured interviews with project participants. The results reveal 18 perceived gaps in CRISP-DM, alongside their perceived impact and the mechanisms employed to address them. The identified gaps are grouped into six categories. The study provides practitioners with a structured set of gaps to be considered when applying CRISP-DM or similar processes in financial services. Also, a number of the identified gaps are generic and applicable to other sectors with similar concerns (e.g., privacy), such as telecom and e-commerce.

Veronika Plotnikova, Marlon Dumas, Fredrik Milani
A Method for Modeling Process Performance Indicators Variability Integrated to Customizable Processes Models

Process Performance Indicators (PPIs) are quantifiable metrics for evaluating business process performance, providing essential information for decision-making with regard to efficiency and effectiveness. Nowadays, customizable process models and PPIs are usually modeled separately, especially when dealing with PPI variability. Moreover, modeling PPI variants with no explicit link to the related customizable process generates redundant models, making adjustment and maintenance difficult. Appropriate methods and tools are needed to enable the integration and support of PPI variability in customizable process models. In this paper, we propose a method based on the Process Performance Indicator Calculation Tree (PPICT), which allows modeling PPI variability linked to customizable processes modeled with the Business Process Feature Model (BPFM) approach. The Process Performance Indicator Calculation (PPIC) method supports PPI variability modeling through five design stages, which concern the PPICT design, the PPICT-BPFM integration, and the configuration of required PPIs aligned with process activities. The PPIC method is supported by a metamodel and a graphical notation, and has been implemented in a prototype using the ADOxx platform. A partial user-centered evaluation of the PPICT was carried out in a real utility distribution case to model PPI variability linked to a customizable process model.

Diego Diaz, Mario Cortes-Cornax, Agnès Front, Cyril Labbe, David Faure

Information Security and Risk Management

Frontmatter
Novel Perspectives and Applications of Knowledge Graph Embeddings: From Link Prediction to Risk Assessment and Explainability

Knowledge graph representation is an important embedding technology that supports a variety of machine learning related applications. By learning the distributed representation of multi-relational data, knowledge embedding models are supposed to efficiently deal with the semantic relatedness of their constituents. However, failing in the fundamental task of creating an appropriate form to represent knowledge harms any attempt of designing subsequent machine learning tasks. Several knowledge embedding methods have been proposed in the last decade. Although there is a consensus on the idea that enhanced approaches are more efficient, more complex projections in the hyperspace that indeed favor link prediction (or knowledge graph completion) can result in a loss of semantic similarity. We propose a new evaluation task that aims at performing risk assessment on domain-specific categorized multi-relational datasets, designed as a classification problem based on the resulting embeddings. We assess the quality of embedding representations based on the synergy of the resulting clusters of target subjects. We show that more sophisticated embedding approaches do not necessarily favor embedding quality, and the traditional link prediction validation protocol is a weak metric to measure the quality of embedding representation. Finally, we present insights about using the synergy analysis to provide risk assessment explainability based on the probability distribution of feature-value pairs within embedded clusters.

Hegler C. Tissot
How FAIR are Security Core Ontologies? A Systematic Mapping Study

Recently, ontology-based approaches to security, in particular to information security, have been recognized as a relevant challenge and as an area of research interest of its own. As the number of ontologies about security grows for supporting different applications, semantic interoperability issues emerge. Relatively little attention has been paid to the ontological analysis of the concept of security understood as a broad application-independent security ontology. Core (or reference) ontologies of security cover this issue to some extent, enabling multiple applications crossing domains of security (information systems, economics, public health, crime, etc.). In this paper, we investigate the current state of the art on Security Core Ontologies. We select, analyze, and categorize studies on this topic, supporting a future ontological analysis of security, which could ground a well-founded security core ontology. Notably, we show that: most existing ontologies are not publicly findable/accessible; foundational ontologies are under-explored in this field of research; there seems to be no common ontology of security. From these findings, we make the case for the need of a FAIR Core Security Ontology.

Ítalo Oliveira, Mattia Fumagalli, Tiago Prince Sales, Giancarlo Guizzardi
PHIN: A Privacy Protected Heterogeneous IoT Network

The increasing growth of the Internet of Things (IoT) escalates a broad range of privacy concerns, such as inconsistencies between an IoT application and its privacy policy, or inference of personally identifiable information (PII) of users without their knowledge. To address these challenges, we propose and develop a privacy protection framework called PHIN, for a heterogeneous IoT network, which aims to evaluate privacy risks associated with a new IoT device before it is deployed within a network. We define a methodology and set of metrics to identify and calculate the level of privacy risk of an IoT device and to provide two-layered privacy notices. We also develop a privacy taxonomy and data practice mapping schemas by analyzing 75 randomly selected privacy policies from 12 different categories to help us identify and extract IoT data practices. We conceptually analyze our framework with four smart home IoT devices from four different categories. The result of the evaluation shows the effectiveness of PHIN in helping users understand privacy risks associated with a new IoT device and make an informed decision prior to its installation.

Sanonda Datta Gupta, Aubree Nygaard, Stephen Kaplan, Vijayanta Jain, Sepideh Ghanavati
PriGen: Towards Automated Translation of Android Applications’ Code to Privacy Captions

Mobile applications are required to give privacy notices to users when they collect or share personal information. Creating consistent and concise privacy notices can be a challenging task for developers. Previous work has attempted to help developers create privacy notices through a questionnaire or predefined templates. In this paper, we propose a novel approach and a framework, called PriGen, that extends this prior work. PriGen uses static analysis to identify Android applications’ code segments which process personal information (i.e., permission-requiring code segments) and then leverages a Neural Machine Translation model to translate them into privacy captions. We present the initial analysis of our translation task for ~300,000 code segments.

Vijayanta Jain, Sanonda Datta Gupta, Sepideh Ghanavati, Sai Teja Peddinti
CompLicy: Evaluating the GDPR Alignment of Privacy Policies - A Study on Web Platforms

The European Union General Data Protection Regulation (GDPR) came into effect on May 25, 2018, imposing new rights and obligations for the collection and processing of EU citizens’ personal data. Inevitably, privacy policies of systems handling such data are required to be adapted accordingly. Specific rights and provisions are now required to be communicated to the users, as specified in GDPR Articles 12-14. This work aims to provide insights on whether privacy policies are aligned with the GDPR in this regard, i.e., whether they include the needed information, formulated in sets of terms, by studying the paradigm of web platforms. We present: (1) a defined set of 89 terms, in 7 groups, that need to be included within a system’s privacy policy, resulting from a study of the GDPR and from an examination and analysis of real-life web platforms’ privacy policies; (2) the CompLicy tool, which first crawls a given web platform to infer whether a privacy policy page exists and, if it does, subsequently parses it, identifying GDPR terms and groups within it, and finally provides results for the inclusion of the necessary GDPR information within the aforementioned policy; (3) the evaluation of 148 existing web platforms from 5 different sectors: (i) banking, (ii) e-commerce, (iii) education, (iv) travelling, and (v) social media, presenting the results.

Evangelia Vanezi, George Zampa, Christos Mettouris, Alexandros Yeratziotis, George A. Papadopoulos
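
The crawl-and-match step described above can be approximated in a few lines. The following is a hypothetical sketch, not the CompLicy implementation: the term list is a tiny illustrative subset of the paper's 89 terms, and the URL handling is simplified.

```python
# Hypothetical sketch of the policy-checking idea: fetch a page, strip the
# HTML, and record which GDPR-related terms from each group appear in it.
import requests
from bs4 import BeautifulSoup

GDPR_TERM_GROUPS = {  # illustrative subset, not the paper's 89 terms
    "controller_identity": ["data controller", "contact details"],
    "data_subject_rights": ["right to access", "right to erasure", "withdraw consent"],
    "processing_purposes": ["purpose of processing", "legal basis"],
}

def check_policy(url: str) -> dict:
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(" ").lower()
    # For each group, keep the terms that occur in the policy text.
    return {group: [t for t in terms if t in text]
            for group, terms in GDPR_TERM_GROUPS.items()}
```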

Data and Information Management

Frontmatter
WEIR-P: An Information Extraction Pipeline for the Wastewater Domain

We present the MeDO project, aimed at developing resources for text mining and information extraction in the wastewater domain. We developed a specific Natural Language Processing (NLP) pipeline named WEIR-P (WastewatEr InfoRmation extraction Platform), which identifies the entities and relations to be extracted from texts, pertaining to wastewater treatment, accidents and works, organizations, spatio-temporal information, measures, and water quality. We present and evaluate the first version of the NLP system, which was developed to automate the extraction of the aforementioned annotations from texts and their integration with existing domain knowledge. The preliminary results obtained on the Montpellier corpus are encouraging and show how a mix of supervised and rule-based techniques can be used to extract useful information and reconstruct the various phases of the extension of a given wastewater network. While the NLP and Information Extraction (IE) methods used are state of the art, the novelty of our work lies in their adaptation to the domain, and in particular in the wastewater management conceptual model, which defines the relations between entities. French resources are less developed in the NLP community than English ones; the datasets obtained in this project are another original aspect of this work.

Nanée Chahinian, Thierry Bonnabaud La Bruyère, Francesca Frontini, Carole Delenne, Marin Julien, Rachel Panckhurst, Mathieu Roche, Lucile Sautot, Laurent Deruelle, Maguelonne Teisseire
Recommendations for Data-Driven Degradation Estimation with Case Studies from Manufacturing and Dry-Bulk Shipping

Predictive planning of maintenance windows reduces the risk of unwanted production or operational downtimes and helps to keep machines, vessels, or any system in optimal condition. The quality of such a data-driven model for the prediction of remaining useful lifetime is largely determined by the data used to train it. Training data with qualitative information, such as labeled data, is extremely rare, so classical similarity models cannot be applied. Instead, degradation models extrapolate future conditions from historical behaviour by regression. Research offers numerous methods for predicting the remaining useful lifetime by degradation regression. However, the implementation of existing approaches poses significant challenges to users due to a lack of comparability and best practices. This paper provides a general approach for composing existing process steps, such as health stage classification, frequency analysis, feature extraction, or regression models, for the estimation of degradation. To assess the effectiveness of and the relations between these steps, we run several experiments in two comprehensive case studies, one from manufacturing and one from dry-bulk shipping. We conclude with recommendations for composing a data-driven degradation estimation process.

Nils Finke, Marisa Mohr, Alexander Lontke, Marwin Züfle, Samuel Kounev, Ralf Möller
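
To make the degradation-regression idea above concrete, here is a minimal sketch: smooth a sensor signal into a health indicator and linearly extrapolate the time at which it crosses a failure threshold. The rolling-mean feature and the linear model are illustrative assumptions, not the paper's recommended composition.

```python
# Minimal degradation-regression sketch: health indicator -> linear trend ->
# estimated remaining steps until a failure threshold is crossed.
import numpy as np
from sklearn.linear_model import LinearRegression

def estimate_rul(signal: np.ndarray, failure_threshold: float, window: int = 10):
    # Smooth the raw signal into a health indicator via a rolling mean.
    health = np.convolve(signal, np.ones(window) / window, mode="valid")
    t = np.arange(len(health)).reshape(-1, 1)
    model = LinearRegression().fit(t, health)
    slope, intercept = model.coef_[0], model.intercept_
    if slope <= 0:
        return np.inf  # no degradation trend detected
    t_fail = (failure_threshold - intercept) / slope
    return max(t_fail - len(health), 0.0)  # remaining steps until threshold
```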
Detection of Event Precursors in Social Networks: A Graphlet-Based Method

The increasing availability of data from online social networks attracts the interest of researchers, who seek to build algorithms and machine learning models to analyze users’ interactions and behaviors. Different methods have been developed to detect remarkable precursors preceding events, using text mining and Machine Learning techniques on documents, or using network topology with graph patterns. Our approach aims at analyzing social network data, through a graphlet enumeration algorithm, to identify event precursors and to study their contribution to the event. We test the proposed method on two different types of social network data sets: real-world events (Lubrizol fire, EU law discussion) and general events (Facebook and MathOverflow). We also contextualize the results by studying the position (orbit) of important nodes in the graphlets, which are assumed to be event precursors. After analysis of the results, we show that some graphlets can be considered precursors of events.

Hiba Abou Jamra, Marinette Savonnet, Éric Leclercq
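
A minimal sketch of the graphlet-enumeration step on which such an analysis rests, assuming an undirected interaction graph in networkx. It counts only 3-node graphlets (open and closed triads), whereas the paper also considers node orbits and larger graphlets.

```python
# Count connected 3-node graphlets (2-paths and triangles) in a graph.
from itertools import combinations
import networkx as nx

def count_3node_graphlets(G: nx.Graph):
    counts = {"path": 0, "triangle": 0}
    for trio in combinations(G.nodes, 3):
        e = G.subgraph(trio).number_of_edges()
        if e == 2:
            counts["path"] += 1       # open triad (2-path)
        elif e == 3:
            counts["triangle"] += 1   # closed triad
    return counts

G = nx.karate_club_graph()  # small example graph
print(count_3node_graphlets(G))
```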
Developing and Operating Artificial Intelligence Models in Trustworthy Autonomous Systems

Companies dealing with Artificial Intelligence (AI) models in Autonomous Systems (AS) face several problems, such as users’ lack of trust in adverse or unknown conditions, gaps between software engineering and AI model development, and operation in a continuously changing operational environment. This work-in-progress paper aims to close the gap between the development and operation of trustworthy AI-based AS by defining an approach that coordinates both activities. We synthesize the main challenges of AI-based AS in industrial settings. We reflect on the research efforts required to overcome these challenges and propose a novel, holistic DevOps approach to put it into practice. We elaborate on four research directions: (a) increased users’ trust by monitoring operational AI-based AS and identifying self-adaptation needs in critical situations; (b) integrated agile process for the development and evolution of AI models and AS; (c) continuous deployment of different context-specific instances of AI models in a distributed setting of AS; and (d) holistic DevOps-based lifecycle for AI-based AS.

Silverio Martínez-Fernández, Xavier Franch, Andreas Jedlitschka, Marc Oriol, Adam Trendowicz
Matching Conservation-Restoration Trajectories: An Ontology-Based Approach

The context of our work is a project at the French National Library (BnF), which aims at designing a decision support system for conservation experts. The goal of this system is to analyze the conservation history of documents in order to enable reliable predictions of their physical state. The present work is the first step towards such a system. We propose a representation of a document’s conservation history as a conservation-restoration trajectory, and we specify its different types of events. We also propose a trajectory matching process which computes a similarity score between two conservation-restoration trajectories, taking into account the terminological heterogeneity of the events. We introduce an ontological model, validated by domain experts, which will be used during the pairwise comparison of events in two trajectories. Finally, we present experiments showing the effectiveness of our approach.

Alaa Zreik, Zoubida Kedad

Domain-Specific Information Systems Engineering

Frontmatter
A Transactional Approach to Enforce Resource Availabilities: Application to the Cloud

This paper looks into the availability of resources, exemplified with the cloud, in an open and dynamic environment like the Internet. A growing number of users consume resources to complete their operations, requiring a better way to manage these resources in order, for example, to avoid conflicts. Resource availability is defined using a set of consumption properties (limited, limited-but-renewable, and non-shareable) and is enforced at run-time using a set of transactional properties (pivot, retriable, and compensatable). In this paper, a CloudSim-based system simulates how mixing consumption and transactional properties makes it possible to capture users’ needs and requirements in terms of what cloud resources they need, for how long, and to what extent they tolerate the unavailability of these resources.

Zakaria Maamar, Mohamed Sellami, Fatma Masmoudi
A WebGIS Interface Requirements Modeling Language

WebGIS applications have become popular due to technological advances in location and sensor technology, with diverse examples with different objectives becoming available. However, there is a lack of requirements elicitation approaches for WebGIS applications, restricting the communication between stakeholders and compromising the systematization of development and the overall quality of the resulting systems. In this paper, we present the WebGIS Interface Requirements Modeling Language (WebGIS IRML), developed to support communication between stakeholders and developers, addressing user interface requirements during the development process of a WebGIS application. WebGIS IRML is supported by a requirements model editor, which was developed using Model-Driven Development (MDD) techniques. We also describe an experiment performed to evaluate the language, involving 30 participants (mostly IT engineers), measuring how easily the models produced with the editor are understood and capturing the participants’ feedback. Overall, the results were quite positive, encouraging the use of the language for WebGIS development.

Roberto Veloso, João Araujo, Armanda Rodrigues
CoV2K: A Knowledge Base of SARS-CoV-2 Variant Impacts

In spite of the current relevance of the topic, there is no universally recognized knowledge base about SARS-CoV-2 variants; viral sequences deposited at recognized repositories are still very few, and the process of tracking new variants is not coordinated. CoV2K is a manually curated knowledge base providing an organized collection of information about SARS-CoV-2 variants, extracted from the scientific literature; it features a taxonomy of variant impacts, organized according to three main categories (protein stability, epidemiology, and immunology) and including levels for these effects (higher, lower, null) resulting from a coherent interpretation of research articles. CoV2K is integrated with ViruSurf, hosted at Politecnico di Milano; ViruSurf is globally the largest database of curated viral sequences and variants, integrated from deposition repositories such as COG-UK, GenBank, and GISAID. Thanks to such integration, variants documented in CoV2K can be analyzed and searched over large volumes of nucleotide and amino acid sequences, e.g., for co-occurrence and impact agreement; the paper sketches some of the data analysis tests that are currently under development.

Ruba Al Khalaf, Tommaso Alfonsi, Stefano Ceri, Anna Bernasconi
A Language-Based Approach for Predicting Alzheimer Disease Severity

Alzheimer’s disease (AD) is the leading cause of neurodegenerative dementia and is now considered one of the most costly chronic diseases. Automatic diagnosis and monitoring of Alzheimer’s disease may therefore have a significant effect on society as well as on patient well-being. The Mini Mental State Examination (MMSE) is a prominent method for identifying whether a person might have dementia and how severe it is. Such methods are time-consuming and require well-trained personnel to administer. This study investigates another method for predicting the MMSE score, based on people’s language deterioration, using linguistic information from speech samples of a picture description task. We use a regression model over a set of 169 patients with different degrees of dementia and achieve a Mean Absolute Error (MAE) of 3.6 for MMSE. When focusing on selecting the best features, we improve the MAE to 0.55. The obtained results indicate that the proposed taxonomy of linguistic features could operate as a cheap dementia test, probably also in non-clinical situations.

Randa Ben Ammar, Yassine Ben Ayed
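
A hedged sketch of the regression setup described above: linguistic features in, MMSE score out, evaluated with MAE. The synthetic data, the feature selector, and the ridge estimator are assumptions for illustration; the paper's actual features come from speech transcripts.

```python
# Feature selection + regression pipeline predicting an MMSE-like score.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(169, 40))      # 169 patients x 40 linguistic features (synthetic)
y = rng.uniform(0, 30, size=169)    # MMSE scores range from 0 to 30

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(SelectKBest(f_regression, k=10), Ridge())
model.fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```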
Towards a Digital Maturity Balance Model for Public Organizations

The phenomenon of digital transformation is currently affecting almost all sectors of activity. Both private and public organizations face the challenge of the rapid growth of digitization. Measuring the digital maturity of an organization is a crucial step in the digitization process. The characteristics and challenges of digital transformation are specific to each sector of activity and even to each type of organization. Therefore, each of them may require a specific digital maturity model. In this work, we pay particular attention to the public sector and develop a digital maturity balance model for public organizations. The model is built on two axes: digital maturity and importance ratio, and aims to measure the balance between them. Each maturity dimension is assessed taking into account the importance ratio of this dimension in the organization.

Mateja Nerima, Jolita Ralyté

User-Centred Approaches

Frontmatter
CCOnto: The Character Computing Ontology

Especially in light of the COVID-19 pandemic, which is influencing human behavior, there is a clear and rising need for joining psychology and computer science to provide technology interventions for people suffering from negative feelings and for behavior change. According to Character Computing and the Character-Behavior-Situation (CBS) triad, behavior is driven by an individual’s character and the situation they are in. Accordingly, we developed the first full ontology modeling the CBS triad, with the aim of providing domain experts with an intelligent interface for modeling, testing, and unifying hypotheses on the interactions between character, behavior, and situation. The ontology consists of a core module modeling character-based interactions and of use-case- and domain-specific modules. It was developed by computer scientists and psychology domain experts following an iterative process. The main contributions of this paper include updating the earlier prototypical version of the ontology based on feedback from psychology experts and existing literature, adding more tools to it for enabling domain expert interaction, and providing the final ontology. Steps taken towards evaluating and validating the ontology are outlined.

Alia El Bolock, Nada Elaraby, Cornelia Herbert, Slim Abdennadher
Assessment of Malicious Tweets Impact on Stock Market Prices

Accurate stock market prediction is of paramount importance for traders. Professional ones typically derive financial market decision-making from fundamental and technical indicators. However, stock markets are very often influenced by external human factors, like sentiment information that can be contained in online social networks. As a result, micro-blogs are more and more exploited to predict prices and traded volumes of stocks in financial markets. Nevertheless, it has been shown that a large volume of the content shared on micro-blogs is published by malicious entities, especially spambots. In this paper, we introduce a novel deep learning-based approach for financial time series forecasting based on social media. Through a Generative Adversarial Network (GAN) model, we gauge the impact of malicious tweets, posted by spambots, on financial markets, mainly the closing price. We evaluate the performance of the proposed approach using real-world data of stock prices and tweets related to Facebook Inc. The experiments carried out show that the proposed approach outperforms the two baselines, LSTM and SVR, using different evaluation metrics. In addition, the obtained results prove that spambot tweets potentially grasp investors’ attention and induce decisions to buy and sell.

Tatsuki Ishikawa, Imen Ben Sassi, Sadok Ben Yahia
Integrating Adaptive Mechanisms into Mobile Applications Exploiting User Feedback

Mobile applications have become a commodity in multiple daily scenarios. Their increasing complexity has led mobile software ecosystems to become heterogeneous in terms of hardware specifications, features, and context of use, among others. For their users, fully exploiting their potential has become challenging. While enacting software systems with adaptation mechanisms has proven to ease this burden for users, mobile devices present specific challenges related to privacy and security concerns. Nevertheless, rather than being a limitation, users can play a proactive role in the adaptation loop by providing valuable feedback for runtime adaptation. To this end, we propose the use of chatbots to interact with users through a human-like smart conversational process. We depict a work-in-progress proposal of an end-to-end framework to integrate semi-automatic adaptation mechanisms for mobile applications. These mechanisms include the integration of both implicit and explicit user feedback for autonomous user categorization and execution of enactment action plans. We illustrate the applicability of such techniques through a set of scenarios from the Mozilla mobile applications suite. We envisage that our proposal will improve user experience by bridging the gap between users’ needs and the capabilities of their mobile devices through an intuitive and minimally invasive conversational mechanism.

Quim Motger, Xavier Franch, Jordi Marco
Conceptual Modeling Versus User Story Mapping: Which is the Best Approach to Agile Requirements Engineering?

User stories are primary requirements artifacts within agile methods. They consist of short sentences written in natural language expressing units of functionality for the to-be system. Despite their simple format, when modelers are faced with a set of user stories they might have difficulty sorting them, evaluating their redundancy, and assessing their relevancy in the effort to prioritize them. The present paper tests the ability of modelers to understand the requirements problem through a visual representation (named the Rationale Tree), which is a conceptual model built out of a set of user stories. The paper builds upon and extends previous work on the feasibility of generating such a representation from a set of user stories by comparing the performance of the Rationale Tree with the User Story Mapping approach. This is achieved by performing a two-group quantitative comparative study. The identified comparative variables for each method were understandability, recognition of missing requirements/epics/themes, and adaptability. The Rationale Tree was not easy to understand and did not perform as anticipated in assisting with the recognition of missing requirements/epics/themes. However, its employment allowed modelers to offer qualitative representations of a specific software problem. Overall, the present experiment evaluates whether a conceptual model could be a consistent solution towards the holistic comprehension of a software development problem within an agile setting, compared to more ‘conventional’ techniques used so far.

Konstantinos Tsilionis, Joris Maene, Samedi Heng, Yves Wautelet, Stephan Poelmans
Design and Execution of ETL Process to Build Topic Dimension from User-Generated Content

Recent research studies on multi-dimensional design have combined business data with User-Generated Content (UGC). They have integrated new analytical aspects, such as users’ behavior, sentiments, opinions, or topics of interest, to improve decisional analysis. In this paper, we deal with the complexity of designing the topic dimension schema due to the dynamicity and heterogeneity of its hierarchies. Researchers have partially addressed this issue by offering technical solutions for topic detection without focusing on the Extraction, Transformation and Loading (ETL) process allowing their integration into a multi-dimensional schema. Our contribution consists in modeling ETL steps that generate valid topic dimension hierarchies from UGC informal texts. In this research work, we propose a generic ETL4SocialTopic process model defining a set of operations executed in a specific order. The implementation of these steps offers a set of customized jobs simplifying the ETL designer’s work by automating a large part of the process. Experimental results show the consistency of ETL4SocialTopic in designing valid topic dimension schemas in several contexts.

Afef Walha, Faiza Ghozzi, Faiez Gargouri
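
To illustrate the kind of topic-detection step such an ETL process integrates, here is a small sketch using LDA over UGC texts and loading the result into a dimension table. The documents, hyperparameters, and flat (non-hierarchical) output are simplifying assumptions, not the ETL4SocialTopic design.

```python
# Extract topics from short UGC texts with LDA, then load a topic dimension.
import pandas as pd
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["traffic jam downtown again", "new tram line opens", "bus strike tomorrow"]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
topic_dim = pd.DataFrame({
    "topic_id": range(lda.n_components),
    "top_terms": [", ".join(terms[i] for i in comp.argsort()[-3:][::-1])
                  for comp in lda.components_],  # 3 highest-weight terms per topic
})
print(topic_dim)
```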

Data Science and Decision Support

Frontmatter
Predicting Process Activities and Timestamps with Entity-Embeddings Neural Networks

Predictive process monitoring aims at predicting the evolution of running traces based on models extracted from historical event logs. Standard process prediction techniques are limited to the prediction of the next activity in a running trace. As a consequence, processes with complex topology (i.e., with several events having similar start/end times) are impossible to predict with these classical multinomial classification approaches. In this paper, the goal is to exploit an original feature engineering technique which converts the historical event log of a process into different topological and temporal features, capturing the behavior and execution context of previous events. These features are then used to train an Entity Embeddings Neural Network in order to learn a model able to predict, in a one-shot manner, both the remaining activities until the end of a running trace and the associated timestamps. Experiments show that this approach globally outperforms previous work for both types of predictions.

Benjamin Dalmas, Fabrice Baranski, Daniel Cortinovis
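
A minimal sketch of an entity-embeddings network with two output heads, in the spirit of the approach above, though simplified: it predicts one next activity plus a remaining time, whereas the paper predicts all remaining activities in one shot. Vocabulary size, window length, and layer sizes are illustrative assumptions.

```python
# Entity-embeddings network: embed categorical activity codes, then branch
# into a classification head (activity) and a regression head (time).
import tensorflow as tf

n_activities, seq_len = 20, 5  # vocabulary and window size (assumed)
inp = tf.keras.Input(shape=(seq_len,), dtype="int32")
x = tf.keras.layers.Embedding(input_dim=n_activities, output_dim=8)(inp)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(64, activation="relu")(x)
act_out = tf.keras.layers.Dense(n_activities, activation="softmax", name="activity")(x)
time_out = tf.keras.layers.Dense(1, name="timestamp")(x)

model = tf.keras.Model(inp, [act_out, time_out])
model.compile(optimizer="adam",
              loss={"activity": "sparse_categorical_crossentropy", "timestamp": "mse"})
model.summary()
```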
Data-Driven Causalities for Strategy Maps

The Strategy Map is a strategic tool that allows companies to formulate, control, and communicate their strategy and positively impact their performance. Since the creation of the Strategy Map in 2000, the methodologies applied to develop it have evolved over the past two decades but still rely solely on human input. In practice, Strategy Map causalities - the core elements of this tool - are identified by managers' opinion and judgment, which may result in a lack of accuracy, completeness, and longitudinal perspective. Even though authors in the literature have highlighted these issues in the past, few recommendations have been made as to how to address them. In this paper, we present preliminary work on the use of business operational data and data mining techniques to systematize the detection of causalities in Strategy Maps. We describe a framework we plan to develop using time series techniques and Granger causality tests in order to increase the efficiency of this strategic tool. We demonstrate the feasibility and relevance of the methodology using data from skeyes, the Belgian air traffic control company.

Lhorie Pirnay, Corentin Burnay
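
The Granger-causality building block mentioned above is readily available in statsmodels; the sketch below tests, on synthetic series, whether one KPI helps predict another. The KPI names are hypothetical, and the test conventionally asks whether the second column Granger-causes the first.

```python
# Granger causality test between two synthetic KPI time series.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
training_hours = rng.normal(size=120)               # hypothetical driver KPI
# Outcome KPI lags the driver by one period, plus noise.
incidents = np.roll(training_hours, 1) + 0.5 * rng.normal(size=120)

# Column order matters: does column 2 (training_hours) cause column 1?
data = np.column_stack([incidents, training_hours])
result = grangercausalitytests(data, maxlag=3)
```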
A Novel Personalized Preference-based Approach for Job/Candidate Recommendation

Although fuzzy-based recommendation systems are widely used in several services, few efforts have been made to investigate the efficiency of such approaches in job recommendation applications. In fact, most existing fuzzy-based job recommendation systems consider only two crisp criteria: Curriculum Vitae (CV) content and job description. Other factors, like users’ personalized needs and the fuzzy nature of their explicit and implicit preferences, are totally ignored. To fill this gap, this paper introduces a new fuzzy personalized job recommendation approach aiming to provide more accurate and selective job/candidate matching. To this end, our contribution considers a Fuzzy NoSQL Preference Model to define candidates’ profiles. Based on this modeling, an efficient Fuzzy Matching/Scoring algorithm is then applied to select the top-k personalized results. The proposed framework has been added as an extension to the TeamBuilder software. Through extensive experimentation on real data sets, the achieved results corroborate the efficiency of our approach in providing accurate and personalized results.

Olfa Slama, Patrice Darmon
A Scalable Knowledge Graph Embedding Model for Next Point-of-Interest Recommendation in Tallinn City

With the rapid growth of location-based social networks (LBSNs), the task of next Point Of Interest (POI) recommendation has become a trending research topic, as it provides key information for users to explore unknown places. However, most state-of-the-art next POI recommendation systems fall short of considering the multiple heterogeneous factors of both POIs and users to recommend the next targeted location. Furthermore, the cold-start problem is one of the most challenging issues in traditional recommender systems. In this paper, we introduce a new Scalable Knowledge Graph Embedding Model for the next POI recommendation problem, called Skgem. Its main originality is that it relies on a neural network-based embedding method (node2vec) that automatically learns low-dimensional node representations to formulate and incorporate all heterogeneous factors into one contextual directed graph. Moreover, it provides various POI recommendation groups for cold-start users, e.g., nearby, by time, by tag, etc. Experiments, carried out on a location-based social network (Flickr) dataset collected in the city of Tallinn (Estonia), demonstrate that our approach achieves better results and sharply outperforms the baseline methods. Source code is publicly available at: https://github.com/Ounoughi-Chahinez/SKGEM

Chahinez Ounoughi, Amira Mouakher, Muhammad Ibraheem Sherzad, Sadok Ben Yahia
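
An illustrative sketch of the embedding step in such an approach, using the node2vec package on a tiny user-POI graph; the graph content and hyperparameters are assumptions, not the paper's setup.

```python
# Learn node embeddings on a heterogeneous user/POI/tag graph with node2vec,
# then use nearest neighbours in embedding space as candidate recommendations.
import networkx as nx
from node2vec import Node2Vec

G = nx.Graph()
G.add_edges_from([("user_1", "poi_oldtown"), ("user_1", "poi_harbour"),
                  ("user_2", "poi_oldtown"), ("poi_oldtown", "tag_history")])

n2v = Node2Vec(G, dimensions=32, walk_length=10, num_walks=50, quiet=True)
model = n2v.fit(window=5, min_count=1)  # gensim Word2Vec under the hood
print(model.wv.most_similar("poi_oldtown"))
```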
Spatial and Temporal Cross-Validation Approach for Misbehavior Detection in C-ITS

This paper proposes a novel approach to applying machine learning techniques to data collected from emerging cooperative intelligent transportation systems (C-ITS) using Vehicle-to-Vehicle (V2V) broadcast communications. Our approach considers temporal and spatial aspects of the collected data to avoid correlation between the training set and the validation set. Connected vehicles broadcast messages containing safety-critical information at high frequency, so detecting faulty messages induced by attacks is crucial for road users’ safety. High-frequency broadcast makes the temporal aspect decisive in building the cross-validation sets at the data preparation level of the data mining cycle. Therefore, we conduct a statistical study considering various fake position attacks. We statistically examine the difficulty of detecting the faulty messages and generate useful features from the raw data. Then, we apply machine learning methods for misbehavior detection and discuss the obtained results. We apply our data splitting approach to message-based and communication-based data modeling and compare it to traditional splitting approaches. Our study shows that the performance of traditional splitting approaches is biased, as they cause data leakage: we observe a 10% drop in performance in the testing phase compared to our approach. This result implies that traditional approaches cannot be trusted to give equivalent performance once deployed and are thus not compatible with V2V broadcast communications.

Mohammed Lamine Bouchouia, Jean-Philippe Monteuuis, Ons Jelassi, Houda Labiod, Wafa Ben Jaballah, Jonathan Petit
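
The core of the splitting idea above can be stated compactly: order messages by time and cut at a timestamp, so no test message precedes a training message. The column names below are assumptions; the paper additionally considers a spatial dimension, which could be handled analogously by holding out whole regions.

```python
# Temporal train/test split that avoids the leakage a random shuffle causes.
import pandas as pd

def temporal_split(df: pd.DataFrame, time_col: str = "timestamp", frac: float = 0.8):
    df = df.sort_values(time_col)
    cut = df[time_col].quantile(frac)
    train = df[df[time_col] <= cut]
    test = df[df[time_col] > cut]  # strictly later messages only
    return train, test
```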

Information Systems and Their Engineering

Frontmatter
Towards an Efficient Approach to Manage Graph Data Evolution: Conceptual Modelling and Experimental Assessments

This paper describes a new temporal graph modelling solution to organize and memorize changes in a business application. To do so, we enrich the basic graph by adding the concepts of states and instances. Our model first has the advantage of representing the complete temporal evolution of the graph at the level of: (i) the graph structure, (ii) the attribute set of entities/relationships, and (iii) the attribute values of entities/relationships. It also has the advantage of memorizing evolution traces of the graph in an optimal manner and easily retrieving temporal information about a graph component. To validate the feasibility of our proposal, we implement it in Neo4j, a data store based on the property graph model. We then compare its performance in terms of storage and querying time against the classical modelling approach for temporal graphs. Our results show that our model outperforms the classical approach, reducing disk usage by a factor of 12 and saving up to 99% of query runtime.

Landy Andriamampianina, Franck Ravat, Jiefu Song, Nathalie Vallès-Parlangeau
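
As an illustration of state-based temporal modelling in Neo4j, the sketch below closes the current state of an entity and attaches a new one, using the official Python driver. The schema (Entity/State nodes, HAS_STATE relationships, validity properties) is a hypothetical simplification, not the paper's exact model.

```python
# Record an attribute change as a new state node with validity timestamps.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

ADD_STATE = """
MATCH (e:Entity {id: $id})-[r:HAS_STATE]->(s:State {current: true})
SET s.current = false, r.valid_to = $now
CREATE (e)-[:HAS_STATE {valid_from: $now}]->(:State {current: true, value: $value})
"""

with driver.session() as session:
    session.run(ADD_STATE, id="doc-42", value="archived", now="2021-05-11")
driver.close()
```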
DISDi: Discontinuous Intervals in Subgroup Discovery

The subgroup discovery problem aims to identify, from data, a subset of objects which exhibit interesting characteristics according to a quality measure defined on a target attribute. Main approaches in this area make the implicit assumption that optimal subgroups emerge from continuous intervals. In this paper, we propose a new approach, called DISDi, for extracting subgroups in numerical data, whose originality consists in searching for subgroups over discontinuous attribute intervals. The intuition behind this approach is that disjoint intervals allow refining the definition of subgroups and therefore the quality of the subgroups identified. Thus, unlike the main algorithms in the field, the novelty of our proposal lies in the way it breaks down the intervals of the attributes during the subgroup search process. The algorithm also limits the exploration of the search space by exploiting the closure property and combining some branches. The efficiency of the proposal is demonstrated by comparing the results with two reference algorithms in the field on several benchmark datasets.

Reynald Eugenie, Erick Stattner
Building Correct Taxonomies with a Well-Founded Graph Grammar

Taxonomies play a central role in conceptual domain modeling, with a direct impact on areas such as knowledge representation, ontology engineering, software engineering, as well as knowledge organization in information sciences. Despite their key role, there is little guidance in the literature on how to build high-quality taxonomies, with notable exceptions such as the OntoClean methodology and the ontology-driven conceptual modeling language OntoUML. These techniques take into account the ontological meta-properties of types to establish well-founded rules for forming taxonomic structures. In this paper, we show how to leverage the formal rules underlying these techniques to build taxonomies which are correct by construction. We define a set of correctness-preserving operations to systematically introduce types and subtyping relations into taxonomic structures. To validate our proposal, we formalize these operations as a graph grammar. Moreover, to demonstrate our claim of correctness by construction, we use automatic verification techniques over the grammar language to show that: (i) all taxonomies produced by the grammar rules are correct; and (ii) the rules can generate all correct taxonomies.

Jeferson O. Batista, João Paulo A. Almeida, Eduardo Zambon, Giancarlo Guizzardi
Microservice Maturity of Organizations
Towards an Assessment Framework

This early work aims to allow organizations to diagnose their capacity to properly adopt microservices through the initial milestones of a Microservice Maturity Model (MiMMo). The objective is to prepare the way towards a general framework to help companies and industries determine their microservices maturity. Organizations lean more and more on distributed web applications and Line of Business software. This is particularly relevant during the current Covid-19 crisis, where companies are even more challenged to offer their services online, targeting a very high level of responsiveness in the face of rapidly increasing and diverse demands. For this, microservices remain the most suitable application delivery architectural style. They allow agility not only at the level of the technical application, as often considered, but across the enterprise architecture as a whole, influencing the actual financial business of the company. However, microservices adoption is highly risk-prone and complex. Before establishing an appropriate migration plan, companies must first and foremost assess their degree of readiness to adopt microservices. For this, MiMMo, a Microservice Maturity Model assessment framework, is proposed to help companies assess their readiness for the microservice architectural style, based on their actual situation. MiMMo results from observations of and experience with about thirty organizations writing software. It conceptualizes and generalizes the progression paths they have followed to adopt microservices appropriately. Using the model, an organization can evaluate itself along two dimensions and five maturity levels and thus: (i) benchmark itself on its current use of microservices; (ii) project the next steps it needs to achieve a higher maturity level; and (iii) analyze how it has evolved and maintain a global coherence between technical and business stakes.

Jean-Philippe Gouigoux, Dalila Tamzalit, Joost Noppen
Dealing with Uncertain and Imprecise Time Intervals in OWL2: A Possibility Theory-Based Approach

Dealing with temporal data imperfections in the Semantic Web is still an open issue. In this paper, we propose an approach based on possibility theory to represent and reason about time intervals that are simultaneously uncertain and imprecise in OWL2. We start by calculating the possibility and necessity degrees related to the imprecision and uncertainty of the handled temporal data. Then, we propose an ontology-based representation for the handled data, associated with the obtained measures and associative qualitative relations. For the reasoning, we extend Allen’s interval algebra to treat both imprecision and uncertainty. All the proposed relations preserve the desirable properties of the original algebra and can be used for temporal reasoning by means of a transitivity table. We create a possibilistic temporal ontology based on the proposed semantic representation and the extension of Allen’s relations. Inferences are based on a set of SWRL rules. Finally, we implement a prototype based on this ontology and conduct a case study applied to temporal data entered by Alzheimer’s patients in the context of a memory prosthesis.

Nassira Achich, Fatma Ghorbel, Fayçal Hamdi, Elisabeth Metais, Faiez Gargouri
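
To give a flavour of the possibility/necessity computation mentioned above, the sketch below evaluates the degree to which one imprecise time point precedes another via the sup-min rule of possibility theory, using trapezoidal possibility distributions. The distributions and time values are illustrative assumptions, not the paper's data or exact relations.

```python
# Possibility and necessity that the end of interval A precedes the start of B,
# with trapezoidal possibility distributions over a discretized time axis.
import numpy as np

def trapezoid(x, a, b, c, d):
    # Rises on [a, b], plateau on [b, c], falls on [c, d].
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0, 1)

xs = np.linspace(0, 30, 601)
pi_end_A = trapezoid(xs, 8, 10, 12, 14)   # imprecise end of interval A
pi_beg_B = trapezoid(xs, 11, 13, 15, 17)  # imprecise beginning of interval B

M = np.minimum.outer(pi_end_A, pi_beg_B)  # min(piA(x), piB(y)) for all pairs
poss_before = np.triu(M, k=1).max()       # sup over x < y
nec_before = 1 - np.tril(M).max()         # 1 - sup over x >= y
print(f"Possibility(A before B) = {poss_before:.2f}, Necessity = {nec_before:.2f}")
```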

Poster and Demo

Frontmatter
KEOPS: Knowledge ExtractOr Pipeline System

The KEOPS platform applies text mining approaches (e.g., classification, terminology and named entity extraction) to generate knowledge about each text and group of texts extracted from documents, web pages, or databases. KEOPS is currently implemented on real data of a project dedicated to food security, for which preliminary results are presented.

Pierre Martin, Thierry Helmer, Julien Rabatel, Mathieu Roche
Socneto: A Scent of Current Network Overview
(Demonstration Paper)

For more than a decade already, there has been an enormous growth of social networks and their audiences. As people post about their life and experiences, comment on other people’s posts, and discuss all sorts of topics, they generate a tremendous amount of data that is stored in these networks. It is virtually impossible for a user to get a concise overview of any given topic. Socneto is an extensible framework allowing users to analyse data related to a chosen topic from selected social networks. A typical use case is studying sentiment about a public topic (e.g., traffic, medicine, etc.) after an important press conference, tracking opinion evolution about a new product on the market, or comparing stock market values and general public sentiment peaks of a company. An emphasis on modularity and extensibility enables one to add/replace parts of the Socneto analytics pipeline in order to utilise it for a specific use case or to study and compare various analytical approaches.

Jaroslav Knotek, Lukáš Kolek, Petra Vysušilová, Julius Flimmel, Irena Holubová
STOCK: A User-Friendly Stock Prediction and Sentiment Analysis System
(Demonstration Paper)

Determining the future value of a company in order to find a good investment target is a critical and complex task for stock marketers, and it is even more complicated for non-experts. STOCK is a modular, scalable, and extensible framework that enables users to gain insight into the stock market through a user-friendly combination of three sources of information: (1) easy access to companies’ current positions and the evolution of their prices; (2) prediction models and their customisation according to users’ needs and interests, regardless of their knowledge of the field; and (3) results of sentiment analysis of related news that may influence the respective changes in prices.

Ilda Balliu, Harun Ćerim, Mahran Emeiri, Kaan Yöş, Irena Holubová
Interoperability in the Footwear Manufacturing Networks and Enablers for Digital Transformation

The digital transformation of the manufacturing industry is accelerated by advancements in digital technologies (e.g., Internet of Things, big data analytics), enabling the emergence of new business models and digital networks. Footwear manufacturing networks face numerous interoperability challenges due to the high heterogeneity of the enterprises, software systems, and resources they comprise. The aim of this article is to analyze interoperability approaches in the upstream segment of footwear manufacturing networks and to discuss enablers and challenges that need to be tackled towards ensuring digital transformation in this rather traditional manufacturing sector. The enablers are grouped into five categories in a digital radar: digital data and insights, automation, digital access, customer-centric manufacturing, and networking.

Claudia-Melania Chituc
SIMPT: Process Improvement Using Interactive Simulation of Time-Aware Process Trees

Process mining techniques, including process discovery, conformance checking, and process enhancement, provide extensive knowledge about processes. Discovering running processes and deviations, as well as detecting performance problems and bottlenecks, is well supported by process mining tools. However, all the provided techniques represent the past/current state of the process. Improving a process requires insights into its future states w.r.t. possible actions/changes. In this paper, we present a new tool that enables process owners to extract all the process aspects from their historical event data automatically, change these aspects, and re-run the process automatically using an interface. The combination of process mining and simulation techniques provides new evidence-driven ways to explore “what-if” questions. Therefore, assessing the effects of changes in process improvement is also possible. Our Python-based web application provides a complete interactive platform to improve the flow of activities, i.e., the process tree, along with possible changes in all the derived activity, resource, and process parameters. These parameters are derived directly from an event log without requiring user background knowledge.

Mahsa Pourbafrani, Shuai Jiao, Wil M. P. van der Aalst
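
The discover-then-replay loop underlying such a tool can be approximated with pm4py; the sketch below discovers a process tree from a log and plays it out to produce simulated traces. The file name is a placeholder, and SIMPT's time, resource, and interactive-change features are omitted.

```python
# Discover a process tree from an event log and simulate new traces from it.
import pm4py

log = pm4py.read_xes("historical_log.xes")         # placeholder path
tree = pm4py.discover_process_tree_inductive(log)  # process tree model
simulated = pm4py.play_out(tree)                   # generate synthetic traces
print(len(simulated), "simulated traces")
```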
Digitalization in Sports to Connect Child’s Sport Clubs, Parents and Kids: Simple Solution for Tackling Social and Psychological Issues

Today, the topic of children’s sport has become crucial, not only because, in the 21st century, computer games and social networks are the most common way for children to spend leisure time (a shift from sports to eSports has taken place), but also because, in 2020, Covid-19 and the associated restrictions made visits to sports clubs even more stressful for parents. In addition, sports may lead to unpleasant situations when a child is not successful enough in a particular discipline and is publicly criticized, thereby undermining their willingness to engage in sport. We suppose that the trends of digitalization and some aspects of Industry 4.0 could solve these issues at least partly, without requiring a lot of resources from children’s sports clubs. This paper is devoted to a simple technological solution that would improve the sports club business by facilitating the exchange of information between children, parents, and sports clubs.

Irina Marcenko, Anastasija Nikiforova
Open Government Data for Non-expert Citizens: Understanding Content and Visualizations’ Expectations

Open government data (OGD) refers to data made available online by governments for anyone to freely reuse. Non-expert users, however, lack the necessary technical skills and therefore face challenges when trying to exploit it. Amongst these challenges, finding useful datasets is very difficult for citizens, as their expectations are not always identified. Furthermore, finding the appropriate visualization that is most understandable to citizens is also a barrier. The goal of this paper is to lower these two entry barriers by better understanding the expectations of non-expert citizens. In order to reach that goal, we first seek to understand their content expectations through an analysis of the usage statistics of the OGD portal of Namur and through a complementary online survey of 43 participants. Second, we conduct interviews with 10 citizens to obtain their opinion on appropriate and well-designed visualizations of the content they seek. The findings of this multi-method approach allow us to issue 5 recommendations for OGD portal publishers and developers to foster non-expert use of OGD.

Abiola Paterne Chokki, Anthony Simonofski, Benoît Frénay, Benoît Vanderose
Domain Analysis and Geographic Context of Historical Soundscapes: The Case of Évora

Soundscape is the technical term used to describe the sound in our surroundings. Experiencing Historical Soundscapes allows for a better understanding of life in the past and provides clues on the evolution of a community. Interactive and multimedia-based Historical Soundscape environments with geolocation are a relatively unexplored area, but this topic has recently started to attract the attention of researchers due to its relevance in culture and history. This work is part of the PASEV project, which is developing several types of digital tools designed to interactively share the Historical Soundscapes of the Portuguese city of Évora. This paper presents an initial domain requirements analysis for the interactive and multimedia-based Historical Soundscapes domain, which involves handling geolocations. Projects in this domain, such as PASEV, can thus benefit from this work by reusing Soundscape domain requirements, reducing the time needed to develop applications in the domain.

Mariana Bonito, João Araujo, Armanda Rodrigues
Towards a Unified Framework for Computational Trust and Reputation Models for e-Commerce Applications

Evaluating the quality of resources and the reliability of entities in a system is one of the current needs of modern computer systems. This assessment rests on two concepts that dominate our real life as well as computer systems: Trust and Reputation. To measure them, a variety of computational models have been developed to help users make decisions and to improve interactions with the system and between users. Given the wide variety of definitions of trust and reputation, this paper attempts to unify these definitions by proposing a single formalization in terms of graphical and textual notations. It also introduces an in-depth analysis of the behavior and the intuition behind each computational model.
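For illustration only (the paper's unification is notational, not code): a minimal sketch of one widely known computational reputation model, the beta reputation model, which estimates an entity's reputation from counts of positive and negative past interactions. The function name and example values below are ours, not the paper's.

# Minimal sketch of the beta reputation model, given here only as an
# example of a "computational trust and reputation model"; the surveyed
# paper unifies such models notationally rather than in code.
def beta_reputation(positive: int, negative: int) -> float:
    """Expected reputation = mean of Beta(positive + 1, negative + 1)."""
    return (positive + 1) / (positive + negative + 2)

# Usage: a seller with 9 good transactions and 1 bad one.
print(beta_reputation(9, 1))   # 0.833... -> fairly trustworthy
print(beta_reputation(0, 0))   # 0.5      -> no evidence, neutral prior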

Chayma Sellami, Mickaël Baron, Mounir Bechchi, Allel Hadjali, Stephane Jean, Dominique Chabot
Improving Web API Usage Logging

A Web API (WAPI) is a type of API whose interaction with its consumers is done through the Internet. While being accessed through the Internet can be challenging, especially when WAPIs evolve, it gives providers the possibility to monitor their usage. Currently, WAPI usage is mostly logged for traffic monitoring and troubleshooting. Even though usage logs contain invaluable information regarding consumers' behavior, they are not sufficiently exploited by providers. In this paper, we first consider two phases of the application development lifecycle, and based on them we distinguish two types of usage logs, namely development logs and production logs. For each of them we show the potential analyses (e.g., WAPI usability evaluation) that can be performed, as well as the main impediments that may be caused by an unsuitable log format. We then conduct a case study using logs of the same WAPI from different deployments and in different formats, to demonstrate the occurrence of these impediments and, at the same time, the importance of a proper log format. Next, based on the case study results, we present the main quality issues of WAPI logs and explain their impact on data analyses. For each of them, we give practical suggestions on how to deal with them, as well as how to mitigate their root causes.
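To make the log-format concern concrete: a minimal sketch (ours, not the paper's) parsing an access-log line in the well-known Common Log Format into structured fields, which is the precondition for consumer-behavior analyses of the kind the paper discusses. The host, endpoint and values are hypothetical.

import re

# Hypothetical illustration: parse one Common Log Format line into
# structured fields. Analyses such as WAPI usability evaluation are only
# feasible once fields like the requested endpoint and the status code
# can be extracted reliably from the log format at hand.
LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<size>\S+)'
)

line = '203.0.113.7 - - [11/May/2021:10:15:32 +0000] "GET /api/v2/orders HTTP/1.1" 200 512'
match = LOG_PATTERN.match(line)
if match:
    entry = match.groupdict()
    print(entry["method"], entry["path"], entry["status"])  # GET /api/v2/orders 200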

Rediana Koçi, Xavier Franch, Petar Jovanovic, Alberto Abelló
Towards a Hybrid Process Modeling Language

Nowadays, business process management is becoming more and more important, and a wide variety of process modeling languages is available. Hence, one of the most complicated tasks for entrepreneurs is to choose the modeling language that best suits their respective problems and purposes. Each modeling language has its own advantages and disadvantages, depending on the properties of the process to be modeled, and none of the existing approaches completely satisfies the requirements for a "good" modeling language. Thus, our goal is to develop a new concept for a hybrid modeling language based on BPMN.

Nicolai Schützenmeier, Stefan Jablonski, Stefan Schönig
A Metadata-Based Event Detection Method Using Temporal Herding Factor and Social Synchrony on Twitter Data

Detecting events from social media data is an important problem. In this paper, we propose a novel method to detect events by detecting traces of herding in Twitter data. We analyze only the metadata for this, not the content of the tweets. We evaluate our method on a dataset of 3.3 million tweets that we collected ourselves, and compare the results with those of a state-of-the-art method called Twitinfo on the same dataset; our method shows better results. To check the generality of our method, we test it on a publicly available dataset of 1.28 million tweets, and the results indicate that our method generalizes well.
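The paper's herding-factor method itself is not reproduced here; as a generic illustration of metadata-only event detection, the sketch below flags bursts in per-window tweet counts (timestamps only, no content), a common baseline technique. The window size and threshold are arbitrary choices of ours.

from collections import Counter
import statistics

# Generic illustration (not the paper's herding-factor method): detect
# candidate events from tweet timestamps alone by flagging time windows
# whose tweet volume exceeds mean + k standard deviations.
def detect_bursts(timestamps, window_seconds=3600, k=2.0):
    counts = Counter(int(t) // window_seconds for t in timestamps)
    volumes = list(counts.values())
    if len(volumes) < 2:
        return []
    threshold = statistics.mean(volumes) + k * statistics.stdev(volumes)
    return sorted(w * window_seconds for w, v in counts.items() if v > threshold)

# Usage: the returned values are the start times (Unix epoch) of bursty windows.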

Nirmal Kumar Sivaraman, Vibhor Agarwal, Yash Vekaria, Sakthi Balan Muthiah
Data and Conceptual Model Synchronization in Data-Intensive Domains: The Human Genome Case

Context and Motivation: With the increasing quantity and versatility of data in data-intensive domains, designing information systems to effectively process the relevant information is becoming increasingly challenging. Conceptual modeling could tackle such challenges in numerous ways as a preliminary phase of the software development process. But assessing data and model synchronization becomes an issue in domains where data are heterogeneous, have diverse provenance and are subject to continuous change. Question/problem: The problem is how to determine and demonstrate the ability of a conceptual schema to represent the concepts and the data of a particular data-intensive domain. Principal Ideas/Results: A validation approach has been designed for the Conceptual Schema of the Human Genome by investigating the particular issues of the genetic domain and systematically connecting constituents of this conceptual schema with potential instances in samples of genome-related data. As a result, this approach provided us with accurate insights into attribute resemblance, completeness, structure and shortcomings. Contribution: This work demonstrates how the strategy of conceptualizing a data-intensive domain and then validating the resulting schema by reconnecting it with the attributes of real-world data can be generalized. Conceptual models have limited resilience to the evolution of data, which is the next problem to face.
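One concrete, simplified reading (ours, not the authors' procedure) of "systematically connecting constituents of the conceptual schema with potential instances in data samples" is an attribute-coverage check, sketched below; all entity and column names are hypothetical.

# Simplified reading (ours) of schema-to-data validation: for each class
# of a conceptual schema, measure how many of its attributes actually
# occur as columns in a genome-related data sample.
schema = {
    "Variation": {"identifier", "chromosome", "position", "ref_allele", "alt_allele"},
    "Gene": {"symbol", "start", "end", "strand"},
}
sample_columns = {"identifier", "chromosome", "position", "ref_allele", "symbol"}

for entity, attributes in schema.items():
    found = attributes & sample_columns
    coverage = len(found) / len(attributes)
    print(f"{entity}: {coverage:.0%} of attributes found, missing {attributes - found}")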

Floris Emanuel, Verónica Burriel, Oscar Pastor

Doctoral Consortium

Frontmatter
Simplified Timed Attack Trees

This paper considers attack trees, a graphical security model that can be used to visualize the various ways an asset may be compromised. We propose an extension of this model, termed Simplified Timed Attack Trees (STAT), with the primary aim of analyzing cyber-physical system (CPS) assets. STAT extends attack tree gate refinements with time parameters: in order to reach a parent node, an attacker has to achieve the child nodes within a specified time interval. This adds a level of security to the gate refinements. We propose how to translate a STAT into a parallel composition of timed automata (TA) and reduce root reachability in the STAT to location reachability in the TA, which can be checked using the formal verification tool UPPAAL.
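A toy sketch (ours, with invented names and time values) of the core STAT idea as stated above: a parent node is reached only if all child nodes are achieved within the gate's time interval. The actual contribution verifies this via translation to UPPAAL timed automata, which is not shown here.

from dataclasses import dataclass, field

# Toy rendering (ours) of a timed AND gate: the parent is reached only if
# every child attack succeeds and all completion times fall inside the
# gate's time interval. The paper instead translates the tree to timed
# automata and model checks reachability with UPPAAL.
@dataclass
class TimedAndGate:
    name: str
    interval: tuple  # (earliest, latest) admissible completion time
    children: list = field(default_factory=list)

    def reached(self, completion_times: dict) -> bool:
        lo, hi = self.interval
        return all(
            child in completion_times and lo <= completion_times[child] <= hi
            for child in self.children
        )

gate = TimedAndGate("compromise_plc", (0, 30), ["spoof_sensor", "disable_alarm"])
print(gate.reached({"spoof_sensor": 12, "disable_alarm": 25}))  # True
print(gate.reached({"spoof_sensor": 12, "disable_alarm": 45}))  # False: too late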

Aliyu Tanko Ali
A Model-Driven Engineering Approach to Complex Performance Indicators: Towards Self-Service Performance Management (SS-PM)

Every modern organization nowadays produces data and consumes information through Decision-Support Systems (DSS), which produce more and more Complex Performance Indicators (CPIs). This enables business monitoring, decision-making support and the tracking of decisions' effects. With this increasing complexity, DSS suffer from two main limitations that inhibit their use. First, DSS tend to be opaque to Business Managers, who cannot observe how the data is processed to produce indicators. Second, DSS are owned by technicians, resulting in an IT bottleneck and the exclusion of the business. From a Business Management perspective, the consequences are damaging: DSS result in sunk development costs, fail to receive the full confidence of Business Managers, and fail to fit dynamic business environments. In this research, preliminary insights are proposed towards a solution that tackles these limitations. The literature review, research contributions and methodology are presented, concluding with the PhD work plan.

Benito Giunta
Robust and Strongly Consistent Distributed Storage Systems

The design of Distributed Storage Systems involves many challenges because users and storage nodes are physically dispersed. In this doctoral consortium paper, we present a framework for boosting concurrent access to large shared data objects (such as files) while maintaining strong consistency guarantees. At the heart of the framework lies a fragmentation strategy, which enables different updates to occur concurrently on different fragments of the object, while ensuring that all modifications are valid.
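A toy sketch (ours) of the fragmentation idea as stated: different writers update different fragments of one object concurrently, each fragment carrying its own version so that stale writes can be rejected. Real strongly consistent protocols (e.g., quorum-based reads and writes) are far more involved and are not shown.

import threading

# Toy sketch (ours) of object fragmentation: an object is split into
# fragments, each with its own version and lock, so updates to different
# fragments proceed concurrently while each fragment stays valid.
class FragmentedObject:
    def __init__(self, data: bytes, fragment_size: int):
        chunks = [data[i:i + fragment_size] for i in range(0, len(data), fragment_size)]
        self.fragments = [{"data": c, "version": 0, "lock": threading.Lock()} for c in chunks]

    def update(self, index: int, new_data: bytes, expected_version: int) -> bool:
        frag = self.fragments[index]
        with frag["lock"]:  # only this fragment is blocked, not the whole object
            if frag["version"] != expected_version:
                return False  # stale write: caller must re-read this fragment
            frag["data"], frag["version"] = new_data, frag["version"] + 1
            return True

obj = FragmentedObject(b"x" * 1024, fragment_size=256)
print(obj.update(0, b"y" * 256, expected_version=0))  # True
print(obj.update(0, b"z" * 256, expected_version=0))  # False: version moved on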

Andria Trigeorgi
The Impact and Potential of Using Blockchain Enabled Smart Contracts in the Supply Chain Application Area

Interest in blockchain's potential for supply chain management (SCM) has increased considerably over the last few years, both in academia and in industry. However, according to the literature, few researchers have tackled the real impact of blockchain in changing businesses and creating value. This research delves into an analysis of the SCM function, with a specific focus on measuring the potential of a blockchain-based system to enhance effectiveness, trust and transparency. Its main contributions address the key challenges of implementing blockchain-enabled smart contracts in SCM through in-depth qualitative research that highlights the inconsistencies and misfits common in this application area.

Samya Dhaiouir
Backmatter
Metadata
Title
Research Challenges in Information Science
edited by
Samira Cherfi
Anna Perini
Selmin Nurcan
Copyright Year
2021
Electronic ISBN
978-3-030-75018-3
Print ISBN
978-3-030-75017-6
DOI
https://doi.org/10.1007/978-3-030-75018-3