
2018 | Book

On the Move to Meaningful Internet Systems. OTM 2018 Conferences

Confederated International Conferences: CoopIS, C&TC, and ODBASE 2018, Valletta, Malta, October 22-26, 2018, Proceedings, Part I

Editors: Hervé Panetto, Christophe Debruyne, Henderik A. Proper, Claudio Agostino Ardagna, Dumitru Roman, Robert Meersman

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science


About this book

This two-volume set, LNCS 11229-11230, constitutes the refereed proceedings of the Confederated International Conferences: Cooperative Information Systems, CoopIS 2018, Ontologies, Databases, and Applications of Semantics, ODBASE 2018, and Cloud and Trusted Computing, C&TC 2018, held as part of OTM 2018 in October 2018 in Valletta, Malta.
The 64 full papers presented together with 22 short papers were carefully reviewed and selected from 173 submissions. Every year, the OTM program covers data and Web semantics, distributed objects, Web services, databases, information systems, enterprise workflow and collaboration, ubiquity, interoperability, mobility, and grid and high-performance computing.

Table of Contents

Frontmatter

International Conference on Cooperative Information Systems (CoopIS) 2018

Frontmatter
Digitization of Government Services: Digitization Process Mapping

The search for improvement and standardization in the digitization of government services has led governments around the world to focus on solutions that seek the satisfaction, engagement, and involvement of society at large. In addition, governmental systems constantly seek to renew the digital governance environment through good planning, use of best practices, and greater opportunities to establish collaborative and participatory relationships among all stakeholders (government and society). This paper presents a systematic literature review of the digitization of services, contributing to the knowledge of the processes and methodologies adopted by governments to provide services to citizens. The main contribution of this work is a process mapping model that can be adopted during the stages of providing digital services by agencies interested in offering services focused on citizens' needs. The proposed model can be used by any government agency or private company interested in updating its digitization and service automation processes, tools, and methods according to its needs.

Heloise Acco Tives Leão, Edna Dias Canedo, João Carlos Felix Souza
Modeling Process Interactions with Coordination Processes

With the rise of data-centric process management paradigms, small and interdependent processes, such as artifacts or object lifecycles, form a business process by interacting with each other. To arrive at a meaningful overall business process, these process interactions must be coordinated. One challenge is the proper consideration of one-to-many and many-to-many relations between interacting processes. Other challenges arise from the flexible, concurrent execution of the processes. Relational process structures and semantic relationships have been proposed for tackling these individual challenges. This paper introduces coordination processes, which bring together both relational process structures and semantic relationships, leveraging their features to enable proper coordination support for interdependent, concurrently running processes. Coordination processes contribute an abstracted and concise model for coordinating the highly complex interactions of interrelated processes.

Sebastian Steinau, Kevin Andrews, Manfred Reichert
Blockchain-Based Collaborative Development of Application Deployment Models

The automation of application deployment is vital today, as the manual alternative is too slow and error-prone. For this reason, many technologies have been developed for deploying applications automatically based on deployment models. In many scenarios, however, these models have to be created in collaborative processes involving multiple participants that belong to independent organizations, whose potentially competing interests limit the degree of trust they have in each other. Thus, without a guarantee of accountability, iterative collaborative deployment modeling is not possible in such domains. In this paper, we propose a decentralized deployment modeling approach that achieves accountability by utilizing public blockchains and decentralized storage systems to store intermediate states of the collaborative deployment model. The approach guarantees the integrity of deployment models and makes it possible to obtain the history of changes they went through, while ensuring the authenticity of participants.

Ghareeb Falazi, Uwe Breitenbücher, Michael Falkenthal, Lukas Harzenetter, Frank Leymann, Vladimir Yussupov
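To make the accountability idea concrete, here is a minimal sketch of how intermediate deployment model states could be committed to a hash-chained, append-only log. This is an illustration only: the `RevisionLog` and `model_hash` names are invented here, and a simple in-memory chain stands in for the public blockchain and decentralized storage the paper actually employs.

```python
import hashlib
import json
import time

def model_hash(model: dict) -> str:
    """Deterministically hash a deployment model (canonical JSON)."""
    return hashlib.sha256(json.dumps(model, sort_keys=True).encode()).hexdigest()

class RevisionLog:
    """Append-only, hash-chained history of deployment model states.

    Each entry commits to the model hash, the author, and the previous
    entry, so tampering with any intermediate state breaks the chain.
    """
    def __init__(self):
        self.entries = []

    def commit(self, model: dict, author: str) -> dict:
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {"model_hash": model_hash(model), "author": author,
                 "timestamp": time.time(), "prev": prev}
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("model_hash", "author", "timestamp", "prev")}
            if e["prev"] != prev or e["entry_hash"] != hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest():
                return False
            prev = e["entry_hash"]
        return True

log = RevisionLog()
log.commit({"nodes": ["app", "db"]}, author="org-A")
log.commit({"nodes": ["app", "db", "cache"]}, author="org-B")
print(log.verify())  # True; editing any earlier entry makes this False
```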
Evaluating Multi-tenant Live Migrations Effects on Performance

Multitenancy is an important feature for Everything-as-a-Service providers such as Business Process Management as a Service. It reduces infrastructure costs, since multiple tenants share the same service instances. However, tenants have dynamic workloads, and the resources they share may not be sufficient at some point in time. This may require Cloud resource (re-)configurations to ensure a given Quality of Service. Tenants should be migrated from one configuration to another without stopping the service, to meet their needs while minimizing operational costs on the provider side. Live migrations raise many challenges: service interruption must be minimized, and the impact on co-tenants should be minimal. In this paper, we investigate the duration of live tenant migrations and their effects on the migrated tenants as well as on co-located ones. To do so, we propose a generic approach to measure these effects for multi-tenant Software as a Service. Further, we propose a testing framework to simulate workloads and observe the impact of live migrations on Business Process Management Systems. The experimental results highlight the efficiency of our approach and show that migration time depends on the size of the data that has to be transferred, and that the effects on co-located tenants should not be neglected.

Guillaume Rosinosky, Chahrazed Labba, Vincenzo Ferme, Samir Youcef, François Charoy, Cesare Pautasso
Probability Based Heuristic for Predictive Business Process Monitoring

Predictive business process monitoring is concerned with predicting how ongoing process instance executions will unfold. Recent work in this area frequently applies "black-box"-like methods which, despite delivering high-quality prediction results, fail to implement a transparent and understandable prediction generation process, likely limiting the trust users put in the results. This work tackles this limitation by basing predictions and the related prediction models on well-known probability-based, histogram-like approaches. These make it possible to quickly grasp, and potentially visualize, the prediction results, various alternative futures, and the overall prediction process. Furthermore, the proposed heuristic prediction approach outperforms state-of-the-art approaches with respect to prediction accuracy. This conclusion is drawn based on a publicly available prototypical implementation, real-life logs from multiple sources and domains, and a comparison with multiple alternative approaches.

Kristof Böhmer, Stefanie Rinderle-Ma
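As a rough illustration of the histogram-style prediction idea described in the abstract above (a simplification, not the authors' heuristic), the following sketch counts which activity follows each observed trace prefix and turns the counts into an easily inspectable next-activity distribution:

```python
from collections import Counter, defaultdict

def train(traces, prefix_len=2):
    """Count which activity follows each observed prefix of length `prefix_len`."""
    hist = defaultdict(Counter)
    for trace in traces:
        for i in range(len(trace) - prefix_len):
            prefix = tuple(trace[i:i + prefix_len])
            hist[prefix][trace[i + prefix_len]] += 1
    return hist

def predict(hist, running, prefix_len=2):
    """Return the next-activity distribution for an ongoing trace."""
    counts = hist.get(tuple(running[-prefix_len:]), Counter())
    total = sum(counts.values())
    return {a: c / total for a, c in counts.items()} if total else {}

logs = [["register", "check", "approve", "pay"],
        ["register", "check", "reject"],
        ["register", "check", "approve", "pay"]]
hist = train(logs)
print(predict(hist, ["register", "check"]))  # {'approve': ~0.67, 'reject': ~0.33}
```

Unlike a black-box model, the histogram itself can be shown to the user, which is exactly the transparency argument the abstract makes.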
Indulpet Miner: Combining Discovery Algorithms

In this work, we explore an approach to process discovery that is based on combining several existing process discovery algorithms. We focus on algorithms that generate process models in the process tree notation, which are sound by design. The main components of our proposed process discovery approach are the Inductive Miner, the Evolutionary Tree Miner, the Local Process Model Miner and a new bottom-up recursive technique. We conjecture that the combination of these process discovery algorithms can mitigate some of the weaknesses of the individual algorithms. In cases where the Inductive Miner results in overgeneralizing process models, the Evolutionary Tree Miner can often mine much more precise models. On the other hand, while the Evolutionary Tree Miner is computationally expensive, running it only on the parts of the log that the Inductive Miner cannot represent with a precise model fragment can considerably limit its search space. Local Process Models and bottom-up recursion further aid the Evolutionary Tree Miner by instantiating it with frequent process model fragments. We evaluate our approach on a collection of real-life event logs and find that it combines the advantages of the individual miners and in some cases surpasses other discovery techniques.

Sander J. J. Leemans, Niek Tax, Arthur H. M. ter Hofstede
Towards Event Log Querying for Data Quality
Let’s Start with Detecting Log Imperfections

Process mining is, by now, a well-established discipline focusing on process-oriented data analysis. As with other forms of data analysis, the quality and reliability of insights derived through analysis are directly related to the quality of the input (garbage in, garbage out). In the case of process mining, the input is an event log comprising event data captured (in information systems) during the execution of the process. It is crucial, then, that the event log be treated as a first-class citizen. While data quality is an easily understood concept, little effort has been directed towards systematically detecting data quality issues in event logs. Analysts still spend a large proportion of any project on 'data cleaning', often involving manual and ad hoc tasks and requiring more than one tool. While there are existing tools and languages that query event logs, the problem of needing different approaches for different log imperfections remains. In this paper we take the first steps towards developing QUELI (Querying Event Log for Imperfections), a log query language that provides direct support for detecting log imperfections. We develop an approach that identifies the capabilities required of QUELI and illustrate it by applying it to 5 of the 11 event log imperfection patterns described in [29]. We view this as a first step towards operationalising systematic, automated support for log cleaning.

Robert Andrews, Suriadi Suriadi, Chun Ouyang, Erik Poppe
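One kind of imperfection such a query language could target is distorted ordering caused by batched or form-based recording. A hypothetical detector for that symptom (not QUELI itself, and not necessarily one of the paper's 11 patterns) might flag cases where distinct activities share a single timestamp:

```python
from collections import defaultdict

def same_timestamp_events(event_log):
    """Flag cases where several distinct activities share one timestamp,
    a symptom of batched recording that makes event ordering unreliable."""
    issues = []
    by_case = defaultdict(list)
    for e in event_log:
        by_case[e["case"]].append(e)
    for case, events in by_case.items():
        by_ts = defaultdict(set)
        for e in events:
            by_ts[e["time"]].add(e["activity"])
        for ts, acts in by_ts.items():
            if len(acts) > 1:
                issues.append((case, ts, sorted(acts)))
    return issues

log = [
    {"case": "c1", "activity": "triage", "time": "2018-10-22T09:00"},
    {"case": "c1", "activity": "treat",  "time": "2018-10-22T09:00"},
    {"case": "c2", "activity": "triage", "time": "2018-10-22T09:05"},
]
print(same_timestamp_events(log))  # [('c1', '2018-10-22T09:00', ['treat', 'triage'])]
```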
Towards a Collective Awareness Platform for Privacy Concerns and Expectations

In an increasingly instrumented and interconnected digital world, citizens generate vast amounts of data, much of it valuable and a significant part of it personal. However, controlling who can collect it, limiting what they can do with it, and determining how best to protect it remain deeply unsettled issues. This paper proposes CAPrice, a socio-technical solution based on collective awareness and informed consent, whereby data collection and use by digital products are driven by the expectations and needs of consumers themselves, through a collaborative participatory process and the configuration of collective privacy norms. The proposed solution relies on a new innovation model that complements existing top-down approaches to data protection, which mainly rely on technical or legal provisions. Ultimately, the CAPrice ecosystem will strengthen the trust bond between service developers and users, encouraging innovation and empowering individuals to promote their privacy expectations as a quantifiable, community-generated request.

Giorgos Flouris, Theodore Patkos, Ioannis Chrysakis, Ioulia Konstantinou, Nikolay Nikolov, Panagiotis Papadakos, Jeremy Pitt, Dumitru Roman, Alexandru Stan, Chrysostomos Zeginis
Shadow Testing for Business Process Improvement

A fundamental assumption of improvement in Business Process Management (BPM) is that redesigns deliver refined and improved versions of business processes. These improvements can be validated online through sequential experiment techniques like AB testing, as we have shown in earlier work. Such approaches carry the inherent risk of exposing customers to an inferior process version during the early stages of the test. This risk can be managed by offline techniques like simulation. However, offline techniques do not validate the improvements, because there is no user interaction with the new versions. In this paper, we propose a middle ground through shadow testing, which avoids the downsides of simulation and direct execution. In this approach, a new version is deployed and executed alongside the current version, but in such a way that the new version is hidden from customers and process workers. Copies of user requests are partially simulated and partially executed by the new version as if it were running in production. We present an architecture, algorithm, and implementation of the approach, which isolates new versions from production, facilitates fair comparison, and manages the overhead of running shadow tests. We demonstrate the efficacy of our technique by evaluating the executions of synthetic and realistic process redesigns.

Suhrid Satyal, Ingo Weber, Hye-young Paik, Claudio Di Ciccio, Jan Mendling
A DevOps Implementation Framework for Large Agile-Based Financial Organizations

Modern large-scale financial organizations show an interest in embracing a DevOps way of working in addition to Agile adoption. Implementing DevOps alongside Agile enhances certain Agile practices while extending others. Although quite a few DevOps maturity models are available in the literature, they are either not specific to large-scale financial organizations or do not include the Agile aspects within the desired scope. This study was performed to identify why such organizations are interested in implementing DevOps and how this implementation can be guided by a conceptual framework. As a result, a list of drivers, a generic DevOps implementation framework, and driver-dependent variations are presented. The development of these artifacts was realized through a design science research method, and they have been validated by practitioners from financial organizations in the Netherlands. The practitioners identified the developed artifacts as useful, mainly for educating people within their organizations. Moreover, the artifacts have been applied to real organizational goals to demonstrate how they can help identify useful measurement units, which in turn can help organizations measure and achieve their DevOps transformation goals. Thus, the developed artifacts not only serve as a baseline for future research but are also useful for existing financial organizations to commence and get ahead with their DevOps implementations.

Anitha Devi Nagarajan, Sietse J. Overbeek
Utilizing Twitter Data for Identifying and Resolving Runtime Business Process Disruptions

The advent of Web 2.0 technologies represents a paradigm shift in how individuals collaborate in their businesses and daily lives. Web 2.0 opens new opportunities for businesses to reconsider their strategies and operating models by taking a customer-centric approach, which creates a competitive advantage. Business Process Management (BPM) is taking advantage of this phenomenon (aka social business processes or business processes 2.0), embracing 'social' and embedding it throughout the stages of the BP lifecycle. This paper contributes a novel framework for the real-time monitoring and improvement of business processes by analyzing huge amounts of social data, providing visibility and control, which leads to informed decision-making and immediate corrective actions. Thus, the proposed framework bridges the gap between the social and business worlds. The applicability, efficiency and utility of the proposed approach are validated through its application to a real-life case study of a leading telecommunication company.

Alia Ayoub, Amal Elgammal
Semantic IoT Gateway: Towards Automated Generation of Privacy-Preserving Smart Contracts in the Internet of Things

The Internet of Things paradigm has brought opportunities to meet several challenges by interconnecting IoT resources, such as sensors, actuators, and gateways, on a massive scale. IoT gateways play an important role in IoT applications, bridging sensor networks and the external environment through the Internet. Typically, IoT gateways send the data collected from sensors and actuators to external platforms, where they are remotely analyzed. However, users desire a more adapted IoT gateway that can improve the preservation of IoT data privacy before sending data to these external platforms. Thus, an IoT gateway is required that enables better control over the set of private IoT resources and protects the collected personal data and their privacy. For this purpose, we propose a Semantic IoT Gateway that helps implement a dynamic and flexible privacy-preserving solution for the IoT domain. First, it matches the data consumer's terms of service against the data owner's privacy preferences, generating an adapted privacy policy. Second, it converts the privacy policy into a custom smart contract. Finally, it connects a set of private IoT resources to a distributed network using blockchain technology to host the generated smart contracts. A smart contract is executable code that runs on top of the blockchain to facilitate, execute and enforce an agreement between untrusted parties without the involvement of a trusted third party. Our proposal, illustrated through an example and experimentation on a real-world use case, has given the expected results.

Faiza Loukil, Chirine Ghedira-Guegan, Khouloud Boukadi, Aïcha Nabila Benharkat
Crowdsourcing Task Assignment with Online Profile Learning

In this paper, we present a reference framework called Argo+ for worker-centric crowdsourcing, where task assignment is characterized by feature-based representations of both tasks and workers, and learning techniques are exploited to predict online the most appropriate task for a requesting worker to execute. On the task side, features represent requirements, expressed in terms of the knowledge expertise workers must have to be involved in task execution. On the worker side, features compose a profile, namely a structured description of the worker's capabilities in executing tasks. Experimental results obtained in a real crowdsourcing campaign are discussed, comparing the performance of Argo+ against a baseline with conventional task assignment techniques.

Silvana Castano, Alfio Ferrara, Stefano Montanelli
Distributed Collaborative Filtering for Batch and Stream Processing-Based Recommendations

Nowadays, user actions are tracked and recorded by multiple websites and e-commerce platforms, allowing them to better understand user preferences and to support users with specific and accurate content suggestions. Researchers have proposed several recommendation approaches and addressed challenges such as data sparsity and cold start. However, low scalability remains a major challenge when handling large volumes of user action data. The issue becomes harder still in real-time applications, which require a new class of low-latency recommendation approaches capable of incrementally and continuously updating their knowledge and models at scale as soon as data arrives. In this paper, we focus on user-centered collaborative filtering, one of the most widely adopted recommendation approaches, known for its lack of scalability. We propose two distributed and scalable implementations of collaborative filtering addressing the challenges and requirements of batch offline and incremental online recommendation scenarios. Several experiments were conducted in a distributed environment using the MovieLens dataset in order to highlight the properties and advantages of each variant.

Kais Zaouali, Mohamed Ramzi Haddad, Hajer Baazaoui Zghal
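For reference, user-centered collaborative filtering, the technique whose scalability the paper addresses, can be stated in a few lines: score an unseen item by a similarity-weighted average of the ratings given by similar users. The sketch below is a plain single-machine illustration with made-up ratings; the paper's contribution lies in distributing and incrementalizing this computation.

```python
import math

ratings = {  # user -> {item: rating}; toy data in the MovieLens spirit
    "u1": {"m1": 5, "m2": 3, "m3": 4},
    "u2": {"m1": 4, "m2": 3, "m4": 5},
    "u3": {"m2": 1, "m3": 2, "m4": 4},
}

def cosine(a, b):
    """Cosine similarity between two sparse rating vectors (dicts)."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    num = sum(a[i] * b[i] for i in common)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den

def predict(user, item):
    """Similarity-weighted average of neighbours' ratings for `item`."""
    sims = [(cosine(ratings[user], r), r) for u, r in ratings.items()
            if u != user and item in r]
    num = sum(s * r[item] for s, r in sims)
    den = sum(abs(s) for s, _ in sims)
    return num / den if den else None

print(round(predict("u1", "m4"), 2))  # u1's estimated rating for unseen item m4
```

The scalability problem is visible even here: every prediction touches every user, which is what motivates the batch and streaming distributed variants.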
Detecting Constraints and Their Relations from Regulatory Documents Using NLP Techniques

Extracting constraints and process models from natural language text is an ongoing challenge. While the focus of current research is mainly on the extraction itself, this paper presents a three-step approach to group constraints as well as to detect and display relations between constraints in order to ease their implementation. For this, the approach uses NLP techniques to extract sentences containing constraints, group them by, e.g., stakeholders or topics, and detect redundant, subsuming, and conflicting pairs of constraints. These relations are displayed using network maps. The approach is prototypically implemented and evaluated based on regulatory documents from the financial sector as well as expert interviews.

Karolin Winter, Stefanie Rinderle-Ma
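To give a flavor of this kind of NLP pipeline (a deliberately crude stand-in, not the authors' implementation), the sketch below extracts candidate constraint sentences via modal-verb cues and scores sentence pairs for possible redundancy with simple token overlap; the regular expression and the Jaccard threshold are illustrative assumptions:

```python
import re

MODALS = re.compile(r"\b(must|shall|may not|required to|prohibited)\b", re.I)

def extract_constraints(text):
    """Keep sentences whose modal verbs signal an obligation or prohibition."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if MODALS.search(s)]

def redundancy_score(s1, s2):
    """Crude redundancy signal: Jaccard overlap of lower-cased tokens."""
    t1, t2 = set(s1.lower().split()), set(s2.lower().split())
    return len(t1 & t2) / len(t1 | t2)

doc = ("Banks must report suspicious transfers within 24 hours. "
       "The report is stored centrally. "
       "Suspicious transfers must be reported by banks within 24 hours.")
constraints = extract_constraints(doc)
for i, a in enumerate(constraints):
    for b in constraints[i + 1:]:
        print(round(redundancy_score(a, b), 2), "|", a, "<->", b)
```

Pairs with high overlap would become candidate edges in the kind of network map the paper uses to display constraint relations.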
Empirical Analysis of Sentence Templates and Ambiguity Issues for Business Process Descriptions

Business process management has become an increasingly present activity in organizations. In this context, approaches that assist in the identification and documentation of business processes are relevant efforts to make organizations more competitive. To achieve these goals, business process descriptions are a useful artifact both for identifying business processes and for complementing business process documentation. However, approaches that automatically generate business process descriptions do not explain how the sentence templates that compose the text were selected. This selection influences the quality of the text, as it may produce ambiguous or non-recurring sentences, which can make the process difficult to understand. In this work, we present an empirical analysis of 64 business process descriptions in order to find recurrent sentence templates and filter them for ambiguity issues. The analysis found 101 sentence templates divided into 29 categories. In addition, 13 of the sentence templates were considered to have ambiguity issues based on the adopted criteria. These findings may support other approaches in generating process descriptions more suitable for process analysts and domain experts.

Thanner Soares Silva, Lucinéia Heloisa Thom, Aline Weber, José Palazzo Moreira de Oliveira, Marcelo Fantinato
Evolution of Instance-Spanning Constraints in Process Aware Information Systems

Business process compliance has been widely addressed, resulting in works ranging from proposing compliance patterns to checking and monitoring techniques. However, little attention has been paid to a specific type of constraint known as instance-spanning constraints (ISC). Whereas traditional compliance rules define constraints on process models that are checked separately for each instance, ISC impose constraints that span multiple instances. This paper focuses on ISC evolution and its impact on process compliance. In particular, ISC change operations as well as change strategies are defined, and the impact on both the ISC monitoring engine and the process instances during run time is analyzed. The concepts are prototypically implemented.

Conrad Indiono, Walid Fdhila, Stefanie Rinderle-Ma
Process Histories - Detecting and Representing Concept Drifts Based on Event Streams

Business processes have to constantly adapt in order to react to changes induced by, e.g., new regulations or customer needs, resulting in so-called concept drifts. To date, techniques to detect concept drifts are applied to process execution logs ex post, i.e., after the process has finished. However, detecting concept drifts at run time has many benefits, such as instant reaction to the drift. We introduce process histories as a novel way to detect and represent incremental, sudden, recurring, and gradual concept drifts by mining the evolution of a process model from an event stream. A formal definition of process histories is given, and the concept is prototypically implemented and compared with existing approaches on a synthetic event log.

Florian Stertz, Stefanie Rinderle-Ma
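A minimal sketch of run-time drift detection in the spirit of this abstract: compare directly-follows frequency profiles of two event-stream windows, where a distance of zero means no drift. The windowing and scoring choices below are illustrative assumptions, not the process-history formalism itself.

```python
from collections import Counter

def directly_follows(window):
    """Directly-follows relation frequencies over a window of completed traces."""
    df = Counter()
    for trace in window:
        for a, b in zip(trace, trace[1:]):
            df[(a, b)] += 1
    total = sum(df.values())
    return {k: v / total for k, v in df.items()}

def drift_score(p1, p2):
    """L1 distance between two windows' relation profiles (0 = identical)."""
    keys = set(p1) | set(p2)
    return sum(abs(p1.get(k, 0) - p2.get(k, 0)) for k in keys)

old = [["a", "b", "c"]] * 50
new = [["a", "c", "b"]] * 50          # order of b and c swapped: a sudden drift
print(drift_score(directly_follows(old), directly_follows(new)))  # 2.0
```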
Lifecycle-Based Process Performance Analysis

Many business processes are supported by information systems that record their execution. Process mining techniques extract knowledge and insights from such process execution data typically stored in event logs or streams. Most process mining techniques focus on process discovery (the automated extraction of process models) and conformance checking (aligning observed and modeled behavior). Existing process performance analysis techniques typically rely on ad-hoc definitions of performance. This paper introduces a novel comprehensive approach to process performance analysis from event data. Our generic technique centers around business artifacts, key conceptual entities that behave according to state-based transactional lifecycle models. We present a formalization of these concepts as well as a structural approach to calculate and monitor process performance from event data. The approach has been implemented in the open source process mining tool ProM and its applicability has been evaluated using public real-life event data.

Bart F. A. Hompes, Wil M. P. van der Aalst
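To illustrate the lifecycle-centric view of performance in simplified form (this is not the paper's formalization): given state-entry events per artifact, the time spent in each lifecycle state follows directly from consecutive entries. The event schema below is invented for the example.

```python
from collections import defaultdict
from datetime import datetime

events = [  # (artifact id, lifecycle state entered, timestamp)
    ("order-1", "created",  "2018-10-22 09:00"),
    ("order-1", "approved", "2018-10-22 10:30"),
    ("order-1", "shipped",  "2018-10-23 08:00"),
    ("order-2", "created",  "2018-10-22 09:10"),
    ("order-2", "approved", "2018-10-22 09:40"),
]

def state_durations(events):
    """Mean time (hours) spent in each lifecycle state, from state-entry events."""
    per_artifact = defaultdict(list)
    for art, state, ts in events:
        per_artifact[art].append((datetime.fromisoformat(ts), state))
    durations = defaultdict(list)
    for entries in per_artifact.values():
        entries.sort()
        for (t1, s1), (t2, _) in zip(entries, entries[1:]):
            durations[s1].append((t2 - t1).total_seconds() / 3600)
    return {s: sum(d) / len(d) for s, d in durations.items()}

print(state_durations(events))  # {'created': 1.0, 'approved': 21.5}
```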
A Relevance-Based Data Exploration Approach to Assist Operators in Anomaly Detection

Data is emerging as a new industrial asset in the factory of the future, used to implement advanced functions like state detection, health assessment, and manufacturing servitization. In this paper, we foster Industry 4.0 data exploration by relying on a relevance evaluation approach that is: (i) flexible, detecting relevant data according to different analysis requirements; (ii) context-aware, since relevant data is discovered also considering the specific working conditions of the monitored machines; (iii) operator-centered, enabling operators to visualise unexpected working states without being overwhelmed by the huge volume and velocity of collected data. We demonstrate the feasibility of our approach with the implementation of an anomaly detection service in the Smart Factory, where the attention of operators is focused on relevant data corresponding to unusual working conditions, and data of interest is properly visualised in the operator's cockpit using adaptive sampling techniques based on the relevance of collected data.

Ada Bagozi, Devis Bianchini, Valeria De Antonellis, Alessandro Marini
Exploiting Smart City Ontology and Citizens’ Profiles for Urban Data Exploration

Smart Cities are complex systems that collect huge amounts of heterogeneous data, mainly concerning energy consumption, garbage collection, level of pollution, and citizens' safety and security. In recent years, several approaches have been defined to enable Public Administration (PA), utility and energy providers, as well as citizens, to share and use information in order to take decisions about their daily life in Smart Cities. Research challenges concern the study of advanced techniques and tools to enable effective urban data exploration. In this paper, we describe a framework that combines ontology-based techniques and citizens' profiles in order to enable personalised exploration of urban data. Ontologies provide a powerful tool for semantics-enabled exploration of data, exploiting the knowledge structure in terms of concepts organised through hierarchies and semantic relationships. Smart City indicators are used to aggregate data that can have different relevance for target users, the activities they are performing, and their role (e.g., PA, utility and energy providers, citizens) within the Smart City. Ontologies combined with users' profiles enable effective and personalised recommendation and exploration of urban data.

Devis Bianchini, Valeria De Antonellis, Massimiliano Garda, Michele Melchiori
Design and Performance Analysis of Load Balancing Strategies for Cloud-Based Business Process Management Systems

Business Process Management Systems (BPMS) provide automated support for the execution of business processes in modern organisations. With the advent of cloud computing, the deployment of BPMS is shifting from traditional on-premise models to the Software-as-a-Service (SaaS) paradigm with the aim of delivering Business Process Automation as a Service on the cloud. To cope with the impact of numerous simultaneous requests from multiple tenants, a typical SaaS approach will launch multiple instances of its core applications and distribute workload to these application instances via load balancing strategies that operate under the assumption that tenant requests are stateless. However, since business process executions are stateful and often long-running, strategies that assume statelessness are inadequate for ensuring a uniform distribution of system load. In this paper, we propose several new load balancing strategies that support the deployment of BPMS in the cloud by taking into account (a) the workload imposed by the execution of stateful process instances from multiple tenants and (b) the capacity and availability of BPMS workflow engines at runtime. We have developed a prototypical implementation built upon an open-source BPMS and used it to evaluate the performance of the proposed load balancing strategies within the context of diverse load scenarios with models of varying complexity.

Michael Adams, Chun Ouyang, Arthur H. M. ter Hofstede, Yang Yu
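As a toy illustration of why statefulness matters for balancing (a deliberately simplified stand-in for the strategies proposed in the paper): new process instances go to the engine with the most spare capacity, while later events for a running instance remain sticky to the engine holding its state. All class and parameter names here are invented for the sketch.

```python
class Engine:
    def __init__(self, name, capacity):
        self.name, self.capacity = name, capacity
        self.active = {}  # process-instance id -> tenant

class LoadBalancer:
    """Route stateful process instances across BPMS engines."""
    def __init__(self, engines):
        self.engines = engines
        self.placement = {}

    def route(self, instance_id, tenant):
        if instance_id in self.placement:            # stickiness for stateful runs
            return self.placement[instance_id]
        engine = max(self.engines, key=lambda e: e.capacity - len(e.active))
        engine.active[instance_id] = tenant
        self.placement[instance_id] = engine
        return engine

lb = LoadBalancer([Engine("e1", 4), Engine("e2", 4)])
for i in range(6):
    lb.route(f"case-{i}", tenant="t1")
print({e.name: len(e.active) for e in lb.engines})  # {'e1': 3, 'e2': 3}
```

A stateless round-robin dispatcher lacks the `placement` map, which is exactly what breaks down for long-running process instances.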
Towards Cooperative Semantic Computing: A Distributed Reasoning Approach for Fog-Enabled SWoT

The development of the Semantic Web of Things (SWoT) is challenged by the nature of IoT architectures, where constrained devices are connected to powerful cloud servers in charge of processing remotely collected data. Such an architectural pattern introduces multiple bottlenecks constituting a hurdle for scalability, and degrades QoS parameters such as response time, which hinders the development of a number of critical and time-sensitive applications. As an alternative to this Cloud-centric architecture, Fog-enabled architectures can take advantage of the myriad of devices available for partially processing data circulating between local sensors and remote Cloud servers. The approach developed in this paper is a contribution in this direction: it aims to enable rule-based processing to be deployed closer to data sources, in order to foster the implementation of semantic-enabled applications. For this purpose, we define a dynamic deployment technique for rule-based semantic reasoning on Fog nodes. This technique has been evaluated using a strategy that improves the delay of information delivery to applications. The implementation, written in Java and based on SHACL rules, has been executed on a platform comprising a server, a laptop and a Raspberry Pi, and is evaluated on a smart building use case where both distribution and scalability have been considered.

Nicolas Seydoux, Khalil Drira, Nathalie Hernandez, Thierry Monteil
Enhancing Business Process Flexibility by Flexible Batch Processing

Business Process Management is a powerful approach for the automation of collaborative business processes. Recently, concepts have been introduced to allow batch processing in business processes, addressing the needs of different industries. The existing batch activity concepts are limited in their flexibility. In this paper we contribute different strategies for modeling and executing processes that include batch work, to improve the flexibility (1) of business processes in general and (2) of the batch activity concept. The strategies support different flexibility aspects (i.e., variability, looseness, adaptation, and evolution) of batch activities and provide a systematic approach to categorizing existing and future batch-enabled BPM systems. Furthermore, the paper provides a system architecture, independent of existing BPM systems, which supports all the strategies. The architecture can be used with different process languages and existing execution environments in a non-intrusive manner.

Luise Pufahl, Dimka Karastoyanova
Scheduling Business Process Activities for Time-Aware Cloud Resource Allocation

Cloud computing is gaining more and more attention among enterprises thanks to its high performance and low operating cost. In particular, Cloud resources are used to deploy enterprises' business processes, which are constrained by hard timing requirements. Meanwhile, Cloud providers offer resources under various pricing strategies based on a temporal perspective. Taking into consideration both time constraints and the variety of Cloud pricing strategies helps enterprises achieve cost-effective process execution plans. Basically, to minimize process costs, stakeholders need to choose execution times for process activities that overlap with the temporal interval of the cheapest pricing strategy. In this paper, we present an approach to optimally schedule activities without violating their temporal constraints and capacity requirements. To do so, we use a mixed integer programming model with an objective function under a set of constraints. Our approach has been implemented, and the experimental results highlight its performance and effectiveness.

Rania Ben Halima, Slim Kallel, Walid Gaaloul, Mohamed Jmaiel
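The cost-minimization intuition can be shown with a small worked example. The sketch below brute-forces the start hour of a single activity against hypothetical per-hour prices and a deadline; the paper itself formulates this as a mixed integer program over many activities and constraints, so treat this only as a one-activity illustration of the objective.

```python
# Hypothetical time-dependent pricing: (start_hour, end_hour, $/hour).
PRICES = [(0, 8, 0.40), (8, 20, 0.10), (20, 24, 0.40)]

def hour_price(h):
    return next(p for s, e, p in PRICES if s <= h < e)

def cost(start, duration):
    """Total cost of running for `duration` hours from `start` (wrapping midnight)."""
    return sum(hour_price((start + i) % 24) for i in range(duration))

def schedule(duration, deadline):
    """Pick the start hour minimizing cost while finishing by the deadline."""
    feasible = [s for s in range(24) if s + duration <= deadline]
    return min(feasible, key=lambda s: cost(s, duration))

start = schedule(duration=4, deadline=24)
print(start, cost(start, 4))  # 8 0.4 -> shifted into the cheap off-peak window
```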
Designing Process Diagrams – A Framework for Making Design Choices When Visualizing Process Mining Outputs

Modern information systems can log the executions of the business processes they support. Such event logs contain useful information on the performance and health of business processes, and can be used in process analysis with the aid of process mining tools. Process mining tools use various diagrams to visualize the output of the analyses made. Such diagrams support the visual exploration of event logs, facilitating process analysis and the usefulness of process mining tools. However, designing such diagrams is not an easy task. Oftentimes neither the developer nor the end user knows how to visualize the outputs created by process mining algorithms, nor where the interesting information is hidden. Designing diagrams for process mining tools requires design decisions that, on the one hand, allow flexible exploration and, on the other hand, are simple and intuitive. In this paper, we investigate how existing process mining outputs are visualized and their underlying design rationale. Our analysis shows that process diagrams, the most common type of diagram used, are designed with next to no guidance from data visualization principles. Based on our findings, we propose a framework to support developers in designing visualizations for process mining outputs. The framework is based on data visualization theory and practices within process mining visualization. The effectiveness and usability of the framework are tested in a case study.

Marit Sirgmets, Fredrik Milani, Alexander Nolte, Taivo Pungas
A Viewpoint for Integrating Costs in Enterprise Architecture

Managing and controlling costs is a major concern in every organization. To cope with cost complexity, enterprise models are referred to in the literature as solutions for understanding this partial view of reality. Enterprise Architecture (EA) is one such solution, widely used to cover many stakeholder concerns and viewpoints. However, EA lacks a viewpoint for representing costs. In this paper we present an ArchiMate viewpoint, as an extension of any EA model, that allows costs to be represented. The proposal is grounded in the Time-Driven Activity-Based Costing (TDABC) method. Our proposed viewpoint includes the ArchiMate concepts, their relationships, and the properties enabling the calculation and representation of costs in EA according to TDABC. To validate the usefulness and applicability of the proposal, a well-known Harvard case study on heart bypass surgery was used to formulate a questionnaire, which was administered to EA experts; the results are analysed using a quantitative approach. The paper concludes that representing costs in the architecture is considered a good option for stakeholders to analyse and draw conclusions about the costs of their business processes, and that it enables a better display of cost information compared to unstructured representations.

João Miguens, Miguel Mira da Silva, Sérgio Guerreiro
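For readers unfamiliar with TDABC, the costing method this viewpoint is grounded in, the core arithmetic takes just two estimates: a capacity cost rate (cost of capacity supplied divided by practical capacity) and the time each activity consumes. The figures below follow the classic textbook illustration of TDABC and are not from the paper's case study.

```python
# Time-Driven Activity-Based Costing in two estimates:
#   capacity cost rate = cost of capacity supplied / practical capacity
#   activity cost      = capacity cost rate * time consumed by the activity

dept_cost_per_quarter = 560_000.0    # $, all resources supplied (staff, systems)
practical_capacity_min = 700_000.0   # usable working minutes per quarter

rate = dept_cost_per_quarter / practical_capacity_min   # $0.80 per minute

activities = {"process order": 8, "handle inquiry": 44, "run credit check": 50}
costs = {a: rate * minutes for a, minutes in activities.items()}
print(costs)  # {'process order': 6.4, 'handle inquiry': 35.2, 'run credit check': 40.0}
```

In the proposed viewpoint, the rate and the per-activity time become properties of ArchiMate elements, so exactly this calculation can be carried out over the architecture model.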
Coop-DAAB: Cooperative Attribute Based Data Aggregation for Internet of Things Applications

The deployment of IoT devices is gaining expanding interest in our daily life. Indeed, IoT networks consist of several interconnected smart, resource-constrained devices that enable advanced services. Security management in IoT is a big challenge, as personal data are shared by a huge number of distributed services and devices. In this paper, we propose a cooperative data aggregation solution based on a novel use of an attribute-based signcryption scheme (Coop-DAAB). Coop-DAAB consists of distributing the data signcryption operation between different participating entities (i.e., IoT devices). Each IoT device encrypts and signs, in a single step, the collected data with respect to a selected sub-predicate of a general access predicate before forwarding it to an aggregating entity. The latter is able to aggregate and decrypt the collected data if a sufficient number of IoT devices cooperate, without learning any personal information about any participating device. Thanks to the use of an attribute-based signcryption scheme, the authenticity of data collected by IoT devices is proved while protecting them from unauthorized access.

Sana Belguith, Nesrine Kaaniche, Mohamed Mohamed, Giovanni Russello
On Cancellation of Transactions in Bitcoin-Like Blockchains

Bitcoin-like blockchains do not envisage any specific mechanism to avoid unfairness for users. Hence, unfair situations, such as the impossibility of explicitly cancelling transactions or having transactions remain unconfirmed, reduce user satisfaction dramatically, and, as a result, users may leave the system entirely. Such a consequence would significantly impact the security and sustainability of the blockchain. Based on this observation, in this paper we focus on the explicit cancellation of transactions to improve fairness for users. We propose a novel scheme with which it is possible to cancel a transaction, whether confirmed in a block or not, under certain conditions. We show that the proposed scheme is superior to existing workarounds and is implementable for Bitcoin-like blockchains.

Önder Gürcan, Alejandro Ranchal Pedrosa, Sara Tucci-Piergiovanni
Spam Detection Approach for Cloud Service Reviews Based on Probabilistic Ontology

Online reviews provide a view of the strengths and weaknesses of products/services, influencing potential customers' purchasing decisions. The fact that anybody can leave a review gives spammers the opportunity to write spam reviews about products and services with various intents. To counter this problem, a number of approaches for detecting spam reviews have been proposed. However, to date, most of these approaches depend on rich/complete information about items/reviewers, which is not the case on Social Media Platforms (SMPs). In this paper, we consider well-known spam features from the literature, to which we add two new ones: user profile authenticity, to allow the detection of spam reviews from any SMP, and opinion deviation, to verify opinion truthfulness. To define a common model for different SMPs and to cope with incomplete information and uncertainty in spam judgment, we propose an approach based on a Review Spam Probabilistic Ontology (RSPO). The probabilistic ontology is defined using the Probabilistic Web Ontology Language (PR-OWL), and the probability distributions of review spamicity are learned automatically. The experimental results reported herein demonstrate the effectiveness and performance of the approach.

Emna Ben-Abdallah, Khouloud Boukadi, Mohamed Hammami
Formal Modelling and Verification of Cloud Resource Allocation in Business Processes

Cloud environments have been increasingly used by companies for deploying and executing business processes to enhance their performance while lowering operating costs. Nevertheless, the combination of business processes and Cloud environments is a field that needs further study, since it lacks an explicit and formal description of the resource perspective in existing business processes, especially of Cloud-related properties, namely vertical/horizontal elasticity. Therefore, this field cannot yet fully benefit from what Cloud environments can offer. Besides the lack of formalization, there is also a need for a verification method to check the correctness of allocations. In fact, without formal verification, designers can easily model erroneous allocations, which lead to runtime errors if left untreated at design time. In this work, we address the above shortcomings by proposing a formal model for the Cloud resource perspective in business processes using the Coloured Petri net formalism, which can be used to check the correctness of Cloud resource allocation at design time.

Ikram Garfatta, Kais Klai, Mohamed Graiet, Walid Gaaloul
Integrating Digital Identity and Blockchain

Blockchain is a recent technology whose importance is rapidly growing. One of its native features is pseudo-anonymity: users are referred to by (blockchain) addresses, which are hashed public keys with no link to real identities. However, when moving from the use of blockchain as a simple platform for cryptocurrencies to applications in which we want to automatize trust and transparency, there is, in general, no need for anonymity. Indeed, there are situations in which secure accountability, trust and transparency should coexist (e.g., in supply-chain management) to accomplish the goal of the application being designed. Blockchain may appear ill-suited for these cases due to its pseudo-anonymity, so an important research problem is to understand how to overcome this drawback. In this paper, we address this problem by proposing a solution that combines a public digital identity mechanism with blockchain via Identity-Based Encryption. We define the solution and show its application to a real-life case study.

Francesco Buccafurri, Gianluca Lax, Antonia Russo, Guillaume Zunino
Context-Aware Predictive Process Monitoring: The Impact of News Sentiment

Predictive business process monitoring is concerned with forecasting how a process is likely to proceed, covering questions such as what the next activity to expect is and what the remaining time until case completion is. Process prediction typically builds on machine learning techniques that leverage past process execution data. A fundamental problem of process prediction methods is data acquisition. So far, research on predictive monitoring has utilized data internal to the process. In this paper, we present a novel approach that integrates the external context of business processes into prediction methods. More specifically, we develop a technique that leverages the sentiment of online news for the task of remaining time prediction. Using our prototypical implementation, we carried out experiments that demonstrate the usefulness of this approach and allow us to draw conclusions about the circumstances in which it works best.

Anton Yeshchenko, Fernando Durier, Kate Revoredo, Jan Mendling, Flavia Santoro
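A minimal sketch of the idea of adding external context to remaining-time prediction: a news-sentiment value simply joins the usual intra-process features in an otherwise ordinary regression. The data, feature set, and model choice below are invented for illustration; the paper's technique is more elaborate.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# One row per running case: [elapsed_hours, events_so_far, news_sentiment].
# news_sentiment in [-1, 1] is the external-context feature added to the
# usual intra-process features; the values here are synthetic.
X = np.array([
    [2.0, 3,  0.8], [5.0, 6,  0.7], [1.0, 2, -0.6],
    [4.0, 5, -0.5], [3.0, 4,  0.1], [6.0, 8, -0.9],
])
y = np.array([10.0, 6.0, 20.0, 16.0, 12.0, 15.0])  # remaining hours (labels)

model = LinearRegression().fit(X, y)
case = np.array([[3.0, 4, -0.8]])  # same process progress, negative news climate
print(model.predict(case))         # remaining-time estimate using the sentiment signal
```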
Deadlock-Freeness Verification of Cloud Composite Services Using Event-B

With the emergence of the Cloud computing paradigm, interest has focused on representing and verifying Cloud architectures in a formal way in order to prevent eventual failures and deadlocks. Service composition promotes reuse, interoperability, and loosely coupled interaction. However, verifying the correctness of a composite service remains a tedious task. To ensure the correctness of a composition, critical properties such as deadlock freeness must be verified. In this paper, we propose a novel formal approach to verifying the correctness of Cloud composite services based on the Event-B method. We consider that both behavioral incompatibilities and resource conflicts may lead the execution of a composite service to failure. Event-B provides rigorous mathematical reasoning that helps build trust in developed software. A verification and validation approach combining model proofs and model checking is finally performed to check the soundness of the proposed model.

Aida Lahouij, Lazhar Hamel, Mohamed Graiet
A Formal Model for Business Process Configuration Verification Supporting OR-Join Semantics

In today’s industries, similar process models are typically reused in different application contexts, resulting in a number of process model variants that share several commonalities and exhibit some variations. Configurable process models were introduced to represent and group these variants in a generic manner. Such processes are configured for a specific context through configurable elements. Considering the large number of possible variants as well as the potentially complex configurable process, configuration may be a tedious task, and errors may lead to serious behavioral issues. Since achieving configuration correctly has become of paramount importance, analysts undoubtedly need assistance and guidance in configuring process variants. In this work, we propose a formal behavioral model based on the Symbolic Observation Graph (SOG) that finds the set of deadlock-free configuration choices while avoiding the well-known state-space explosion problem and taking loops and OR-join semantics into account. These choices are used to support business analysts in deriving deadlock-free variants.

Souha Boubaker, Kais Klai, Hedi Kortas, Walid Gaaloul
Change Detection in Event Logs by Clustering

Detecting changes in event logs that record the behavior of flexible processes is especially challenging, and process mining algorithms generate useless “spaghetti” models from such logs. As a consequence, existing approaches for change detection in these event logs can only localize a change point, which is of no avail when it comes to explaining when, why and how a process model changed and will change. The aim of this paper is to present a novel clustering technique laying the foundation for determining a variety of changes and for foreseeing changes. To this end, four algorithms have been developed. We report the results of evaluations on synthetic as well as real-life data, demonstrating the efficiency of the approach and its broad scope of application for event and sensor data.

Agnes Koschmider, Daniel Siqueira Vidal Moreira
Dimensions for Scoping e-Government Enterprise Architecture Development Efforts

Inspired by developed economies, many developing economies are adopting an enterprise architecture approach to e-government implementation in order to overcome challenges of e-government interoperability. However, when developing an enterprise architecture for a complex enterprise such as e-government, there is a need to rationally specify scope dimensions. Addressing this requires guidance from e-government maturity models, which provide insights into phasing e-government implementations, and from enterprise architecture approaches, which provide general insight into key dimensions for scoping enterprise architecture efforts. Although such insights exist, there is hardly any detailed guidance on scoping initiatives associated with developing an e-government enterprise architecture. Yet the success of such business-IT alignment initiatives is often affected by scope issues. Thus, this paper presents an intertwined procedure that draws insights from e-government maturity models and enterprise architecture frameworks to specify critical aspects of scoping e-government enterprise architecture development efforts. The procedure was validated through a field demonstration conducted in a Ugandan public entity.

Agnes Nakakawa, Flavia Namagembe, Erik H. A. Proper
Backmatter
Metadata
Title
On the Move to Meaningful Internet Systems. OTM 2018 Conferences
Editors
Hervé Panetto
Christophe Debruyne
Henderik A. Proper
Claudio Agostino Ardagna
Dumitru Roman
Robert Meersman
Copyright Year
2018
Electronic ISBN
978-3-030-02610-3
Print ISBN
978-3-030-02609-7
DOI
https://doi.org/10.1007/978-3-030-02610-3
