
2019 | Book

Business Process Management Workshops

BPM 2019 International Workshops, Vienna, Austria, September 1–6, 2019, Revised Selected Papers


About this book

This book constitutes revised papers from the twelve International Workshops held at the 17th International Conference on Business Process Management, BPM 2019, in Vienna, Austria, in September 2019:

The third International Workshop on Artificial Intelligence for Business Process Management (AI4BPM)
The third International Workshop on Business Processes Meet Internet-of-Things (BP-Meet-IoT)
The 15th International Workshop on Business Process Intelligence (BPI)
The first International Workshop on Business Process Management in the era of Digital Innovation and Transformation (BPMinDIT)
The 12th International Workshop on Social and Human Aspects of Business Process Management (BPMS2)
The 7th International Workshop on Declarative, Decision and Hybrid approaches to processes (DEC2H)
The second International Workshop on Methods for Interpretation of Industrial Event Logs (MIEL)
The first International Workshop on Process Management in Digital Production (PM-DiPro)
The second International Workshop on Process-Oriented Data Science for Healthcare (PODS4H)
The fourth International Workshop on Process Querying (PQ)
The second International Workshop on Security and Privacy-enhanced Business Process Management (SPBP)
The first International Workshop on the Value and Quality of Enterprise Modelling (VEnMo)

Each of the workshops discussed research still in progress and focused on aspects of business process management, either a particular technical aspect or a particular application domain. These proceedings present the work that was discussed during the workshops.

Table of Contents

Frontmatter

Third International Workshop on Artificial Intelligence for Business Process Management (AI4BPM)

Frontmatter
Pushing More AI Capabilities into Process Mining to Better Deal with Low-Quality Logs

The ever-increasing attention of Process Mining (PM) research to the logs of lowly structured processes and of non-process-aware systems (e.g., ERP, IoT systems) poses several challenges stemming from the lower quality of these logs, concerning the precision, completeness and abstraction with which they describe the activities performed. In such scenarios, most of the resources spent in a PM project (in terms of time and expertise) are usually devoted to trying different ways of selecting and preparing the input data for PM tasks, in order to eventually obtain significant, interpretable and actionable results. Two general AI-based strategies, partly pursued in the literature, are discussed here to improve the achievements of PM efforts on low-quality logs and to limit the amount of human intervention needed: (i) using explicit domain knowledge, and (ii) exploiting auxiliary AI tasks. The paper also provides an overview of trends, open issues and opportunities in the field.

Francesco Folino, Luigi Pontieri
Research Challenges for Intelligent Robotic Process Automation

Robotic Process Automation (RPA) is a fast-emerging automation technology in the field of Artificial Intelligence that allows organizations to automate high-volume routines. RPA tools are able to capture the execution of such routines, previously performed by a human user on the interface of a computer system, and then emulate their enactment in place of the user. In this paper, after an in-depth experimentation of the RPA tools available in the market, we developed a classification framework to categorize them on the basis of some key dimensions. Then, starting from this analysis, we derived four research challenges necessary to inject intelligence into current RPA technology.

Simone Agostinelli, Andrea Marrella, Massimo Mecella
Description Logics and Specialization for Structured BPMN

The literature contains arguments for the benefits of representing and reasoning with BPMN processes in (OWL) ontologies, but these proposals are not able to reason about their dynamics. We introduce a new Description Logic, sBPMprocessDL, to represent the behavioral semantics of (block) structured BPMN. It supports reasoning about process concepts based on their execution traces. Starting from the traditional notion of subsumption in Description Logics (including sBPMprocessDL), we further investigate the notions of specialization and inheritance, as a way to help build and abbreviate large libraries of processes in an ontology, which are needed in many applications. We also provide formal evidence for the intuition that features of structured BPMN diagrams such as AND-gates and sub-processes can provide substantial benefits for their succinctness. The same can be true when moving from a structured to an equivalent unstructured version.

Alexander Borgida, Varvara Kalokyri, Amélie Marian
Utilizing Ontology-Based Reasoning to Support the Execution of Knowledge-Intensive Processes

Supporting knowledge-intensive processes (KiPs) has been widely addressed so far and is still a subject of discussion. In this context, little attention has been paid to the ontology-driven combination of data-centric and semantic business process modeling, which finds its motivation in supporting KiPs by enabling work sharing between humans and artificial intelligence. Such approaches have characteristics that could allow support for KiPs based on the inferencing capabilities of reasoners. We confirm this by showing that reasoners are able to infer the executability of tasks. This is done by designing an inference mechanism to extend a currently researched ontology- and data-driven business process model (ODD-BP model). Further support for KiPs by the proposed inference mechanism results from its ability to infer the relevance of tasks, depending on the extent to which their execution would contribute to process progress. Thereby, it takes into account the dynamic behaviour of KiPs and helps knowledge workers to pursue their process goals.

Carsten Maletzki, Eric Rietzke, Lisa Grumbach, Ralph Bergmann, Norbert Kuhn
How Cognitive Processes Make Us Smarter

Cognitive computing describes the learning effects that computer systems can achieve through training and via interaction with human beings. Developing capabilities like this requires large datasets, user interfaces with cognitive functions, as well as interfaces to other systems so that information can be exchanged and meaningfully linked. Recently, cognitive computing has been applied within business process management (BPM), raising the question of how cognitive computing may change BPM, and even leverage some new cognitive resources. We believe that the answer to this question is linked to the promised learning effects, for which we need to explore how cognitive processes enable learning effects in BPM. To this end, we collect and analyze publications on cognitive BPM from research and practice. Based on this information, we describe the principle of cognitive process automation and discuss its practical implications with a focus on technical synergies. The results are used to build a visual research map for cognitive BPM.

Andrea Zasada
Supporting Complaint Management in the Medical Technology Industry by Means of Deep Learning

Complaints about finished products are a major challenge for companies. Particularly for manufacturers of medical technology, where product quality is directly related to public health, defective products can have a significant impact. As part of the increasing digitalization of manufacturing companies (“Industry 4.0”), more process-related data is collected and stored. In this paper, we show how this data can be used to support the complaint management process in the medical technology industry. Working together with a large manufacturer of medical products, we obtained a large dataset containing textual descriptions and assigned error sources for past complaints. We use this dataset to design, implement, and evaluate a novel approach for automatically suggesting a likely error source for future complaints based on the customer-provided textual description. Our results show that deep learning technology holds an interesting potential for supporting complaint management processes, which can be leveraged in practice.

Philip Hake, Jana-Rebecca Rehse, Peter Fettke
Resource Controllability of Workflows Under Conditional Uncertainty

An access-controlled workflow (ACWF) specifies a set of tasks that have to be executed by authorized users with respect to some partial order, in a way that satisfies all authorization constraints. Recent research focused on weak, strong and dynamic controllability of ACWFs under conditional uncertainty, showing that directional consistency is a way to generate any consistent assignment of tasks to users efficiently and without backtracking. This means that during execution we never realize that we should have chosen a different user for some previous task to avoid a constraint violation. However, dynamic controllability of ACWFs also depends on how the components of the ACWF are totally ordered. In this paper, we employ Constraint Networks Under Conditional Uncertainty (CNCUs) to overcome this limitation, and provide an encoding from ACWFs into CNCUs to exploit existing controllability checking algorithms for CNCUs. We also address the execution of a controllable ACWF, discussing which (possibly different) users are committed for the same tasks depending on what is going on (online planning).

Matteo Zavatteri, Carlo Combi, Luca Viganò
Automated Multi-perspective Process Generation in the Manufacturing Domain

Rapid advances in manufacturing technologies have spurred a tremendous focus on automation and flexibility in smart manufacturing ecosystems. The needs of customers require these ecosystems to be capable of handling product variability in a prompt, reliable and cost-effective way that exposes a high degree of flexibility. A critical bottleneck in addressing product variability in a factory is the manual design of manufacturing processes for new products, which relies heavily on domain experts. This is not only a tedious and time-consuming task but also error-prone. Our method supports domain experts by generating manufacturing processes for new products, learning the manufacturing knowledge from existing processes designed for similar products. We have successfully applied our approach in the gas turbine manufacturing domain, which resulted in a significant decrease of time and effort and an increase of quality in the final process design.

Giray Havur, Alois Haselböck, Cristina Cabanillas
Data-Driven Workflows for Specifying and Executing Agents in an Environment of Reasoning and RESTful Systems

We present an approach to specify and execute agents on Read-Write Linked Data that are given as Guard-Stage-Milestone workflows. That is, we work in an environment of semantic knowledge representation, reasoning and RESTful systems. For specifying, we present a Guard-Stage-Milestone workflow and instance ontology. For executing, we present operational semantics for this ontology. We show that despite different assumptions of this environment in contrast to the traditional environment of workflow management systems, the Guard-Stage-Milestone approach can be transferred and successfully applied on the web.

Benjamin Jochum, Leonard Nürnberg, Nico Aßfalg, Tobias Käfer
The Changing Roles of Humans and Algorithms in (Process) Matching

Historically, matching problems (including process matching, schema matching, and entity resolution) were considered semiautomated tasks in which correspondences are generated by matching algorithms and subsequently validated by human expert(s). The role of humans as validators is diminishing, in part due to the amount and size of matching tasks. Our vision for the changing role of humans in matching is divided into two main approaches, namely Humans Out and Humans In. The former questions the inherent need for humans in the matching loop, while the latter focuses on overcoming human cognitive biases via algorithmic assistance. Above all, we observe that matching requires unconventional thinking demonstrated by advanced machine learning methods to complement (and possibly take over) the role of humans in matching.

Roee Shraga, Avigdor Gal

Third International Workshop on Business Processes Meet Internet-of-Things (BP-Meet-IoT)

Frontmatter
Integrating IoT with BPM to Provide Value to Cattle Farmers in Australia

This research study covers the journey to date of introducing BPM into the increasing IoT implementations across cattle farming operations in Australia, to generate value through improved decision support. The research is supported by Meat and Livestock Australia (MLA), who represent over 50 000 cattle, sheep and goat producers, by Hitachi Consulting, and by individual farming businesses ranging from large corporates to family-owned farms. The study documents the challenges of deploying IoT devices, especially across extensive farms in remote areas, and the importance of looking beyond numerous dashboards on various websites and mobile applications to an integrated control centre, with BPM at its core, driving decision support. Capabilities of Business Process Management Systems have been leveraged to drive workflow and process orchestration, to extract the right data and to run decision models. Learnings from this research study can be applied to any industry or value chain wishing to extract greater value from the opportunities provided by IoT.

Owen Keates
Enabling the Discovery of Manual Processes Using a Multi-modal Activity Recognition Approach

The analysis of business processes using process mining requires structured log data. Regarding manual activities, this data can be generated from sensor data acquired from the Internet of Things. The main objective of this paper is the development and evaluation of an approach which recognizes and logs manually performed activities, enabling the application of established process discovery methods. A system was implemented which uses a body area network, image data of the process environment, and feedback from the executing workers in case of uncertainties during detection. Both feedback and image data are acquired and processed during process execution. In a case study in a laboratory environment, the system was evaluated using an example process. The implemented approach shows that the inclusion of image data of the environment and of user feedback in ambiguous situations during recognition generates log data that represent actual process behavior well.

Adrian Rebmann, Andreas Emrich, Peter Fettke

15th International Workshop on Business Process Intelligence (BPI)

Frontmatter
Impact-Aware Conformance Checking

Alignment-based conformance checking techniques detect and quantify deviations of process execution from expected behavior as depicted in process models. However, often when deviations occur, additional actions are needed to remedy and restore the process state. These would seem to further reduce conformance according to existing measures. This paper proposes a conformance checking approach which considers the response to unexpected deviations during process execution, by analyzing the data updates involved and their impact on the expected behavior. We evaluated our approach in an experimental study, whose results show that our approach better captures adapted behavior in response to deviations, as compared to standard fitness measurement.

Arava Tsoury, Pnina Soffer, Iris Reinhartz-Berger
Encoding Conformance Checking Artefacts in SAT

Conformance checking strongly relies on the computation of artefacts, which enable reasoning on the relation between observed and modeled behavior. This paper shows how important conformance artefacts like alignments, anti-alignments or even multi-alignments, defined over the edit distance, can be computed by encoding the problem as a SAT instance. From a general perspective, the work advocates for a unified family of techniques that can compute conformance artefacts in the same way. The prototype implementation of the techniques presented in this paper shows capabilities for dealing with some of the current benchmarks, and potential for the near future when optimizations similar to the ones in the literature are incorporated.

Mathilde Boltenhagen, Thomas Chatain, Josep Carmona
Performance Mining for Batch Processing Using the Performance Spectrum

Performance analysis from process event logs is a central element of business process management and improvement. Established performance analysis techniques aggregate time-stamped event data to identify bottlenecks or to visualize process performance indicators over time. These aggregation-based techniques are not able to detect and quantify the performance of time-dependent performance patterns such as batches. In this paper, we propose a first technique for mining performance features from the recently introduced performance spectrum. We present an algorithm to detect batches from event logs even in case of batches overlapping with non-batched cases, and we propose several measures to quantify batching performance. Our analysis of public real-life event logs shows that we can detect batches reliably, that batching performance differs significantly across processes and across activities within a process, and that our technique even allows detecting effective changes to batching policies regarding consistency of processing.

Eva L. Klijn, Dirk Fahland
LIProMa: Label-Independent Process Matching

The identification of best practices is an important methodology to improve the execution of processes. To determine those best practices, process mining techniques analyze process entities and model specific views to highlight points for improvement. A major requirement in most approaches is a common activity space, so events can be related directly. However, there are instances that provide multiple activity universes, where processes from different sources need to be compared. For example, in corporate finance, strategic operations like mergers or acquisitions cause processes with similar workflows but different descriptions to be merged. In this work we develop LIProMa, a method to compare processes based on their temporal flow of action sequences by solving the correlated transportation problem. Activity labels are purposely omitted in the comparison. Hence, our novel method provides a similarity measure which is capable of comparing processes with diverging labels, often caused by distributed executions and varying operators. Therefore, it works orthogonally to conventional methods, which rely on similarity between activity labels. Instead, LIProMa establishes a correspondence between activities of two processes by focusing on temporal patterns.

Florian Richter, Ludwig Zellner, Imen Azaiz, David Winkel, Thomas Seidl
A Generic Approach for Process Performance Analysis Using Bipartite Graph Matching

The field of process mining focuses on the analysis of event data, generated and captured during the execution of processes within companies. The majority of existing process mining techniques focuses on process discovery, i.e., automated (data-driven) discovery of a descriptive process model of the process, and conformance and/or compliance checking. However, to effectively improve processes, a detailed understanding of differences in the actual performance of a process, as well as the underlying causing factors, is needed. Surprisingly, little research focuses on generic techniques for process-aware, data-driven performance measurement, analysis and prediction. Therefore, in this paper, we present a generic approach, which allows us to compute the average performance between arbitrary groups of activities active in a process. In particular, the technique requires no a priori knowledge of the process, and thus does not suffer from representational bias induced by any underlying process representation. Our experiments show that our approach is scalable to large cases and especially robust to recurrent activities in a case.

Chiao-Yun Li, Sebastiaan J. van Zelst, Wil M. P. van der Aalst
Extracting a Collaboration Model from VCS Logs Based on Process Mining Techniques

A precise overview of how software developers collaborate on code could reveal new insights such as indispensable resources, potential risks and better team awareness. Version control system logs keep track of what team members worked on and when exactly this work took place. Since it is possible to derive collaborations from this information, these logs form a valid data source to extract this overview from. This concept shows many similarities with how process mining techniques can extract process models from execution logs. The fuzzy mining algorithm [5] in particular holds many useful ideas and metrics that can also be applied to our problem case. This paper describes the development of a tool that extracts a collaboration graph from a version control system log. It explores to what extent fuzzy mining techniques can be incorporated to construct and simplify the visualization. A demonstration of the tool on a real-life version control system log is given. The paper concludes with a discussion of future work.

Leen Jooken, Mathijs Creemers, Mieke Jans
Finding Uniwired Petri Nets Using eST-Miner

In process discovery, the goal is to find, for a given event log, the model describing the underlying process. While process models can be represented in a variety of ways, in this paper we focus on a subclass of Petri nets. In particular, we describe a new class of Petri nets called uniwired Petri nets and first results on their expressiveness. They provide a balance between simple and readable process models on the one hand, and the ability to model complex dependencies on the other hand. We then present an adaptation of our eST-Miner aiming to find such Petri nets efficiently. Constraining ourselves to uniwired Petri nets allows for a massive decrease in computation time compared to the original algorithm, while still discovering complex control-flow structures such as long-term dependencies. Finally, we evaluate and illustrate the performance of our approach by various experiments.

Lisa Luise Mannel, Wil M. P. van der Aalst
Discovering Process Models from Uncertain Event Data

Modern information systems are able to collect event data in the form of event logs. Process mining techniques allow us to discover a model from event data, to check the conformance of an event log against a reference model, and to perform further process-centric analyses. In this paper, we consider uncertain event logs, where data is recorded together with explicit uncertainty information. We describe a technique to discover a directly-follows graph from such event data which retains information about the uncertainty in the process. We then present experimental results of performing inductive mining over the directly-follows graph to obtain models representing the certain and uncertain parts of the process.

Marco Pegoraro, Merih Seran Uysal, Wil M. P. van der Aalst
Predictive Process Monitoring in Operational Logistics: A Case Study in Aviation

The research area of process mining concerns itself with knowledge discovery from event logs, containing recorded traces of executions as stored by process aware information systems. Over the past decade, research in process mining has increasingly focused on predictive process monitoring to provide businesses with valuable information in order to identify violations, deviance and delays within a process execution, enabling them to carry out preventive measures. In this paper, we describe a practical case in which both exploratory and predictive process monitoring techniques were developed to understand and predict completion times of a luggage handling process at an airport. From a scientific perspective, our main contribution relates to combining a random forest regression model and a Long Short-Term Memory (LSTM) model into a novel stacked prediction model, in order to accurately predict completion time of cases.

Björn Rafn Gunnarsson, Seppe K. L. M. vanden Broucke, Jochen De Weerdt
A Survey of Process Mining Competitions: The BPI Challenges 2011–2018

In recent years, several advances in the field of process mining, and even data science in general, have come from competitions where participants are asked to analyze a given dataset or event log. Besides providing significant insights about a specific business process, these competitions have also served as a valuable opportunity to test a wide range of process mining techniques in a setting that is open to all participants, from academia to industry. In this work, we conduct a survey of process mining competitions, namely the Business Process Intelligence Challenge, from 2011 to 2018. We focus on the methods, tools and techniques that were used by all participants in order to analyze the published event logs. From this survey, we develop a comparative analysis that allows us to identify the most popular tools and techniques, and to realize that data mining and machine learning are playing an increasingly important role in process mining competitions.

Iezalde F. Lopes, Diogo R. Ferreira

First International Workshop on Business Process Management in the Era of Digital Innovation and Transformation: New Capabilities and Perspectives (BPMinDIT)

Frontmatter
The Power of the Ideal Final Result for Identifying Process Optimization Potential

Modern information and telecommunication technologies provide possibilities not only for improving existing business processes; they can also make processes redundant or enable a move to new business models. This paper suggests a method for analysing the potential for process improvement and organizational change. Instead of starting with a visualisation of the current process, the starting point is a visualisation of customers’ steps in the context of their use of a service or product.

Ralf Laue
Understanding the Need for New Perspectives on BPM in the Digital Age: An Empirical Analysis

The emergence of digital technology is substantially changing the way we communicate and collaborate. In recent years, groundbreaking business model innovations have disrupted industries by the dozen, shifting previously unchallenged global players out of the market within the shortest time. Although business process management (BPM) is often identified as a main driver for organizational efficiency in this context, there is little understanding of how its methods and tools can successfully navigate organizations through the uncertainty brought by today’s highly dynamic market environments. However, we see more and more contributions emerging that question the timeliness of BPM due to its lack of context sensitivity. In this context, the inflexibility and over-functionalization of hierarchical management structures is often referred to as the primary reason why organizations fail to achieve the flexibility, agility, and responsiveness needed to address today’s entrepreneurial challenges. In this research paper, we question whether the contemporary BPM body of knowledge is still sufficient to equip organizations with the competitive advantages and operational excellence that have long yielded sustainable growth and business success. In fact, our empirical observations indicate that the vertical management of functional units inherent to current BPM is increasingly being replaced by adaptive and context-sensitive management approaches drawing on agile methodologies and modular process improvements. From a total of 17 interviews, we derive five criteria that the respondents consider essential to strengthen the position of BPM in the digital age.

Florian Imgrund, Christian Janiesch
The Use of Distance Metrics in Managing Business Process Transfer - An Exploratory Case Study

Business process transfer refers to the innovation of business processes by taking a business process from one organization to another. Such transfer is a prominent technique for improving operations within larger companies, which transfer processes from one branch or subsidiary to another. The success of such transfers can be affected by several factors. A central challenge is the distance between the organizational units, i.e. the source organization and the target organization. In this context, distance refers to the extent to which the source organization and the target organization differ, e.g. geographically, culturally or in organizational terms. The factors of distance and their measurement in an intra-organizational context still represent a gap in the literature. For this reason, a case study was conducted in a Central European financial institution, in which 14 persons from different hierarchical levels, departments and organizational units were interviewed. The study identified eight dimensions of distance, which we integrate into a model for measuring the distance of such intra-company transfers.

Anna-Maria Exler, Jan Mendling, Alfred Taudes

12th International Workshop on Social and Human Aspects of Business Process Management (BPMS2)

Frontmatter
Mining Personal Service Processes: The Social Perspective

The process of digital transformation opens more and more domains to data-driven analysis. This also applies to Process Mining of service processes. This work investigates the use of Process Mining in the domain of Personal Services, focusing on the Social (or Organizational) Perspective. Documenting research in progress, we show problems of “traditional” organizational mining approaches on knowledge-intensive service processes and possible solutions. Furthermore, new objectives, and thus approaches, for Social Mining in the context of Process Mining are discussed, addressing the recently increasing focus on workforce well-being.

Birger Lantow, Julian Schmitt, Fabienne Lambusch
Supporting ED Process Redesign by Investigating Human Behaviors

Human behaviors play a very relevant role in many Knowledge-Intensive Processes (KIPs), such as healthcare processes, where human factors are predominantly involved. Accordingly, health process performance, and particularly patient satisfaction, is highly affected by practitioners’ behaviors and interactions. Leveraging the novelty and potential of wearable sensors, this research aims to investigate behavioral factors affecting patient satisfaction in the Emergency Department (ED), with the final goal of supporting ED process (re-)design from a holistic perspective. 51 patients and 135 practitioners were systematically monitored in a real emergency department using Sociometric badges. Preliminary results show that patient satisfaction is greatly influenced by behavioral factors. Specifically, the ED doctors’ behaviors seem to play the most important role in the formation of patient perceptions during the ED process. Finally, the findings provide researchers, practitioners, and health managers with indications for designing and/or improving the ED service process.

Alessandro Stefanini, Davide Aloini, Peter Gloor, Federica Pochiero
The Potential of Workarounds for Improving Processes

Several studies have hinted how the study of workarounds can help organizations to improve business processes. Through a systematic literature review of 70 articles that discuss workarounds by information systems users, we aim to unlock this potential. Based on a synthesis of recommendations mentioned in the reviewed studies, we describe five key activities that help organizations to deal with workarounds. We contribute to the IS literature by (1) providing an overview of concrete recommendations for managing workarounds and (2) offering a background for positioning new research activities on the subject. Organizations can apply these tools directly to turn their knowledge on workarounds into organizational improvement.

Iris Beerepoot, Inge van de Weerd, Hajo A. Reijers

7th International Workshop on Declarative, Decision and Hybrid Approaches to Processes (DEC2H)

Frontmatter
Putting Decisions in Perspective

The advent of the OMG Decision Model and Notation (DMN) standard has revived interest, both from academia and industry, in decision management and its relationship with business process management. Several techniques and tools for the static analysis of decision models have been brought forward, taking advantage of the trade-off between expressiveness and computational tractability offered by the DMN S-FEEL language. In this short paper, I argue that decisions have to be put in perspective, that is, understood and analyzed within their surrounding organizational boundaries. This brings new challenges that, in turn, require novel, advanced analysis techniques. Using a simple but illustrative example, I consider in particular two relevant settings: decisions interpreted in the presence of background, structural knowledge of the domain of interest, and (data-aware) business processes routing process instances based on decisions. Notably, the latter setting is of particular interest in the context of multi-perspective process mining. I report on how we successfully tackled key analysis tasks in both settings through a balanced combination of conceptual modeling, formal methods, and knowledge representation and reasoning.

Marco Montali
DMN for Data Quality Measurement and Assessment

Data quality assessment aims to evaluate the suitability of a dataset for an intended task. The extensive literature on data quality describes various methodologies for assessing data quality by means of data profiling techniques applied to whole datasets. Our investigation aims to provide solutions for automatically assessing the level of quality of the individual records of a dataset, where data profiling tools do not provide an adequate level of information. Since it is often easier to describe when a record has sufficient quality than to calculate a qualitative indicator, we propose a semi-automatic, business-rule-guided data quality assessment methodology for every record. This involves first listing the business rules that describe the data (data requirements), then those describing how to produce measures (business rules for data quality measurement), and finally those defining how to assess the level of data quality of a dataset (business rules for data quality assessment). The main contribution of this paper is the adoption of the OMG standard DMN (Decision Model and Notation) to support the description of data quality requirements and their automatic assessment using existing DMN engines.

Álvaro Valencia-Parra, Luisa Parody, Ángel Jesús Varela-Vaca, Ismael Caballero, María Teresa Gómez-López
Modeling Rolling Stock Maintenance Logistics at Dutch Railways with Declarative Business Artifacts

A case study is presented in which the maintenance logistics process for rolling stock (train units) at Dutch Railways is modeled. This decision-intensive process has a high complexity due to its many steps and the different actors involved. Declarative Business Artifacts are used as a modeling technique to master this complexity. We discuss the key elements of the artifact model and focus on encountered issues and lessons learned. The artifact model is compared with an earlier developed agent-based model of the maintenance logistics process. The case study shows that declarative business artifacts are a promising technique for structuring complex decision-intensive processes.

Erik Smit, Rik Eshuis
Applying Business Architecture Principles with Domain-Specific Ontology for ACM Modelling: A Building Construction Project Example

Adaptive Case Management (ACM) empowers knowledge workers in any industry by utilizing their experience in the execution of business cases. It is designed to handle non-repetitive processes, where the course of action has to be decided for each case individually, based on an assessment of the current situation. ACM uses domain-specific data and the analysis of rules to provide the relevant information to business users at the right time. This supports their actions and decision making without dictating predefined workflows, which are difficult or even impossible to define for knowledge-intensive work and prevent users from efficiently achieving their goals. Solutions for ACM implementations need to focus on business requirements to support business users, rather than limit them by extending existing tools (such as content management systems, BPM-based software suites, or collaboration tools) with ad hoc development; such tools lack the ability to adapt flexibly to business-level changes without significant IT development effort. To tackle these challenges, this paper proposes a novel approach that applies business architecture concepts to the definition of ACM applications, in combination with domain-specific ontologies and business rules. Business domain analysts use the domain-specific ontology to describe the complete value stream with its goals, activities, and rules, expressed in a formalized natural business language. The resulting business information model is mapped to an underlying data model and instantly enacted during case execution, so that adaptations become immediately available to business users. Behavioral business rules guide business users to guarantee case compliance, which facilitates easy and rapid adaptation of business applications. We apply this approach to a real-life example from the construction industry, where several parties from different trades have to collaborate in heterogeneous environments.

Antonio Manuel Gutiérrez Fernández, Freddie Van Rijswijk, Christoph Ruhsam, Ivan Krofak, Klaus Kogler, Anna Shadrina, Gerhard Zucker
Checking Compliance in Data-Driven Case Management

Case management approaches address the special requirements of knowledge workers. In fragment-based case management (fCM), small structured parts are modelled and loosely coupled through data dependencies, which can be freely combined at run-time. When executing business processes, organizations must adhere to regulations, laws, company guidelines, etc. Business process compliance comprises methods to verify designed and executed business processes against certain rules. While design-time compliance checking works well for structured process models, flexible knowledge-intensive processes have rarely been considered, despite increasing interest in academia and industry. In this paper, we (i) present formal execution semantics of fCM models using Petri nets, also covering concurrently running fragment instances and case termination; (ii) apply model checking to investigate compliance with temporal logic rules; and (iii) provide an implementation based on the open-source case modeler Gryphon and the free model checker LoLA.

Adrian Holfter, Stephan Haarmann, Luise Pufahl, Mathias Weske

Second International Workshop on Methods for Interpretation of Industrial Event Logs (MIEL)

Frontmatter
Graph Summarization for Computational Sensemaking on Complex Industrial Event Logs

Complex event logs in industrial applications can often be represented as graphs in order to conveniently model their multi-relational complex characteristics. Then, appropriate methods for analysis and mining are required, in order to provide insights that cover the relevant analytical questions and are understandable to humans. This paper presents a framework for such computational sensemaking on industrial event logs utilizing graph summarization techniques. We demonstrate the efficacy of the proposed approach on a real-world industrial dataset.

Stefan Bloemheuvel, Benjamin Kloepper, Martin Atzmueller
Capturing Human-Machine Interaction Events from Radio Sensors in Industry 4.0 Environments

In manufacturing environments, human workers interact with increasingly autonomous machinery. To ensure workspace safety and production efficiency during human-robot cooperation, continuous and accurate tracking and perception of workers’ activities is required. The RadioSense project intends to advance the state of the art in sensing and perception for the next generation of manufacturing workspaces. In this paper, we describe our ongoing efforts towards multi-subject recognition cases with multiple persons conducting several simultaneous activities. Perturbations induced by moving bodies/objects on the electromagnetic wavefield can be processed for environmental perception by leveraging next generation (5G) New Radio (NR) technologies, including MIMO systems, high-performance edge-cloud computing, and novel (or custom-designed) deep learning tools.

Stephan Sigg, Sameera Palipana, Stefano Savazzi, Sanaz Kianoush

First International Workshop on Process Management in Digital Production (PM-DiPro)

Frontmatter
BPMN and DMN for Easy Customizing of Manufacturing Execution Systems

Manufacturing execution systems (MES) are the central integration point for the shop floor. They collect information about customer orders from enterprise resource planning (ERP) systems, calculate the best plans for production orders and their assignment to machines, and monitor the execution of these plans. The MES is therefore also the central data hub on the shop floor and communicates progress back to the ERP system. However, each company and production facility is a bit different, and it is hard to find a good compromise between standard products with a long customization period, industry-specific solutions with less customization need, and company-specific solutions with long development times. This paper proposes the use of business process management (BPM) as a means for easy graphical customization of production processes, leading to workflows that are immediately executable (zero-code development) or need very few code additions to become executable (low-code development).

René Peinl, Ornella Perak

Second International Workshop on Process-Oriented Data Science for Healthcare (PODS4H)

Frontmatter
Analysis and Optimization of a Sepsis Clinical Pathway Using Process Mining

In this work, we propose and apply a methodology for the management and optimization of clinical pathways using process mining. We adapt the Clinical Pathway Analysis Method (CPAM) by taking healthcare providers’ needs into consideration. We successfully applied the methodology to the sepsis treatment of a major Brazilian hospital. Using data extracted from the hospital information system, a total of 5,184 deviations in the execution of the sepsis clinical pathway were discovered and categorized into 43 different types. We identified the process as it was actually executed, two bottlenecks, and significant differences in performance for cases that deviated from the prescribed clinical pathway. Furthermore, factors such as patient age, gender, and type of infection were shown to affect performance. The analysis results were validated by an expert panel of clinical professionals and verified to provide valuable, actionable insights. Based on these insights, we were able to suggest optimization points in the sepsis clinical pathway.

Ricardo Alfredo Quintano Neira, Bart Franciscus Antonius Hompes, J. Gert-Jan de Vries, Bruno F. Mazza, Samantha L. Simões de Almeida, Erin Stretton, Joos C. A. M. Buijs, Silvio Hamacher
Understanding Undesired Procedural Behavior in Surgical Training: The Instructor Perspective

In recent years, a new approach has been proposed to incorporate the process perspective into surgical procedural training through process mining. In this approach, training executions are recorded and later used to generate end-to-end process models describing the students’ executions. Although these end-to-end models are useful for the students, they do not fully capture the needs of the instructors of the training programs. This article proposes a taxonomy of activities for surgical process models, analyzes the specific questions instructors have about student execution and undesired procedural behavior, and proposes the Procedural Behavior Instrument, an instrument to answer them in an easy-to-interpret way. A real case was used to test the approach, and a preliminary validation was performed by a medical expert.

Victor Galvez, Cesar Meneses, Gonzalo Fagalde, Jorge Munoz-Gama, Marcos Sepúlveda, Ricardo Fuentes, Rene de la Fuente
Towards Privacy-Preserving Process Mining in Healthcare

Process mining has been successfully applied in the healthcare domain and has helped to uncover various insights for improving healthcare processes. While the benefits of process mining are widely acknowledged, many people rightfully have concerns about irresponsible use of personal data. Healthcare information systems contain highly sensitive information, and healthcare regulations often require protection of the privacy of such data. The need to comply with strict privacy requirements may result in decreased data utility for analysis. Although data privacy issues did not, until recently, receive much attention in the process mining community, several privacy-preserving data transformation techniques have been proposed in the data mining community. Many similarities between data mining and process mining exist, but there are key differences that make privacy-preserving data mining techniques unsuitable for anonymising process data. In this article, we analyse data privacy and utility requirements for healthcare process data and assess the suitability of privacy-preserving data transformation methods for anonymising healthcare data. We also propose a framework for privacy-preserving process mining that can support healthcare process mining analyses.

Anastasiia Pika, Moe T. Wynn, Stephanus Budiono, Arthur H. M. ter Hofstede, Wil M. P. van der Aalst, Hajo A. Reijers
Comparing Process Models for Patient Populations: Application in Breast Cancer Care

Processes in organisations such as hospitals may deviate from intended standard processes due to unforeseeable events and the complexity of the organisation. For hospitals, knowledge of actual patient streams for patient populations (e.g., severe or non-severe cases) is important for quality control and improvement. Process discovery from event data in electronic health records can shed light on patient flows, but comparing them across different populations is cumbersome and time-consuming. In this paper, we present an approach for the automatic comparison of process models extracted from events in electronic health records. Concretely, we propose to compare processes for different patient populations by cross-log conformance checking, and by standard graph similarity measures obtained from the directed graph underlying the process model. Results from a case study on breast cancer care show that the average fitness and precision of cross-log conformance checks provide good indications of process similarity and can therefore guide the direction of further investigation for process improvement.

Francesca Marazza, Faiza Allah Bukhsh, Onno Vijlbrief, Jeroen Geerdink, Shreyasi Pathak, Maurice van Keulen, Christin Seifert
Evaluating the Effectiveness of Interactive Process Discovery in Healthcare: A Case Study

This work investigates the effectiveness and suitability of Interactive Process Discovery, an innovative process mining technique, for modeling healthcare processes in a data-driven manner. Interactive Process Discovery allows analysts to discover the process model interactively, exploiting their domain knowledge along with the event log. A comparative evaluation against traditional automated discovery techniques is carried out to assess the potential benefits that domain knowledge brings in improving both the quality and the understandability of the process model. The comparison is performed using a real dataset from an Italian hospital, in collaboration with the medical staff. Preliminary results show that, compared to automated discovery techniques, Interactive Process Discovery yields a process model that is accurate and fully compliant with clinical guidelines. Discovering an accurate and comprehensible process model is an important starting point for subsequent process analysis and improvement steps, especially in complex environments such as healthcare.

Elisabetta Benevento, Prabhakar M. Dixit, M. F. Sani, Davide Aloini, Wil M. P. van der Aalst
Developing Process Performance Indicators for Emergency Room Processes

In a healthcare environment, it is essential to manage the emergency room process, given its connection to the quality of care. In managing clinical operations, quantitative process performance analysis is typically performed with process mining, and there have been several approaches to utilizing process mining in emergency room process analysis. These studies provide comprehensive methodologies for analyzing emergency room processes using process mining; however, performance indicators for directly assessing emergency room processes are lacking. To overcome this limitation, this paper proposes a framework of process performance indicators for emergency rooms. The proposed framework starts with the devil’s quadrangle, i.e., time, cost, quality, and flexibility. Based on these four perspectives, we suggest specific process performance indicators with formal definitions. To validate the applicability of this research, we present a case study with real-life clinical data collected from a tertiary hospital in Korea.

Minsu Cho, Minseok Song, Seok-Ran Yeom, Il-Jae Wang, Byung-Kwan Choi
Interactive Data Cleaning for Process Mining: A Case Study of an Outpatient Clinic’s Appointment System

Hospitals are becoming increasingly aware of the need to improve their processes, and data-driven approaches, such as process mining, are gaining attention. When applying process mining techniques in practice, it is widely recognized that real-life data tends to suffer from data quality problems. Consequently, thorough data quality assessment and data cleaning are required. This paper proposes an interactive data cleaning approach for process mining. It encompasses both data-based and discovery-based data quality assessment, showing that the two are complementary. To illustrate some key elements of the proposed approach, a case study of an outpatient clinic’s appointment system is considered.

Niels Martin, Antonio Martinez-Millana, Bernardo Valdivieso, Carlos Fernández-Llatas
Clinical Guidelines: A Crossroad of Many Research Areas. Challenges and Opportunities in Process Mining for Healthcare

Clinical guidelines, medical protocols, and other healthcare indications cover a significant slice of physicians’ daily routine, as they are used to support clinical choices, also with relevant legal implications. On the one hand, informatics has proved to be a valuable means for providing formalisms, methods, and approaches to extend clinical guidelines to better support the work performed in the healthcare domain. On the other hand, the different perspectives that can be considered for addressing similar problems have led to an undeniable fragmentation of the field. It may be argued that this fragmentation has not helped to produce practical, accepted, and widely adopted solutions to assist physicians. As in process mining in general, Process Mining for Healthcare (PM4HC) inherits the requirement of conformance checking, which aims to measure the adherence of a particular (discovered or known) process to a given set of data, or vice versa. Due to the intuitive similarities in terms of challenges and problems between conformance checking and clinical guidelines, one may expect the fragmentation issue to naturally arise in the conformance checking field as well. This position paper is a first step in the direction of embracing the experience, lessons learnt, paradigms, and formalisms derived from the clinical guidelines challenge. We argue that such a new focus, joint with the ever-growing popularity of and interest in PM4HC, might allow more physicians to make the big jump from user to protagonist, becoming more motivated and proactive in building a strong multidisciplinary community.

Roberto Gatta, Mauro Vallati, Carlos Fernandez-Llatas, Antonio Martinez-Millana, Stefania Orini, Lucia Sacchi, Jacopo Lenkowicz, Mar Marcos, Jorge Munoz-Gama, Michel Cuendet, Berardino de Bari, Luis Marco-Ruiz, Alessandro Stefanini, Maurizio Castellano
Predicting Outpatient Process Flows to Minimise the Cost of Handling Returning Patients: A Case Study

We describe an application of predictive process monitoring at an outpatient clinic in a large hospital. A model is created to predict which patients will wrongly refer to the outpatient clinic, instead of directly to other departments, when returning for treatment after an initial visit. Four variables are identified to minimise the cost of handling these patients: the cost of giving appropriate guidance to them, the cost of handling patients taking a non-compliant flow by wrongly referring to the outpatient clinic, and the false positive and false negative rates of the predictive model adopted. The latter two capture, respectively, the situations in which patients did not receive guidance when they should have, or were guided even though it was not necessary. Using these variables, a cost model is built to identify which combinations of process intervention/redesign options and predictive models are likely to minimise the cost overhead of handling returning patients.

Marco Comuzzi, Jonghyeon Ko, Suhwan Lee
A Data Driven Agent Elicitation Pipeline for Prediction Models

Agent-based simulation is a method for simulating complex systems by breaking them down into autonomous interacting agents. However, to create an agent-based simulation for a real-world environment it is necessary to carefully design the agents. In this paper we demonstrate the elicitation of simulation agents from real-world event logs using process mining methods. Collection and processing of event data from a hospital emergency room setting enabled real-world event logs to be synthesized from observational and digital data and used to identify and delineate simulation agents.

John Bruntse Larsen, Andrea Burattin, Christopher John Davis, Rasmus Hjardem-Hansen, Jørgen Villadsen
A Solution Framework Based on Process Mining, Optimization, and Discrete-Event Simulation to Improve Queue Performance in an Emergency Department

Long waiting lines are a frequent problem in hospitals’ Emergency Departments and can be critical to the patient’s health and experience. This study proposes a three-stage solution framework to address this issue: Process Identification, Process Optimization, and Process Simulation. In the first stage, we use descriptive statistics to understand the data and obtain indicators, as well as process mining techniques to identify the main process flow; the optimization stage applies a mathematical model to provide an optimal physician schedule that reduces waiting times; and, finally, simulation is performed to compare the original process flow and scheduling with the optimized solution. We applied the proposed solution framework to a case study in a Brazilian private hospital. The final dataset comprised 65,407 emergency cases, corresponding to 399,631 event log entries over a 13-month period. The main metrics observed were the waiting time before the first general assessment by a physician and the volume of patients within the system per hour and day of the week. When simulated, the optimal physician schedule resulted in a more than 40% reduction in waiting times and queue length, a 29.3% decrease in queue occurrences, and a 54.2% lower frequency of large queues.

Bianca B. P. Antunes, Adrian Manresa, Leonardo S. L. Bastos, Janaina F. Marchesi, Silvio Hamacher
A Multi-level Approach for Identifying Process Change in Cancer Pathways

An understudied challenge within process mining is the area of process change over time. This is a particular concern in healthcare, where patterns of care emerge and evolve in response to individual patient needs and through complex interactions between people, process, technology and changing organisational structure. We propose a structured approach to analyse process change over time suitable for the complex domain of healthcare. Our approach applies a qualitative process comparison at three levels of abstraction: a holistic perspective summarizing patient pathways (process model level), a middle level perspective based on activity sequences for individuals (trace level), and a fine-grained detail focus on activities (activity level). Our aim is to identify points in time where a process changed (detection), to localise and characterise the change (localisation and characterisation), and to understand process evolution (unravelling). We illustrate the approach using a case study of cancer pathways in Leeds Cancer Centre where we found evidence of agreement in process change identified at the process model and activity levels, but not at the trace level. In the experiment we show that this qualitative approach provides a useful understanding of process change over time. Examining change at the three levels provides confirmatory evidence of process change where perspectives agree, while contradictory evidence can lead to focused discussions with domain experts. The approach should be of interest to others dealing with processes that undergo complex change over time.

Angelina Prima Kurniati, Ciarán McInerney, Kieran Zucker, Geoff Hall, David Hogg, Owen Johnson
Adopting Standard Clinical Descriptors for Process Mining Case Studies in Healthcare

Process mining can provide greater insight into medical treatment processes and organizational processes in healthcare. A review of the case studies in the literature has identified several different common aspects for comparison, which include methodologies, algorithms or techniques, medical fields and healthcare specialty. However, from a medical perspective, the clinical terms are not reported in a uniform way and do not follow a standard clinical coding scheme. Further, the characteristics of the event log data are not always described. In this paper, we identified 38 clinically-relevant case studies of process mining in healthcare published from 2016 to 2018 that described the tools, algorithms and techniques utilized, and details on the event log data. We then assigned the clinical aspects of patient encounter environment, clinical specialty and medical diagnoses using the standard clinical coding schemes SNOMED CT and ICD-10. The potential outcomes of adopting a standard approach for describing event log data and classifying medical terminology using standard clinical coding schemes are discussed.

Emmanuel Helm, Anna M. Lin, David Baumgartner, Alvin C. Lin, Josef Küng

4th International Workshop on Process Querying (PQ)

Frontmatter
Complex Event Processing for Event-Based Process Querying

Process querying targets the filtering and transformation of business process representations, such as event data recorded by information systems. This paper argues for the application of models and methods developed in the general field of Complex Event Processing (CEP) to process querying. Specifically, if event data is generated continuously during process execution, CEP techniques may help to filter and transform process-related information by evaluating queries over event streams. This paper motivates the use of such event-based process querying and discusses common challenges and techniques for the application of CEP to process querying, focusing in particular on event-activity correlation, automated query derivation, and diagnostics for query matches.

Han van der Aa
Storing and Querying Multi-dimensional Process Event Logs Using Graph Databases

Process event data is usually stored either in a sequential process event log or in a relational database. While the sequential, single-dimensional nature of event logs aids querying for (sub)sequences of events based on temporal relations such as “directly/eventually-follows”, it does not support querying multi-dimensional event data of multiple related entities. Relational databases allow storing multi-dimensional event data but existing query languages do not support querying for sequences or paths of events in terms of temporal relations. In this paper, we report on an exploratory case study to store multi-dimensional event data in labeled property graphs and to query the graphs for structural and temporal relations combined. Our main finding is that event data over multiple entities and identifiers with complex relationships can be stored in graph databases in a systematic way. Typical and advanced queries over such multi-dimensional event data can be formulated in the query language Cypher and can be executed efficiently, giving rise to several new research questions.

Stefan Esser, Dirk Fahland

Second International Workshop on Security and Privacy-Enhanced Business Process Management (SPBP)

Frontmatter
A Legal Interpretation of Choreography Models

Model-driven smart contract development approaches are gaining in importance, since one of the most popular realizations, blockchain-based smart contracts, is prone to coding errors. However, these modeling approaches predominantly focus on operational aspects of smart contracts, neglecting the legal perspective as manifested by deontic concepts such as obligations or permissions. In this paper, we explore an approach to connecting existing models to Legal Ontologies (LOs), using the example of choreography models, effectively interpreting them as legal contracts. We show how the execution of a choreography imposes sequences of legal states, and discuss consequences and limitations.

Jan Ladleif, Mathias Weske
Provenance Holder: Bringing Provenance, Reproducibility and Trust to Flexible Scientific Workflows and Choreographies

Process adaptation has been a focus of the BPM community, also within interdisciplinary research at the interface with eScience. In eScience, scientists, software developers, and analysts across a range of disciplines model, create, and execute different types of data-driven experiments in an exploratory, trial-and-error style. Supporting this style of experimenting with software systems has been enabled in existing work by interactive workflow management systems. The main principles and techniques used relate to the adaptation of workflows and choreographies, as well as to systems integration and software architecture improvement. With the increasing need for collaboration in scientific explorations, and with adaptation remaining an essential part of the techniques used, trust in the reproducibility of scientific research gains significant importance. To ensure trust in the provenance and reproducibility of collaborative and adaptable scientific experiments, in this paper we introduce a system architecture centered on a new component that ensures trust in a generic and non-intrusive manner, independent of different process-aware technologies and storage platforms, including blockchain. We contribute the architecture of the Provenance Holder, its components, functionality, and two main operations: recording provenance data and retrieving trusted provenance information. We incorporate the Provenance Holder into the architecture of the interactive ChorSystem and identify potential directions for future research.

Ludwig Stage, Dimka Karastoyanova
Mining Roles from Event Logs While Preserving Privacy

Process mining aims to provide insights into actual processes based on event data. Such data are widely available and often contain private information about individuals. On the one hand, knowing which individuals (known as resources) performed specific activities can be used for resource behavior analyses like role mining and is indispensable for bottleneck analysis. On the other hand, event data with resource information are highly sensitive. Process mining should reveal insights in the form of annotated models, but should not reveal sensitive information about individuals. In this paper, we show that the problem cannot be solved by naïve approaches such as encrypting data, since an anonymized person can still be identified based on a few well-chosen events. We therefore introduce a decomposition method and a collection of techniques that preserve the privacy of individuals while roles can still be discovered and used for further bottleneck analyses without revealing sensitive information. To evaluate our approach, we have implemented an interactive environment and applied our approach to several real-life and artificial event logs.

Majid Rafiei, Wil M. P. van der Aalst
Extracting Event Logs for Process Mining from Data Stored on the Blockchain

The integration of business process management with blockchains across organisational borders provides a means to establish transparency of execution and auditing capabilities. To enable process analytics, though, non-trivial extraction and transformation tasks must be performed on the raw data stored in the ledger. In this paper, we describe our approach to retrieve process data from an Ethereum blockchain ledger and subsequently convert those data into an event log formatted according to the IEEE eXtensible Event Stream (XES) standard. We present a proof-of-concept software artefact and its application to a data set produced by the smart contracts of a process execution engine deployed on the public Ethereum blockchain network.
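The transformation step can be sketched as follows. The ledger entries and their field names are illustrative, not the paper's actual data model; real extraction would first fetch the smart-contract logs with an Ethereum client. What the sketch shows is the grouping of ledger entries into XES traces and events with the standard `concept:name` and `time:timestamp` attributes.

```python
import xml.etree.ElementTree as ET

# Hypothetical entries already fetched from the ledger; block order
# yields a total order of events per process instance.
entries = [
    {"case": "order-1", "activity": "Create PO", "timestamp": "2019-09-01T10:00:00"},
    {"case": "order-1", "activity": "Approve PO", "timestamp": "2019-09-01T11:30:00"},
]

log = ET.Element("log", {"xes.version": "1.0"})
traces = {}
for e in entries:
    # One <trace> per case identifier.
    trace = traces.get(e["case"])
    if trace is None:
        trace = ET.SubElement(log, "trace")
        ET.SubElement(trace, "string", {"key": "concept:name", "value": e["case"]})
        traces[e["case"]] = trace
    # One <event> per ledger entry, with standard XES attributes.
    event = ET.SubElement(trace, "event")
    ET.SubElement(event, "string", {"key": "concept:name", "value": e["activity"]})
    ET.SubElement(event, "date", {"key": "time:timestamp", "value": e["timestamp"]})

xes = ET.tostring(log, encoding="unicode")
```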

Roman Mühlberger, Stefan Bachhofner, Claudio Di Ciccio, Luciano García-Bañuelos, Orlenys López-Pintado
A Framework for Supply Chain Traceability Based on Blockchain Tokens

Tracing products and processes across complex supply chain networks has become an integral part of current supply chain management practices. However, the effectiveness and efficiency of existing supply chain traceability mechanisms are hindered by several barriers, including lack of data interoperability and information sharing, opportunistic behaviour, lack of transparency and visibility, and cyber-physical threats. In this paper, we propose a forensics-by-design supply chain traceability framework with audit trails that provide integrity and provenance guarantees based on malleable blockchain tokens. The framework also enables tracing products at different granularity levels across the entire supply chain, based on their unique characteristics, the supply chain processes involved and stakeholder engagement. To showcase the applicability of our proposal, we develop a functional set of smart contracts and a local private blockchain. We further discuss the benefits of our framework, along with fruitful areas for future research.
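The core idea, that token transfers form the audit trail and that granularity comes from what a token stands for, can be sketched in a few lines. The class, fields and the batch/item distinction here are illustrative, not the paper's smart-contract interface.

```python
from dataclasses import dataclass, field

@dataclass
class TraceToken:
    """Illustrative traceability token: ownership transfers accumulate
    into a chain of custody; granularity depends on whether the token
    represents a whole batch or a single item."""
    token_id: str
    granularity: str            # e.g. "batch" or "item"
    owner: str
    history: list = field(default_factory=list)

    def transfer(self, new_owner: str, step: str):
        """Record a custody transfer as part of the audit trail."""
        self.history.append((self.owner, new_owner, step))
        self.owner = new_owner

batch = TraceToken("BATCH-7", "batch", "farm")
batch.transfer("processor", "shipping")
batch.transfer("retailer", "distribution")
custody = [h[1] for h in batch.history]
```

On a blockchain, the same history would be reconstructed from token-transfer events, which is what gives the framework its integrity and provenance guarantees.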

Thomas K. Dasaklis, Fran Casino, Costas Patsakis, Christos Douligeris

First International Workshop on the Value and Quality of Enterprise Modelling (VEnMo)

Frontmatter
Enterprise Modelling of Digital Innovation in Strategies, Services and Processes

We report upon a study performed on 65 cases of digital innovation where graduate business students applied enterprise modelling to analyze and demonstrate the impact and value of implementing digital technologies. As students could freely choose which enterprise modelling techniques to apply, these cases provide insight into which enterprise modelling approaches and which types of enterprise models they preferred to use for the analysis. The study relates those preferences to type of digital technology implemented and the focus area of the digital innovation, i.e., strategy, services (internal and external) and processes. The preliminary insights from this study help directing further research on how enterprise modelling can have value for managerial decision-making on digital innovation.

Geert Poels
Measuring Business Process Model Reuse in a Process Repository

The value of process modeling increases with process model reuse. Previous research into process model reuse has focused on behavioral aspects such as the intention to reuse, the repeated reuse of a process model over time, and the identification of process model elements that could be reused. However, reuse can also be considered from the perspective of process models being reused by other process models in the same repository. Such a measure directly captures whether process modelers are creating bespoke versions of existing process models or are indeed reusing them and reaping some of the purported benefits of reuse; moreover, it can be automated. Organizations that operate in a multi-channel, multi-product environment have business processes which frequently share functionality (consider authentication, for example) and which may be used in different organizational units. While the reuse of complete process models is one of the benefits of using a process repository, no prior research could be found on measuring the reuse of complete process models by other process models within such a repository. We believe this paper is the first to propose and validate such a measure. The measure is then applied to a real-world process repository of a large financial services organization, illustrating its applicability and potential usefulness.
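To make the idea of automated measurement concrete, here is one simple repository-level ratio; this is not the paper's validated measure, and the repository contents are invented. Each model lists the other models it invokes, e.g. via BPMN call activities, mirroring the shared-authentication example above.

```python
# Hypothetical repository: model name -> models it invokes.
repository = {
    "OpenAccount":  ["Authenticate", "CreditCheck"],
    "ApplyForLoan": ["Authenticate", "CreditCheck"],
    "Authenticate": [],
    "CreditCheck":  [],
}

def reuse_ratio(repo: dict) -> float:
    """Fraction of models in the repository that are invoked by at
    least one other model, i.e. reused rather than standalone."""
    reused = {callee for callees in repo.values() for callee in callees}
    return len(reused) / len(repo)
```

Here `Authenticate` and `CreditCheck` are each called by two loan-channel processes, so half the repository's models are reused.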

Ross S. Veitch, Lisa F. Seymour
Anti-patterns for Process Modeling Problems: An Analysis of BPMN 2.0-Based Tools Behavior

Process modeling is increasingly conducted in business by non-expert modelers. For this reason, the increasing uptake of notations like BPMN has been accompanied by problems such as syntactic and semantic errors. These modeling problems may hamper process understanding and cause unexpected behavior during process execution. A set of common modeling problems has been classified as anti-patterns in the literature. Up until now, it has not been clear to what extent these anti-patterns can be spotted during modeling. In this paper, we investigate anti-pattern support in a selection of prominent BPMN tools. The research contribution is two-fold: we demonstrate the importance of analyst qualification for the task of process modeling, and we identify the need for process modeling tools to detect the use of anti-patterns, giving users more active and explicit feedback about problems to be corrected in their process models.
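A tool check for one commonly cited anti-pattern, a gateway that both joins and splits flows, can be sketched over a flat graph representation. The model encoding is illustrative and not tied to any of the tools the paper evaluates.

```python
# Hypothetical flat BPMN model: node -> (type, successor nodes).
model = {
    "start": ("event",   ["g1"]),
    "g1":    ("gateway", ["a", "b"]),
    "a":     ("task",    ["g2"]),
    "b":     ("task",    ["g2"]),
    "g2":    ("gateway", ["c", "d"]),   # joins AND splits: mixed gateway
    "c":     ("task",    ["end"]),
    "d":     ("task",    ["end"]),
    "end":   ("event",   []),
}

def mixed_gateways(m: dict) -> list:
    """Flag gateways with more than one incoming AND more than one
    outgoing flow, a frequently discouraged modelling construct."""
    incoming = {n: 0 for n in m}
    for _, succs in m.values():
        for s in succs:
            incoming[s] += 1
    return [n for n, (kind, succs) in m.items()
            if kind == "gateway" and incoming[n] > 1 and len(succs) > 1]
```

A modeling tool running checks like this during editing could give exactly the kind of active, explicit feedback the paper calls for.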

Clemilson Luís de Brito Dias, Vinicius Stein Dani, Jan Mendling, Lucineia Heloisa Thom
Backmatter
Metadata
Title
Business Process Management Workshops
Editors
Dr. Chiara Di Francescomarino
Remco Dijkman
Uwe Zdun
Copyright Year
2019
Electronic ISBN
978-3-030-37453-2
Print ISBN
978-3-030-37452-5
DOI
https://doi.org/10.1007/978-3-030-37453-2