
About this book

This book constitutes revised papers from the eight International Workshops held at the 16th International Conference on Business Process Management, BPM 2018, in Sydney, Australia, in September 2018:

BPI 2018: 14th International Workshop on Business Process Intelligence;

BPMS2 2018: 11th Workshop on Social and Human Aspects of Business Process Management;

PODS4H 2018: 1st International Workshop on Process-Oriented Data Science for Healthcare;

AI4BPM 2018: 1st International Workshop on Artificial Intelligence for Business Process Management;

CCBPM 2018: 1st International Workshop on Emerging Computing Paradigms and Context in Business Process Management;

BP-Meet-IoT / PQ 2018: Joint Business Processes Meet the Internet-of-Things and Process Querying Workshop;

DeHMiMoP 2018: 1st Declarative/Decision/Hybrid Mining and Modelling for Business Processes Workshop;

REBM /EdForum 2018: Joint Requirements Engineering and Business Process Management Workshop and Education Forum

The 45 full papers presented in this volume were carefully reviewed and selected from 90 submissions.



14th International Workshop on Business Process Intelligence (BPI)


Clustering Business Process Activities for Identifying Reference Model Components

Reference models are special conceptual models that are reused for the design of other conceptual models. They confront stakeholders with the dilemma of balancing the size of a model against its reuse frequency. The larger a reference model is, the better it applies to a specific situation, but the less often these situations occur. This is particularly important when mining a reference model from large process logs, as this often produces complex and unstructured models. To address this dilemma, we present a new approach for mining reference model components by vertically dividing complex process traces and hierarchically clustering activities based on their proximity in the log. We construct a hierarchy of subprocesses, where the lower a component is placed the smaller and the more structured it is. The approach is implemented as a proof-of-concept and evaluated using the data from the 2017 BPI challenge.

Jana-Rebecca Rehse, Peter Fettke
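
As an illustration of the core idea, not the authors' implementation, the following sketch clusters activities by their average positional distance within traces, merging greedily with average linkage; the proximity measure and the `threshold` parameter are simplifying assumptions:

```python
from collections import defaultdict
from itertools import combinations

def proximity(log):
    """Average absolute index distance between two activities across traces."""
    dist = defaultdict(list)
    for trace in log:
        pos = {a: i for i, a in enumerate(trace)}
        for a, b in combinations(sorted(pos), 2):
            dist[(a, b)].append(abs(pos[a] - pos[b]))
    return {p: sum(v) / len(v) for p, v in dist.items()}

def agglomerate(log, threshold):
    """Greedy average-linkage clustering of activities by log proximity."""
    d = proximity(log)
    clusters = [{a} for a in {a for t in log for a in t}]

    def cdist(c1, c2):
        pairs = [d[tuple(sorted((a, b)))] for a in c1 for b in c2
                 if tuple(sorted((a, b))) in d]
        return sum(pairs) / len(pairs) if pairs else float("inf")

    while len(clusters) > 1:
        (i, j), best = min(
            (((i, j), cdist(clusters[i], clusters[j]))
             for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda x: x[1])
        if best > threshold:
            break
        clusters[i] |= clusters.pop(j)   # j > i, so index i stays valid
    return clusters
```

Lower thresholds yield the smaller, more structured components near the bottom of the hierarchy described in the abstract.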

Multi-instance Mining: Discovering Synchronisation in Artifact-Centric Processes

In complex systems one can often identify various entities or artifacts. The lifecycles of these artifacts and the loosely coupled interactions between them define the system behavior. The analysis of such artifact system behavior with traditional process discovery techniques is often problematic due to the existence of many-to-many relationships between artifacts, resulting in models that are difficult to understand and statistics that are inaccurate. The aim of this work is to address these issues and enable the calculation of statistics regarding the synchronisation of behaviour between artifact instances. By using a Petri net formalisation with step sequence execution semantics to support true concurrency, we create state-based artifact lifecycle models that support many-to-many relations between artifacts. The approach has been implemented as an interactive visualisation in ProM and evaluated using real-life public data.

Maikel L. van Eck, Natalia Sidorova, Wil M. P. van der Aalst

Improving Merging Conditions for Recomposing Conformance Checking

Efficient conformance checking is a hot topic in the field of process mining. Much of the recent work has focused on improving the scalability of alignment-based approaches to support larger and more complex processes. This is needed because process mining is increasingly applied in areas where models and logs are "big". Decomposition techniques are able to achieve significant performance gains by breaking down a conformance problem into smaller ones. Moreover, recent work showed that the alignment problem can be resolved in an iterative manner by alternating between aligning a set of decomposed sub-components, merging the computed sub-alignments, and recomposing sub-components to fix merging issues. Despite experimental results showing the gains of applying recomposition in large scenarios, there is still a need to improve the merging step, where log traces can take numerous recomposition steps before reaching the required merging condition. This paper contributes by defining and structuring the recomposition step, and proposes strategies that yield significant performance improvements on synthetic and real-life datasets over both the state-of-the-art decomposed and monolithic approaches.

Wai Lam Jonathan Lee, Jorge Munoz-Gama, H. M. W. Verbeek, Wil M. P. van der Aalst, Marcos Sepúlveda

Efficiently Computing Alignments

Algorithm and Datastructures

Conformance checking encompasses any analysis in which observed behaviour needs to be related to already modelled behaviour. Fundamental to conformance checking are alignments, which provide a precise relation between a sequence of activities observed in an event log and an execution sequence of a model. However, computing alignments is a complex task, both in time and memory, especially when models contain large amounts of parallelism. In this tool paper we present the actual algorithm and memory structures used for the experiments of [15]. We discuss the time complexity of the algorithm, as well as the space and time complexity of the main data structures. We further present the integration in ProM and a basic code snippet in Java for computing alignments from within any tool.

Boudewijn F. van Dongen
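
The paper's algorithm works on the synchronous product of log and model; as a much simpler stand-in, the sketch below computes an optimal alignment between a trace and a single model execution sequence by dynamic programming, with cost 0 for synchronous moves and 1 for log-only or model-only moves (">>" marks a skip):

```python
def align(trace, run):
    """Optimal alignment between a log trace and one model run,
    via a classic edit-distance dynamic program (no substitutions:
    a move is synchronous only when the labels match)."""
    n, m = len(trace), len(run)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i
    for j in range(1, m + 1):
        D[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sync = D[i - 1][j - 1] if trace[i - 1] == run[j - 1] else float("inf")
            D[i][j] = min(sync, D[i - 1][j] + 1, D[i][j - 1] + 1)
    # backtrack into (log move, model move) pairs
    i, j, moves = n, m, []
    while i or j:
        if i and j and trace[i - 1] == run[j - 1] and D[i][j] == D[i - 1][j - 1]:
            moves.append((trace[i - 1], run[j - 1])); i, j = i - 1, j - 1
        elif i and D[i][j] == D[i - 1][j] + 1:
            moves.append((trace[i - 1], ">>")); i -= 1
        else:
            moves.append((">>", run[j - 1])); j -= 1
    return D[n][m], moves[::-1]
```

The real algorithm must search over all model runs (hence the A*-style state space and the memory pressure under parallelism), which this two-sequence version deliberately avoids.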

Understanding Automated Feedback in Learning Processes by Mining Local Patterns

Process mining, and in particular process discovery, provides useful tools for extracting process models from event-based data. Nevertheless, certain types of processes are too complex and unstructured to be represented with a start-to-end process model. For such cases, instead of extracting a model from a complete event log, it is interesting to zoom in on some parts of the data and explore behavioral patterns on a local level. Recently, local process model mining has been introduced, a technique in between sequential pattern mining and process discovery. Other process mining methods can also be used for mining local patterns, if combined with certain data preprocessing. In this paper, we explore the discovery of local patterns in data representing learning processes. We exploit real-life event logs from JMermaid, a Smart Learning Environment for teaching Information System modeling with built-in feedback functionality. We focus on a specific instance of feedback provided in JMermaid, a reminder to simulate the model, and locally explore how students react to this feedback. Additionally, we discuss how to tailor local process model mining to a certain case, in order to avoid the computationally expensive task of discovering all available patterns, by combining it with other techniques for dealing with unstructured data, such as trace clustering and window-based data preprocessing.

Galina Deeva, Jochen De Weerdt
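
The window-based preprocessing mentioned at the end of the abstract can be illustrated with a minimal sketch (the window size is an assumed parameter, not taken from the paper):

```python
def windows(trace, size):
    """Slice a trace into overlapping fixed-size windows so that
    behavioral patterns can be mined locally rather than from the
    start-to-end trace."""
    return [trace[i:i + size] for i in range(len(trace) - size + 1)]
```

Each window then becomes an input unit for local pattern mining, instead of the full, possibly unstructured trace.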

11th Workshop on Social and Human Aspects of Business Process Management (BPMS2)


Social Technology Affordances for Business Process Improvement

Organisations across diverse industries have started to embed Enterprise Social Technology (EST) to create collaborative, human-centric environments in their day-to-day operations. With this growing trend, the use of EST within process improvement initiatives is gaining popularity. While the potential that EST brings to process improvements (in particular by better connecting people and encouraging their participation) is widely acknowledged, research providing insights into how this actually takes place and specifically contributes towards process improvement efforts is very limited. This study adopts a 'technology affordance' perspective to identify and conceptualise affordances of EST within the context of process improvement activities. Building on the emerging theory on this topic, a process improvement effort that applied EST was investigated through a series of interviews. The interviews were rigorously designed, carefully executed, and analyzed via a tool-supported data coding and analysis approach. The study resulted in a refined and partially validated 'EST affordances for process improvements' model with 9 EST affordances and 3 'contingency variables'.

Paul Mathiesen, Jason Watson, Wasana Bandara

Social Business Process Management (SBPM)

Critical Success Factors (CSF)

Social BPM allows businesses to adapt and remain flexible in the face of ever-changing demands. This flexibility is created by the participation of and collaboration between users, interactions that are achieved through a successful implementation of Social BPM. This paper proposes critical success factors (CSFs) that lead to a successful Social BPM implementation such that these benefits are realised. It is a progress paper, part of a broader study that will validate the factors against the literature, expert opinions and case studies in order to produce a definitive set of CSFs for Social BPM.

Shamsul Duha, Mohammad E. Rangiha

Enabling Co-creation in Product Design Processes Using 3D-Printing Processes

For a long time, geographical distances restricted competition; nowadays, competition is global. Companies must therefore build strategies to cope with this situation. Individualized products can help enterprises retain customers and their market position through a differentiated supply. In this research paper we discuss 3D-printing processes as an enabler of co-creation in product design processes. 3D-printing enables enterprises to react quickly to customer preferences and changing trends, e.g., in design, and allows the integration of customers into product innovation processes. This results in co-creation and the emergence of related advantages (e.g., customer-centric products or production processes). However, the operational processes involved in using 3D-printing for co-creation have not yet been investigated in depth, even though the improvement of manufacturing processes is important in both BPM practice and research. We therefore address this gap in our paper.

Michael Möhring, Rainer Schmidt, Barbara Keller, Jennifer Hamm, Sophie Scherzinger, Ann-Kristin Vorndran

Evaluation of WfMC Awards for Case Management: Features, Knowledge Workers, Systems

In recent years, many production case management (PCM) and adaptive case management (ACM) systems have been introduced into the daily workflow of knowledge workers. In many research papers and case studies, the claims about the nature and requirements of knowledge work in general seem to vary. While choosing or creating a case management (CM) solution, typically one has the target knowledge workers and their domain-specific requirements in mind. But knowledge work shows a huge variety of modes of operation, complexity, and collaboration. We want to increase transparency on which features are covered by well-known and award-winning systems for different types of knowledge workers and different classes of systems. This may not unveil gaps between requirements and offered solutions, but it can uncover differences in solutions for varying user bases. We performed a literature review of 48 winners of the WfMC Awards for Excellence in Case Management from 2011 to 2016 and analyzed case studies in regard to targeted knowledge workers, advertised features, and type of system. Different types of knowledge workers showed a different bias on certain system types and features in regard to collaboration and variability of processes.

Johannes Tenschert, Richard Lenz

Investigating the Trade-off Between the Effectiveness and Efficiency of Process Modeling

Despite recent efforts to improve the quality of process models, we still observe a significant dissimilarity in quality between models. This paper focuses on the syntactic condition of process models, and how it is achieved. To this end, a dataset of 121 modeling sessions was investigated. By going through each of these sessions step by step, a separate 'revision' phase was identified for 81 of them. Next, by cutting the modeling process off at the start of the revision phase, a partial process model was exported for these modeling sessions. Finally, each partial model was compared with its corresponding final model, in terms of time, effort, and the number of syntactic errors made or solved, in search of a possible trade-off between the effectiveness and efficiency of process modeling. Based on the findings, we give a provisional explanation for the difference in syntactic quality of process models.

Jeroen Bolle, Jan Claes

The Repercussions of Business Process Modeling Notations on Mental Load and Mental Effort

Over the last decade, many business process modeling notations have emerged for the documentation of business processes in enterprises. While learning a modeling notation, an individual is confronted with a cognitive load that has an impact on the comprehension of the notation with its underlying formalisms and concepts. To address this cognitive load, this paper presents the results of an exploratory study in which a sample of 94 participants, divided into novices, intermediates, and experts, was asked to assess process models expressed in eight different process modeling notations: BPMN 2.0, Declarative Process Modeling, eGantt Charts, EPCs, Flow Charts, IDEF3, Petri Nets, and UML Activity Diagrams. The study focused on the subjective comprehensibility and accessibility of process models as reflected in participants' cognitive load (i.e., mental load and mental effort). Based on the cognitive load, a factor reflecting the mental difficulty of comprehending process models in different modeling notations was derived. The results indicate that established modeling notations from industry (e.g., BPMN) should be the first choice for enterprises striving for process management. Moreover, the study insights may be used to determine which modeling notations should be taught in an introduction to process modeling, or which notation is useful for teaching and training process modelers or analysts.

Michael Zimoch, Rüdiger Pryss, Thomas Probst, Winfried Schlee, Manfred Reichert

First International Workshop on Process-Oriented Data Science for Health Care (PODS4H)


Expectations from a Process Mining Dashboard in Operating Rooms with Analytic Hierarchy Process

The widespread adoption of real-time location systems is boosting the development of software applications that track persons and assets in real time and perform analytics. Among the vast number of data analysis techniques, process mining makes it possible to check workflows against heterogeneous multivariate data, enhancing model understandability and usefulness in clinical environments. However, such applications still face entrance barriers in the clinical context. In this paper we identify the preferred features of a process-mining-based dashboard deployed in the operating rooms of a hospital equipped with a real-time location system. Workflows are inferred and enhanced using process discovery on location data of patients undergoing an intervention, drawing nodes (states in the process) and transitions across the entire process. The Analytic Hierarchy Process was applied to quantify the prioritization of the features contained in the process mining dashboard (data filtering, enhancement, node selection, statistics, etc.), distinguishing the priorities that each of the different roles in the operating room service assigned to each feature. The operating room staff (N=10) was classified into three groups according to their responsibilities: technical, clinical, and managerial. Results show different weights for the dashboard features in each group, suggesting that a flexible process mining dashboard is needed to realize its potential in the management of clinical interventions in operating rooms.

Antonio Martinez-Millana, Aroa Lizondo, Roberto Gatta, Vicente Traver, Carlos Fernandez-Llatas
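
The Analytic Hierarchy Process step can be sketched independently of the dashboard: given a pairwise-comparison matrix over features, priority weights are commonly approximated by the normalized geometric means of the rows (one standard approximation of the principal eigenvector; the example matrix below is hypothetical, not from the study):

```python
from math import prod

def ahp_weights(M):
    """Priority weights from an AHP pairwise-comparison matrix,
    using the geometric-mean (row) approximation of the principal
    eigenvector. M[i][j] states how much feature i is preferred
    over feature j (reciprocal matrix: M[j][i] == 1 / M[i][j])."""
    n = len(M)
    gm = [prod(row) ** (1 / n) for row in M]   # geometric mean per row
    s = sum(gm)
    return [g / s for g in gm]
```

For a perfectly consistent matrix the approximation is exact; inconsistent judgments would additionally call for a consistency-ratio check, omitted here for brevity.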

Tailored Process Feedback Through Process Mining for Surgical Procedures in Medical Training: The Central Venous Catheter Case

In healthcare, developing high procedural skill levels through training is a key factor for obtaining good clinical results on surgical procedures. Providing feedback to each student tailored to how the student has performed the procedure each time, improves the effectiveness of the training. Current state-of-the-art feedback relies on Checklists and Global Rating Scales to indicate whether all process steps have been performed and the quality of each execution step. However, there is a process perspective not successfully captured by those instruments, e.g., steps performed but in an undesired order, part of the process repeated an unnecessary number of times, or excessive transition time between steps. In this work, we propose a novel use of process mining techniques to effectively identify desired and undesired process patterns regarding rework, order, and performance, in order to complement the tailored feedback of surgical procedures using a process perspective. The approach has been effectively applied to analyze a real Central Venous Catheter installation training case. In the future, it is necessary to measure the actual impact of feedback on learning.

Ricardo Lira, Juan Salas-Morales, Rene de la Fuente, Ricardo Fuentes, Marcos Sepúlveda, Michael Arias, Valeria Herskovic, Jorge Munoz-Gama

An Application of Process Mining in the Context of Melanoma Surveillance Using Time Boxing

Background: Process mining is a relatively new discipline that helps to discover and analyze actual process executions based on log data. In this paper we apply conformance checking techniques to the process of surveillance of melanoma patients. This process consists of recurring events with time constraints between the events. Objectives: The goal of this work is to show how existing clinical data collected during melanoma surveillance can be prepared and pre-processed to be reused for process mining. Methods: We describe an approach based on time boxing to create process models from medical guidelines and the corresponding event logs from clinical data of patient visits. Results: Event logs were extracted for 1,023 patients starting melanoma surveillance at the Department of Dermatology at the Medical University of Vienna between January 2010 and June 2017. Conformance checking techniques available in the ProM framework were applied. Conclusions: The presented time boxing enables the direct use of existing process mining frameworks like ProM to perform process-oriented analysis also with respect to time constraints between events.

Christoph Rinner, Emmanuel Helm, Reinhold Dunkl, Harald Kittler, Stefanie Rinderle-Ma
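
The time-boxing idea, assigning each clinical event to a fixed-width interval so that guideline-prescribed visit schedules become comparable positions in a log, can be sketched as follows (the box width and the event format are illustrative assumptions, not the paper's actual configuration):

```python
from datetime import datetime, timedelta

def time_box(events, start, width):
    """Assign each (activity, timestamp) event to a fixed-width time
    box, indexed from the start of a patient's surveillance. Events in
    the same box can then be compared against the guideline's expected
    visit for that interval."""
    boxed = {}
    for activity, ts in events:
        box = (ts - start) // width   # timedelta // timedelta -> int
        boxed.setdefault(box, []).append(activity)
    return boxed
```

Conformance checking then compares, per box, the observed activities against those the guideline prescribes for that interval.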

Characterization of Drug Use Patterns Using Process Mining and Temporal Abstraction Digital Phenotyping

Understanding and identifying the executed patterns, activities and processes for patients with different characteristics provides medical experts with a deep understanding of which tasks are critical in the care provided, and may help identify ways to improve them. However, extracting these events and data for patients with complex clinical phenotypes is not a trivial task. This paper provides an approach to identifying specific patient cohorts based on complex digital phenotypes as a starting point for applying process mining tools and techniques and identifying patterns or process models. Using temporal abstraction-based digital phenotyping and pattern matching, we identified a cohort of patients with sepsis from the MIMIC II database, and then applied process mining techniques to discover medication use patterns. In the case study we present, temporal abstraction digital phenotyping helped us discover a relevant patient cohort, aiding in the extraction of the data required to generate drug use patterns for medications of different types, such as vasopressors, vasodilators and systemic antibacterial antibiotics. For sepsis patients, combining temporal abstraction digital phenotyping with process mining tools and techniques proved helpful for extracting accurate patient cohorts for healthcare process mining.

Eric Rojas, Daniel Capurro

Pre-hospital Retrieval and Transport of Road Trauma Patients in Queensland

A Process Mining Analysis

Existing process mining methodologies, while noting the importance of data quality, do not provide details on how to assess the quality of event data and how the identification of data quality issues can be exploited in the planning, data extraction and log building phases of a process mining analysis. To this end we adapt CRISP-DM [15] to supplement the Planning phase of the PM² [6] process mining methodology to specifically include data understanding and quality assessment. We illustrate our approach in a case study describing the detailed preparation for a process mining analysis of ground and aero-medical pre-hospital transport processes involving the Queensland Ambulance Service (QAS) and Retrieval Services Queensland (RSQ). We utilise QAS and RSQ sample data to show how the use of data models and some quality metrics can be used to (i) identify data quality issues, (ii) anticipate and explain certain observable features in process mining analyses, (iii) distinguish between systemic and occasional quality issues, and (iv) reason about the mechanisms by which identified quality issues may have arisen in the event log. We contend that this knowledge can be used to guide the extraction and pre-processing stages of a process mining case study.

Robert Andrews, Moe T. Wynn, Kirsten Vallmuur, Arthur H. M. ter Hofstede, Emma Bosley, Mark Elcock, Stephen Rashford

Analyzing Medical Emergency Processes with Process Mining: The Stroke Case

Medical emergencies are among the most critical processes that occur in a hospital. The creation of adequate and timely triage protocols can make the difference between the life and death of a patient. One of the most critical emergency care protocols is the stroke case: this disease demands an accurate and quick diagnosis to ensure immediate treatment in order to limit or even avoid the undesired cognitive decline. The aim of this paper is to analyze how process mining techniques can support health professionals in the interactive analysis of emergency processes, considering the critical timing of stroke, using a question-driven methodology. To demonstrate the possibilities of process mining in the characterization of the emergency process, we have used a real log with 9,046 emergency episodes from 2,145 stroke patients that occurred from January 2010 to June 2017. Our results demonstrate how process mining technology can highlight the differences in the stroke patient flow in the emergency department, supporting professionals in better understanding and improving the quality of care.

Carlos Fernandez-Llatas, Gema Ibanez-Sanchez, Angeles Celda, Jesus Mandingorra, Lucia Aparici-Tortajada, Antonio Martinez-Millana, Jorge Munoz-Gama, Marcos Sepúlveda, Eric Rojas, Víctor Gálvez, Daniel Capurro, Vicente Traver

Using Indoor Location System Data to Enhance the Quality of Healthcare Event Logs: Opportunities and Challenges

Hospitals are becoming more and more aware of the need to manage their business processes. In this respect, process mining is increasingly used to gain insight into healthcare processes, requiring the analysis of event logs originating from the hospital information system. Process mining research mainly focuses on the development of new techniques or the application of existing methods, but the quality of all analyses ultimately depends on the quality of the event log. However, limited research has been done on the improvement of data quality in the process mining field, which is the topic of this paper. In particular, this paper discusses, from a conceptual angle, the opportunities that indoor location system data provides to tackle event log data quality issues. Moreover, the paper reflects upon the associated challenges. In this way, it provides the conceptualization for a new area of research, focusing on the systematic integration of an event log with indoor location system data.

Niels Martin

The ClearPath Method for Care Pathway Process Mining and Simulation

Process mining of routine electronic healthcare records can help inform the management of care pathways. Combining process mining with simulation creates a rich set of tools for care pathway improvement. Healthcare process mining creates insight into the reality of patients' journeys through care pathways, while healthcare process simulation can help communicate those insights and explore "what if" options for improvement. In this paper, we outline the ClearPath method, which extends the PM² process mining method with a process simulation approach that addresses issues of poor-quality and missing data and supports rich stakeholder engagement. We review the literature that informed the development of ClearPath and illustrate the method with case studies of pathways for alcohol-related illness, giant-cell arteritis and functional neurological symptoms. We designed an evidence template that we use to underpin the fidelity of our simulation models by tracing each model element back to literature sources, data and process mining outputs, and insights from qualitative research. Our approach may be of benefit to others using process-oriented data science to improve healthcare.

Owen A. Johnson, Thamer Ba Dhafari, Angelina Kurniati, Frank Fox, Eric Rojas

Analysis of Emergency Room Episodes Duration Through Process Mining

This study proposes a performance analysis method for emergency room (ER) processes based on process mining. The method helps to determine which activities, sub-processes, interactions and characteristics of episodes explain why the process has long episode durations, providing decision makers with additional information to help decrease waiting times, reduce patient congestion and increase the quality of care provided. By applying the proposed method to a case study, it was discovered that when a loop is formed between the Examination and Treatment sub-processes, the episode duration lengthens. Moreover, the relationship between case severity and the number of repetitions of the Examination-Treatment loop was also studied: as the case severity increases, the number of repetitions increases as well.

Eric Rojas, Andres Cifuentes, Andrea Burattin, Jorge Munoz-Gama, Marcos Sepúlveda, Daniel Capurro

First International Workshop on Artificial Intelligence for Business Process Management (AI4BPM)


Enhancing Process Data in Manual Assembly Workflows

The rise of Industry 4.0 and its convergence with BPM provide new potential for the automatic gathering of process-related sensor information. In manufacturing, information about human behavior in manual assembly tasks is rare when no interaction with machines is involved. We suggest technologies to automatically detect material picking and placement in the assembly workflow in order to gather accurate data about human behavior. For material picking we use background subtraction; for placement detection, image classification with neural networks is applied. The detected fine-grained worker activities are then correlated to a BPMN model of the assembly workflow, enabling the measurement of production time (time per state) and quality (frequency of error) on the shop floor as an entry point for conformance checking and process optimization. The approach has been evaluated in a quantitative case study recording the assembly process 30 times in a laboratory within 4 hours. Under these conditions, the classification of assembly states with a neural network achieves a test accuracy of 99.25% on 38 possible assembly states. Material picking based on background subtraction has been evaluated in an informal user study with 6 participants performing 16 picks each, achieving an accuracy of 99.48%. The suggested method is promising for easily detecting fine-grained steps in manufacturing while augmenting and checking the assembly workflow.

Sönke Knoch, Nico Herbig, Shreeraman Ponpathirkoottam, Felix Kosmalla, Philipp Staudt, Peter Fettke, Peter Loos
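
Background subtraction for pick detection reduces, at its simplest, to per-pixel differencing against a reference frame; the sketch below (plain lists standing in for grayscale image arrays, with an assumed threshold) shows the core operation, not the paper's pipeline:

```python
def motion_mask(frame, background, threshold):
    """Per-pixel background subtraction: flag pixels whose absolute
    difference from the reference background exceeds a threshold.
    A burst of flagged pixels inside a bin's region of interest would
    indicate a material pick."""
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]
```

In practice the reference background is updated over time and the mask is cleaned with morphological filtering before counting changed pixels per bin.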

Modeling Uncertainty in Declarative Artifact-Centric Process Models

Many knowledge-intensive processes are driven by business entities about which knowledge workers make decisions and to which they add information. Artifact-centric process models have been proposed to represent such knowledge-intensive processes. Declarative artifact-centric process models use business rules that define how knowledge experts can make progress in a process. However, in many business situations knowledge experts have to deal with uncertainty and vagueness. Currently, how to deal with such situations cannot be expressed in declarative artifact-centric process models. We propose the use of fuzzy logic to model uncertainty. We use Guard-Stage-Milestone schemas as declarative artifact-centric process notation and we extend them with fuzzy sentries. We explain how the resulting fuzzy GSM schemas can be evaluated by extending an existing GSM engine with a tool for fuzzy evaluation of rules. We evaluate fuzzy GSM schemas by applying them to an existing fragment of regulations for handling a mortgage contract.

Rik Eshuis, Murat Firat
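
The notion of a fuzzy sentry can be illustrated with standard fuzzy-logic building blocks: membership functions map crisp values to degrees in [0, 1], and min acts as fuzzy conjunction. The mortgage-style quantities below (`income`, `ltv`) and their membership ranges are hypothetical, chosen only to echo the abstract's domain:

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_and(*degrees):
    """Fuzzy conjunction via the minimum t-norm."""
    return min(degrees)

def sentry_degree(income, ltv):
    """A hypothetical fuzzy sentry: the guard holds to the degree that
    income is 'adequate' AND the loan-to-value ratio is 'safe'."""
    adequate = triangular(income, 20_000, 60_000, 100_000)
    safe = triangular(ltv, 0.0, 0.5, 1.0)
    return fuzzy_and(adequate, safe)
```

A fuzzy GSM engine would open a stage or achieve a milestone to the degree returned by such a sentry, rather than on a crisp true/false rule.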

Extracting Workflows from Natural Language Documents: A First Step

Business process models are used to identify control-flow relationships of tasks extracted from information system event logs. These event logs may fail to capture critical tasks executed outside of regular logging environments, but such latent tasks may be inferred from unstructured natural language texts. This paper highlights two workflow discovery pipeline components which use NLP and sequence mining techniques to extract workflow candidates from such texts. We present our Event Labeling and Sequence Analysis (ELSA) prototype which implements these components, associated approach methodologies, and performance results of our algorithm against ground truth data from the Apache Software Foundation Public Email Archive.

Leslie Shing, Allan Wollaber, Satish Chikkagoudar, Joseph Yuen, Paul Alvino, Alexander Chambers, Tony Allard

DCR Event-Reachability via Genetic Algorithms

In declarative process models, a process is described as a set of rules as opposed to a set of permitted flows. Oftentimes, such rule-based notations are more concise than their flow-based cousins; however, that conciseness comes at a cost: it requires computation to work out which flows are in fact allowed by the rules of the process. In this paper, we present an algorithm to solve the reachability problem for the declarative Dynamic Condition Response (DCR) graphs notation: given a DCR graph and an activity, say "Payout reimbursement", the problem is to find a flow allowed by the graph that ends with the execution of that task. Existing brute-force solutions to this problem are generally unhelpful already for medium-sized graphs. Here we present a genetic algorithm solving reachability. We evaluate this algorithm on a selection of DCR graphs, both artificial and from industry, and find that, with one exception, the genetic algorithm outperforms the best known brute-force solution both in whether a path is found and in how quickly it is found.

Tróndur Høgnason, Søren Debois
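
A genetic search for a witness run can be sketched against a deliberately simplified DCR fragment (condition relations only, no responses, inclusions or exclusions; the fitness shape, population sizes and mutation rate are all illustrative assumptions, not the paper's design):

```python
import random

def fitness(seq, target, conditions):
    """Reward the longest valid prefix of a candidate run, with a large
    bonus once the target event fires. In this simplified DCR fragment
    an event is enabled when all its condition events have executed."""
    executed, score = set(), 0
    for e in seq:
        if not conditions.get(e, set()) <= executed:
            break
        executed.add(e)
        score += 1
        if e == target:
            return 1000 + score
    return score

def ga_reach(events, target, conditions, pop=50, gens=500, length=6, seed=1):
    rnd = random.Random(seed)
    population = [[rnd.choice(events) for _ in range(length)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda s: -fitness(s, target, conditions))
        best = population[0]
        if fitness(best, target, conditions) >= 1000:
            return best[:best.index(target) + 1]   # trimmed witness run
        parents = population[:pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = rnd.sample(parents, 2)          # one-point crossover
            cut = rnd.randrange(1, length)
            child = a[:cut] + b[cut:]
            if rnd.random() < 0.3:                 # point mutation
                child[rnd.randrange(length)] = rnd.choice(events)
            children.append(child)
        population = parents + children
    return None
```

The prefix-length reward gives the search a gradient toward valid runs, which is exactly what lets the genetic algorithm beat blind enumeration on larger graphs.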

Classifying Process Instances Using Recurrent Neural Networks

Process mining consists of techniques whereby logs created by operational systems are transformed into process models. In process mining tools it is often desirable to be able to classify ongoing process instances, e.g., to predict how long the process will still require to complete, or to classify process instances into different classes based only on the activities that have occurred in the process instance thus far. Recurrent neural networks and their subclasses, such as Gated Recurrent Units (GRU) and Long Short-Term Memory (LSTM), have been demonstrated to be able to learn relevant temporal features for subsequent classification tasks. In this paper we apply recurrent neural networks to classifying process instances. The proposed model is trained in a supervised fashion using labeled process instances extracted from event log traces. To our knowledge, this is the first time GRU has been used for classifying business process instances. Our main experimental result shows that GRU outperforms LSTM remarkably in training time while giving almost identical accuracy to LSTM models. An additional contribution of our paper is improving the classification model training time by filtering out infrequent activities, a technique commonly used, e.g., in Natural Language Processing (NLP).

Markku Hinkka, Teemu Lehto, Keijo Heljanko, Alexander Jung
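The infrequent-activity filtering mentioned as an additional contribution can be sketched as follows; the threshold and the toy log are illustrative assumptions, not details from the paper.

```python
from collections import Counter

def filter_infrequent_activities(traces, min_count=2):
    """Drop activities occurring fewer than min_count times in the whole
    log, shrinking the vocabulary the classifier has to learn."""
    counts = Counter(a for trace in traces for a in trace)
    return [[a for a in trace if counts[a] >= min_count] for trace in traces]

log = [["register", "check", "pay"],
       ["register", "check", "escalate"],
       ["register", "pay"]]
print(filter_infrequent_activities(log))  # "escalate" occurs once and is removed
```

As in NLP vocabulary pruning, rare symbols contribute little signal per parameter, so removing them shortens training without much loss in accuracy.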

Leveraging Regression Algorithms for Predicting Process Performance Using Goal Alignments

Industry-scale context-aware processes typically manifest a large number of variants during their execution. Being able to predict the performance of a partially executed process instance (in terms of cost, time, or customer satisfaction) can be particularly useful: such predictions permit interventions to improve matters for instances that appear likely to perform poorly. This paper proposes an approach that leverages the process context, process state, and process goals to obtain such predictions.

Karthikeyan Ponnalagu, Aditya Ghose, Hoa Khanh Dam

First International Workshop on Emerging Computing Paradigms and Context in Business Process Management (CCBPM)


Improved Particle Swarm Optimization Based Workflow Scheduling in Cloud-Fog Environment

Mobile edge devices with high requirements typically need fast responses from local network services. Fog computing is an emerging computing paradigm motivated by this need and is currently viewed as an extension of cloud computing that provides low-communication-latency services for workflow applications. However, scheduling workflow applications to find a good trade-off between makespan and cost in a cloud-fog environment remains a huge challenge. To address this issue, in this paper we propose a workflow scheduling algorithm based on improved particle swarm optimization (IPSO), in which a nonlinearly decreasing inertia weight is designed to help PSO reach the optimal solution. Comprehensive simulation results show that our proposed scheduling algorithm is more cost-effective and obtains better performance than the baseline approach.

Rongbin Xu, Yeguo Wang, Yongliang Cheng, Yuanwei Zhu, Ying Xie, Abubakar Sadiq Sani, Dong Yuan
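A nonlinearly decreasing inertia weight of the kind described above might look like the sketch below; the quadratic decay curve and the parameter values are assumptions for illustration, since the paper's exact function is not reproduced here.

```python
def inertia_weight(t, t_max, w_start=0.9, w_end=0.4):
    """Nonlinearly decrease PSO's inertia weight from w_start to w_end:
    large early values favour global exploration of the schedule space,
    small late values favour local exploitation around good particles."""
    ratio = t / t_max
    return w_end + (w_start - w_end) * (1.0 - ratio) ** 2

# Falls below the midpoint earlier than a linear schedule would.
print([round(inertia_weight(t, 100), 3) for t in (0, 50, 100)])  # → [0.9, 0.525, 0.4]
```

In a full IPSO scheduler this weight multiplies each particle's previous velocity when updating candidate task-to-node assignments.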

An Efficient Algorithm for Runtime Minimum Cost Data Storage and Regeneration for Business Process Management in Multiple Clouds

The proliferation of cloud computing provides flexible ways for users to utilize cloud resources for data-intensive applications such as Business Process Management (BPM) systems. In a BPM system, users may upload, generate, process, transfer, store, share, or access many kinds of data, and these data may be complex and very large in size. Due to the pay-as-you-go pricing model of cloud computing, improper usage of cloud resources incurs high costs for users. Hence, since in typical BPM system usage data can be regenerated, transferred, and stored across multiple clouds, a data storage, transfer, and regeneration strategy is needed to reduce the cost of resource usage. The current state-of-the-art algorithm can find a strategy that achieves minimum data storage, transfer, and computation cost; however, it has very high computational complexity and is neither efficient nor practical to apply at runtime. In this paper, by thoroughly investigating this resource-utilization trade-off problem, we propose a Provenance Candidates Elimination algorithm, which efficiently finds the minimum cost strategy for data storage, transfer, and regeneration. Through comprehensive experimental evaluation, we demonstrate that our approach can calculate the minimum cost strategy in milliseconds, outperforming the existing algorithm by two to four orders of magnitude.

Junhua Zhang, Dong Yuan, Lizhen Cui, Bing Bing Zhou

A Lean Architecture for Blockchain Based Decentralized Process Execution

Interorganizational process management bears an enormous potential for improving the collaboration among associated business partners. A major restriction is the need for a trusted third party to implement the process across the participating actors. Blockchain technology can resolve this lack of trust through its consensus mechanisms. Following the rise of cryptocurrencies, the introduction of Smart Contracts enables the Ethereum blockchain to act beyond monetary transactions by executing these small programs. We propose a novel lean architecture for a blockchain-based process execution system that uses Smart Contracts to dispense with a trusted third party in the context of interorganizational collaborations.

Christian Sturm, Jonas Szalanczi, Stefan Schönig, Stefan Jablonski

Mining Product Relationships for Recommendation Based on Cloud Service Data

With the rapid growth of cloud services, it is increasingly difficult for users to select an appropriate service. Hence, an effective service recommendation method is needed to offer suggestions and selections. In this paper, we propose a two-phase approach to discover related cloud services for recommendation by jointly leveraging services’ descriptive texts and their associated tags. In Phase 1, we use a non-parametric Bayesian method, the Dirichlet Process Mixture Model (DPMM), to classify a large number of cloud services into an optimal number of clusters. In Phase 2, we apply a personalized PageRank algorithm to obtain more related services for recommendation among the massive cloud service products in the same cluster. Empirical experiments on a real data set show that the proposed two-phase approach is more successful than other candidate methods for service clustering and recommendation.

Yuanchun Jiang, Cuicui Ji, Yang Qian, Yezheng Liu
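The Phase 2 idea can be illustrated with a generic personalized PageRank power iteration; the tiny service graph, the seed set, and the parameter values below are invented for the example and do not come from the paper.

```python
def personalized_pagerank(adj, seeds, damping=0.85, iters=50):
    """Power iteration for personalized PageRank: the random jump always
    returns to the seed services, so scores measure relatedness to them."""
    nodes = list(adj)
    jump = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    rank = dict(jump)
    for _ in range(iters):
        nxt = {n: (1.0 - damping) * jump[n] for n in nodes}
        for n in nodes:
            targets = adj[n]
            if not targets:
                continue  # dangling node: its mass simply leaks in this sketch
            share = damping * rank[n] / len(targets)
            for m in targets:
                nxt[m] += share
        rank = nxt
    return rank

# Hypothetical in-cluster graph: edges link services with similar texts/tags.
graph = {"s1": ["s2", "s3"], "s2": ["s1"], "s3": ["s2"]}
scores = personalized_pagerank(graph, seeds={"s1"})
print(sorted(scores, key=scores.get, reverse=True))
```

Services ranked highest relative to the seed (e.g., a service the user already consumes) become the recommendation candidates.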

SmartCrowd: A Workflow Framework for Complex Crowdsourcing Tasks

Over the past decade, a number of frameworks have been introduced to support different crowdsourcing tasks. However, complex creative tasks have remained out of reach for workflow modeling. Unlike typical tasks, creative tasks are often interdependent, requiring human cognitive ability and team collaboration. Crowd workers are required not only to perform typical tasks but also to participate in the analysis and manipulation of complex tasks, so the number and execution order of tasks are unknown until runtime. This makes such complex tasks difficult to model with existing workflow approaches. We therefore propose a workflow modeling approach based on state machines, in which crowdsourcing models can be translated into SCXML code and executed by an open-source engine. This approach and engine are embodied in SmartCrowd. Through two evaluations, we found that SmartCrowd can support complex crowdsourcing tasks, especially creative tasks. Moreover, we introduce a set of basic design patterns and, by composing them into complex patterns, our framework can support further crowdsourcing research.

Tianhong Xiong, Yang Yu, Maolin Pan, Jing Yang
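A minimal state-machine model of a single crowdsourcing task, in the spirit of the approach above, might look like this; the states, events, and transitions are hypothetical illustrations, not SmartCrowd's actual model or SCXML output.

```python
class TaskStateMachine:
    """Lifecycle of one crowdsourcing task as an explicit state machine."""
    TRANSITIONS = {
        ("open", "claim"): "claimed",     # a worker takes the task
        ("claimed", "submit"): "review",  # the worker hands in a result
        ("review", "approve"): "done",    # the requester accepts it
        ("review", "reject"): "open",     # or sends it back to the crowd
    }

    def __init__(self):
        self.state = "open"

    def fire(self, event):
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            raise ValueError(f"event {event!r} not allowed in state {self.state!r}")
        self.state = self.TRANSITIONS[key]
        return self.state

task = TaskStateMachine()
for event in ("claim", "submit", "approve"):
    task.fire(event)
print(task.state)  # → done
```

Because the machine only encodes states and guarded transitions rather than a fixed task sequence, the number and order of fired events can remain unknown until runtime.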

Third International Workshop on Process Querying (PQ 2018)


Checking Business Process Models for Compliance – Comparing Graph Matching and Temporal Logic

Business Process Compliance Management (BPCM) is an integral part of Business Process Management (BPM). A key objective of BPCM is to ensure and maintain compliance of business process models with certain regulations, e.g., governmental laws. As legislation may change quickly and unexpectedly, automated techniques for compliance checking are of great interest among researchers and practitioners. Two dominant concepts in this area are graph-based pattern matching and pattern matching based on temporal logic. This paper compares these two approaches by implementing four compliance patterns from the literature with both. It discusses the requirements both approaches place on business process models and shows how to meet them. The results show that temporal logic cannot fully capture all four patterns.

Dennis M. Riehle
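For a concrete feel of such compliance patterns, the classic "response" pattern (every occurrence of activity A must eventually be followed by B; in LTL, G(a → F b)) can be checked on finite traces as follows; this log-level sketch is only an illustration and stands in for neither of the two compared implementations.

```python
def satisfies_response(trace, a, b):
    """True iff every occurrence of activity a in the finite trace is
    eventually followed by an occurrence of activity b."""
    pending = False  # is there an occurrence of a still waiting for its b?
    for activity in trace:
        if activity == a:
            pending = True
        elif activity == b:
            pending = False
    return not pending

print(satisfies_response(["register", "approve", "pay"], "approve", "pay"))  # True
print(satisfies_response(["approve", "pay", "approve"], "approve", "pay"))   # False
```

Checking the pattern on a model rather than a trace requires exploring all permitted flows, which is exactly where graph matching and temporal logic model checking diverge.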

From Complexity to Insight: Querying Large Business Process Models to Improve Quality

This industry study presents work in querying manufacturing processes using a portal to navigate query results in the broader context of enterprise architecture. The approach addresses the problem of helping stakeholders (e.g., management, marketing, engineering, operations, and finance) understand complex BPM models. Stakeholders approach models from different viewpoints and seek different views, and gleaning insight into improving model quality is challenging when models are large and complex. This work focused on process models only, not process execution, because many legacy organizations have done initial BPM modeling but do not have BPM systems in production or have yet to realize the benefits of coupling log mining with incremental model refinement. The use cases presented address the complexity of multi-year process models used by roughly 10,000 workers globally to develop new products. These models were queried to find quality issues, isolate stakeholder data flows, and migrate BPMN [1] activities to cloud-based microservices. The approach creates filtered process views, which serve as starting points for stakeholders to navigate interconnected models within TOGAF [2] enterprise architecture models. The process querying lifecycle (query, manipulate, and transform) was also applied to other enterprise models. Lessons learned from introducing BPM into a legacy organization, model refinement, limitations of the research, and open problems are summarized.

Kurt E. Madsen

Second International Workshop on BP-meet-IoT (BP-meet-IoT 2018)


Retrofitting of Workflow Management Systems with Self-X Capabilities for Internet of Things

The Internet of Things (IoT) introduces various new challenges for business process technologies and workflow management systems (WfMSs) used to manage IoT processes. Especially the interactions with the physical world lead to new error sources and unanticipated situations that require a self-adaptive WfMS able to react dynamically to unforeseen situations. Despite a large number of existing WfMSs, only a few systems feature the self-x capabilities needed in the dynamic context of the IoT. We present a retrofitting process and a generic software component based on the MAPE-K feedback loop to add autonomous capabilities to existing WfMSs. Using a smart home example process, we show how to retrofit different WfMSs in invasive and non-invasive ways. Experiments and a brief discussion confirm the feasibility of our retrofitting process and software component for adding self-x capabilities to service-oriented WfMSs in an IoT context.

Ronny Seiger, Peter Heisig, Uwe Aßmann
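A single pass of a MAPE-K loop, as used by the retrofitting component above, can be sketched generically; the temperature sensor, threshold, and "cool_down" plan are placeholder assumptions for a smart home setting, not the paper's component.

```python
class MapeK:
    """One Monitor-Analyze-Plan-Execute cycle over shared Knowledge."""

    def __init__(self, monitor, execute, threshold=30.0):
        self.monitor, self.execute = monitor, execute
        self.knowledge = {"threshold": threshold, "history": []}

    def step(self):
        reading = self.monitor()                         # Monitor a sensor
        self.knowledge["history"].append(reading)        # update Knowledge
        anomaly = reading > self.knowledge["threshold"]  # Analyze
        plan = "cool_down" if anomaly else None          # Plan an adaptation
        if plan is not None:
            self.execute(plan)                           # Execute via effector
        return plan

issued = []
loop = MapeK(monitor=lambda: 35.0, execute=issued.append)
print(loop.step(), issued)  # → cool_down ['cool_down']
```

Wrapping an existing WfMS in such a loop, with the executor invoking the system's own service interface, is one non-invasive way to retrofit self-adaptation.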

On the Contextualization of Event-Activity Mappings

Event log files are used as input to any process mining algorithm. A main assumption of process mining is that each event has already been assigned to a distinct process activity. However, such a mapping of events to activities is a considerable challenge. The current status quo is that approaches indicate only the likelihoods of mappings, since there is often more than one possible solution. To increase the quality of event-to-activity mappings, this paper derives a contextualization for event-activity mappings and argues for a stronger consideration of contextual factors. Based on a literature review, the paper provides a framework for classifying context factors for event-activity mappings. We aim to apply this framework to improve the accuracy of event-activity mappings and, thereby, process mining results in scenarios with low-level events.

Agnes Koschmider, Felix Mannhardt, Tobias Heuser

A Classification Framework for IoT Scenarios

The Internet-of-Things (IoT) is here to stay, as applications increasingly make use of IoT devices to deliver value to customers and organizations. Smart homes, predictive maintenance, and asset tracking are just a few examples of business scenarios that employ the IoT. As concepts from the domain of Business Process Management (BPM) are used to realize IoT scenarios, the need arises to classify which scenarios can profit from BPM concepts. In this contribution, we present a range of IoT scenarios and discuss dimensions along which to classify them. Further, we suggest the BPM concepts that might be advantageous for realizing IoT scenarios.

Sankalita Mandal, Marcin Hewelt, Maarten Oestreich, Mathias Weske

First Declarative/Decision/Hybrid Mining and Modelling for Business Processes Workshop (DeHMiMoP)


Evaluating the Understandability of Hybrid Process Model Representations Using Eye Tracking: First Insights

The EcoKnow project strives to promote flexible case management systems in public administration and to empower end-users (i.e., case workers) to make sense of digitized models of the law. For this, a hybrid representation was proposed that combines the declarative DCR notation with textual annotations depicting the law text and a simulation tool for executing single process instances. This hybrid representation aims to overcome the notorious limitations of existing declarative notations in terms of understandability. Using eye tracking, this paper investigates how users engage with the different artifacts of the hybrid representation.

Amine Abbad Andaloussi, Tijs Slaats, Andrea Burattin, Thomas T. Hildebrandt, Barbara Weber

A Framework to Evaluate and Compare Decision-Mining Techniques

During the last decade, several decision mining techniques have been developed to discover the decision perspective of a process from an event log. The increasing number of decision mining techniques raises the importance of evaluating the quality of the discovered decision models and/or decision logic. Currently, evaluations are limited by the small number of available event logs with decision information. To alleviate this limitation, this paper introduces the ‘DataExtend’ technique, which allows evaluating and comparing decision-mining techniques using a sufficient number of event logs and process models to generate statistically significant evaluation results. The paper also reports on an initial evaluation using ‘DataExtend’ that involves two decision discovery techniques, whose results illustrate that the approach can serve its purpose.

Toon Jouck, Massimiliano de Leoni, Benoît Depaire

Compliance Checking for Decision-Aware Process Models

The business processes of an organization are often required to comply with domain-specific regulations. Such regulations can be checked based on the models of the respective processes. These models mainly focus on the operational part of the process. However, decisions also play a major role in the execution behavior of processes, and they are expressed in separate decision models. In this paper, we investigate the influence of decision models on business process compliance checking. To this end, we formalize decision-aware processes as colored Petri nets, extract the state space, and check compliance rules using temporal logic model checking. The approach improves the quality of existing compliance checking by reducing the risk of false negatives. We provide a prototype and discuss advantages and disadvantages.

Stephan Haarmann, Kimon Batoulis, Mathias Weske

Towards Automated Process Modeling Based on BPMN Diagram Composition

Modeling a business process is a complex task which involves different participants who should be familiar with the chosen modeling notation. In this paper, we propose an idea of generating business process models based on a declarative specification. Given an unordered list of process activities along with their input and output data entities, our method generates a synthetic, complete log of a process. The generated task sequences can then serve as an input to a selected process mining method or be processed by an algorithm constructing a BPMN model directly based on the log and additional information included in the declarative process specification.

Piotr Wiśniewski, Krzysztof Kluza, Antoni Ligęza
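One way to derive admissible activity sequences from such an input/output specification is a Kahn-style topological sort over data dependencies; the sketch below uses invented activity names and data entities, and real log generation would enumerate many interleavings rather than a single order.

```python
def order_activities(spec):
    """spec maps activity -> (inputs, outputs). An activity may run once all
    of its input data entities exist; repeated selection of ready activities
    yields one admissible execution order (one trace of the synthetic log)."""
    all_outputs = {o for _, outs in spec.values() for o in outs}
    produced = set()
    for ins, _ in spec.values():
        # entities that no activity produces are assumed to exist initially
        produced |= {i for i in ins if i not in all_outputs}
    order, remaining = [], dict(spec)
    while remaining:
        ready = [a for a, (ins, _) in remaining.items() if set(ins) <= produced]
        if not ready:
            raise ValueError("cyclic data dependencies")
        for a in ready:
            order.append(a)
            produced |= set(remaining.pop(a)[1])
    return order

spec = {"approve": (["form"], ["decision"]),
        "fill":    ([],       ["form"]),
        "archive": (["decision"], [])}
print(order_activities(spec))  # → ['fill', 'approve', 'archive']
```

Collecting all such orders, rather than just one, would produce the complete synthetic log that a process mining algorithm or direct BPMN construction could then consume.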

Measuring the Complexity of DMN Decision Models

Complexity impairs the maintainability and understandability of conceptual models. Complexity metrics have been used in software engineering and business process management (BPM) to capture the degree of complexity of conceptual models. A vast array of metrics has been proposed for processes in BPM. The recent introduction of the Decision Model and Notation (DMN) standard provides opportunities to shift towards the Separation of Concerns paradigm when it comes to modelling processes and decisions. However, unlike for processes, no studies exist that address the representational complexity of DMN decision models. In this paper, we provide a first set of ten complexity metrics for the decision requirements level of the DMN standard by gathering insights from the process modelling and software engineering fields. Additionally, we offer a discussion on the evolution of those metrics and provide directions for future research on DMN complexity.

Faruk Hasić, Alexander De Craemer, Thijs Hegge, Gideon Magala, Jan Vanthienen

Joint Requirements Engineering and Business Process Management Workshop/Education Forum (REBPM/EdForum)


Process Weakness Patterns for the Identification of Digitalization Potentials in Business Processes

An important element of digital transformation is the digitalization of processes within enterprises. A major challenge is the systematic identification of digitalization potentials in business processes. Existing approaches require process analysts who identify these potentials by using the time-consuming method of pattern catalogs or by relying on their professional experiences. In this paper, we classify potentials of digitalization and derive corresponding patterns for a future pattern-based analysis procedure. This shall enable the automated identification of digitalization potentials in BPMN diagrams. Those patterns were derived from our work with five companies from different sectors. In comparison to existing approaches, our proposed method could support a more efficient and effective identification of digitalization potentials by process analysts.

Florian Rittmeier, Gregor Engels, Alexander Teetz

From Requirements to Data Analytics Process: An Ontology-Based Approach

Comprehensively describing data analytics requirements is becoming an integral part of developing enterprise information systems. It is a challenging task for analysts to completely elicit all requirements shared by an organization’s decision makers. With a multitude of data available from e-commerce sites, social media, and data warehouses, selecting the correct set of data and suitable techniques for an analysis is itself difficult and time-consuming, because analysts have to comprehend multiple dimensions such as existing analytics techniques, background knowledge in the domain of interest, and the quality of available data. In this paper, we propose using semantic models to represent the different spheres of knowledge related to the data analytics space and to assist in defining analytics requirements. Following this approach, users can create a sound analytics requirements specification, linked with concepts from the operational domain, available data, analytics techniques, and their implementations. Such requirements specifications can be used to drive the creation and management of analytics solutions well aligned with organizational objectives. We demonstrate the capabilities of the proposed method by applying it to a data analytics project for house price prediction.

Madhushi Bandara, Ali Behnaz, Fethi A. Rabhi, Onur Demirors

An Assignment on Information System Modeling: On Teaching Data and Process Integration

An information system is an integrated system of components that cooperatively aim to collect, store, manipulate, process, and disseminate data, information, and knowledge, often offered as digital products. A model of an existing or envisioned information system is a simplified representation developed to serve a purpose for a target audience. A model may represent various aspects of the system, including the structure of information, data constraints, processes that govern information, and organizational rules. Traditionally, the teaching of information system modeling is carried out in a fragmented way, i.e., modeling of different aspects of information systems is taught separately, often across different subjects. The authors’ teaching experience in this area suggests the shortcomings of such a fragmented approach, evidenced by students’ inability to exploit the synergy between data and process constraints in the models of information systems they produce. This paper proposes an assignment for undergraduate students that asks them to model an information system of an envisioned private teaching institute. The assignment comprises a plethora of requirements grounded in the interplay of data and process constraints, and is accompanied by a tool that supports their explicit representation.

Jan Martijn E. M. van der Werf, Artem Polyvyanyy

Motivational and Occupational Self-efficacy Outcomes of Students in a BPM Course: The Role of Industry Tools vs Digital Games

While past studies have considered the educational benefits of industry tools and simulated games within BPM courses, the relative efficacy of these interventions for student outcomes has not yet been established. In this study we sought to determine whether added exposure to industry tools would be more, or less, effective at influencing students’ perceived competence, intrinsic motivation, and occupational self-efficacy for BPM than exposure to a digital BPM game. An experimental study carried out with 38 students revealed that students exposed to additional industry tools reported increased levels of occupational self-efficacy, while those exposed to the digital game reported lower levels of perceived competence. The results have useful implications for BPM educators.

Jason Cohen, Thomas Grace

