
2016 | Book

Business Process Management

14th International Conference, BPM 2016, Rio de Janeiro, Brazil, September 18-22, 2016. Proceedings


About this Book

This book constitutes the proceedings of the 14th International Conference on Business Process Management, BPM 2016, held in Rio de Janeiro, Brazil, in September 2016.
The papers cover a range of topics, including automated discovery, conformance checking, modeling foundations, understandability of process representations, runtime management, and predictive monitoring. The topics selected by the authors demonstrate an increasing interest of the research community in process mining, matched by an equally fast-growing uptake across industry sectors.

Table of Contents

Frontmatter

Keynotes

Frontmatter
Rethinking BPM in a Cognitive World: Transforming How We Learn and Perform Business Processes
Abstract
If we are to believe the technology hype cycle, we are entering a new era of Cognitive Computing, enabled by advances in natural language processing, machine learning, and more broadly artificial intelligence. These advances, combined with evolutionary progress in areas such as knowledge representation, automated planning, user experience technologies, software-as-a-service and crowdsourcing, have the potential to transform many industries. In this paper, we discuss transformations of BPM that advances in Cognitive Computing will bring. We focus on three of the most significant aspects of this transformation, namely: (a) Cognitive Computing will enable “knowledge acquisition at scale”, which will lead to a transformation in Knowledge-intensive Processes (KiP’s); (b) We envision that a new process meta-model will emerge that is centered around a “Plan-Act-Learn” cycle; and (c) Cognitive Computing can enable learning about processes from implicit descriptions (at both design- and run-time), opening opportunities for new levels of automation and business process support, for both traditional business processes and KiP’s. We use the term cognitive BPM to refer to a new BPM paradigm encompassing all aspects of BPM that are impacted and enabled by Cognitive Computing. We argue that a fundamental understanding of cognitive BPM requires a new research framing of the business process ecosystem. The paper presents a conceptual framework for cognitive BPM, a brief survey of the state of the art in emerging areas of cognitive BPM, and a discussion of key directions for further research.
Richard Hull, Hamid R. Motahari Nezhad
Ontological Considerations About the Representation of Events and Endurants in Business Models
Abstract
Different disciplines have been established to deal with the representation of entities of different ontological natures: the business process modeling discipline focuses mostly on event-like entities, and, in contrast, the (structural) conceptual modeling discipline focuses mostly on object-like entities (known as endurants in the ontology literature). In this paper, we discuss the impact of the event vs. endurant divide for conceptual models, showing that a rich ontological account is required to bridge this divide. Accounting for the ontological differences in events and endurants as well as their relations can lead to a more comprehensive representation of business reality.
Giancarlo Guizzardi, Nicola Guarino, João Paulo A. Almeida

Automated Discovery

Frontmatter
A Unified Approach for Measuring Precision and Generalization Based on Anti-alignments
Abstract
The holy grail in process mining is an algorithm that, given an event log, produces fitting, precise, properly generalizing and simple process models. While there is consensus on the existence of solid metrics for fitness and simplicity, current metrics for precision and generalization have important flaws, which hamper their applicability in a general setting. In this paper, a novel approach to measure precision and generalization is presented, which relies on the notion of anti-alignments. An anti-alignment describes highly deviating model traces with respect to observed behavior. We propose metrics for precision and generalization that resemble leave-one-out cross-validation techniques: individual traces of the log are removed, and the computed anti-alignments assess the model’s capability to describe precisely or generalize the observed behavior. The metrics have been implemented in ProM and tested on several examples.
B. F. van Dongen, J. Carmona, T. Chatain
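To make the anti-alignment idea concrete, here is a minimal Python sketch, not the authors' actual metrics: an anti-alignment is taken to be the model trace whose minimal edit distance to the log is maximal, and a leave-one-out score checks whether held-out behavior is recovered. The toy `log` and `model_language` (an enumerated, bounded set of model traces) are invented for illustration.

```python
# Illustrative sketch of the anti-alignment idea (not the paper's exact
# metrics). `model_language` is a hypothetical stand-in for an enumerated
# (or bounded) set of traces the model can produce.

def edit_distance(s, t):
    """Classic Levenshtein distance between two traces."""
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[m][n]

def anti_alignment(model_language, log):
    """Model trace whose minimal distance to any log trace is maximal."""
    return max(model_language, key=lambda mt: min(edit_distance(mt, lt) for lt in log))

def loo_score(model_language, log):
    """Leave-one-out style score: remove each trace, compute the
    anti-alignment of the remainder, and reward anti-alignments that
    land close to the removed trace (the model 'predicts' held-out
    behavior rather than straying into arbitrary extra behavior)."""
    scores = []
    for i, held_out in enumerate(log):
        rest = log[:i] + log[i + 1:]
        aa = anti_alignment(model_language, rest)
        dist = edit_distance(aa, held_out)
        scores.append(1.0 - dist / max(len(aa), len(held_out), 1))
    return sum(scores) / len(scores)

log = [("a", "b", "c"), ("a", "c", "b")]
model_language = [("a", "b", "c"), ("a", "c", "b"), ("a", "b", "b", "c")]
print(loo_score(model_language, log))
```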
A Stability Assessment Framework for Process Discovery Techniques
Abstract
An extensive amount of work has addressed the evaluation of process discovery techniques and the process models they discover, based on concepts like fitness, precision, generalization and simplicity. In this paper, we claim that stability should be considered an important supplementary evaluation dimension for process discovery, next to accuracy and comprehensibility, with ties to the generalization concept. As such, our core contribution is a new framework to measure the stability of process discovery techniques. We explain the design choices of the different components of the framework. Furthermore, using an experimental evaluation involving both artificial and real-life event logs, the appropriateness and relevance of the stability assessment framework are demonstrated.
Pieter De Koninck, Jochen De Weerdt
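As a hedged illustration of the stability notion, and not the paper's framework, the following sketch bootstraps resamples of a log, runs a toy "miner" (here just the directly-follows relation), and averages the pairwise Jaccard similarity of the results; a real study would plug in an actual discovery technique.

```python
# Minimal sketch: stability as agreement between models discovered from
# resampled logs. The "miner" is deliberately trivial for self-containedness.
import random

def discover_dfg(log):
    """Toy 'miner': the set of directly-follows pairs in the log."""
    return {(t[i], t[i + 1]) for t in log for i in range(len(t) - 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 1.0

def stability(log, n_samples=20, seed=42):
    rng = random.Random(seed)
    models = []
    for _ in range(n_samples):
        sample = [rng.choice(log) for _ in log]   # bootstrap resample
        models.append(discover_dfg(sample))
    sims = [jaccard(models[i], models[j])
            for i in range(len(models)) for j in range(i + 1, len(models))]
    return sum(sims) / len(sims)

log = [("a", "b", "c"), ("a", "c", "b"), ("a", "b", "c")]
print(stability(log))   # close to 1.0 = stable w.r.t. sampling variation
```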
Measuring the Quality of Models with Respect to the Underlying System: An Empirical Study
Abstract
Fitness and precision are two widely studied criteria to determine the quality of a discovered process model. These metrics measure how well a model represents the log from which it is learned. However, often the goal of discovery is not to represent the log, but the underlying system. This paper discusses the need to explicitly distinguish between a log and system perspective when interpreting the fitness and precision of a model. An empirical analysis was conducted to investigate whether the existing log-based fitness and precision measures are good estimators for system-based metrics. The analysis reveals that incompleteness and noisiness of event logs significantly impact fitness and precision measures. This makes them biased estimators of a model’s ability to represent the true underlying process.
Gert Janssenswillen, Toon Jouck, Mathijs Creemers, Benoît Depaire
Handling Duplicated Tasks in Process Discovery by Refining Event Labels
Abstract
Processes may require the same activity to be executed at different stages of the process. A human modeler can express this by creating two different task nodes labeled with the same activity name (thus duplicating the task). However, as events in an event log are often labeled only with the activity name, discovery algorithms that derive tasks from labels alone cannot discover models with duplicate labels, rendering the results imprecise. For example, for a log where “payment” events occur at the beginning and the end of a process, a modeler would create two different “payment” tasks, whereas a discovery algorithm introduces a loop around a single “payment” task. In this paper, we present a general approach for refining the labels of events based on their context in the event log, as a preprocessing step. The refined log can be input for any discovery algorithm. The approach is implemented in ProM and was evaluated in a controlled setting. Compared to using a log with imprecise labeling, we were able to improve the quality of up to 42 % of the models using default parameters, and up to 87 % using adaptive parameters. Moreover, using our refinement approach significantly increased the similarity of the discovered model to the original process with duplicate labels, allowing for better rediscoverability. We also report on a case study conducted for a Dutch hospital.
Xixi Lu, Dirk Fahland, Frank J. H. M. van den Biggelaar, Wil M. P. van der Aalst
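The following is a deliberately simplified sketch of label refinement as a preprocessing step; the paper's technique is context-based and considerably more sophisticated. Here a label is split into variants according to the directly preceding event, so the two "payment" occurrences from the example become distinguishable for any downstream discovery algorithm.

```python
# Naive label refinement: suffix each event with its directly preceding
# event, so repeated activities in different contexts get distinct labels.

def refine_labels(log):
    refined = []
    for trace in log:
        new_trace = []
        for i, event in enumerate(trace):
            prev = trace[i - 1] if i > 0 else "START"
            new_trace.append(f"{event}@{prev}")
        refined.append(tuple(new_trace))
    return refined

log = [("order", "payment", "delivery", "payment"),
       ("order", "payment", "delivery", "payment")]
for t in refine_labels(log):
    print(t)
# ('order@START', 'payment@order', 'delivery@payment', 'payment@delivery')
```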
Discovering Duplicate Tasks in Transition Systems for the Simplification of Process Models
Abstract
This work presents a set of methods to improve the understandability of process models. Traditionally, simplification methods trade off quality metrics, such as fitness or precision. Conversely, the methods proposed in this paper produce simplified models while preserving or even increasing fidelity metrics. The first problem addressed in the paper is the discovery of duplicate tasks. A new method is proposed that avoids overfitting by working on the transition system generated by the log. The method is able to discover duplicate tasks even in the presence of concurrency and choice. The second problem is the structural simplification of the model by identifying optional and repetitive tasks. The tasks are substituted by annotated events that allow the removal of silent tasks and reduce the complexity of the model. An important feature of the proposed methods is that they are independent of the actual miner used for process discovery.
Javier de San Pedro, Jordi Cortadella
From Low-Level Events to Activities - A Pattern-Based Approach
Abstract
Process mining techniques analyze processes based on event data. A crucial assumption for process analysis is that events correspond to occurrences of meaningful activities. Often, low-level events recorded by information systems do not directly correspond to such activities. Abstraction methods, which provide a mapping from the recorded events to activities recognizable by process workers, are needed. Existing supervised abstraction methods require a full model of the entire process as input and cannot handle noise. This paper proposes a supervised abstraction method based on behavioral activity patterns that capture domain knowledge on the relation between activities and events. Through an alignment between the activity patterns and the low-level event logs, an abstracted event log is obtained. Events in the abstracted event log correspond to instantiations of recognizable activities. The method is evaluated with domain experts of a Norwegian hospital using an event log from their digital whiteboard system. The evaluation shows that state-of-the-art process mining methods provide valuable insights on the usage of the system when using the abstracted event log, but fail when using the original lower-level event log.
Felix Mannhardt, Massimiliano de Leoni, Hajo A. Reijers, Wil M. P. van der Aalst, Pieter J. Toussaint
Discovering and Exploring State-Based Models for Multi-perspective Processes
Abstract
Process mining provides fact-based insights into process behaviour captured in event data. In this work we aim to discover models for processes where different facets, or perspectives, of the process can be identified. Instead of focussing on the events or activities that are executed in the context of a particular process, we concentrate on the states of the different perspectives and discover how they are related. We present a formalisation of these relations and an approach to discover state-based models highlighting them. The approach has been implemented using the process mining framework ProM and provides a highly interactive visualisation of the multi-perspective state-based models. This tool has been evaluated on the BPI Challenge 2012 data of a loan application process and on product user behaviour data gathered by Philips during the development of a smart baby bottle equipped with various sensors.
Maikel L. van Eck, Natalia Sidorova, Wil M. P. van der Aalst
Semantical Vacuity Detection in Declarative Process Mining
Abstract
A large share of the literature on process mining based on declarative process modeling languages, like Declare, relies on the notion of constraint activation to distinguish between the case in which a process execution recorded in event data “vacuously” satisfies a constraint and the case in which it satisfies the constraint in an “interesting way”. This fine-grained indicator is then used to decide whether a candidate constraint supported by the analyzed event log is indeed relevant or not. Unfortunately, this notion of relevance has never been formally defined, and all the proposals existing in the literature use ad-hoc definitions that are only applicable to a pre-defined set of constraint patterns. This makes existing declarative process mining techniques inapplicable when the target constraint language is extensible and may contain formulae that go beyond pre-defined patterns. In this paper, we tackle this open challenge and show how the notions of constraint activation and vacuous satisfaction can be captured semantically, in the case of constraints expressed in arbitrary temporal logics over finite traces. We then extend the standard automata-based approach so as to incorporate relevance-related information. We finally report on an implementation and experimental evaluation of the approach that confirms the advantages and feasibility of our solution.
Fabrizio Maria Maggi, Marco Montali, Claudio Di Ciccio, Jan Mendling
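For readers unfamiliar with constraint activation, the following minimal example illustrates vacuous versus interesting satisfaction for the classic Declare response(a, b) constraint ("every a is eventually followed by b"); the paper generalizes this to arbitrary temporal-logic constraints via automata, which this sketch does not attempt.

```python
# Vacuous vs. interesting satisfaction for response(a, b).

def response_satisfied(trace, a, b):
    """True iff every occurrence of a is eventually followed by b."""
    pending = False
    for e in trace:
        if e == a:
            pending = True
        elif e == b:
            pending = False
    return not pending

def response_activated(trace, a):
    """The constraint is 'activated' only if a actually occurs."""
    return a in trace

for trace in [("x", "y"), ("a", "x", "b"), ("a", "x")]:
    sat = response_satisfied(trace, "a", "b")
    act = response_activated(trace, "a")
    kind = "violated" if not sat else ("interesting" if act else "vacuous")
    print(trace, "->", kind)
```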

Conformance Checking

Frontmatter
In Log and Model We Trust? A Generalized Conformance Checking Framework
Abstract
While models and event logs are readily available in modern organizations, their quality can seldom be trusted. Raw event data is often noisy, incomplete, and contains erroneous recordings. The quality of process models, both conceptual and data-driven, heavily depends on the inputs and parameters that shape these models, such as the domain expertise of the modelers and the quality of execution data. The mentioned quality issues are specifically a challenge for conformance checking. Conformance checking is the process mining task that aims at coping with low model or log quality by comparing the model against the corresponding log, or vice versa. The prevalent assumption in the literature is that at least one of the two can be fully trusted. In this work, we propose a generalized conformance checking framework that caters for the common case in which one fully trusts neither the log nor the model. In our experiments we show that our proposed framework balances the trust in model and log as a generalization of state-of-the-art conformance checking techniques.
Andreas Rogge-Solti, Arik Senderovich, Matthias Weidlich, Jan Mendling, Avigdor Gal
A Recursive Paradigm for Aligning Observed Behavior of Large Structured Process Models
Abstract
The alignment of observed and modeled behavior is a crucial problem in process mining, since it opens the door for conformance checking and enhancement of process models. The state-of-the-art techniques for the computation of alignments rely on a full exploration of the combination of the model state space and the observed behavior (an event log), which hampers their applicability for large instances. This paper presents a fresh view of the alignment problem: the computation of alignments is cast as the resolution of Integer Linear Programming (ILP) models, where the user can decide the granularity of the alignment steps. Moreover, a novel recursive strategy is used to split the problem into small pieces, exponentially reducing the complexity of the ILP models to be solved. The contributions of this paper represent a promising alternative to fight the inherent complexity of computing alignments for large instances.
Farbod Taymouri, Josep Carmona

Modeling Foundations

Frontmatter
Semantics and Analysis of DMN Decision Tables
Abstract
The Decision Model and Notation (DMN) is a standard notation to capture decision logic in business applications in general and business processes in particular. A central construct in DMN is that of a decision table. The increasing use of DMN decision tables to capture critical business knowledge raises the need to support analysis tasks on these tables such as correctness and completeness checking. This paper provides a formal semantics for DMN tables, a formal definition of key analysis tasks and scalable algorithms to tackle two such tasks, i.e., detection of overlapping rules and of missing rules. The algorithms are based on a geometric interpretation of decision tables that can be used to support other analysis tasks by tapping into geometric algorithms. The algorithms have been implemented in an open-source DMN editor and tested on large decision tables derived from a credit lending dataset.
Diego Calvanese, Marlon Dumas, Ülari Laurson, Fabrizio M. Maggi, Marco Montali, Irene Teinemaa
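The geometric interpretation can be illustrated with a small sketch, assuming numeric inputs only: each rule is a hyper-rectangle (one interval per input column), and two rules overlap iff their intervals intersect in every dimension. The example table is invented; the paper's algorithms also cover missing-rule detection, which is omitted here.

```python
# Geometric reading of a (numeric-input) decision table: rules as
# hyper-rectangles, overlap as intersection in every dimension.

def intervals_intersect(a, b):
    """Half-open intervals [lo, hi)."""
    return a[0] < b[1] and b[0] < a[1]

def rules_overlap(rule_a, rule_b):
    return all(intervals_intersect(ia, ib) for ia, ib in zip(rule_a, rule_b))

def find_overlaps(rules):
    return [(i, j) for i in range(len(rules))
                   for j in range(i + 1, len(rules))
                   if rules_overlap(rules[i], rules[j])]

# Two-input table: (age interval, income interval) per rule.
INF = float("inf")
rules = [
    ((18, 30), (0, 50_000)),      # rule 0
    ((25, 65), (40_000, INF)),    # rule 1: overlaps rule 0 on [25,30)x[40k,50k)
    ((30, 65), (0, 40_000)),      # rule 2: disjoint from rules 0 and 1
]
print(find_overlaps(rules))       # [(0, 1)]
```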
Dynamic Skipping and Blocking and Dead Path Elimination for Cyclic Workflows
Abstract
We propose and study dynamic versions of the classical flexibility constructs skip and block, motivating and defining a formal semantics for them. We show that our semantics for dynamic blocking is a generalization of classical dead-path elimination and solves the long-standing open problem of defining dead-path elimination for cyclic workflows. This gives rise to a simple and fully local semantics for inclusive gateways.
Dirk Fahland, Hagen Völzer
The Complexity of Deadline Analysis for Workflow Graphs with Multiple Resources
Abstract
We study whether the executions of a time-annotated sound workflow graph (WFG) meet a given deadline when an unbounded number of resources (i.e., executing agents) is available. We present polynomial-time algorithms and NP-hardness results for different cases. In particular, we show that it can be decided in polynomial time whether some executions of a sound workflow graph meet the deadline. For acyclic sound workflow graphs, it can be decided in linear time whether some or all executions meet the deadline. Furthermore, we show that it is NP-hard to compute the expected duration of a sound workflow graph for unbounded resources, which contrasts with the earlier result that the expected duration of a workflow graph executed by a single resource can be computed in cubic time. We also propose an algorithm for computing the maximum concurrency of the workflow graph, which helps to determine the optimal number of resources needed to execute the workflow graph.
Mirela Botezatu, Hagen Völzer, Lothar Thiele
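The acyclic case with unbounded resources can be pictured as a critical-path computation: with enough agents, the earliest completion time is the longest path through the task durations, computable in linear time along a topological order. The sketch below uses an invented four-task graph; it is an illustration of the idea, not the authors' algorithm.

```python
# Earliest completion of an acyclic workflow graph with unbounded resources
# = longest (critical) path over task durations, via a topological order.
from graphlib import TopologicalSorter

durations = {"start": 0, "a": 3, "b": 4, "c": 2, "end": 0}
preds = {"start": [], "a": ["start"], "b": ["start"],
         "c": ["a"], "end": ["b", "c"]}

def earliest_completion(durations, preds):
    finish = {}
    for node in TopologicalSorter(preds).static_order():
        start = max((finish[p] for p in preds[node]), default=0)
        finish[node] = start + durations[node]
    return finish["end"]

deadline = 6
total = earliest_completion(durations, preds)
print(total)               # 5 (critical path: start -> a -> c -> end)
print(total <= deadline)   # True: this execution meets the deadline
```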

Understandability of Process Representations

Frontmatter
Dealing with Behavioral Ambiguity in Textual Process Descriptions
Abstract
Textual process descriptions are widely used in organizations since they can be created and understood by virtually everyone. The inherent ambiguity of natural language, however, impedes the automated analysis of textual process descriptions. While human readers can use their context knowledge to correctly understand statements with multiple possible interpretations, automated analysis techniques currently have to make assumptions about the correct meaning. As a result, automated analysis techniques are prone to draw incorrect conclusions about the correct execution of a process. To overcome this issue, we introduce the concept of a behavioral space as a means to deal with behavioral ambiguity in textual process descriptions. A behavioral space captures all possible interpretations of a textual process description in a systematic manner. Thus, it avoids the problem of focusing on a single interpretation. We use a compliance checking scenario and a quantitative evaluation with a set of 47 textual process descriptions to demonstrate the usefulness of a behavioral space for reasoning about a process described by a text. Our evaluation demonstrates that a behavioral space strikes a balance between ignoring ambiguous statements and imposing fixed interpretations on them.
Han van der Aa, Henrik Leopold, Hajo A. Reijers
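A toy rendering of the behavioral-space idea: each ambiguous statement contributes a set of admissible interpretations, and the space is their cross product; a compliance rule then holds over the space if it holds in every interpretation. The statements and activities below are invented.

```python
# A "behavioral space" as the cross product of per-statement interpretations.
from itertools import product

# Each entry: the interpretations (here, admissible orderings of two
# activities) that one ambiguous sentence of the text admits.
ambiguous_statements = [
    [("check stock", "reserve items"), ("reserve items", "check stock")],
    [("pack", "label")],                    # unambiguous statement
]

behavioral_space = list(product(*ambiguous_statements))
for interpretation in behavioral_space:
    print(interpretation)
# A compliance rule holds over the space iff it holds in every interpretation.
```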
The Effect of Modularity Representation and Presentation Medium on the Understandability of Business Process Models in BPMN
Abstract
Many factors influence the creation of understandable business process models for an appropriate audience. Understandability of process models becomes critical particularly when a process is complex and its model is large in structure. Using modularization to represent such models hierarchically (e.g. using sub-processes) is considered to contribute to the understandability of these models. To investigate this assumption, we conducted an experiment that involved two large-scale real-life business process models, modeled using BPMN v2.0 (Business Process Model and Notation). Each process was modeled in three modularity forms: fully flattened, flattened with activities clustered using BPMN groups, and modularized using separately viewed BPMN sub-processes. The objective is to investigate if and how different forms of modularity representation in BPMN collaboration diagrams influence the understandability of process models. In addition to the forms of modularity representation, we also looked into the presentation medium (paper vs. computer) as a factor that potentially influences model comprehension. Sixty business practitioners from a large organization participated in the experiment. The results of our experiment indicate that, for business practitioners to optimally understand a BPMN model in the form of a collaboration diagram, it is best to present the model in a ‘fully-flattened’ fashion (without using collapsed sub-processes in BPMN) in the ‘paper’ format.
Oktay Turetken, Tessa Rompen, Irene Vanderfeesten, Ahmet Dikici, Jan van Moll
Towards Quality-Aware Translations of Activity-Centric Processes to Guard Stage Milestone
Abstract
Current translation approaches from activity-centric process models to artifact-centric Guard Stage Milestone (GSM) models operate on the syntactic level. While such translations allow equivalent traces (behaviors) of executions, we argue that they generate poor GSM models for the intended audience (including business managers and process modelers). A specific deficiency of these translations is their inability to relate to relevant domain knowledge; in particular, groupings of activities that achieve well-known business goals cannot be obtained by syntactic translations. Ironically, this is a main principle of GSM models. We developed an initial ontology-based translation framework [14] that incorporates the missing knowledge for improved translations. In this paper we further extend this framework with two metrics for the assessment of quality aspects of resulting GSM translations with domain knowledge, propose a novel semantic rewriting algorithm that enhances the quality of GSM translations, and provide an evaluation of the achievable quality for different classes of input processes. Our evaluation shows that maximum quality scores are achievable if the semantics and structure of the input processes are well aligned. Given poorly aligned input processes, a translation method can optimize one of the metrics but not both.
Julius Köpke, Jianwen Su

Runtime Management

Frontmatter
Untrusted Business Process Monitoring and Execution Using Blockchain
Abstract
The integration of business processes across organizations is typically beneficial for all involved parties. However, the lack of trust is often a roadblock. Blockchain is an emerging technology for decentralized and transactional data sharing across a network of untrusted participants. It can be used to find agreement about the shared state of collaborating parties without trusting a central authority or any particular participant. Some blockchain networks also provide a computational infrastructure to run autonomous programs called smart contracts. In this paper, we address the fundamental problem of trust in collaborative process execution using blockchain. We develop a technique to integrate blockchain into the choreography of processes in such a way that no central authority is needed, but trust is maintained. Our solution comprises the combination of an intricate set of components, which allow monitoring or coordination of business processes. We implemented our solution and demonstrate its feasibility by applying it to three use case processes. Our evaluation includes the creation of more than 500 smart contracts and the execution of over 8,000 blockchain transactions.
Ingo Weber, Xiwei Xu, Régis Riveret, Guido Governatori, Alexander Ponomarev, Jan Mendling
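The paper compiles choreographies into smart contracts; as a language-neutral sketch of the underlying trust mechanism only, the following hash-chains process events into an append-only record in which every entry commits to its predecessor, making retroactive tampering detectable.

```python
# Tamper-evident, append-only chain of process events (hash chaining).
import hashlib
import json

def append_event(chain, event):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"event": event, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    prev = "0" * 64
    for entry in chain:
        body = {"event": entry["event"], "prev": entry["prev"]}
        ok = (entry["prev"] == prev and
              entry["hash"] == hashlib.sha256(
                  json.dumps(body, sort_keys=True).encode()).hexdigest())
        if not ok:
            return False
        prev = entry["hash"]
    return True

chain = []
for e in ["order placed", "goods shipped", "invoice sent"]:
    append_event(chain, e)
print(verify(chain))           # True
chain[1]["event"] = "goods lost"
print(verify(chain))           # False: tampering breaks the chain
```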
Classification and Formalization of Instance-Spanning Constraints in Process-Driven Applications
Abstract
In process-driven applications, instances typically share human, computer, and physical resources and hence cannot be executed independently of each other. This necessitates the definition, verification, and enforcement of restrictions and conditions across multiple instances by so-called instance-spanning constraints (ISC). ISC might refer to instances of one or several process types or variants. While real-world applications from, e.g., the logistics, manufacturing, and energy domains call for support of ISC, only partial solutions can be found. This work provides a systematic ISC classification and formalization that enables the verification of ISC during design and runtime. Based on a collection of 114 ISC from different domains and sources, the relevance and feasibility of the presented concepts are shown.
Walid Fdhila, Manuel Gall, Stefanie Rinderle-Ma, Juergen Mangler, Conrad Indiono
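To give one concrete (invented) example of an ISC and its runtime enforcement: "at most N running instances may hold a shared resource at the same time". A real ISC engine would evaluate formalized constraints over an event stream; this sketch only shows the cross-instance character of such checks.

```python
# Runtime check for one invented instance-spanning constraint:
# a capacity cap on a resource shared across process instances.

class IscMonitor:
    def __init__(self, resource, capacity):
        self.resource = resource
        self.capacity = capacity
        self.holders = set()

    def request(self, instance_id):
        """Grant the resource only if the cross-instance cap is respected."""
        if len(self.holders) >= self.capacity:
            return False
        self.holders.add(instance_id)
        return True

    def release(self, instance_id):
        self.holders.discard(instance_id)

monitor = IscMonitor("crane", capacity=2)
print(monitor.request("case-1"))   # True
print(monitor.request("case-2"))   # True
print(monitor.request("case-3"))   # False: constraint spans instances
monitor.release("case-1")
print(monitor.request("case-3"))   # True
```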
Value at Risk Within Business Processes: An Automated IT Risk Governance Approach
Abstract
Business processes are core operational assets to control firms’ efficiency in value generation. However, the execution and control of business processes is increasingly dependent on Information Technology (IT). Therefore, the risks that arise from relying on IT in business processes must be quantified. This paper proposes the adaptation of the Value at Risk (VaR) financial technique to measure the level of risk within a process portfolio. This is done by quantifying the impact resulting from changes in the performance of IT services. The probability of IT risks is measured daily in order to model the volatility of IT services, especially when they are flexible and changeable. The proposed method enables predicting and estimating the losses of IT risks and their effect on dependent business processes over a time horizon. The incorporation of risk management mechanisms enriches business processes with organizational management capabilities.
Oscar González-Rojas, Sebastian Lesmes
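For orientation, the VaR technique the paper adapts can be stated in a few lines using historical simulation: the 95 % one-day VaR is (approximately) the loss exceeded on only 5 % of observed days. The daily loss figures below are invented.

```python
# Historical-simulation Value at Risk (the standard financial technique;
# the paper adapts it to IT risks in process portfolios).

def historical_var(daily_losses, confidence=0.95):
    ordered = sorted(daily_losses)                # ascending losses
    index = int(confidence * len(ordered))
    return ordered[min(index, len(ordered) - 1)]

# Hypothetical daily losses (in EUR) attributed to IT-service degradation
# affecting a process portfolio.
losses = [120, 0, 40, 310, 15, 0, 95, 220, 60, 0,
          180, 25, 0, 70, 400, 10, 55, 130, 0, 85]
print(historical_var(losses))   # with this small sample, the quantile
                                # sits at the tail of the distribution
```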

Prediction

Frontmatter
PRISM – A Predictive Risk Monitoring Approach for Business Processes
Abstract
Nowadays, organizations face severe operational risks when executing their business processes. Some reasons are the ever more complex and dynamic business environment as well as the organic nature of business processes. Taking a risk perspective on the business process management (BPM) lifecycle has thus been recognized as an essential research stream. Despite profound knowledge on risk-aware BPM with a focus on process design, existing approaches for real-time risk monitoring treat instances as isolated when detecting risks. They do not propagate risk information to other instances in order to support early risk detection. To address this gap, we propose an approach for predictive risk monitoring (PRISM). This approach automatically propagates risk information, which has been detected via risk sensors, across similar running instances of the same process in real-time. We demonstrate PRISM’s capability of predictive risk monitoring by applying it in the context of a real-world scenario.
Raffaele Conforti, Sven Fink, Jonas Manderscheid, Maximilian Röglinger
Predictive Business Process Monitoring with Structured and Unstructured Data
Abstract
Predictive business process monitoring is concerned with continuously analyzing the events produced by the execution of a business process in order to predict as early as possible the outcome of each ongoing case thereof. Previous work has approached the problem of predictive process monitoring when the observed events carry structured data payloads consisting of attribute-value pairs. In practice, structured data often comes in conjunction with unstructured (textual) data such as emails or comments. This paper presents a predictive process monitoring framework that combines text mining with sequence classification techniques so as to handle both structured and unstructured event payloads. The framework has been evaluated with respect to accuracy, prediction earliness and efficiency on two real-life datasets.
Irene Teinemaa, Marlon Dumas, Fabrizio Maria Maggi, Chiara Di Francescomarino
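A hedged sketch of the general recipe, not the authors' exact pipeline: TF-IDF features from free-text payloads are concatenated with numeric case attributes and fed to a single classifier, here using scikit-learn. All data below is invented.

```python
# Combining structured and unstructured event payloads for outcome
# prediction: TF-IDF text features + numeric case attributes.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["customer very unhappy, threatens to cancel",
         "routine confirmation, all documents received",
         "missing documents, second reminder sent",
         "payment confirmed, customer satisfied"]
structured = np.array([[3, 5200.0], [1, 880.0],     # e.g. #events so far,
                       [4, 1500.0], [1, 640.0]])    # requested amount
labels = np.array([1, 0, 1, 0])                     # 1 = case escalates

tfidf = TfidfVectorizer()
X = hstack([tfidf.fit_transform(texts), csr_matrix(structured)])
clf = LogisticRegression().fit(X, labels)

new = hstack([tfidf.transform(["customer unhappy, documents missing"]),
              csr_matrix(np.array([[2, 2000.0]]))])
print(clf.predict(new))   # predicted outcome for a running case
```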
P-Folder: Optimal Model Simplification for Improving Accuracy in Process Performance Prediction
Abstract
Operational process models such as generalised stochastic Petri nets (GSPNs) are useful when answering performance queries on business processes (e.g. ‘how long will it take for a case to finish?’). Recently, methods for process mining have been developed to discover and enrich operational models based on a log of recorded executions of processes, which enables evidence-based process analysis. To avoid a bias due to infrequent execution paths, discovery algorithms strive for a balance between over-fitting and under-fitting regarding the originating log. However, state-of-the-art discovery algorithms address this balance solely for the control-flow dimension, neglecting possible over-fitting in terms of performance annotations. In this work, we thus offer a technique for performance-driven model reduction of GSPNs, using structural simplification rules. Each rule induces an error in performance estimates with respect to the original model. However, we show that this error is bounded and that the reduction in model parameters incurred by the simplification rules increases the accuracy of process performance prediction. We further show how to find an optimal sequence of applying simplification rules to obtain a minimal model under a given error budget for the performance estimates. We evaluate the approach with a real-world case in the healthcare domain, showing that model simplification indeed yields significant improvements in time prediction accuracy.
Arik Senderovich, Alexander Shleyfman, Matthias Weidlich, Avigdor Gal, Avishai Mandelbaum
Backmatter
Metadata
Title
Business Process Management
Edited by
Marcello La Rosa
Peter Loos
Oscar Pastor
Copyright Year
2016
Electronic ISBN
978-3-319-45348-4
Print ISBN
978-3-319-45347-7
DOI
https://doi.org/10.1007/978-3-319-45348-4