2020 | Book

Business Process Management

18th International Conference, BPM 2020, Seville, Spain, September 13–18, 2020, Proceedings

Editors: Dirk Fahland, Chiara Ghidini, Jörg Becker, Marlon Dumas

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes the proceedings of the 18th International Conference on Business Process Management, BPM 2020, planned to take place in Seville, Spain, in September 2020. Due to the COVID-19 pandemic, the conference was held virtually.

The 27 full papers included in this volume were carefully reviewed and selected from 125 submissions. Two full keynote papers are also included. The papers are organized in topical sections named: foundations; engineering; and management.

Table of Contents



Process Minding: Closing the Big Data Gap
The discipline of process mining was inaugurated in the BPM community. It flourished in a world of small(er) data, with roots in the communities of software engineering and databases and applications mainly in organizational and management settings. The introduction of big data, with its volume, velocity, variety, and veracity, and the big strides in data science research and practice pose new challenges to this research field. The paper positions process mining along the modern data life cycle, highlighting the challenges and suggesting directions in which data science disciplines (e.g., machine learning) may interact with a renewed process mining agenda.
Avigdor Gal, Arik Senderovich
Characterizing Machine Learning Processes: A Maturity Framework
Academic literature on machine learning modeling fails to address how to make machine learning models work for enterprises. For example, existing machine learning processes do not address how to define business use cases for an AI application, how to convert business requirements from product managers into data requirements for data scientists, how to continuously improve AI applications in terms of accuracy and fairness, or how to customize general-purpose machine learning models with industry-, domain-, and use-case-specific data to make them more accurate for specific situations. Making AI work for enterprises requires special considerations, tools, methods, and processes. In this paper we present a maturity framework for machine learning model lifecycle management for enterprises. Our framework is a re-interpretation of the software Capability Maturity Model (CMM) for the machine learning model development process. We present a set of best practices, drawn from the authors' personal experience of building large-scale real-world machine learning models, to help organizations achieve higher levels of maturity independent of their starting point.
Rama Akkiraju, Vibha Sinha, Anbang Xu, Jalal Mahmud, Pritam Gundecha, Zhe Liu, Xiaotong Liu, John Schumacher


Extending Temporal Business Constraints with Uncertainty
Temporal business constraints have been extensively adopted to declaratively capture the acceptable courses of execution in a business process. However, traditionally, constraints are interpreted logically in a crisp way: a process execution trace conforms with a constraint model if all the constraints therein are satisfied. This is too restrictive when one wants to capture best practices, constraints involving uncontrollable activities, and exceptional but still conforming behaviors. This calls for the extension of business constraints with uncertainty. In this paper, we tackle this timely and important challenge, relying on recent results on probabilistic temporal logics over finite traces. Specifically, our contribution is threefold. First, we delve into the conceptual meaning of probabilistic constraints and their semantics. Second, we argue that probabilistic constraints can be discovered from event data using existing techniques for declarative process discovery. Third, we study how to monitor probabilistic constraints, where constraints and their combinations may be in multiple monitoring states at the same time, though with different probabilities.
Fabrizio Maria Maggi, Marco Montali, Rafael Peñaloza, Anti Alman
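To make the discovery idea concrete: a probabilistic constraint such as Response(a, b) ("if a occurs, b eventually follows") can be annotated with the fraction of traces in an event log that satisfy it. The sketch below is a deliberately simplified illustration in plain Python (activities as strings), not the paper's formal semantics:

```python
def satisfies_response(trace, a, b):
    """True iff every occurrence of activity a is eventually followed by b."""
    for i, event in enumerate(trace):
        if event == a and b not in trace[i + 1:]:
            return False
    return True

def constraint_probability(log, a, b):
    """Fraction of traces satisfying Response(a, b) -- a crude probability."""
    return sum(satisfies_response(t, a, b) for t in log) / len(log)

log = [["a", "c", "b"], ["a", "c"], ["c", "b"], ["a", "b", "a", "b"]]
print(constraint_probability(log, "a", "b"))  # 0.75: one trace violates it
```

A monitor built on such estimates could then report, per trace prefix, the probability mass attached to each monitoring state rather than a crisp verdict.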
Petri Nets with Parameterised Data: Modelling and Verification
During the last decade, various approaches have been put forward to integrate business processes with different types of data. Each of these approaches reflects specific demands in the whole process-data integration spectrum. One particularly important point is the capability of these approaches to flexibly accommodate processes with multiple cases that need to co-evolve. In this work, we introduce and study an extension of coloured Petri nets, called catalog-nets, providing two key features to capture this type of processes. On the one hand, net transitions are equipped with guards that simultaneously inspect the content of tokens and query facts stored in a read-only, persistent database. On the other hand, such transitions can inject data into tokens by extracting relevant values from the database or by generating genuinely fresh ones. We systematically encode catalog-nets into one of the reference frameworks for the (parameterised) verification of data and processes. We show that fresh-value injection is a particularly complex feature to handle, and discuss strategies to tame it. Finally, we discuss how catalog-nets relate to well-known formalisms in this area.
Silvio Ghilardi, Alessandro Gianola, Marco Montali, Andrey Rivkin
Socially-Aware Business Process Redesign
Existing techniques for the redesign of business processes are mostly concerned with optimizing efficiency and productivity, but do not take social considerations into account. In this paper, we represent social business process redesign (SBPR) as a constrained optimization problem (COP). Assuming a workforce of human and computer resources, SBPR considers two types of decisions: (1) how to allocate tasks among this workforce and (2) which skills it should acquire. The latter decision can be used to control for the amount of automation (by setting an upper bound), which may ensure, for example, that disadvantaged workers are included. We discuss scenarios inspired by real-world considerations where the COP representation of SBPR can be used as a decision support tool. Furthermore, we present an extensive computational analysis that demonstrates the applicability of our COP-based solution to large SBPR instances, as well as a detailed analysis of the factors that influence the performance of the approach. Our work shows that it is feasible to incorporate multiple considerations into redesign decision making, while providing meaningful insights into the trade-offs involved.
Arik Senderovich, Joop J. Schippers, Hajo A. Reijers
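To give a flavour of the COP formulation, a toy version of the allocation decision can be solved by brute force: assign each task to a human or a software resource, minimizing total cost subject to an upper bound on automated tasks. This is an illustrative sketch with invented task names and costs, not the authors' model:

```python
from itertools import product

def redesign(tasks, human_cost, robot_cost, max_automated):
    """Brute-force search over all task allocations: minimize total cost
    subject to an upper bound on the number of automated tasks."""
    best = None
    for assignment in product(["human", "robot"], repeat=len(tasks)):
        if assignment.count("robot") > max_automated:
            continue  # inclusion constraint: cap the amount of automation
        cost = sum(human_cost[t] if who == "human" else robot_cost[t]
                   for t, who in zip(tasks, assignment))
        if best is None or cost < best[0]:
            best = (cost, dict(zip(tasks, assignment)))
    return best

tasks = ["intake", "review", "approve"]
human = {"intake": 5, "review": 8, "approve": 3}
robot = {"intake": 1, "review": 2, "approve": 4}
print(redesign(tasks, human, robot, max_automated=1))  # cost 10: only "review" automated
```

Raising or lowering `max_automated` is exactly the lever the abstract describes for controlling how much work stays with (possibly disadvantaged) human workers.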
Incentive Alignment of Business Processes
Many definitions of business processes refer to business goals, value creation, profits, etc. Nevertheless, the focus of formal methods research on business processes lies on the correctness of the execution semantics of models w.r.t. properties like deadlock freedom, liveness, or completion guarantees. However, the question of whether participants are interested in working towards completion – or in participating in the process at all – has not been addressed as of yet.
In this work, we investigate whether inter-organizational business processes give participants incentives for achieving the business goals: in short, whether incentives are aligned within the process. In particular, fair behavior should pay off and efficient completion of tasks should be rewarded. We propose a game-theoretic approach that relies on algorithms for solving stochastic games from the machine learning community. We describe a method for checking incentive alignment of process models with utility annotations for tasks, which can be used for a priori analysis of inter-organizational business processes. Last but not least, we show that the soundness property is a special case of incentive alignment.
Tobias Heindel, Ingo Weber
PRIPEL: Privacy-Preserving Event Log Publishing Including Contextual Information
Event logs capture the execution of business processes in terms of executed activities and their execution context. Since logs contain potentially sensitive information about the individuals involved in the process, they should be pre-processed before being published to preserve the individuals’ privacy. However, existing techniques for such pre-processing are limited to a process’ control-flow and neglect contextual information, such as attribute values and durations. This thus precludes any form of process analysis that involves contextual factors. To bridge this gap, we introduce PRIPEL, a framework for privacy-aware event log publishing. Compared to existing work, PRIPEL takes a fundamentally different angle and ensures privacy on the level of individual cases instead of the complete log. This way, contextual information as well as the long tail process behaviour are preserved, which enables the application of a rich set of process analysis techniques. We demonstrate the feasibility of our framework in a case study with a real-world event log.
Stephan A. Fahrenkrog-Petersen, Han van der Aa, Matthias Weidlich
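A common building block for this kind of privacy guarantee is local noise injection. The sketch below adds Laplace noise to per-case durations, a simplified stand-in for PRIPEL's actual mechanisms (the `epsilon` and `sensitivity` parameters are illustrative assumptions):

```python
import math
import random

def laplace_noise(scale):
    """Draw one sample from a zero-mean Laplace distribution via inverse CDF."""
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def anonymize_durations(case_durations, epsilon, sensitivity=1.0):
    """Perturb each case duration with Laplace noise scaled to the privacy
    budget epsilon; clamp at zero since durations cannot be negative."""
    scale = sensitivity / epsilon
    return [max(0.0, d + laplace_noise(scale)) for d in case_durations]

random.seed(42)
print(anonymize_durations([5.0, 10.0, 0.5], epsilon=1.0))
```

Smaller `epsilon` means stronger privacy and noisier durations; the trade-off against analysis utility is exactly what frameworks like PRIPEL must manage.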
A Framework for Estimating Simplicity of Automatically Discovered Process Models Based on Structural and Behavioral Characteristics
A plethora of algorithms for automatically discovering process models from event logs has emerged. The discovered models are used for analysis and come with a graphical flowchart-like representation that supports their comprehension by analysts. According to the Occam’s Razor principle, a model should encode the process behavior with as few constructs as possible, that is, it should not be overcomplicated without necessity. The simpler the graphical representation, the easier the described behavior can be understood by a stakeholder. Conversely, and intuitively, a complex representation should be harder to understand. Although various conformance checking techniques that relate the behavior of discovered models to the behavior recorded in event logs have been proposed, there are no methods for evaluating whether this behavior is represented in the simplest possible way. Existing techniques for measuring the simplicity of discovered models focus on their structural characteristics such as size or density, and ignore the behavior these models encoded. In this paper, we present a conceptual framework that can be instantiated into a concrete approach for estimating the simplicity of a model, considering the behavior the model describes, thus allowing a more holistic analysis. The reported evaluation over real-life event logs for several instantiations of the framework demonstrates its feasibility in practice.
Anna Kalenkova, Artem Polyvyanyy, Marcello La Rosa
Online Process Monitoring Using Incremental State-Space Expansion: An Exact Algorithm
The execution of (business) processes generates valuable traces of event data in the information systems employed within companies. Recently, approaches for monitoring the correctness of the execution of running processes have been developed in the area of process mining, i.e., online conformance checking. The advantages of monitoring a process' conformity during its execution are clear, i.e., deviations are detected as soon as they occur and countermeasures can immediately be initiated to reduce the possible negative effects caused by process deviations. Existing work in online conformance checking only allows for obtaining approximations of non-conformity, e.g., overestimating the actual severity of the deviation. In this paper, we present an exact, parameter-free, online conformance checking algorithm that computes conformance checking results on the fly. Our algorithm exploits the fact that the conformance checking problem can be reduced to a shortest path problem, by incrementally expanding the search space and reusing previously computed intermediate results. Our experiments show that our algorithm is able to outperform comparable state-of-the-art approximation algorithms.
Daniel Schuster, Sebastiaan J. van Zelst
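The incremental idea can be illustrated with a dynamic-programming prefix alignment against a single reference sequence, a drastic simplification of the authors' shortest-path search over process models: each new event adds exactly one DP row and reuses all earlier rows instead of recomputing from scratch.

```python
class IncrementalConformance:
    """Online prefix-alignment cost against a linear reference sequence.
    Moves: synchronous (cost 0 on a match), log-only (cost 1), model-only
    (cost 1). Each observed event extends the DP table by one row."""

    def __init__(self, model):
        self.model = model
        self.row = list(range(len(model) + 1))  # row for the empty prefix

    def observe(self, event):
        """Incorporate one new event; return the current prefix-alignment
        cost (0 means the prefix is perfectly conforming so far)."""
        prev, m = self.row, self.model
        new = [prev[0] + 1]  # only log moves possible in column 0
        for j in range(1, len(m) + 1):
            sync = prev[j - 1] if m[j - 1] == event else float("inf")
            new.append(min(sync, prev[j] + 1, new[j - 1] + 1))
        self.row = new
        return min(new)  # the process may still continue past any prefix

checker = IncrementalConformance(["a", "b", "c"])
print([checker.observe(e) for e in ["a", "x", "c"]])  # [0, 1, 2]
```

The real algorithm searches the state space of a Petri net rather than one sequence, but the reuse of previously computed intermediate results is the same principle.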


Looking for Meaning: Discovering Action-Response-Effect Patterns in Business Processes
Process mining enables organizations to capture and improve their processes based on fact-based process execution data. A key question in the context of process improvement is how responses to an event (action) result in desired or undesired outcomes (effects). From a process perspective, this requires understanding the action-response patterns that occur. Current discovery techniques do not allow organizations to gain such insights. In this paper we present a novel approach to tackle this problem. We propose and formalize a technique to discover action-response-effect patterns. In this technique we use well-established statistical tests to uncover potential dependency relations between each response and its effects on the cases. The goal of this technique is to provide organizations with processes that are: (1) appropriately represented, and (2) effectively filtered to show meaningful relations. The approach is evaluated on a real-world data set from a Dutch healthcare facility in the context of aggressive behavior of clients and the responses of caretakers.
Jelmer J. Koorn, Xixi Lu, Henrik Leopold, Hajo A. Reijers
Extracting Annotations from Textual Descriptions of Processes
Organizations often have textual descriptions as a way to document their main processes. These descriptions are primarily used by the company's personnel to understand the processes, especially by those who cannot interpret formal descriptions like BPMN or Petri nets. In this paper we present a technique, based on Natural Language Processing and a query language for tree-based patterns, that extracts annotations describing key process elements like actions, events, agents/patients, roles and control-flow relations. Annotated textual descriptions of processes are a good compromise between understandability (since at the end, it is just text) and behavior. Moreover, as has recently been acknowledged, obtaining annotated textual descriptions of processes opens the door to unprecedented applications, like formal reasoning or simulation of the underlying described process. Applying our technique to several publicly available texts shows promising results in terms of precision and recall with respect to the state-of-the-art approach for a similar task.
Luis Quishpi, Josep Carmona, Lluís Padró
Analyzing Process Concept Drifts Based on Sensor Event Streams During Runtime
Business processes have to adapt to constantly changing requirements at a large scale due to, e.g., new regulations, and at a smaller scale due to, e.g., deviations in sensor event streams such as warehouse temperature in manufacturing or blood pressure in health care. Deviations in the process behavior during runtime can be detected from process event streams as so-called concept drifts. Existing work has focused on concept drift detection so far, but has neglected why the drift occurred. To close this gap, this paper provides online algorithms to analyze the root cause of a concept drift using sensor event streams. These streams are typically gathered externally, i.e., separately from the process execution, and can be understood as time sequences. Supporting domain experts in assessing concept drifts through their root cause facilitates process optimization and evolution. The feasibility of the algorithms is shown based on a prototypical implementation. Moreover, the algorithms are evaluated based on a real-world data set from manufacturing.
Florian Stertz, Stefanie Rinderle-Ma, Juergen Mangler
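A minimal sketch of relating a drift to an external time sequence: compare a sliding-window mean of the sensor stream against its long-run baseline and flag the points that deviate. The window size, threshold, and temperature values are invented parameters for illustration, not the paper's algorithm:

```python
from collections import deque

def detect_shift(stream, window=5, threshold=2.0):
    """Return indices where the mean of the most recent `window` readings
    deviates from the running baseline mean by more than `threshold`."""
    history, recent, alerts = [], deque(maxlen=window), []
    for i, value in enumerate(stream):
        history.append(value)
        recent.append(value)
        baseline = sum(history) / len(history)
        if len(recent) == window and abs(sum(recent) / window - baseline) > threshold:
            alerts.append(i)
    return alerts

# e.g. warehouse temperature readings aligned with a detected concept drift
temps = [20, 21, 20, 19, 20, 21, 20, 30, 31, 32, 31, 30]
print(detect_shift(temps))  # [9, 10, 11]
```

Matching such flagged intervals against the time of a detected concept drift is one simple way to propose a sensor-based root cause to a domain expert.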
TADE: Stochastic Conformance Checking Using Temporal Activity Density Estimation
In most processes, there is a strong demand for high conformance: we are interested in processes that work as designed, with as few deviations as possible. To assure this property, conformance checking techniques evaluate process instances by comparing their execution to workflow models. However, this paradigm depends on the assumption that the workflow perspective contains all necessary information to reveal potential non-conformance. In this work we propose the novel method TADE to check for process conformance with regard to another perspective. While traditional methods like token-based replay and alignments focus on workflow-based deviations, we develop time-sensitive stochastic estimators and demonstrate their superiority over the competitors regarding accuracy and runtime efficiency. TADE is based on the well-known technique of kernel density estimation: the probabilities of event occurrences at certain timestamps are modeled, and the fitness of new cases is computed against this stochastic model. We evaluate the approach on a real-world building permit application process, which shows its usage capabilities in industrial scenarios.
Florian Richter, Janina Sontheim, Ludwig Zellner, Thomas Seidl
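The core ingredient, kernel density estimation over event timestamps, fits in a few lines. Assuming historic hour-of-day timestamps for one activity, a new event is more conforming the higher its estimated density; the bandwidth and sample values are illustrative, not from the paper:

```python
import math

def kde_density(samples, x, bandwidth=1.0):
    """Gaussian kernel density estimate at point x from observed timestamps."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples)

# historic hour-of-day timestamps of a hypothetical "ship order" activity
historic = [9.0, 9.5, 10.0, 10.2, 9.8]
print(kde_density(historic, 9.9) > kde_density(historic, 3.0))  # True: 9.9 is typical
```

A temporal fitness score for a whole case could then aggregate these per-event densities under the fitted model.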
Predictive Business Process Monitoring via Generative Adversarial Nets: The Case of Next Event Prediction
Predictive process monitoring aims to predict future characteristics of an ongoing process case, such as the case outcome or the remaining time. Recently, several predictive process monitoring methods based on deep learning, such as Long Short-Term Memory or Convolutional Neural Networks, have been proposed to address the problem of next event prediction. However, due to insufficient training data or sub-optimal network configuration and architecture, these approaches do not generalize well to the problem at hand. This paper proposes a novel adversarial training framework to address this shortcoming, based on an adaptation of Generative Adversarial Networks (GANs) to the realm of sequential temporal data. The training works by putting one neural network against the other in a two-player game (hence the "adversarial" nature), which leads to predictions that are indistinguishable from the ground truth. We formally show that the worst-case accuracy of the proposed approach is at least equal to the accuracy achieved in non-adversarial settings. The experimental evaluation shows that the approach systematically outperforms all baselines both in terms of accuracy and earliness of the prediction, despite using a simple network architecture and a naive feature encoding. Moreover, the approach is more robust, as its accuracy is not affected by fluctuations over the case length.
Farbod Taymouri, Marcello La Rosa, Sarah Erfani, Zahra Dasht Bozorgi, Ilya Verenich
Exploring Interpretable Predictive Models for Business Processes
There has been a growing interest in the literature in the application of deep learning models for predicting business process behaviour, such as the next event in a case, the completion time of an event, and the remaining execution trace of a case. Although these models provide high levels of accuracy, their sophisticated internal representations provide little or no understanding of the reason for a particular prediction, resulting in them being used as black boxes. Consequently, an interpretable model is necessary to enable transparency and empower users to evaluate when and how much they can rely on the models. This paper explores an interpretable and accurate attention-based Long Short-Term Memory (LSTM) model for predicting business process behaviour. The interpretable model provides insights into the model inputs influencing a prediction, thus facilitating transparency. An experimental evaluation shows that the proposed model, while supporting interpretability, also provides accurate predictions when compared to existing LSTM models for predicting process behaviour. The evaluation further shows that attention mechanisms in LSTM provide a sound approach to generate meaningful interpretations across different tasks in predictive process analytics.
Renuka Sindhgatta, Catarina Moreira, Chun Ouyang, Alistair Barros
Triggering Proactive Business Process Adaptations via Online Reinforcement Learning
Proactive process adaptation can prevent and mitigate upcoming problems during process execution by using predictions about how an ongoing case will unfold. There is an important trade-off with respect to these predictions: Earlier predictions leave more time for adaptations than later predictions, but earlier predictions typically exhibit a lower accuracy than later predictions, because not much information about the ongoing case is available. An emerging solution to address this trade-off is to continuously generate predictions and only trigger proactive adaptations when prediction reliability is greater than a predefined threshold. However, a good threshold is not known a priori. One solution is to empirically determine the threshold using a subset of the training data. While an empirical threshold may be optimal for the training data used and the given cost structure, such a threshold may not be optimal over time due to non-stationarity of process environments, data, and cost structures. Here, we use online reinforcement learning as an alternative solution to learn when to trigger proactive process adaptations based on the predictions and their reliability at run time. Experimental results for three public data sets indicate that our approach may on average lead to 12.2% lower process execution costs compared to empirical thresholding.
Andreas Metzger, Tristan Kley, Alexander Palm
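Read as a bandit problem, learning the trigger threshold online might look like the epsilon-greedy sketch below. The candidate thresholds, the cost-based reward signal, and the exploration rate are assumptions for illustration, not the authors' exact setup:

```python
import random

class ThresholdLearner:
    """Epsilon-greedy bandit over candidate reliability thresholds:
    learn at run time which threshold yields the lowest average cost."""

    def __init__(self, thresholds, epsilon=0.1):
        self.thresholds = thresholds
        self.epsilon = epsilon
        self.counts = {t: 0 for t in thresholds}
        self.values = {t: 0.0 for t in thresholds}  # mean of negated cost

    def choose(self):
        """Explore with probability epsilon, otherwise exploit the best arm."""
        if random.random() < self.epsilon:
            return random.choice(self.thresholds)
        return max(self.thresholds, key=lambda t: self.values[t])

    def update(self, threshold, cost):
        """Incrementally update the mean reward (negated process cost)."""
        self.counts[threshold] += 1
        n = self.counts[threshold]
        self.values[threshold] += (-cost - self.values[threshold]) / n

learner = ThresholdLearner([0.5, 0.7, 0.9], epsilon=0.0)
for t, cost in [(0.5, 10.0), (0.7, 3.0), (0.9, 8.0)]:
    learner.update(t, cost)
print(learner.choose())  # 0.7: lowest observed cost
```

Because the value estimates keep updating, such a learner can track non-stationary cost structures, which is exactly where a fixed empirical threshold falls short.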
Video-to-Model: Unsupervised Trace Extraction from Videos for Process Discovery and Conformance Checking in Manual Assembly
Manual activities are often hidden deep down in discrete manufacturing processes. To analyze and optimize such processes, process discovery techniques allow the mining of the actual process behavior. Those techniques require the availability of complete event logs representing the execution of manual activities. Related works about collecting such information from sensor data unobtrusively for the worker are rare. Papers either address the sensor-based recognition of activities or focus on the process discovery part using process mining-compatible data sets. This paper builds on previous works to provide a solution on how execution-level information can be extracted from videos in manual assembly. The test bed consists of an assembly workstation equipped with a single RGB camera. A neural network-based real-time object detector delivers the input for an algorithm, which generates trajectories reflecting the movement paths of the worker’s hands. Those trajectories are automatically assigned to work steps using hierarchical clustering of similar behavior with dynamic time warping. The system has been evaluated in a task-based study with ten participants in a laboratory under realistic conditions. The generated logs have been loaded into the process mining toolkit ProM to discover the underlying process model and to measure the system’s performance using conformance checking.
Sönke Knoch, Shreeraman Ponpathirkoottam, Tim Schwartz
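Dynamic time warping, used here to group similar hand trajectories, compares sequences that unfold at different speeds. A textbook implementation for 1-D trajectories (the real system clusters 2-D movement paths, and the sample sequences are invented):

```python
def dtw(a, b):
    """Dynamic-time-warping distance between two 1-D trajectories:
    minimal summed pointwise cost over all monotone alignments."""
    inf = float("inf")
    dp = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    dp[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # skip a point of a
                                  dp[i][j - 1],      # skip a point of b
                                  dp[i - 1][j - 1])  # match both points
    return dp[len(a)][len(b)]

print(dtw([1, 2, 3], [1, 1, 2, 3]))  # 0.0: same shape at a different speed
```

Feeding pairwise DTW distances into hierarchical clustering then lets trajectories of the same work step end up in the same cluster even when the worker moves faster or slower.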
Enhancing Event Log Quality: Detecting and Quantifying Timestamp Imperfections
Timestamp information recorded in event logs plays a crucial role in uncovering meaningful insights into business process performance and behaviour via Process Mining techniques. Inaccurate or incomplete timestamps may cause activities in a business process to be ordered incorrectly, leading to unrepresentative process models and incorrect process performance analysis results. Thus, the quality of timestamps in an event log should be evaluated thoroughly before the log is used as input for any Process Mining activity. To the best of our knowledge, research on the (automated) quality assessment of event logs remains scarce. Our work presents an automated approach for detecting and quantifying timestamp-related issues (timestamp imperfections) in an event log. We define 15 metrics related to timestamp quality across two axes: four levels of abstraction (event, activity, trace, log) and four quality dimensions (accuracy, completeness, consistency, uniqueness). We adopted the design science research paradigm and drew from knowledge related to data quality as well as event log quality. The approach has been implemented as a prototype within the open-source Process Mining framework ProM and evaluated using three real-life event logs and involving experts from practice. This approach paves the way for a systematic and interactive enhancement of timestamp imperfections during the data pre-processing phase of Process Mining projects.
Dominik Andreas Fischer, Kanika Goel, Robert Andrews, Christopher Gerhard Johannes van Dun, Moe Thandar Wynn, Maximilian Röglinger
Automatic Repair of Same-Timestamp Errors in Business Process Event Logs
This paper contributes an approach for automatically correcting "same-timestamp" errors in business process event logs. These errors consist of multiple events exhibiting the same timestamp within a given process instance. Such errors are common in practice and can be due to the logging granularity or the performance load of the logging system. Analyzing logs that have not been properly screened for such problems is likely to lead to wrong or misleading process insights. The proposed approach revolves around two techniques: one to reorder events with same-timestamp errors, the other to assign an estimated timestamp to each such event. The approach has been implemented in a software prototype and extensively evaluated in different settings, using both artificial and real-life logs. The experiments show that the approach significantly reduces the number of inaccurate timestamps, while the reordering of events scales well to large and complex datasets. The evaluation is complemented by a case study in the meat & livestock domain showing the usefulness of the approach in practice.
Raffaele Conforti, Marcello La Rosa, Arthur H. M. ter Hofstede, Adriano Augusto
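A simplified version of the two techniques combined: reorder each block of same-timestamp events by an expected activity order, then spread the block evenly towards the next distinct timestamp. The expected order, the sample trace, and the trailing one-time-unit gap are illustrative assumptions, not the paper's estimation method:

```python
def repair_same_timestamps(trace, expected_order):
    """trace: list of (activity, timestamp) pairs sorted by timestamp.
    Reorder same-timestamp groups by expected_order, then interpolate
    estimated timestamps evenly up to the next distinct timestamp."""
    rank = {act: i for i, act in enumerate(expected_order)}
    repaired, i = [], 0
    while i < len(trace):
        j = i
        while j < len(trace) and trace[j][1] == trace[i][1]:
            j += 1  # find the end of the same-timestamp group
        group = sorted(trace[i:j], key=lambda e: rank.get(e[0], len(rank)))
        start = trace[i][1]
        end = trace[j][1] if j < len(trace) else start + 1.0
        step = (end - start) / len(group)
        repaired += [(act, start + k * step) for k, (act, _) in enumerate(group)]
        i = j
    return repaired

trace = [("register", 0.0), ("check", 1.0), ("pay", 1.0), ("ship", 2.0)]
print(repair_same_timestamps(trace, ["register", "pay", "check", "ship"]))
```

In the example, "check" and "pay" share timestamp 1.0; the repair swaps them into the expected order and assigns "check" an estimated timestamp of 1.5.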


Explorative Process Design Patterns
The dominating process lifecycle models are characterized by deductive reasoning; that is, during the process analysis stage, problem-centered approaches such as Lean’s seven types of waste are used to identify pain points (e.g., bottlenecks), and defined response patterns are deployed to overcome these. As a result, exploitative business process management (BPM) has reached a high level of maturity. However, explorative BPM, with its focus on adding new value to business processes, lacks an equally mature, deductive set of design patterns. This paper proposes to close this gap by offering seven explorative process design patterns that support the identification of options to create new value from existing business processes. Derived from secondary data analysis, the patterns are presented and comprehensively exemplified. Contributing these seven types of process exploration has the potential to help complement the focus on operationally excellent processes with a view on revenue-resilient business processes.
Michael Rosemann
Quo Vadis, Business Process Maturity Model? Learning from the Past to Envision the Future
To support companies in systematically improving their business processes, academia has developed and published various business process maturity models in recent decades. Tarhan et al. (2016) expressed initial doubts about the quality of many of these models in their literature review. This paper extends their review by five years (2015–2019) and additionally analyzes the publication outlets as an indicator of model quality. The results strongly suggest that business process maturity models are mainly released in less-recognized journals. A reason for this might be problems with replicability and relevance, which are the main criteria for acceptance in higher-quality journals. This finding motivated the derivation of literature-based criteria to increase the transparency, replicability, and content relevance of these models. These criteria are a first step to support researchers in publishing more transparent and replicable business process maturity models and to guide reviewers when evaluating papers that are considered for publication. In addition, practitioners benefit from more useful and accessible models.
Vanessa Felch, Björn Asdecker
A Typological Framework of Process Improvement Project Stakeholders
Stakeholder engagement is well established as a critical success factor in Business Process Management (BPM) projects. Yet, guidelines to identify the relevant stakeholder groups and their specific activities are lacking. This study addresses this gap through a typological framework of stakeholder groups in process improvement (PI) projects. The framework is developed inductively from an in-depth case study and contextualized through a synthesis of literature from two different areas: process management and stakeholder research. The resulting framework offers a comprehensive matrix of six diverse stakeholder groups based on their affiliation to a BPM project and their role in the process. The framework differentiates between internal and external stakeholders and identifies three categories of each, namely those impacted by, a catalyst for, and/or a facilitator of the process improvement efforts. The framework recognizes the fundamental differences between BPM stakeholders, offers insight into the origin of those differences, and provides a basis for planning and executing engagement activities with different stakeholder groups.
Charon Abbott, Wasana Bandara, Paul Mathiesen, Erica French, Mary Tate
BP-IT Alignment in SMEs: A Game-Based Learning Approach to Increase Stakeholders’ Maturity
This research addresses the lack of sufficient guidance for SMEs with regard to how BP-IT alignment can be achieved in an affordable and scalable manner. The design science approach focused on SME managers and consultants as potential users of the Aligner, which incorporates game-based training for raising awareness and building maturity. The deliverables are the Aligner design manual, a game design document, and a prototype of the user interface, all of which can be considered a blueprint for game production. All parts of the blueprint are based on theoretical grounding and/or support from the literature and were demonstrated to and evaluated with the relevant stakeholders. Evaluation by representatives of both user groups indicates that the concept and UI prototype of the Aligner are indeed suitable for addressing SME-specific circumstances of BP-IT alignment. Given also the affordability and scalability of a gamified solution in principle, the validated blueprint promises to be useful for the development and application of practical games and incorporates transferable know-how for BPM-related game design regarding BP-IT alignment in SMEs.
Mahi Samir Ismail, Ralf Klischewski
Understanding Quality in Declarative Process Modeling Through the Mental Models of Experts
Imperative process models have become immensely popular. However, their use is usually limited to rigid and repetitive processes. Considering the inherent flexibility of most real-world processes and the increased need for managing knowledge-intensive processes, the adoption of declarative languages becomes more pertinent than ever. While the quality of imperative models has been extensively investigated in the literature, little is known about the dimensions affecting the quality of declarative models. This work takes a significant stride towards investigating the quality of declarative models. Following Personal Construct Theory (PCT), our research introduces a novel method within the Business Process Management (BPM) field to explore quality through the eyes of expert modelers. The findings of this work summarize the dimensions defining the quality of declarative models. The outcome shows the potential of PCT as a basis to discover quality dimensions and advances our understanding of quality in declarative process models.
Amine Abbad Andaloussi, Christopher J. Davis, Andrea Burattin, Hugo A. López, Tijs Slaats, Barbara Weber
Adding Intelligent Robots to Business Processes: A Dilemma Analysis of Employees’ Attitudes
Given the advancements in artificial intelligence, organizations are increasingly interested in applying robotics to their business processes. Beyond the many technological implications, we focus on the human side of robotics, which remains under-investigated for higher-skilled employees. We particularly consider employee acceptance of intelligent robots with cognitive skills. In 48 interviews, hypothetical dilemmas regarding manual work, full automation, and semi-automation were discussed by office workers, managers, and IT consultants. The results show that employees are positive about intelligent robots. The majority are willing to transfer repetitive tasks as long as humans can control outputs for accountability. However, employees prefer keeping tasks involving creativity and human interaction. Many tasks can thus already be replaced by robotics, but more attention is needed for the facilitating role of organizations (e.g., training). The findings affect innovation strategies for implementing intelligent robots with reduced social implications. The idea of a step-by-step plan encourages gradual adoption.
Amy Van Looy
How to Keep RPA Maintainable?
Robotic Process Automation (RPA) is a term for software tools that operate on the user interface while trying to mimic a real user. Organizations are eager to adopt RPA, since the technology promises significant benefits such as cost savings. However, it is unclear how organizations should govern RPA. The burden of maintenance, in particular, can become high once an organization scales up its RPA efforts. To prevent or diminish high maintenance efforts, we propose in this paper 11 guidelines to establish low-maintenance RPA implementations. The guidelines are particularly applicable in those contexts where business units themselves oversee these implementations with a Center of Excellence in the background. The guidelines are derived from a literature study and four case studies; they are validated with experts using the Delphi method.
Philip Noppen, Iris Beerepoot, Inge van de Weerd, Mathieu Jonker, Hajo A. Reijers
A Consolidated Framework for Implementing Robotic Process Automation Projects
Robotic process automation (RPA) is a disruptive technology to automate already digital yet manual tasks and subprocesses as well as whole business processes. In contrast to other process automation technologies, RPA only accesses the presentation layer of IT systems and imitates human behavior. Due to the novelty of this approach and the varying approaches to implementing the technology, up to 50% of RPA projects fail. To tackle this issue, we use a design science research approach to develop a framework for the initiation of RPA projects. We analyzed a total of 23 case studies of RPA implementation projects to derive a preliminary sequential model. We then used expert interviews to validate and refine the model. The result is a consolidated framework with variable stages that offers guidelines with enough flexibility to be applicable in complex and heterogeneous corporate environments. We conclude the paper with a discussion and an outlook on research opportunities in adapting and scaling RPA technology in projects.
Lukas-Valentin Herm, Christian Janiesch, Alexander Helm, Florian Imgrund, Kevin Fuchs, Adrian Hofmann, Axel Winkelmann
A Multi Perspective Framework for Enhanced Supply Chain Analytics
Supply chain analytics, especially in the field of food supply, has become a strategic business function. Monthly executive sales and operations planning meetings utilize supply chain analytics to inform strategic business decisions. Having identified gaps in the strategic management of food supply chains, a multi-perspective supply chain analytics framework is developed, incorporating process and data attributes to support decision making. Using Design Science as the research methodology, a novel framework with a supporting IT artefact is built and presented with early evaluation results.
The resulting multi-perspective supply chain analytics framework equips practitioners to identify strategic issues, providing important decision support information. The case study further illustrates that the framework is applicable across all integrated food supply chains. This research has highlighted gaps in the application of process science to the supply chain management domain, particularly in the simultaneous assessment of process and data. The outcomes contribute to research in this domain by providing a framework that will enhance the significant reference modelling and operational management work that has occurred in this field.
Owen Keates, Moe Thandar Wynn, Wasana Bandara
Event Log Generation in a Health System: A Case Study
Process mining has recently gained considerable attention as a family of methods and tools that aim at discovering and analyzing business process executions. Process mining starts with event logs, i.e., ordered lists of performed activities. Since event data is typically not stored in a process-oriented way, event logs have to be generated first. Experience shows that event log generation takes substantial effort in process mining projects. This case study reports on the experiences made during event log generation from the real-world data warehouse of a large U.S. health system. As the focal point, the case study looks at activities and processes related to the treatment of low back pain. Guided by the main phases of event log generation, i.e., extraction, correlation, and abstraction, we report on challenges faced, solutions found, and lessons learned. The paper concludes with future research directions derived from the lessons learned.
Simon Remy, Luise Pufahl, Jan Philipp Sachs, Erwin Böttinger, Mathias Weske
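The three phases named in the abstract above (extraction, correlation, abstraction) can be sketched in a few lines; the record fields, patient identifiers, and the code-to-activity mapping below are all invented for illustration and do not come from the paper:

```python
from collections import defaultdict

# Hypothetical raw records as they might be extracted from a data warehouse:
# each row carries a case identifier, a timestamp, and a low-level code.
raw_rows = [
    {"case": "p1", "ts": "2020-03-02T09:00", "code": "LBP_VISIT"},
    {"case": "p1", "ts": "2020-03-01T08:30", "code": "LBP_XRAY"},
    {"case": "p2", "ts": "2020-03-05T10:15", "code": "LBP_VISIT"},
]

# Abstraction step: map low-level codes to higher-level activity names
# (this mapping is made up for the sketch).
activity_map = {"LBP_VISIT": "Consultation", "LBP_XRAY": "Imaging"}

def build_event_log(rows, mapping):
    """Correlation step: group events by case id, then order each trace."""
    grouped = defaultdict(list)
    for row in rows:
        grouped[row["case"]].append((row["ts"], mapping.get(row["code"], row["code"])))
    # ISO-8601 timestamps sort correctly as strings, giving chronological traces.
    return {case: [act for _, act in sorted(events)] for case, events in grouped.items()}

traces = build_event_log(raw_rows, activity_map)
print(traces["p1"])  # ['Imaging', 'Consultation']
```

In a real project each phase is far harder than this sketch suggests: extraction must join many warehouse tables, correlation must pick a suitable case notion, and abstraction must handle codes with no clean one-to-one activity mapping, which is precisely what the case study reports on.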