
2024 | Book

Business Process Management Workshops

BPM 2023 International Workshops, Utrecht, The Netherlands, September 11–15, 2023, Revised Selected Papers


About this book

This book constitutes revised papers from the International Workshops held at the 21st International Conference on Business Process Management, BPM 2023, in Utrecht, The Netherlands, during September 2023.
Papers from the following workshops are included:
• 7th International Workshop on Artificial Intelligence for Business Process Management (AI4BPM 2023)
• 7th International Workshop on Business Processes Meet Internet-of-Things (BP-Meet-IoT 2023)
• 19th International Workshop on Business Process Intelligence (BPI 2023)
• 16th International Workshop on Social and Human Aspects of Business Process Management (BPMS2 2023)
• 2nd International Workshop on Data-Driven Business Process Optimization (BPO 2023)
• 11th International Workshop on Declarative, Decision and Hybrid Approaches to Processes (DEC2H 2023)
• 2nd International Workshop on Digital Twins for Business Processes (DT4BP 2023)
• 1st International Workshop on Formal Methods for Business Process Management (FM-BPM 2023)
• 2nd International Workshop on Natural Language Processing for Business Process Management (NLP4BPM 2023)
• 1st International Workshop on Object-Centric Processes from A to Z (OBJECTS 2023)
• 3rd International Workshop on Change, Drift, and Dynamics of Organizational Processes (ProDy 2023)
Each of the workshops focused on particular aspects of business process management. Overall, after a thorough review process, 42 full papers were selected from a total of 86 submissions.

Table of Contents

Frontmatter

7th International Workshop on Artificial Intelligence for Business Process Management (AI4BPM 2023)

Frontmatter
A Responsibility Framework for Computing Optimal Process Alignments

In this paper we propose a novel approach to process alignment that leverages contextual information captured by way of responsibilities. On the one hand, responsibilities may justify deviations; in these cases, we consider deviations as correct behaviors rather than errors. On the other hand, responsibilities can either be met or neglected in the execution trace; thus, we prefer alignments in which neglected responsibilities are minimized. The paper proposes a formal framework for responsibilities in a process model, including the definition of cost functions to determine optimal alignments. It also outlines a branch-and-bound algorithm for their computation.

Matteo Baldoni, Cristina Baroglio, Elisa Marengo, Roberto Micalizio
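
The following is a minimal, illustrative sketch (not the paper's formal framework) of the cost idea described above: deviations justified by a responsibility cost nothing, and every neglected responsibility adds a penalty, so alignments with fewer neglected responsibilities score better. All names and costs are assumptions.

```python
# Illustrative responsibility-aware alignment cost (names/costs assumed).
STD_COST = {"sync": 0, "log_move": 1, "model_move": 1}

def alignment_cost(moves, justified_deviations, neglected_responsibilities,
                   neglect_penalty=1):
    cost = 0
    for kind, activity in moves:
        if kind in ("log_move", "model_move") and activity in justified_deviations:
            continue  # deviation justified by a responsibility: treated as correct
        cost += STD_COST[kind]
    # prefer alignments where as few responsibilities as possible are neglected
    return cost + neglect_penalty * len(neglected_responsibilities)

# Example: a log move on "escalate" is justified; one responsibility neglected.
moves = [("sync", "register"), ("log_move", "escalate"), ("sync", "close")]
print(alignment_cost(moves, {"escalate"}, {"notify_customer"}))  # -> 1
```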
A Discussion on Generalization in Next-Activity Prediction

Next activity prediction aims to forecast the future behavior of running process instances. Recent publications in this field predominantly employ deep learning techniques and evaluate their prediction performance using publicly available event logs. This paper presents empirical evidence that calls into question the effectiveness of these current evaluation approaches. We show that there is an enormous amount of example leakage in all of the commonly used event logs, so that rather trivial prediction approaches perform almost as well as ones that leverage deep learning. We further argue that designing robust evaluations requires a more profound conceptual engagement with the topic of next-activity prediction, and specifically with the notion of generalization to new data. To this end, we present various prediction scenarios that necessitate different types of generalization to guide future research.

Luka Abb, Peter Pfeiffer, Peter Fettke, Jana-Rebecca Rehse
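
One plausible way to quantify the example leakage the paper reports is the share of (prefix, next activity) test examples that occur verbatim in the training traces; the sketch below illustrates that idea (it is not necessarily the authors' exact metric).

```python
# Quantify prefix/next-activity leakage between a train/test trace split.
def examples(traces):
    out = set()
    for trace in traces:
        for i in range(1, len(trace)):
            out.add((tuple(trace[:i]), trace[i]))  # (prefix, next activity)
    return out

def leakage(train_traces, test_traces):
    train, test = examples(train_traces), examples(test_traces)
    return len(test & train) / len(test) if test else 0.0

train = [["a", "b", "c"], ["a", "b", "d"]]
test = [["a", "b", "c"], ["a", "e", "c"]]
print(f"{leakage(train, test):.0%} of test examples also occur in training")
```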
An Experiment on Transfer Learning for Suffix Prediction on Event Logs

Predicting future activity occurrences for a process instance is a key challenge in predictive process monitoring. Sequential deep learning models have been improving the prediction accuracy for this suffix prediction task. Training such models with many parameters on large event logs requires expensive hardware and is often time-consuming. Transfer learning addresses this issue by starting from a pre-trained model that is used as the starting point for training on other data sets, thereby reducing training time or improving accuracy within a given time budget. Transfer learning has been shown to be very effective for natural language processing and image classification. However, research on transfer learning for predictive process monitoring is scarce, and it is missing entirely for suffix prediction. This paper contributes an experimental study on the effectiveness of transfer learning for suffix prediction using two sequential deep learning architectures (GPT and LSTM). Base models are trained on two public event logs and used as the starting point for transfer learning on eight event logs from different domains. The experiments show that even with half of the available training budget and without using very large event logs for the base model, the results obtained in the transfer learning setting are often better and in some cases competitive to those obtained with random initialization. A notable exception is an event log with a very large vocabulary of activity labels. This seems to indicate that transfer learning depends on specific data properties, such as vocabulary size, warranting further research.

Mathieu van Luijken, István Ketykó, Felix Mannhardt
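
A minimal PyTorch sketch of the transfer setup described above, under assumed architecture details (the paper's exact models differ): the base model's LSTM weights are reused on a new log, while the vocabulary-specific embedding and output head are re-initialized before fine-tuning.

```python
import torch
import torch.nn as nn

class SuffixModel(nn.Module):
    def __init__(self, vocab_size, emb=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.head(h)  # next-activity logits at every prefix position

base = SuffixModel(vocab_size=40)            # pretrained on a large base log
target = SuffixModel(vocab_size=12)          # new log, different vocabulary
target.lstm.load_state_dict(base.lstm.state_dict())  # transfer shared weights

# Fine-tune everything: embedding/head start fresh, LSTM starts pretrained.
opt = torch.optim.Adam(target.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
x = torch.randint(0, 12, (8, 5))             # dummy batch of activity prefixes
y = torch.randint(0, 12, (8, 5))             # dummy next-activity targets
loss = loss_fn(target(x).reshape(-1, 12), y.reshape(-1))
loss.backward()
opt.step()
```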
A Case for Business Process-Specific Foundation Models

The inception of large language models has helped advance the state-of-the-art on numerous natural language tasks. This has also opened the door for the development of foundation models for other domains and data modalities (e.g., images and code). In this paper, we argue that business process data has unique characteristics that warrant the creation of a new class of foundation models to handle tasks like activity prediction, process optimization, and decision making. These models should also tackle the challenges of applying AI to business processes, which include data scarcity, multi-modal representations, domain-specific terminology, and privacy concerns. To support our claim, we show the effectiveness of few-shot learning and transfer learning in next activity prediction, crucial properties for the success of foundation models.

Yara Rizk, Praveen Venkateswaran, Vatche Isahagian, Austin Narcomey, Vinod Muthusamy
Using Reinforcement Learning to Optimize Responses in Care Processes: A Case Study on Aggression Incidents

Previous studies have used prescriptive process monitoring to find actionable policies in business processes and conducted case studies in similar domains, such as the loan application process and the traffic fine process. However, care processes tend to be more dynamic and complex. For example, at any stage of a care process, a multitude of actions is possible. In this paper, we follow a reinforcement learning approach and train a Markov decision process using event data from a care process. The goal was to find optimal policies for staff members when clients display any type of aggressive behavior. We used the reinforcement learning algorithms Q-learning and SARSA to find optimal policies. Results showed that the policies derived from these algorithms are similar to the most frequent actions currently used but provide the staff members with a few more options in certain situations.

Bart J. Verhoef, Xixi Lu
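
For reference, the textbook Q-learning update the paper relies on looks as follows; the state and action names here are purely hypothetical stand-ins for the aggression-incident MDP.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
Q = defaultdict(float)  # (state, action) -> estimated value

def choose(state, actions):
    if random.random() < EPS:                       # epsilon-greedy exploration
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(s, a, reward, s_next, actions):
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])

# One observed transition from the event data (hypothetical):
actions = ["talk_down", "seclusion", "call_colleague"]
q_update("verbal_aggression", "talk_down", reward=+1,
         s_next="calm", actions=actions)
```

SARSA differs only in the target: instead of the best next-state value, it uses the Q-value of the action actually taken next.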
ProtoNER: Few Shot Incremental Learning for Named Entity Recognition Using Prototypical Networks

Key-value pair (KVP) extraction, or Named Entity Recognition (NER), from visually rich documents has been an active area of research in the document understanding and data extraction domain. Several transformer-based models, such as LayoutLMv2 [1], LayoutLMv3 [2], and LiLT [3], have emerged, achieving state-of-the-art results. However, adding even a single new class to an existing model requires (a) re-annotating the entire training dataset to include the new class and (b) retraining the model. Both of these issues significantly slow down the deployment of an updated model. We present ProtoNER: a prototypical-network-based end-to-end KVP extraction model that allows new classes to be added to an existing model while requiring only a minimal number of newly annotated training samples. The key contributions of our model are: (1) no dependency on the dataset used for the initial training of the model, which alleviates the need to retain the original training dataset for a longer duration as well as the very time-consuming task of data re-annotation; (2) no intermediate synthetic data generation, which tends to add noise and degrade the model's performance; and (3) a hybrid loss function that allows the model to retain knowledge about older classes while learning about newly added classes. Experimental results show that ProtoNER fine-tuned with just 30 samples achieves results for the newly added classes similar to those of a regular model fine-tuned with 2600 samples.

Ritesh Kumar, Saurabh Goyal, Ashish Verma, Vatche Isahagian
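
The core prototypical-network step is simple to state; the generic NumPy sketch below (not ProtoNER's full pipeline) shows it: each class prototype is the mean embedding of its few support examples, and a query token is assigned to the nearest prototype. Adding a new class then only requires computing one more prototype from a handful of annotated samples.

```python
import numpy as np

def prototypes(support_emb, support_labels):
    return {c: support_emb[support_labels == c].mean(axis=0)
            for c in np.unique(support_labels)}

def classify(query_emb, protos):
    labels = list(protos)
    dists = np.stack([np.linalg.norm(query_emb - protos[c], axis=1)
                      for c in labels], axis=1)
    return [labels[i] for i in dists.argmin(axis=1)]

rng = np.random.default_rng(0)
support = rng.normal(size=(30, 16))          # e.g. 30 annotated token embeddings
labels = rng.integers(0, 3, size=30)         # 3 entity classes
protos = prototypes(support, labels)
print(classify(rng.normal(size=(5, 16)), protos))
```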

7th International Workshop on Business Processes Meet the Internet-of-Things (BP-Meet-IoT 2023)

Frontmatter
Experiences from the Internet-of-Production: Using “Data-Models-in-the-Middle” to Fight Complexity and Facilitate Reuse

Data-driven approaches play a key role in improving operational processes, and production is no exception. The Internet-of-Production (IoP) is an ambitious initiative aiming at cross-domain collaboration in production while exploiting semantically adequate and context-aware data at different levels of granularity. The Internet-of-Things (IoT), in the context of production also referred to as Industry 4.0 or the Industrial Internet of Things, provides a wide range of data assets. However, these are often handled in an ad-hoc manner with little support for reuse. Data pipelines convert machine- or system-specific data into a format suitable for data-science techniques such as machine learning. Based on an analysis of the data used in IoP, we developed so-called “Data-Models-in-the-Middle” (DMMs). Two such models are described in this paper: Measurement and Event Data (MAED) and Object-Centric Event data (OCED). OCED enables Object-Centric Process Mining (OCPM), allowing organizations to view their operational processes from any perspective using a single source of truth. However, OCED is not suitable for low-level machine data that contain a mixture of continuous measurements (e.g., time series data describing position, temperature, force, speed, etc.) and discrete events. Therefore, we also propose MAED as a data format. The combination of both “Data-Models-in-the-Middle” (MAED and OCED) provides good coverage of many production-related use cases.

Wil M. P. van der Aalst
Analyzing Behavior in Cyber-Physical Systems in Connected Vehicles: A Case Study

Cyber-physical systems embedded in various areas connected to the internet generate unprecedented volumes of data. Humans frequently interact with these systems, thus allowing companies to analyze the data and gain valuable insights into user-product interactions. To analyze the underlying behavior recorded in data, process-mining techniques can be used. However, to apply process mining, the low-level measurements have to be transformed into an event log. In this work, we analyze enriched and transformed connected-vehicle data dealing with an assistance system using process-mining techniques. We analyze the time spent in states of the system, compare behavior between different models using Kruskal-Wallis’ and Dunn’s test, and discover reasons for state switching. We demonstrate how companies can apply process mining to data collected by internet-of-things devices to understand the usage, ratify system requirements, and develop their products.

Harry H. Beyel, Omar Makke, Oleg Gusikhin, Wil M. P. van der Aalst
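
An illustrative version of the statistical comparison named in the abstract, with dummy numbers: time-in-state samples per vehicle model compared via SciPy's Kruskal-Wallis test, followed by Dunn's pairwise post-hoc test from the scikit-posthocs package (an assumed tooling choice, not necessarily the authors').

```python
from scipy import stats
import scikit_posthocs as sp   # pip install scikit-posthocs

# Time (minutes) spent in one system state, per vehicle model (dummy data).
model_a = [12.1, 9.8, 14.3, 11.0, 13.5]
model_b = [18.2, 17.9, 16.4, 19.1, 15.8]
model_c = [12.9, 13.4, 10.7, 12.2, 11.8]

h, p = stats.kruskal(model_a, model_b, model_c)
print(f"Kruskal-Wallis: H={h:.2f}, p={p:.4f}")
if p < 0.05:   # which pairs of models differ? Dunn's post-hoc test
    print(sp.posthoc_dunn([model_a, model_b, model_c], p_adjust="bonferroni"))
```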
An Object-Centric Approach to Handling Concurrency in IoT-Aware Processes

The increasing adoption of IoT in the context of Business Process Management (BPM) makes it necessary to efficiently coordinate concurrent processes and activities that involve physical resources. Traditional approaches to handling concurrency in BPM systems are not suitable for automating IoT-aware processes due to novel challenges raised by the IoT. We propose to handle concurrency in IoT based on object-centric processes implemented in the PHILharmonicFlows framework. The framework facilitates the data-driven modeling and coordination of object lifecycles and interactions, which are suitable to address concurrency in IoT-aware processes. The approach is demonstrated in a scenario from smart manufacturing. The results show that PHILharmonicFlows offers a flexible and comprehensible solution for coordinating concurrent activities in IoT settings with constrained physical resources.

Florian Gallik, Yusuf Kirikkayis, Ronny Seiger, Manfred Reichert
Viola: Detecting Violations of Behaviors from Streams of Sensor Data

Sensor networks and the Internet of Things enable the easy collection of environmental data. With this data it is possible to perceive the activities carried out in an environment. For example, in healthcare, sensor data could be used to identify and monitor the daily routine of people with dementia. In fact, changes in routines could be a symptom of the worsening of the disease. Streaming conformance checking techniques aim at identifying in real-time, from a stream of events, whether the observed behavior differs from the expected one. However, they require a stream of activities, not sensor data. The artifact-driven process monitoring approach combines the structure of the control-flow with the data in an E-GSM model. This paper presents Viola, the first technique capable of automatically mining an E-GSM model from a labeled sensor data log, which is then suitable for runtime monitoring from an unlabeled sensor stream to accomplish our goal (i.e., streaming conformance checking). This approach is implemented and has been validated with synthetic sensor data and a real-world example.

Gemma Di Federico, Giovanni Meroni, Andrea Burattin
An Event-Centric Metamodel for IoT-Driven Process Monitoring and Conformance Checking

Process monitoring and conformance checking analyze process events describing process executions. However, such events are not always available or in a form suitable for these analysis tasks, for example for manual processes and (semi-)automated processes whose executions are not controlled by a Process-Aware Information System. To bridge this gap, we propose to leverage Internet of Things (IoT) technologies for sensing low-level events and abstracting them into high-level process events to enable process monitoring and conformance checking. We propose an event-centric metamodel for monitoring and conformance checking systems that is agnostic with respect to process characteristics such as level of automation, system support, and modeling paradigm. We demonstrate the applicability of the metamodel by instantiating it for processes represented by different modeling paradigms.

Marco Franceschetti, Ronny Seiger, Barbara Weber

19th International Workshop on Business Process Intelligence (BPI 2023)

Frontmatter
Relationships Between Change Patterns in Dynamic Event Attributes

Process mining utilizes process execution data to discover and analyse business processes. Event logs represent process execution data, providing information about activities executed in a process instance. In addition to generic event attributes like activity and timestamp, events might contain domain-specific attributes, such as a blood sugar measurement in a healthcare environment. Many of these values change during a typical process quite frequently. Hence, we refer to those as dynamic event attributes. Change patterns can be derived from dynamic event attributes, describing if the attribute values change from one activity to another. However, change patterns can only be identified in an isolated manner, neglecting the chance of finding co-occurring change patterns. This paper provides an approach to identify relationships between change patterns. We applied the proposed technique to the MIMIC-IV real-world dataset on hospitalizations in the US and evaluated the results with a medical expert. The approach is implemented in Python using the PM4Py framework.

Jonas Cremerius, Hendrik Patzlaff, Mathias Weske
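
A small pandas sketch of the underlying idea (column names and values are assumptions): per case, compare a dynamic attribute's value with the previous event and record the dominant change direction per activity transition. Relationships between such patterns can then be studied via their co-occurrence across cases.

```python
import pandas as pd

log = pd.DataFrame({
    "case": [1, 1, 1, 2, 2],
    "activity": ["admit", "measure", "medicate", "admit", "measure"],
    "glucose": [140, 180, 120, 150, 155],   # a dynamic event attribute
})
log = log.sort_values("case").copy()        # assumed already in event order
log["prev_activity"] = log.groupby("case")["activity"].shift()
log["delta"] = log.groupby("case")["glucose"].diff()

transitions = log.dropna(subset=["prev_activity"])
pattern = (transitions.assign(direction=lambda d: d["delta"].apply(
               lambda x: "up" if x > 0 else "down" if x < 0 else "same"))
           .groupby(["prev_activity", "activity"])["direction"]
           .agg(lambda s: s.value_counts().idxmax()))
print(pattern)  # dominant change direction per activity transition
```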
State of the Art: Automatic Generation of Business Process Models

Manual Business Process Modeling can be a time-consuming and error-prone process, resulting in low-quality models. Therefore, various approaches have been proposed in research and practice to partially or fully automate the creation of business process models. We conducted a systematic literature review (LR) on automatic generation of business process models. The LR demonstrates that there exist various approaches for automated generation of business process models, which differ in terms of the input data, generation methods, and modeling languages utilized. Despite active research into automatic model generation, there is still considerable scope for improvement. New technologies are continually being developed to support automatic model generation. Additionally, the increasing digitalization of processes and organizations generates vast amounts of data that can be harnessed for modeling.

Selina Schüler, Sascha Alpers
Navigating Event Abstraction in Process Mining: A Comprehensive Analysis of Sub-problems, Data, and Process Characteristic Considerations

Process mining technologies assume event logs with an appropriate level of granularity, but many systems generate low-level event logs, resulting in complex process models. Event abstraction addresses this issue by transforming low-level event logs into abstracted event logs, enabling the derivation of business-level process models. However, practitioners often struggle to choose suitable event abstraction methods. This is primarily due to the lack of comparative studies that analyze the differences between methods and the insufficient information regarding the data and relevant process characteristics to be considered. This study conducts a comprehensive literature review on event abstraction to overcome these challenges. The review focuses on summarizing specific sub-problems in event abstraction, identifying types of data that can be utilized, and highlighting important process characteristics that should be considered. The insights and guidance provided by this review will be valuable to practitioners seeking to select and implement effective event abstraction techniques.

Jungeun Lim, Minseok Song
Timed Alignments with Mixed Moves

We study conformance checking for timed models, that is, process models that consider both the sequence of events that occur, as well as the timestamps at which each event is recorded. Time-aware process mining is a growing subfield of research, and as tools that seek to discover timing-related properties in processes develop, so does the need for conformance-checking techniques that can tackle time constraints and provide insightful quality measures for time-aware process models. One of the most useful conformance artefacts is the alignment, that is, finding the minimal changes necessary to correct a new observation to conform to a process model. In this paper, we solve the timed alignment problem where the metrics used to compare timed processes allow weighted mixed moves, i.e. an error on the timestamp of an event may or may not propagate to its successors, and we provide linear time algorithms for a large class of such weighted mixed distances, both for distance computation and alignment on models with sequential causal processes.

Neha Rino, Thomas Chatain

16th International Workshop on Social and Human Aspects of Business Process Management (BPMS2 2023)

Integrating Social Media and Business Process Management: Exploring the Role of AI Agents and the Benefits for Agility

Business Process Management (BPM) faces increasing societal and business challenges. Societal issues include effectively managing unexpected changes and ensuring employee engagement during process modifications. The former category involves the limitations of technology when introducing process changes, while Agile BPM and Social BPM concepts have been explored in previous studies to tackle the latter. These investigations have resulted in the development of a Social-Media (SM) based BPM platform and an agility framework. The SM platform draws inspiration from popular social media platforms such as Twitter and Instagram. Its primary objective is to merge the design-time and run-time phases of the business process (BP) lifecycle as well as to actively engage stakeholders in the dynamic design and implementation of BPs. By simplifying the response to potential changes and maintaining stakeholder involvement throughout the process, the platform addresses the aforementioned challenges. This article establishes a correlation between SM platform concepts and BPM concepts, emphasizing how they facilitate the concurrent design and execution of BPs. Furthermore, the article demonstrates the application of the SM platform in implementing the agility framework through a use case analysis involving a scientific paper submission process. To overcome the current emphasis on stakeholder involvement and human capabilities in applying the platform, we finally explore the possibilities of employing AI agents as automated assistants.

Mehran Majidian Eidgahi, Anne-Marie Barthe-Delanoë, Dominik Bork, Sina Namaki Araghi, Guillaume Macé-Ramète, Frederick Benaben
Design Principles for Using Business Process Management Systems

Organizations aim to achieve operational excellence to reduce costs and improve the quality of their business processes. Business process management (BPM) enables continuous improvement of business processes. Business process management systems (BPMS) serve as an entry point to BPM activities and enable firms to manage, execute, and automate business processes. This study follows an action design research approach to design a BPMS in use together with a medium-sized German fashion company. We concurrently evaluated the artifact-in-use by tracking performance indicators that are aligned with the company’s objective. As a result of our formalization of learning, we propose seven design principles for using BPMSs to achieve continuous improvement of business processes. These design principles comprise user management, process modeling, automation, logging, monitoring, integration, and case handling.

Sebastian Dunzer, Willi Tang, Nico Höchstädter, Sandra Zilker, Martin Matzner
Towards a Measurement Instrument for Assessing Capabilities When Innovating Less-Structured Business Processes

Many contemporary organizations deal with different types of business processes, including less-structured business processes (LSBP). The latter are characterized by more unpredictable situations, ad-hoc tasks, limited process details, and a stronger emphasis on the human aspects as compared to the more structured business processes (SBP). However, existing BPM approaches often encounter challenges when driving innovation within organizations that have LSBP characteristics. To address this issue, organizations still lack a measurement tool to assess their process innovation capability (PIC) that is tailored towards LSBP (as opposed to generic BPM measurement instruments). Our study fills this gap by gradually building and testing a PIC-LSBP measurement instrument. We rely on empirical data from an international Delphi study, followed by a demonstration in two real-life organizations (i.e., one manufacturing organization in Asia and one service organization in Europe). The study aims at showing the practical application of the uncovered measurement instrument, seeking to identify and effectively address process innovation in more dynamic and complex business environments. The resulting instrument consists of six main capability areas, 18 sub-areas, and 55 formative measurement items. The findings highlight the significance of leveraging an instrument as a means to assess an organization’s current innovation capability with respect to a specific process type. Based on this insight, our research contributes to a broader understanding of how organizations can assess their existing process situation and derive action plans.

Joklan Imelda Camelia Goni, Amy Van Looy

2nd International Workshop on Data-Driven Business Process Optimization (BPO 2023)

Frontmatter
Timed Process Interventions: Causal Inference vs. Reinforcement Learning

The shift from the understanding and prediction of processes to their optimization offers great benefits to businesses and other organizations. Precisely timed process interventions are the cornerstones of effective optimization. Prescriptive process monitoring (PresPM) is the sub-field of process mining that concentrates on process optimization. The emerging PresPM literature identifies state-of-the-art methods, causal inference (CI) and reinforcement learning (RL), without presenting a quantitative comparison. Most experiments are carried out using historical data, causing problems with the accuracy of the methods’ evaluations and preempting online RL. Our contribution consists of experiments on timed process interventions with synthetic data that renders genuine online RL and the comparison to CI possible, and allows for an accurate evaluation of the results. Our experiments reveal that RL’s policies outperform those from CI and are more robust at the same time. Indeed, the RL policies approach perfect policies. Unlike CI, the unaltered online RL approach can be applied to other, more generic PresPM problems such as next best activity recommendations. Nonetheless, CI has its merits in settings where online learning is not an option.

Hans Weytjens, Wouter Verbeke, Jochen De Weerdt
Towards Data-Driven Business Process Redesign Through the Lens of Process Mining Case Studies

Process mining is widely used for business process analysis, but rarely informs Business Process Redesign (BPR) activities. We review the process mining literature and BPR frameworks to create thematic maps of state-of-the-art process mining analyses, techniques, outcomes, and BPR best practices. We collect 156 case studies where process mining is applied and use them to validate the proposed themes. We reveal connections between the themes to explore the synergy between process mining and process redesign. Our work contributes to the development of an approach for BPR practitioners to systematically leverage process mining capabilities, providing a solid starting point for data-driven BPR.

Zeping Wang, Rehan Syed, Chun Ouyang
Analyzing the Devil’s Quadrangle of Process Instances Through Process Mining

The Devil’s Quadrangle is a framework used in Business Process Management to describe the inherent process performance trade-offs regarding the time, costs, flexibility, and quality dimensions. In practice, improving a process through one of these dimensions might have a negative effect on the performance of the other dimensions. The dimensions considered by the Devil’s Quadrangle are often used for defining indicators that illustrate the overall performance of processes. From a Process Mining perspective, analyzing these dimensions at higher granularity levels, such as for every process instance, is of interest. To achieve this, this work proposes a method for defining Process Mining filters based on metrics related to performance indicators of the four Devil’s Quadrangle dimensions. The metrics are calculated for every process instance, which allows using the filters to observe differences in process behavior while considering constraints to the performance indicators and trade-offs among the four dimensions. It is expected that this visualization will be helpful during exploratory process analysis. It will facilitate the identification of process instances that conform to the filters applied to the performance indicators, as well as the dimensions where improvement is required while considering process instances that do not conform to the applied filters. A Celonis dashboard with the proposed filters has been generated to validate the method.

Ignacio Velásquez, Marcos Sepúlveda
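
A pandas sketch of the per-instance filtering idea (the paper implements this as a Celonis dashboard; the column names and the four indicator choices below are illustrative, not the paper's definitions): compute one metric per Devil's Quadrangle dimension for every case, then filter instances against indicator thresholds.

```python
import pandas as pd

events = pd.DataFrame({
    "case": [1, 1, 1, 2, 2],
    "activity": ["register", "check", "approve", "register", "approve"],
    "timestamp": pd.to_datetime(["2023-01-01", "2023-01-03", "2023-01-04",
                                 "2023-01-02", "2023-01-09"]),
    "cost": [10, 25, 5, 10, 5],
    "rework": [0, 1, 0, 0, 0],       # quality proxy: corrective/repeated work
})
cases = events.groupby("case").agg(
    time=("timestamp", lambda t: (t.max() - t.min()).days),   # throughput time
    cost=("cost", "sum"),
    quality=("rework", lambda r: 1 - r.mean()),               # share w/o rework
    flexibility=("activity", "nunique"),                      # variant richness
)
# Filter: instances meeting the time and cost indicators but violating quality.
print(cases[(cases.time <= 5) & (cases.cost <= 50) & (cases.quality < 1.0)])
```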

11th International Workshop on DEClarative, DECision and Hybrid Approaches to Processes (DEC2H 2023)

Frontmatter
Generating Event Logs from Hybrid Process Models

To carry out controlled experiments in process mining, it is necessary to generate event logs with specific characteristics. This has led to the development of log generation techniques in which a process model is simulated to generate an event log that is both compliant with the process model and also has certain user-defined properties (e.g., a certain number of traces, traces with certain lengths, etc.). Such techniques are available for a variety of modeling languages, both procedural and declarative. However, they are limited to simulating a single (procedural or declarative) process model at a time and do not allow simulating concurrent executions of multiple separate, but interacting, processes. In this paper, we introduce a log generation approach that takes multiple (procedural and declarative) process models (i.e., a Hybrid Business Process Representation) as input and produces an event log matching the concurrent execution of these models on the same case instances. We discuss the details of our approach and evaluate its implementation.

Anti Alman, Fabrizio Maria Maggi, Marco Montali, Andrey Rivkin
Exploring Hybrid Modelling of Industrial Process – Mining Use Case

Modelling industrial processes, especially in heavy industries, is a challenging task. The main aspects that influence modelling relate to the availability of suitable event logs and to process variability, which impact the selection of the modelling language and discovery algorithms. Our paper presents the results of modelling a real-life industrial process, namely longwall shearer operation. Our motivation stems from previous findings based on a comparison of imperative and declarative modelling of the process in question, which led us to create a hybrid model and to use conformance checking to evaluate its ability to express sophisticated traces in the process execution. Our results show that the hybrid model reveals fewer deviations than the analysed pure-paradigm models (Petri nets and Declare).

Edyta Brzychczy, Krzysztof Kluza, Katarzyna Gdowska
Pareto-Optimal Trace Generation from Declarative Process Models

Declarative process models (DPMs) enable the description of business process models with a high level of flexibility by being able to describe the constraints that compliant traces must abide by. In this way, a well-formed declarative specification generates a family of compliant traces. However, little is known about the difference between different compliant traces, as the only criterion used for comparison is satisfiability. In particular, we believe that not all compliant traces are alike: some might be sub-optimal in their resource usage. In this work, we would like to support users of DPMs in the selection of compliant and optimal traces. In particular, we use Dynamic Condition Response (DCR) graphs as our language to represent DPMs, extending it with a parametric definition of costs linked to events. Multiple types of cost imply that different traces might be optimal, each according to a different cost dimension. We encode cost-effective finite trace generation as a Constraint Optimisation Problem (COP) and showcase its feasibility via an implementation in MiniZinc. Our initial benchmarks suggest that the implementation is capable of providing answers efficiently for processes of varying size, number of constraints, and trace length.

Juan F. Diaz, Hugo A. López, Luis Quesada, Juan C. Rosero
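
A brute-force Python stand-in for the idea (the paper models DCR graphs in MiniZinc; here two Declare-style constraints and exhaustive enumeration illustrate what "Pareto-optimal compliant traces" means when each event carries a multi-dimensional cost):

```python
from itertools import product

ACTS = ["a", "b", "c"]
COST = {"a": (1, 5), "b": (1, 1), "c": (3, 1)}   # (money, time) per event

def compliant(trace):
    # illustrative goal: either execute c, or execute a eventually followed by b
    has_ab = any(x == "a" and "b" in trace[i + 1:] for i, x in enumerate(trace))
    return "c" in trace or has_ab

def dominates(u, v):
    return all(x <= y for x, y in zip(u, v)) and u != v

candidates = [t for n in range(1, 5) for t in product(ACTS, repeat=n)
              if compliant(t)]
costs = {t: tuple(sum(COST[e][d] for e in t) for d in range(2))
         for t in candidates}
pareto = [t for t in candidates
          if not any(dominates(costs[o], costs[t]) for o in candidates)]
print([("".join(t), costs[t]) for t in pareto])  # e.g. 'c' (3,1) vs 'ab' (2,6)
```

A COP solver replaces this enumeration with constraint propagation, which is what makes the approach scale to realistic models.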
Empirical Evidence of DMN Errors in the Wild - An SAP Signavio Case Study

While the Decision Model and Notation standard (DMN) is considered to be an increasingly popular standard, there is a broad consensus that human modelling errors can easily occur in the creation of DMN models. Yet, while this consensus is clear, there is only limited evidence of which error types exactly may occur in practice. In this work, we therefore present some empirical evidence on DMN errors in the wild. Specifically, we analyze the SAP-SAM dataset by SAP Signavio, containing over 500 000 real-world conceptual models. Our results show that modelling errors, such as missing rules, occur frequently in real-life settings (36.1% of all models contained some form of issue). Furthermore, we analyze the distribution of which error types have occurred (relative to an existing classification of DMN error types from a previous work). To the best of our knowledge, this is the largest DMN study conducted to date (N = 5 668 DMN models).

Carl Corea, Timotheus Kampik, Patrick Delfmann

2nd International Workshop on Digital Twins for Business Processes (DT4BP 2023)

Frontmatter
Digital Twin of the Organization for Support of Customer Journeys and Business Processes

We present a new approach for a Digital Twin of an Organization (DTO). Its focus is to better support weakly structured and knowledge-intensive business processes. Driving forces behind this development are the increasing demands for organizational agility, customer focus and information utilization. Existing business information systems show a number of shortcomings in this respect: Process-Aware Information Systems (PAIS) either neglect agility (e.g., workflow management systems as tightly framed PAIS) or process representation (e.g., groupware as unframed PAIS). Customer Relationship Management (CRM) systems fall behind in customer data integration and analysis for customer journey support, and Business Intelligence (BI) focuses too much on structured data. In contrast, a DTO is highly information-based, real-time enabled and visualization-oriented and thus better fits the requirements. However, organizations as complex socio-technical systems with open boundaries are challenging for digital twins. We have chosen a process-oriented representation for the virtual part of the DTO based on internal and external data. The increased availability of unstructured data, for example, from process contexts, supports process redesign during execution: this results in more process agility while maintaining a comprehensible process framing. An Insight Engine augmented with artificial intelligence processes the data. In a case study we present a DTO prototype for the service sector. More precisely, the DTO supports a marketing campaign from design to service execution, associated with the respective customer journey, representing the business process and the customer journey jointly.

Wolfgang Groher, Uwe V. Riss
Framing the Digital Business Process Twin: From a Holistic Maturity Model to a Specific and Substantial Use Case in the Automotive Industry

The digital twin is a frequent subject of discussion in both academia and industry. High expectations are placed on the concept of the digital twin across many industries. The idea of transferring and adapting this concept to business processes reveals new potentials, perspectives, and paths for the advanced digital transformation of enterprises. The effective exploitation of the potentials of the digital twin in the business process context is of priority for many business sectors. While the idea of a business-process-related digital twin has already been discussed in academia, there is still a lack of holistic, proven, and well-defined real-world use cases. This paper aims to deliver a substantial and valuable contribution to the research regarding the integration of digital twins and business processes through understanding and describing the expected and realistic capabilities of the digital twin for business processes. This is achieved through the development of two deliverables: The thesis of this paper is positioned around the novel idea that the development of a business-process-related digital twin must follow an evolutionary course based on consecutive steps. This position is supported and reinforced by a first unique maturity model to distinguish between the development stages of a digital twin for business processes. As the second deliverable, the functionality of the depicted digital business process twin is applied and discussed within the perimeter of a specific use case from the automotive industry. The novelty of this paper results from the integration of the terms digital model, digital shadow, and digital twin into a maturity-based concept in the spotlight of business processes. The novelty is driven by applying this concept to a real-world scenario.

Markus Rabe, Emre Kilic

1st International Workshop on Formal Methods for Business Process Management (FM-BPM 2023)

Frontmatter
Model-Independent Error Bound Estimation for Conformance Checking Approximation

Conformance checking techniques quantify correspondence between a process’s execution and a reference process model using event data. Alignments, used for conformance statistics, are computationally expensive for complex models and large datasets. Recent studies show accurate approximations can be achieved by selecting subsets of model behavior. This paper presents a novel approach deriving error bounds for conformance checking approximation based on arbitrary activity sequences. The proposed approach allows for the selection of relevant subsets for improved accuracy. Experimental evaluations validate its effectiveness, demonstrating enhanced accuracy compared to traditional alignment methods.

Mohammadreza Fani Sani, Martin Kabierski, Sebastiaan J. van Zelst, Wil M. P. van der Aalst
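
A sketch of the general approximation idea behind such approaches (illustrative; not the paper's specific error-bound derivation): the edit distance from a log trace to any sequence the model can produce upper-bounds its optimal alignment cost, so comparing against a small, well-chosen subset of model behavior yields fast, conservative conformance estimates.

```python
from functools import lru_cache

def edit_distance(s, t):
    @lru_cache(maxsize=None)
    def d(i, j):
        if i == 0:
            return j
        if j == 0:
            return i
        return min(d(i - 1, j) + 1, d(i, j - 1) + 1,
                   d(i - 1, j - 1) + (s[i - 1] != t[j - 1]))
    return d(len(s), len(t))

# A small subset of the model's behavior (activity sequences).
model_subset = [("register", "check", "approve"),
                ("register", "check", "reject")]
log_trace = ("register", "approve")

upper_bound = min(edit_distance(log_trace, m) for m in model_subset)
print("optimal alignment cost is at most", upper_bound)   # here: at most 1
```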
Repair of Unsound Data-Aware Process Models

Process-aware Information Systems support the enactment of business processes, and rely on a model that prescribes which executions are allowed. As a result, the model needs to be sound for the process to be carried out. Traditionally, soundness has been defined and studied by only focusing on the control-flow. Some works proposed techniques to repair the process model to ensure soundness, ignoring data and decision perspectives. This paper puts forward a technique to repair the data perspective of process models, keeping intact the control flow structure. Processes are modeled by acyclic Data Petri Nets. Our approach repairs the Constraint Graph, a finite symbolic abstraction of the infinite state-space of the underlying Data Petri Net. The changes in the Constraint Graph are then projected back onto the Data Petri Net.

Matteo Zavatteri, Davide Bresolin, Massimiliano de Leoni
Non-Automata Based Conformance Checking of Declarative Process Specifications Based on ASP

We investigate the use of Answer Set Programming (ASP) for the problem of conformance checking of LTL-based declarative process specifications. In particular, we propose ASP solutions that are independent of automata. That is, in related works, the semantics of declarative process specifications are often captured by means of finite state automata. This means that, for conformance checking, the constraints of the specification first have to be transformed into a corresponding automata representation, which introduces a computational burden. In this work, we present a new ASP-based approach which encodes the constraint semantics directly and can therefore be used to check conformance without the need to perform automata operations. We implement our approach and perform experiments with real-life datasets, comparing our approach to a selection of state-of-the-art approaches. Our experiments show that our approach can outperform existing approaches in some cases. Furthermore, our approach can easily be extended to check whether the considered constraint sets are satisfiable (i.e., consistent).

Isabelle Kuhlmann, Carl Corea, John Grant
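
A Python analogue of the automata-free idea (the paper encodes this in ASP; it is rendered here in Python to keep all sketches in one language): evaluate each Declare constraint's semantics directly on the trace instead of first compiling constraints into finite automata.

```python
def response(trace, a, b):
    """Every occurrence of a is eventually followed by b."""
    return all(b in trace[i + 1:] for i, x in enumerate(trace) if x == a)

def precedence(trace, a, b):
    """b may only occur after a has occurred."""
    return all(a in trace[:i] for i, x in enumerate(trace) if x == b)

def conforms(trace, constraints):
    return all(check(trace, a, b) for check, a, b in constraints)

spec = [(response, "ship", "invoice"), (precedence, "order", "ship")]
print(conforms(["order", "ship", "invoice"], spec))   # True
print(conforms(["ship", "invoice"], spec))            # False (no prior order)
```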
Equivalence of Data Petri Nets with Arithmetic

Data Petri nets (DPNs) with arithmetic have gained popularity as a model for data-aware processes, thanks to their ability to balance simplicity with expressiveness and because they can be automatically discovered from event logs. While model checking techniques for DPNs have been studied, there are analysis tasks highly relevant for BPM that are beyond these methods. We focus here on process equivalence and process refinement with respect to language and configuration spaces; such comparisons are important in the context of process repair and discovery. To solve these tasks, we propose an approach for bounded DPNs based on constraint graphs, which are faithful abstractions of the reachable state space. Though the considered verification tasks are undecidable in general, we show that our method is a decision procedure for large classes of DPNs relevant in practice.

Marco Montali, Sarah Winkler

2nd International Workshop on Natural Language Processing for Business Process Management (NLP4BPM 2023)

Frontmatter
Abstractions, Scenarios, and Prompt Definitions for Process Mining with LLMs: A Case Study

Large Language Models (LLMs) are capable of answering questions in natural language for various purposes. With recent advancements (such as GPT-4), LLMs perform at a level comparable to humans for many proficient tasks. The analysis of business processes could benefit from a natural process querying language and using the domain knowledge on which LLMs have been trained. However, it is impossible to provide a complete database or event log as an input prompt due to size constraints. In this paper, we apply LLMs in the context of process mining by i) abstracting the information of standard process mining artifacts and ii) describing the prompting strategies. We implement the proposed abstraction techniques into pm4py, an open-source process mining library. We present a case study using available event logs. Starting from different abstractions and analysis questions, we formulate prompts and evaluate the quality of the answers.

Alessandro Berti, Daniel Schuster, Wil M. P. van der Aalst
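
A sketch of the prompting strategy under assumptions: compress the log into a small textual abstraction (here a hand-rolled directly-follows summary via pm4py's simplified interface) and embed it in a prompt. pm4py also ships dedicated LLM abstraction helpers whose exact names may differ from this version; the file path below is illustrative.

```python
import pm4py

log = pm4py.read_xes("running-example.xes")          # path is illustrative
dfg, start_acts, end_acts = pm4py.discover_dfg(log)  # directly-follows graph

lines = [f"{a} -> {b}: {freq} times" for (a, b), freq in
         sorted(dfg.items(), key=lambda kv: -kv[1])[:20]]   # cap prompt size
prompt = (
    "You are a process analyst. Given this directly-follows abstraction of "
    "an event log, name the three most likely bottlenecks and explain why.\n"
    + "\n".join(lines)
)
print(prompt)   # send to the LLM of your choice
```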
Text-Aware Predictive Process Monitoring of Knowledge-Intensive Processes: Does Control Flow Matter?

Predictive process monitoring (PPM) enables organizations to predict the behavior of ongoing processes, e.g., the lead time. This is of great interest for knowledge-intensive processes (KIPs), which often cover long time spans. With such insights, resource allocation or customer relationship management could be improved. While many PPM methods already exist, they have not yet been applied to KIPs. Thus, we extend PPM research by using machine learning and natural language processing (NLP) to develop and evaluate a novel text-aware PPM approach tailored towards monitoring KIPs. By developing suitable features and considering various time intervals, our approach encodes and aggregates the event log. Using two real-world event logs, we assess our methodology. We demonstrate that the MAE improves compared to state-of-the-art PPM methods. The results show that the control-flow perspective of KIPs can largely be neglected, while considering more structured features and unstructured textual information is essential.

Katharina Brennig, Kay Benkert, Bernd Löhr, Oliver Müller
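
A minimal text-aware PPM sketch (the feature choices are assumptions, not the paper's encoding): TF-IDF features from a case's textual notes combined with one structured feature, regressing the remaining lead time with gradient boosting.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import GradientBoostingRegressor

texts = ["customer asks for urgent replacement part",
         "routine maintenance request, no deadline",
         "escalation: machine down, production halted",
         "standard inspection booked by phone"]
events_so_far = np.array([[3], [1], [5], [2]])       # structured feature
remaining_days = np.array([2.0, 14.0, 1.0, 10.0])    # regression target

tfidf = TfidfVectorizer()
X_text = tfidf.fit_transform(texts).toarray()
X = np.hstack([X_text, events_so_far])               # text + structured

model = GradientBoostingRegressor().fit(X, remaining_days)
query = np.hstack([tfidf.transform(["machine down, urgent"]).toarray(), [[4]]])
print(model.predict(query))                          # predicted remaining days
```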
Large Language Models Can Accomplish Business Process Management Tasks

Business Process Management (BPM) aims to improve organizational activities and their outcomes by managing the underlying processes. To achieve this, it is often necessary to consider information from various sources, including unstructured textual documents. Therefore, researchers have developed several BPM-specific solutions that extract information from textual documents using Natural Language Processing techniques. These solutions are specific to their respective tasks and cannot accomplish multiple process-related problems as a general-purpose instrument. However, in light of the recent emergence of Large Language Models (LLMs) with remarkable reasoning capabilities, such a general-purpose instrument with multiple applications now appears attainable. In this paper, we illustrate how LLMs can accomplish text-related BPM tasks by applying a specific LLM to three exemplary tasks: mining imperative process models from textual descriptions, mining declarative process models from textual descriptions, and assessing the suitability of process tasks from textual descriptions for robotic process automation. We show that, without extensive configuration or prompt engineering, LLMs perform comparably to or better than existing solutions and discuss implications for future BPM research as well as practical usage.

Michael Grohs, Luka Abb, Nourhan Elsayed, Jana-Rebecca Rehse
Collecting Activities and States in German Business Process Models

In this paper we describe a publicly available dataset containing 6,266 verb phrases that can be used to express activities and states in German-language business process models. This addresses the problem that activities or states in the labels of business process models are often expressed not by a single verb but by a multiword expression. Verb phrases that are considered semantically equivalent have been grouped into synsets, which helps to identify the actual meaning of a textual label. The dataset has been compiled from a comprehensive analysis of 6,711 business process models with German labels and from a study of already available collections of multiword expressions in the literature. The resource can be used by algorithms that analyze business process models with respect to the semantics of their labels.

Ralf Laue, Kristin Kutzner, Martin Läuter

1st International Workshop on Object-Centric Processes from A to Z (OBJECTS 2023)

Typed Petri Nets with Variable Arc Weights

Object-centric processes have become increasingly popular in recent years, mainly due to the establishment of object-centric process mining. One of the most popular formalisms for describing the lifecycles of objects and capturing the relationships between them is object-centric Petri nets. An important feature of such nets is the ability to transfer an arbitrary number of same-typed objects upon transition firing by means of so-called variable arcs. In this work, we generalise the concept of variable arcs by introducing a fairly simple and versatile mechanism of arc-weight parameterization via linear combinations of type-dependent weight variables, and we incorporate it into the new formalism of typed Petri nets with variable arc weights. Moreover, we demonstrate that such extended variable arcs can be effectively eliminated, making the resulting net model expressively equivalent to a classical P/T-net. This result allows a natural transfer of analytical techniques available for P/T-nets to formalisms like object-centric Petri nets.

Irina A. Lomazova, Alexey A. Mitsyuk, Andrey Rivkin
Addressing Convergence, Divergence, and Deficiency Issues

The application of process mining algorithms to event logs requires the extraction of cases, describing end-to-end runs through the process. When extracting cases for object-centric event data, this extraction is often subject to convergence, divergence, and deficiency issues. Recently, connected-components extraction was proposed, extracting graph-based cases, called process executions, from the graph of event precedence constraints. This paper shows that only case extraction based on connected-components is free of convergence, divergence, and deficiency issues. This proof has several implications for future research in object-centric process mining. First, if a downstream process mining task is negatively affected by these quality issues, connected-components extraction is the only way to mitigate these. Second, additional requirements that would conflict with connected-components extraction would render the mitigation of quality issues infeasible, making trade-offs between quality issues necessary. Third, as traditional event logs are a special case of object-centric event logs and connected-components extraction is equivalent to the traditional case concept for a traditional event log, new extraction techniques, as well as object-centric adaptations of algorithms, should be backward-compatible.

Jan Niklas Adams, Wil M. P. van der Aalst
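
A small networkx sketch of connected-components extraction as described above (event and object identifiers are hypothetical): link events that share an object, then read off each connected component as one process execution.

```python
import networkx as nx

events = {                       # event id -> objects it touches
    "e1": {"order1"}, "e2": {"order1", "item1"}, "e3": {"item1"},
    "e4": {"order2"}, "e5": {"order2"},
}
G = nx.Graph()
G.add_nodes_from(events)
ids = list(events)
for i, u in enumerate(ids):
    for v in ids[i + 1:]:
        if events[u] & events[v]:          # shared object => precedence link
            G.add_edge(u, v)

executions = [sorted(c) for c in nx.connected_components(G)]
print(executions)   # e.g. [['e1', 'e2', 'e3'], ['e4', 'e5']]
```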
A Model-Driven Engineering Perspective for the Object-Centric Event Data (OCED) Metamodel

Object-centric process mining has been proposed as a practical solution for dealing with the multiple views from which a process can be analyzed regarding its relation with organizational data (i.e., objects). The Object-Centric Event Data (OCED) metamodel was recently proposed as a data exchange standard for object-centric process mining. As far as we know, the metamodel has yet to be studied from a Model-Driven Engineering (MDE) perspective. This paper provides an Ecore-based representation of the OCED metamodel and explores its capabilities from an MDE perspective. We also study how the Business Process and Organizational Data Integrated Metamodel (BPODIM), i.e., a proposal for integrating process and organizational data, can be aligned with OCED providing fruitful information for OCED improvement.

Daniel Calegari, Andrea Delgado
Predictive Analytics for Object-Centric Processes: Do Graph Neural Networks Really Help?

The object-centric process paradigm is increasingly gaining popularity in academia and industry. According to this paradigm, a process unfolds through the parallel execution of different execution flows, each referring to a different object involved in the process. Objects interact through bridging events, where these parallel executions synchronize and exchange data. However, the complex intricacy of instances of such processes, which relate to each other via many-to-many associations, makes a direct application of predictive process analytics approaches designed for single-id event logs impossible. This paper reports on the experience of comparing the predictions of two techniques based on gradient boosting or the Long Short-Term Memory (LSTM) network against two based on graph neural networks. The four techniques were empirically evaluated on event logs related to two real object-centric processes and more than 20 different KPI definitions. The results show that graph-based neural networks generally perform worse than techniques based on gradient boosting. Considering that graph-based neural networks have training times that are 8-10 times longer, the conclusion is that their use does not seem justified.

Riccardo Galanti, Massimiliano de Leoni

3rd International Workshop on Change, Drift, and Dynamics of Organizational Processes (ProDy 2023)

A Method for Creating Process Architecture: Driving Organizational Change

The present study aims to validate the proposed Process Architecture Creation (PAC) method for driving organizational change and facilitating process optimization. To achieve this objective, a systematic literature review was conducted to obtain a theoretical basis on existing approaches to PA creation. The proposed method was then validated through a focus group consisting of nine Business Process Management (BPM) specialists. The results present a new PAC method composed of four stages: Stage 1, Identify the organization’s business context; Stage 2, Define the PA; Stage 3, Align PA and Business Architecture; and Stage 4, Define the measurement and control mechanism. The PAC method may help organizations involved in BPM promotion initiatives to create their own PAs by following the stages proposed in this work. According to the focus group specialists, the architecture created using this method will provide a holistic view of the organization’s processes and routines, help optimize problem-prone processes, support the establishment of change plans aligned with the process architecture, aid in identifying the impact of necessary changes, and contribute to the implementation of specific technologies, such as robotic process automation. The originality of this work lies in the consolidation of the PA concept, which benefits the research field by serving as a basis for in-depth knowledge of the subject.

Emerson Lima Aredes, Marina Lourenção, Roger Burlton, Hajo Reijers, Silvia Inês Dallavale de Pádua
Ambidextrous Business Process Management: Unleashing the Dual Power of Innovation and Efficiency

Ambidextrous Business Process Management (ABPM) is a strategic approach that aims to balance innovation and efficiency in managing business processes. This concept recognizes the need to simultaneously exploit existing processes and explore new opportunities for improvement. However, there is a need for a conceptual understanding of how changes in business processes and routines can be effectively described and studied within the context of ABPM. This article presents a case study conducted on ABPM in a Shared Service Center (SSC) in Poland. The study aimed to explore the essence of ABPM and provide insights into its practical implications. By addressing the research question of how changes in business processes and routines can be conceptually described and theorized, this article contributes to the body of knowledge in the field of ABPM. It offers insights into the theoretical foundations and principles underlying the effective management of change in processes and routines. The findings presented in this case study have implications for both practitioners and researchers in the field of BPM. Practitioners can benefit substantially from a deeper understanding of the strategic management of change in processes and routines within an SSC. Researchers can build on the provided conceptual framework and theoretical insights to advance knowledge in the domain of ABPM, specifically in the context of SSCs.

Piotr Sliż
Adding Dynamic Simulation to Business Process Modeling via System Dynamics

Business process modeling and system dynamics are different approaches that are used in the design and management of organizations. Both approaches are concerned with the processes in, and around, organizations, with the aim of identifying, designing, and understanding their behavior as well as potential improvements. At the same time, these approaches differ considerably in their methodological focus. While business process modeling specifically takes the (control flow of) business processes as its primary focus, system dynamics takes the analysis of complex and multi-faceted systems as its core focus. Combining both approaches more explicitly has the potential to better model and analyze (by way of simulation) complex business processes, while also including more relevant facets from the environment of these business processes. Furthermore, the inherent simulation ability of system dynamics models can be used to simulate the behavior of processes over time, while also putting business processes in a broader multi-faceted context. In this paper, we report initial results on making such a more explicit combination of business process modeling and system dynamics. In doing so, we also provide a step-by-step guide on how to use BPMN-based models and system dynamics models together to model and analyze complex business processes, illustrated by a case study on the maintenance of building facades.

H. A. Proper, Q. Zhu, J. P. P. Ravesteijn, W. Gielingh
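
For intuition, a tiny stock-and-flow model of the kind the system dynamics side contributes, integrated with simple Euler steps; all parameters and the backlog scenario are illustrative, not taken from the paper's case study.

```python
# Stock: open facade-repair requests; inflow: new requests per week;
# outflow: capacity-limited completions. Euler integration over one year.
DT, HORIZON = 0.25, 52.0          # step size and horizon, in weeks
backlog = 40.0                    # initial stock of open requests
inflow = 6.0                      # new requests per week
capacity = 8.0                    # max completions per week

t = 0.0
while t < HORIZON:
    outflow = min(capacity, backlog / DT)   # cannot complete more than exists
    backlog += DT * (inflow - outflow)      # Euler update of the stock
    t += DT
print(f"backlog after {HORIZON:.0f} weeks: {backlog:.1f} requests")
```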
Backmatter
Metadata
Title
Business Process Management Workshops
Editors
Jochen De Weerdt
Luise Pufahl
Copyright Year
2024
Electronic ISBN
978-3-031-50974-2
Print ISBN
978-3-031-50973-5
DOI
https://doi.org/10.1007/978-3-031-50974-2