
2015 | Book

Business Process Management Workshops

BPM 2014 International Workshops, Eindhoven, The Netherlands, September 7-8, 2014, Revised Papers


About this book

This book constitutes the refereed proceedings of ten international workshops held in Eindhoven, The Netherlands, in conjunction with the 12th International Conference on Business Process Management, BPM 2014, in September 2014.

The ten workshops comprised Process-oriented Information Systems in Healthcare (ProHealth 2014), Security in Business Processes (SBP 2014), Process Model Collections: Management and Reuse (PMC-MR 2014), Business Processes in Collective Adaptive Systems (BPCAS 2014), Data- and Artifact-centric BPM (DAB 2014), Business Process Intelligence (BPI 2014), Business Process Management in the Cloud (BPMC 2014), Theory and Applications of Process Visualization (TaProViz 2014), Business Process Management and Social Software (BPMS2 2014) and Decision Mining and Modeling for Business Processes (DeMiMoP 2014). The 38 revised full and eight short papers presented were carefully reviewed and selected from 84 submissions. In addition, six short papers resulting from the Doctoral Consortium at BPM 2014 are included in this book.

Table of Contents

Frontmatter

ProHealth 2014

Frontmatter
Modeling and Monitoring Variability in Hospital Treatments: A Scenario Using CMMN

Healthcare faces the challenge of delivering high treatment quality and patient satisfaction while being cost efficient; this challenge is tackled by introducing clinical pathways to standardize treatment processes. At the Jena University Hospital, the clinical pathway for living donor liver transplantation was modeled using Business Process Model and Notation. A survey based on that model investigates the transferability of this pathway to other hospitals and lists the requirements for a general model, including the need for flexibility caused by differences in treatments across hospitals. In this paper, we show an approach to tackle the requirements for such a flexible process by using the Case Management Model and Notation standard. Further, we show how case monitoring and analysis can be established by combining event processing and case management. The holistic approach is exemplified by a scenario of the evaluation of living liver donors.

Nico Herzberg, Kathrin Kirchner, Mathias Weske
Recommendations for Medical Treatment Processes: The PIGS Approach

Medical treatment processes in hospitals are complex interactions of various actors, in the course of which treatment decisions have to be made. Clinical practice guidelines and clinical pathways provide guidance for practitioners for certain diseases. For multi-morbid patients in particular, several guidelines and pathways might apply. Current process-oriented IT support in hospitals, however, does not consider multiple models for treatment recommendations.

In this contribution we define Treatment Cases (TCs) and give an operational semantics for their evolution based on Business Process Model and Notation (BPMN) models, which we use to model guidelines, pathways and standard operating procedures. This work is part of the Process Information and Guidance System (PIGS) approach, which aims at providing an overview of the treatment history and giving recommendations based on the current treatment state and multiple process models.

Marcin Hewelt, Aaron Kunde, Mathias Weske, Christoph Meinel
Modelling and Implementation of Correct by Construction Healthcare Workflows

We present a rigorous methodology for the modelling and implementation of correct by construction healthcare workflows. It relies on the theoretical concept of proofs-as-processes that draws a connection between logical proofs and process workflows. Based on this, our methodology offers an increased level of trust through mathematical guarantees of correctness for the constructed workflows, including type correctness, systematic resource management, and deadlock and livelock freedom. Workflows are modelled as compositions of abstract processes and can be deployed as executable code automatically. We demonstrate the benefits of our approach through a prototype system involving workflows for assignment and delegation of clinical services while tracking responsibility and accountability explicitly.

Petros Papapanagiotou, Jacques Fleuriot
MET4: Supporting Workflow Execution for Interdisciplinary Healthcare Teams

This paper describes MET4, a multi-agent system that supports interdisciplinary healthcare teams (IHTs) in executing patient care workflows. Using the concept of capability, the system facilitates the maintenance of an IHT and assignment of workflow tasks to the most appropriate team members. Moreover, following the principles of participatory medicine, MET4 facilitates involving the patient and her preferences in the management process. We describe conceptual foundations of the system and present the system design combining multi-agent and ontology-driven paradigms. Finally, we present a prototype implementation of MET4 and illustrate operations of the prototype using an example involving chronic pain management in palliative care.

Szymon Wilk, Davood Astaraky, Wojtek Michalowski, Daniel Amyot, Runzhuo Li, Craig Kuziemsky, Pavel Andreev
Enhancing Guideline-Based Decision Support with Distributed Computation Through Local Mobile Application

We introduce the need for a distributed guideline-based decision support (DSS) process, describe its characteristics, and explain how we implemented this process within the European Union’s MobiGuide project. In particular, we have developed a mechanism of sequential, piecemeal projection, i.e., ‘downloading’ small portions of the guideline from the central DSS server, to the local DSS in the patient’s mobile device, which then applies that portion, using the mobile device’s local resources. The mobile device sends a callback to the central DSS when it encounters a triggering pattern predefined in the projected module, which leads to an appropriate predefined action by the central DSS, including sending a new projected module, or directly controlling the rest of the workflow. We suggest that such a distributed architecture, which explicitly defines a dialog between a central DSS server and a local DSS module, better balances the computational load and exploits the relative advantages of the central server and of the local mobile device.

Erez Shalom, Yuval Shahar, Ayelet Goldstein, Elior Ariel, Silvana Quaglini, Lucia Sacchi, Nick Fung, Valerie Jones, Tom Broens, Gema García-Sáez, Elena Hernando
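
The projection/callback dialog described above is essentially a small protocol between the central DSS and the mobile device. The following sketch illustrates that dialog only in outline; the class and method names are hypothetical and do not reflect MobiGuide's actual interfaces.

```python
# Illustrative sketch only (class and method names are hypothetical, not
# MobiGuide's actual interfaces): the central DSS "projects" a small guideline
# module to the mobile device, which applies it locally and calls back when a
# predefined trigger pattern occurs.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ProjectedModule:
    module_id: str
    steps: List[str]        # the guideline fragment to apply on the device
    trigger: str            # pattern that should cause a callback


@dataclass
class CentralDSS:
    guideline: dict = field(default_factory=dict)   # module_id -> ProjectedModule

    def project(self, module_id: str) -> ProjectedModule:
        # Download only the portion of the guideline needed right now.
        return self.guideline[module_id]

    def on_callback(self, module_id: str, observation: str) -> Optional[ProjectedModule]:
        # Decide centrally: send the next projected module, or take over the workflow.
        return self.guideline.get(module_id + "-next")


@dataclass
class LocalDSS:
    module: ProjectedModule

    def apply(self, observation: str, central: CentralDSS) -> None:
        if observation == self.module.trigger:
            # Trigger encountered: hand control back to the central DSS.
            follow_up = central.on_callback(self.module.module_id, observation)
            if follow_up is not None:
                self.module = follow_up
        # Otherwise the local DSS keeps executing the projected steps on-device.


guideline = {
    "titration": ProjectedModule("titration", ["measure glucose", "adjust dose"],
                                 trigger="glucose out of range"),
    "titration-next": ProjectedModule("titration-next", ["contact care provider"],
                                      trigger="no response"),
}
central = CentralDSS(guideline)
local = LocalDSS(central.project("titration"))
local.apply("glucose out of range", central)
print(local.module.module_id)   # the central DSS has projected the follow-up module
```
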
Storlet Engine for Executing Biomedical Processes Within the Storage System

The increase in large biomedical data objects stored in long-term archives that continuously need to be processed and analyzed requires new storage paradigms. We propose expanding the storage system from only storing biomedical data to directly producing value from the data by executing computational modules - storlets - close to where the data is stored. This paper describes the Storlet Engine, an engine to support computations in secure sandboxes within the storage system. We describe its architecture and security model as well as the programming model for storlets. We experimented with several data sets and storlets, including a de-identification storlet to de-identify sensitive medical records, an image transformation storlet to transform images to sustainable formats, and various medical imaging analytics storlets to study pathology images. We also provide a performance study of the Storlet Engine prototype for OpenStack Swift object storage.

Simona Rabinovici-Cohen, Ealan Henis, John Marberg, Kenneth Nagin

SBP 2014

Frontmatter
Conformance Checking Based on Partially Ordered Event Data

Conformance checking is becoming more important for the analysis of business processes. While the diagnosed results of conformance checking techniques are used in diverse contexts such as auditing and performance analysis, the quality and reliability of the conformance checking techniques themselves have not been analyzed rigorously. As the existing conformance checking techniques heavily rely on the total ordering of events, their diagnostics are unreliable and often even misleading when the timestamps of events are coarse or incorrect. This paper presents an approach to incorporate flexibility, uncertainty, concurrency and explicit orderings between events in the input as well as in the output of conformance checking using partially ordered traces and partially ordered alignments, respectively. The paper also illustrates various ways to acquire partially ordered traces from existing logs. In addition, a quantitative quality metric is introduced to objectively compare the results of conformance checking. The approach is implemented in ProM plugins and has been evaluated using artificial logs.

Xixi Lu, Dirk Fahland, Wil M. P. van der Aalst
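
As a minimal illustration of the input format discussed above, a partially ordered trace can be represented as a directed acyclic graph of events, where concurrency is simply the absence of an ordering edge. The sketch below (plain networkx, not the paper's ProM plug-ins) enumerates the total orders compatible with such a trace.

```python
# A minimal sketch (not the paper's ProM plug-ins): a partially ordered trace
# as a DAG of events; concurrency is simply the absence of an ordering edge.
import networkx as nx

po_trace = nx.DiGraph()
# register -> {check credit, check stock} -> ship; the two checks are concurrent
po_trace.add_edge("register", "check credit")
po_trace.add_edge("register", "check stock")
po_trace.add_edge("check credit", "ship")
po_trace.add_edge("check stock", "ship")

assert nx.is_directed_acyclic_graph(po_trace)

# Every linearization is one total order compatible with the partial order;
# here there are exactly two, one per interleaving of the concurrent checks.
for linearization in nx.all_topological_sorts(po_trace):
    print(linearization)
```
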
Online Compliance Monitoring of Service Landscapes

Today, it is a challenging task to keep a service application running over the internet safe and secure. Based on a collection of security requirements, a so-called golden configuration can be created for such an application. When the application has been configured according to this golden configuration, it is assumed that it satisfies these requirements, that is, that it is safe and secure. This assumption is based on the best practices that were used for creating the golden configuration, and on assumptions such as that nothing out of the ordinary occurs. Whether the requirements are actually violated can be checked on the traces that are left behind by the configured service application. Today’s applications typically log an enormous amount of data to keep track of everything that has happened. As a result, such an event log can be regarded as the ground truth for the entire application: a security requirement is violated if and only if it shows up in the event log. This paper introduces the ProMSecCo tool, which has been built to check whether the security requirements that have been used to create the golden configuration are violated by the event log as generated by the configured service application.

J. M. E. M. van der Werf, H. M. W. Verbeek
Privacy Preserving Business Process Fusion

Virtual enterprises (VEs) are temporary and loosely coupled alliances of businesses that join their skills to catch new business opportunities. However, the dependencies among the activities of a prospective VE cross the boundaries of the VE constituents. It is therefore crucial to allow the VE constituents to discover their local views of the interorganizational workflow, enabling each company to re-shape, optimize and analyze the possible local flows that are consistent with the processes of the other VE constituents. We refer to this problem as VE process fusion. Although it has been widely investigated, no previous work addresses VE process fusion in the presence of privacy constraints. In this paper we demonstrate how private intersection of regular languages can be used as the main building block to implement the privacy preserving fusion of business processes modeled by means of bounded Petri nets.

Roberto Guanciale, Dilian Gurov

PMC-MR 2014

Frontmatter
Configuring Configurable Process Models Made Easier: An Automated Approach

Configurable process models have shown their usefulness for capturing the commonalities and variability within business processes. However, an end user requires an abstraction from the configurable process model, which is a highly technical artifact, to select a suitable configuration. Currently, the creation of such an abstraction requires a considerable number of steps and technical knowledge. We provide an approach to construct such an abstraction automatically, on the basis of an understanding of common concepts underlying process models on the one hand and automated analysis techniques on the other. Our approach also guarantees the consistency between the configuration choices of the end user. A positive yet preliminary evaluation with business users has been carried out to test the usability of our approach.

D. M. M. Schunselaar, Henrik Leopold, H. M. W. Verbeek, Wil M. P. van der Aalst, Hajo A. Reijers
When Language Meets Language: Anti Patterns Resulting from Mixing Natural and Modeling Language

Business process modeling has become an integral part of many organizations for documenting and redesigning complex organizational operations. However, the increasing size of process model repositories calls for automated quality assurance techniques. While many aspects such as formal and structural problems are well understood, there is only a limited understanding of semantic issues caused by natural language. One particularly severe problem arises when modelers employ natural language for expressing control-flow constructs such as gateways or loops. This may not only negatively affect the understandability of process models, but also the performance of analysis tools, which typically assume that process model elements do not encode control-flow related information in natural language. In this paper, we aim at increasing the current understanding of mixing natural and modeling language and therefore exploratively investigate three process model collections from practice. As a result, we identify a set of nine anti patterns for mixing natural and modeling language.

Fabian Pittke, Henrik Leopold, Jan Mendling
vrBPMN* and FM: An Approach to Model Business Process Line

Recently, Business Process Management (BPM) has increasingly demanded the reuse of business process models. In order to represent these models, diverse techniques have been used, such as variability management and business process configuration, giving rise to the Business Process Line (BPL) area. For modeling BPLs, the joint use of vrBPMN (variant-rich Business Process Model and Notation) and Feature Model (FM) has been considered a relevant alternative. However, one kind of FM element, the IOR element, does not have a proper correspondent in vrBPMN. In addition, FM and vrBPMN contain some redundant information. The main contribution of this paper is an extension of the vrBPMN notation, named vrBPMN*, which, together with FM, makes it possible to adequately model BPLs. We conducted an empirical study to analyze the viability of using vrBPMN* and FM to model business processes, as well as the building time and correctness of the resulting models. As main results, we observed that the proposed notation favours business process modeling, reducing modeling time and increasing the correctness of the produced models.

Geraldo Landre, Edilson Palma, Débora Maria Paiva, Elisa Yumi Nakagawa, Maria Istela Cagnin

BPCAS 2014

Frontmatter
Context-Aware Programming for Hybrid and Diversity-Aware Collective Adaptive Systems

Collective adaptive systems (CASs) have been researched intensively for many years. However, recent developments and advanced models in service-oriented computing, cloud computing and human computation have fostered several new forms of CASs. Among them, Hybrid and Diversity-aware CASs (HDA-CASs) characterize new types of CASs in which a collective is composed of machines and humans that collaborate with different complementary roles. This emerging HDA-CAS poses several research challenges in terms of programming, management and provisioning. In this paper, we investigate the main issues in programming HDA-CASs. First, we analyze the context characterizing HDA-CASs. Second, we propose to use the concept of hybrid compute units to implement HDA-CASs that can be elastic. We call this type of HDA-CAS h²CAS (Hybrid Compute Unit-based HDA-CAS). We then discuss a meta-view of h²CAS that describes an h²CAS program. We analyze and present program features for h²CAS in four main contexts.

Hong-Linh Truong, Schahram Dustdar
Towards Cognitive BPM as the Next Generation BPM Platform for Analytics-Driven Business Processes

Human-centric business processes in the enterprise are knowledge-intensive, and rely on human judgment for decision making. This reliance impacts the integrity, uniformity, efficiency and consistency of the process enactment over time. We describe our experience and challenges in driving the adoption of business process management tools in the sales process at a large IT services provider. We argue that in knowledge-driven and human-centric activities such as IT services sales, current technologies do not meet the needs. They do not support capturing data and analytics around the process and using them to reason about and act on the process. Both process definition and enactment need to be dynamic, flexible and adaptive based on an understanding of the state of affairs. We present a vision and a framework for a cognitive BPM system in which knowledge management, user interaction and analytics are employed to dynamically assess and proactively adapt the process based on the results of data analytics and continuous learning from the actions taken.

Hamid R. Motahari Nezhad, Rama Akkiraju
Towards Ensuring High Availability in Collective Adaptive Systems

Collective Adaptive Systems support the interaction and adaptation of virtual and physical entities towards achieving common objectives. For these systems, several challenges at the modeling, provisioning, and execution phases arise. In this position paper, we define the necessary underpinning concepts and identify requirements towards ensuring high availability in such systems. More specifically, based on a scenario from the EU Project ALLOW Ensembles, we identify the necessary requirements and derive an architectural approach that aims at ensuring high availability by combining active workflow replication, service selection, and dynamic compensation techniques.

David Richard Schäfer, Santiago Gómez Sáez, Thomas Bach, Vasilios Andrikopoulos, Muhammad Adnan Tariq

DAB 2014

Frontmatter
Analytics Process Management: A New Challenge for the BPM Community

Today, essentially all industry sectors are developing and applying “big data analytics” to gain new business insights and new operational efficiencies. Essentially two forms of analytics processing support these business-targeted applications: (i) “analytics explorations” that search for business-relevant insights in support of description, prediction, and prescription; and (ii) “analytics flows” that are deployed and executed repeatedly to apply such insights to support reporting and enhance existing business processes. The human environment that surrounds business-targeted analytics involves a multitude of stakeholder roles, and a number of distinct processes are required for the development, deployment, maintenance, and governance of these analytics. This short paper presents preliminary work on a framework for Analytics Process Management (APM), a new branch of Business Process Management (BPM) that is intended to address the central challenges of managing analytics flows at scale. APM is focused on the processes that manage the overall lifecycle of analytics flows and their executions, and their integration into “operational” business processes that have been the traditional domain of BPM. The paper identifies key meta-data that should be maintained for analytics flows and their executions, and identifies the core business processes that are needed to create, apply, compare, and maintain such flows. The paper also raises key research questions that need to be addressed in the emerging area of APM.

Fenno F. (Terry) Heath III, Richard Hull
Towards Location-Aware Process Modeling and Execution

Business Process Management (BPM) has emerged as one of the enduring systematic management approaches for designing, executing and governing organizational business processes. Traditionally, most attention within the BPM community has been given to studying control-flow aspects, without taking other contextual aspects into account. This paper contributes to the existing body of work by focusing on the particular context of geospatial information. We argue that explicitly taking this context into consideration in the modeling and execution of business processes can contribute to improving their effectiveness and efficiency. As such, the goal of this paper is to make the modeling and execution aspects of BPM location-aware. We do so by proposing a Petri net modeling extension which is formalized by means of a mapping to coloured Petri nets. Our approach has been implemented using CPN Tools, and a simulation extension was developed to support the execution and validation of location-aware process models. We also illustrate the feasibility of coupling business process support systems with geographical information systems by means of an experimental case setup.

Xinwei Zhu, Guobin Zhu, Seppe K. L. M. vanden Broucke, Jan Vanthienen, Bart Baesens
Extending CPN Tools with Ontologies to Support the Management of Context-Adaptive Business Processes

Colored Petri Nets (CPN) are a widely used graphical modeling language to manage business processes. Business processes often appear in dynamic environments; therefore, context adaptation has recently emerged as a new challenge to explicitly address the fit between business process modeling and its execution environment. Although CPN can introduce data by defining internal data records, this is not enough to capture the complexity and dynamics of the execution context data. This paper extends CPN Tools to support the management of context-adaptive business processes. To address this challenge, CPN Tools is integrated with ontology-based context models that properly represent and manage the business process context. This allows context to be appropriately modeled at design time, and queried and updated at runtime. The combination of ontologies with CPN Tools presents a way to bridge business process management with context data management while treating data and behavior as separate concerns. In this way, system design, reuse, and maintenance are also improved.

Estefanía Serral, Johannes De Smedt, Jan Vanthienen
Using Data-Object Flow Relations to Derive Control Flow Variants in Configurable Business Processes

Focusing on the relationship between the behavioural and information perspectives, in this paper we present an approach to support the flexibility of Business Processes. The approach extends Feature Model descriptions with data-objects in order to derive process fragments and process variants. The approach has been applied to a data-intensive scenario, the reporting activity of EU projects, with encouraging results.

Riccardo Cognini, Flavio Corradini, Andrea Polini, Barbara Re
The BE2 model: When Business Events meet Business Entities

This work presents a unified modelling approach based on two declarative models which integrates event- and entity-driven paradigms while preserving a clear separation of concerns between their executions. The proposed approach provides business users with a unified model that is both sufficiently equipped to capture the inherent complexity of realistic business operations, and still enables gaining a complete overview of the entire business arena. This was attained by building upon the business entity model and the event model as two complementary modeling paradigms, yielding the BE² model. In this paper we describe how these two models have been systematically unified using an ontological formalism as a common conceptual baseline. The approach also derives a set of modeling guidelines to preserve consistency across the two specifications. We demonstrate the actual instantiation of the proposed BE² model in the Transport and Logistics domain.

Fabiana Fournier, Lior Limonad
Extending Process Logs with Events from Supplementary Sources

Since organizations typically use more than a single IT system, information about the execution of a process is rarely available in a single event log. More commonly, data is scattered across different locations and unlinked by common case identifiers. We present a method to extend an incomplete main event log with events from supplementary data sources, even though the latter lack references to the cases recorded in the main event log. We establish this correlation by using the control-flow, time, resource, and data perspectives of a process model, as well as alignment diagnostics. We evaluate our approach on a real-life event log and discuss the reliability of the correlation under different circumstances. Our evaluation shows that it is possible to correlate a large portion of the events by using our method.

Felix Mannhardt, Massimiliano de Leoni, Hajo A. Reijers

BPI 2014

Frontmatter
A Case Study in Workflow Scheduling Driven by Log Data

This paper shows, through a case study, the potential for optimizing resource allocation in business process execution. While most resource allocation mechanisms focus on assigning resources to the tasks that they are authorized to perform, we assign resources to the tasks that they can provably perform most efficiently, by mining the execution logs. This makes it possible to minimize the cost of the process execution. We present various cost measures and how hybrid algorithms can balance their conflicting goals. Our case study indicates significant potential for further research into optimal resource allocation mechanisms.

Mirela Botezatu, Hagen Völzer, Remco Dijkman
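
The core idea of mining execution logs to find the resource that provably performs a task most efficiently can be illustrated with a few lines of pandas. The column names and the simple mean-duration criterion below are assumptions for illustration; the paper's cost measures and hybrid algorithms are considerably richer.

```python
# Illustrative only (the column names and the mean-duration criterion are
# assumptions, not the paper's cost measures): mine per-resource task durations
# from an event log and pick, per task, the historically fastest resource.
import pandas as pd

log = pd.DataFrame({
    "task":     ["review", "review", "review", "approve", "approve"],
    "resource": ["alice",  "bob",    "alice",  "bob",     "carol"],
    "duration": [30.0,      45.0,     28.0,     12.0,      9.0],   # minutes
})

# Mean historical duration per (task, resource) pair.
performance = log.groupby(["task", "resource"])["duration"].mean()

# Cheapest resource per task according to the mined evidence.
assignment = performance.groupby("task").idxmin().apply(lambda idx: idx[1])
print(assignment)   # approve -> carol, review -> alice
```
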
Decomposed Process Mining: The ILP Case

Over the last decade process mining techniques have matured and more and more organizations have started to use process mining to analyze their operational processes. The current hype around “big data” illustrates the desire to analyze ever-growing data sets. Process mining starts from event logs—multisets of traces (sequences of events)—and for the widespread application of process mining it is vital to be able to handle “big event logs”. Some event logs are “big” because they contain many traces. Others are big in terms of different activities. Most of the more advanced process mining algorithms (both for process discovery and conformance checking) scale very badly in the number of activities. For these algorithms, it could help if we could split the big event log (containing many activities) into a collection of smaller event logs (each containing fewer activities), run the algorithm on each of these smaller logs, and merge the results into a single result. This paper introduces a generic framework for doing exactly that, and makes this concrete by implementing algorithms for decomposed process discovery and decomposed conformance checking using Integer Linear Programming (ILP) based algorithms. ILP-based process mining techniques provide precise results and formal guarantees (e.g., perfect fitness), but are known to scale badly in the number of activities. A small case study shows that we can gain orders of magnitude in run-time. However, in some cases there is a tradeoff between run-time and quality.

H. M. W. Verbeek, Wil M. P. van der Aalst
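
The split/run/merge framework sketched in the abstract can be outlined as follows. This is a toy illustration only: `discover` stands in for any per-sublog discovery algorithm (the paper uses ILP-based miners in ProM), and the merging step is merely indicated.

```python
# Toy outline of the split/run/merge idea (the paper's decomposition and the
# ILP-based miners live in ProM); `discover` stands in for any per-sublog
# discovery algorithm and the merge step is only indicated.
def project(trace, activities):
    """Keep only the events of a trace that fall inside one activity cluster."""
    return tuple(e for e in trace if e in activities)

def decompose_and_discover(log, activity_clusters, discover):
    fragments = []
    for cluster in activity_clusters:
        # Each sublog contains far fewer activities than the full log, so an
        # algorithm that scales badly in the number of activities stays cheap.
        sublog = [project(trace, cluster) for trace in log]
        fragments.append(discover(sublog))
    return fragments   # merging the fragments into one model is the final step

# Trivially simple stand-in "discovery": the observed directly-follows pairs.
def directly_follows(sublog):
    return {(a, b) for trace in sublog for a, b in zip(trace, trace[1:])}

log = [("a", "b", "c", "d"), ("a", "c", "b", "d")]
clusters = [{"a", "b"}, {"c", "d"}]
print(decompose_and_discover(log, clusters, directly_follows))
```
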
Evaluating the Performance of a Batch Activity in Process Models

The goal of many organizations today is the optimization of business process management. One factor in the optimization of business processes is the reduction of costs associated with mass production and customer service. Recently, an approach to incorporate batch activities in process models was proposed to improve process performance by synchronizing groups of process instances. However, the issues of optimally utilizing batch activities and estimating the associated costs remained open. In this paper, we present an approach to evaluate batch activity performance, based on techniques from queuing theory. Cost functions are introduced in order to (1) compare usual (i.e., non-batch) and batch activity execution and (2) find the optimal configuration of a batch activity. The approach is applied to a real-world use case from the healthcare domain.

Luise Pufahl, Ekaterina Bazhenova, Mathias Weske
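
To make the idea of comparing non-batch and batch execution concrete, the toy cost functions below contrast a per-instance setup cost with a shared setup cost plus a waiting penalty. These are illustrative assumptions, not the queueing-theoretic cost functions derived in the paper.

```python
# Toy comparison, not the paper's queueing-theoretic cost functions: batching
# shares a fixed setup cost across instances but makes early arrivals wait
# until the batch is full (equally spaced arrivals and full batches assumed).
def cost_individual(n_instances, setup_cost, unit_cost):
    # Every instance pays its own setup.
    return n_instances * (setup_cost + unit_cost)

def cost_batched(n_instances, setup_cost, unit_cost, batch_size,
                 inter_arrival, waiting_cost_per_time):
    batches = -(-n_instances // batch_size)            # ceiling division
    processing = batches * setup_cost + n_instances * unit_cost
    # In one full batch, instance i waits (batch_size - 1 - i) inter-arrival periods.
    waiting_periods = batches * sum(range(batch_size))
    return processing + waiting_periods * inter_arrival * waiting_cost_per_time

print(cost_individual(12, setup_cost=10, unit_cost=2))                     # 144
print(cost_batched(12, setup_cost=10, unit_cost=2, batch_size=4,
                   inter_arrival=1.0, waiting_cost_per_time=0.5))          # 63.0
```
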
Genetic Process Mining: Alignment-Based Process Model Mutation

The Evolutionary Tree Miner (ETM) is a genetic process discovery algorithm that enables the user to guide the discovery process based on preferences with respect to four process model quality dimensions: replay fitness, precision, generalization and simplicity.

Traditionally, the ETM algorithm uses random creation of process models for the initial population, as well as random mutation and crossover techniques for the evolution of generations. In this paper, we present an approach that improves the performance of the ETM algorithm by enabling it to make guided changes to process models, in order to obtain higher quality models in fewer generations. The two parts of this approach are: (1) creating an initial population of process models with a reasonable quality; and (2) using information from the alignment between an event log and a process model to identify quality issues in a given part of a model, and resolving those issues using guided mutation operations.

M. L. van Eck, J. C. A. M. Buijs, B. F. van Dongen
Exploring Processes and Deviations

In process mining, one of the main challenges is to discover a process model while balancing several quality criteria. This often requires repeatedly setting parameters, discovering a map and evaluating it, which we refer to as process exploration. Commercial process mining tools like Disco, Perceptive and Celonis are easy to use and have many features, such as log animation, immediate parameter feedback and extensive filtering options, but the resulting maps usually have no executable semantics, and as a consequence deviations cannot be analysed accurately. Most of the more academically oriented approaches (e.g., the numerous process discovery approaches supported by ProM) use maps having executable semantics (models), but are often slow, make unrealistic assumptions about the underlying process, or do not provide features like animation and seamless zooming. In this paper, we identify four aspects that are crucial for process exploration: zoomability, evaluation, semantics, and speed. We compare existing commercial tools and academic workflows using these aspects, and introduce a new tool that aims to combine the best of both worlds. A feature comparison and a case study show that our tool bridges the gap between commercial and academic tools.

Sander J. J. Leemans, Dirk Fahland, Wil M. P. van der Aalst
Experimenting with an OLAP Approach for Interactive Discovery in Process Mining

Business process analysts face the task of analyzing, monitoring and promoting improvements to different business processes. Process mining has emerged as a useful tool for analyzing event logs that are registered by information systems. It allows discovering process models considering different perspectives (control-flow, organizational, time). However, current techniques lack the ability to explore the different perspectives jointly and interactively, which hinders the understanding of what is happening in the organization. This article proposes a novel approach for interactive discovery aimed at providing process analysts with a tool that allows them to explore multiple perspectives at different levels of detail, inspired by OLAP interactive concepts. This approach was implemented as a ProM plug-in and tested in an experiment with real users. Its main advantages are the productivity and operability achieved when performing process discovery.

Gustavo Pizarro, Marcos Sepúlveda
Merging Event Logs with Many to Many Relationships

Process mining techniques enable the discovery and analysis of business processes, identifying opportunities for improvement. However, processes are often comprised of separately managed procedures that have separate log files, which are impossible to mine in an integrative manner. A preprocessing step that merges log files is quite straightforward when the logs have common case IDs. However, when cases in the different logs have many-to-many relationships among them, this is more challenging. In this paper we present an approach for merging event logs which is capable of dealing with all kinds of relationships between logs, one-to-one or many-to-many. The approach matches cases in the logs using temporal relations and text mining techniques. We have implemented the algorithm and tested it on a comprehensive set of synthetic logs.

Lihi Raichelson, Pnina Soffer
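
The temporal-matching idea can be sketched with pandas: events of a supplementary log are linked to the closest-in-time case of the main log. The column names and the one-hour tolerance are assumptions; the authors' algorithm additionally uses text mining and handles full many-to-many relationships.

```python
# Illustration of the temporal-matching idea only (the authors' approach also
# uses text mining and handles full many-to-many relations); column names and
# the one-hour tolerance are assumptions.
import pandas as pd

orders = pd.DataFrame({
    "timestamp": pd.to_datetime(["2014-09-07 09:00", "2014-09-07 11:30"]),
    "order_case": ["O1", "O2"],
}).sort_values("timestamp")

deliveries = pd.DataFrame({
    "timestamp": pd.to_datetime(["2014-09-07 09:05", "2014-09-07 11:45",
                                 "2014-09-07 11:50"]),
    "delivery_case": ["D1", "D2", "D3"],
}).sort_values("timestamp")

# Attach each delivery event to the nearest preceding order within one hour;
# two deliveries mapping to one order already gives a many-to-one relation.
merged = pd.merge_asof(deliveries, orders, on="timestamp",
                       direction="backward", tolerance=pd.Timedelta("1h"))
print(merged[["delivery_case", "order_case"]])
```
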
Process Model Realism: Measuring Implicit Realism

Determining the quality of a discovered process model is an important but non-trivial task. In this article, we focus on evaluating the realism level of a discovered process model, i.e. to what extent the model contains the process behavior that is present in the true underlying process and nothing more. The IR measure is proposed, which represents the probability that a discovered model would have produced a log that is missing a certain amount of behavior observed in the discovered model. This measure expresses the strength of evidence that the discovered process model could be the true underlying model. Empirical results show that the measure behaves as expected: the IR value drops when the discovered model contains unrealistic behavior, decreases as the amount of unrealistic behavior in the discovered model increases, and increases as the amount of behavior in the underlying process increases, ceteris paribus.

Benoît Depaire
Analyzing a TCP/IP-Protocol with Process Mining Techniques

In many legacy software systems the communication between client and server is based on proprietary Ethernet protocols. We consider the case that the implementation and specification of such a protocol are unknown and try to reconstruct the rules of the protocol by observing the network communication. To this end, we translate TCP/IP logs to appropriate event logs and apply Petri net based process mining techniques. The results of this contribution are a systematic approach to mine client/server protocols, a corresponding tool chain involving existing and new tools, and an evaluation of this approach using a concrete example from practice.

Christian Wakup, Jörg Desel
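
The translation step from packet captures to event logs can be hinted at as follows: one TCP connection becomes a case, and the observed flags become activities. This is a hedged sketch with made-up column names, not the tool chain used in the paper.

```python
# Hedged sketch of the log-translation step only (column names are made up;
# the paper uses a dedicated tool chain): one TCP connection becomes a case,
# its packets become events, and the TCP flags become activity names.
import pandas as pd

packets = pd.DataFrame({
    "time":  pd.to_datetime(["2014-09-07 10:00:00.000", "2014-09-07 10:00:00.010",
                             "2014-09-07 10:00:00.020", "2014-09-07 10:00:00.900"]),
    "src":   ["10.0.0.1:5000", "10.0.0.2:80", "10.0.0.1:5000", "10.0.0.1:5000"],
    "dst":   ["10.0.0.2:80", "10.0.0.1:5000", "10.0.0.2:80", "10.0.0.2:80"],
    "flags": ["SYN", "SYN-ACK", "ACK", "FIN"],
})

# A connection (case) is identified by its unordered pair of endpoints.
packets["case_id"] = packets.apply(
    lambda r: "|".join(sorted([r["src"], r["dst"]])), axis=1)

event_log = (packets.rename(columns={"flags": "activity", "time": "timestamp"})
                    [["case_id", "activity", "timestamp"]]
                    .sort_values(["case_id", "timestamp"]))
print(event_log)   # ready for process discovery, e.g. after export to XES
```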

BPMC 2014

Frontmatter
YAWL in the Cloud: Supporting Process Sharing and Variability

The cloud is at the centre of attention in various fields, including that of BPM. However, all BPM systems in the cloud seem to be nothing more than an installation in the cloud with a web interface for a single organisation, while cloud technology offers an excellent platform for cooperation on an intra- and inter-organisational level. In this paper, we show how cloud technology can be used to support different variants of the same process (due to “couleur locale”), and how organisations can aid each other in achieving the completion of a running case. We describe how we have brought a BPM system (YAWL) that supports such variants into the cloud.

D. M. M. Schunselaar, H. M. W. Verbeek, H. A. Reijers, Wil M. P. van der Aalst

TaProViz 2014

Frontmatter
A Generic Approach for Calculating and Visualizing Differences Between Process Models in Multidimensional Process Mining

Process mining automatically generates process models from event logs. In multidimensional process mining, these models can be analyzed from various viewpoints by clustering event traces according to their attributes, e.g. age or region of the patient for a healthcare process. For each cluster, a distinct process model is calculated. Since these models are supposed to be identical in most parts, differences between them are hard to spot. Therefore, a tool for emphasizing these differences is needed. To face the different challenges presented by multidimensional process mining like the representational bias, such an approach has to be customizable to support different modeling languages and different layout and differencing algorithms. This paper presents a generic approach to calculate and visualize differences between process models which can be used to compare models in multidimensional process mining.

Carsten Cordes, Thomas Vogelgesang, Hans-Jürgen Appelrath
Enabling a User-Friendly Visualization of Business Process Models

Enterprises are facing increasingly complex business processes. Engineering processes in the automotive domain, for example, may comprise hundreds or thousands of process tasks. In such a scenario, existing modeling notations do not always allow for a user-friendly process visualization. In turn, this hampers the comprehensibility of business processes, especially for non-experienced process participants. This paper tackles this challenge by suggesting alternative ways of visualizing large and complex process models. A controlled experiment with 22 subjects provides first insights into how users perceive these approaches.

Markus Hipp, Achim Strauss, Bernd Michelberger, Bela Mutschler, Manfred Reichert
Lights, Camera, Action! Business Process Movies for Online Process Discovery

Nowadays, organizational information systems are able to collect high volumes of data in event logs every day. Through process mining techniques, it is possible to extract information from such logs to support organizations in checking process conformance, detecting bottlenecks, and carrying out performance analysis. However, to analyze such “big data” through process mining, events coming from process executions (in the form of event streams) must be processed on-the-fly as they occur. The work presented in this paper is built on top of a technique for the online discovery of declarative process models presented in our previous work. In particular, we introduce a tool providing a dynamic visualization of the models discovered over time, showing, as a “process movie”, the sequence of valid business rules at any point in time based on the information retrieved from an event stream. The effectiveness of the visualizer is validated through an event stream pertaining to health insurance claims handling in a travel agency.

Andrea Burattin, Marta Cimitile, Fabrizio Maria Maggi
Towards a Generalized Notion of Audio as Part of the Concrete Syntax of Business Process Modeling Languages

The considerations presented in this position paper provide a starting point for incorporating model representations on additional perception channels, e.g., by using audio, into the concrete syntax specification of modeling languages. As it turns out, this cannot be done within the frameworks of existing concrete syntax specification approaches; it first means questioning the state of the art of treating concrete syntax definitions as static mapping structures, and subsequently requires widening the understanding of what concrete syntax is towards an operative view on how humans interact with models.

Jens Gulden

BPMS2 2014

Frontmatter
Tagging Model for Enhancing Knowledge Transfer and Usage during Business Process Execution

Tagging mechanisms allow users to label content, which mainly resides in Internet resources or Web 2.0 social tools, with descriptive terms (without relying on a controlled vocabulary) for navigation, filtering, search and retrieval. The current paper presents two tagging models. The first combines structured, automatically generated metadata with manually inserted unstructured tagging labels to facilitate annotation of content that enhances knowledge transfer and usage, and embeds tagging capabilities within a business process model for activation during execution. The second describes a tagged knowledge cycle that allows process performers to create and tag their knowledge and experiences as the process is carried out, and have it transferred for use by an associated stakeholder. The benefits of the models are discussed in the context of service processes, using an illustrative scenario of an inbound telesales process workflow with embedded Web 2.0 social tools.

Reuven Karni, Meira Levy
Classification Framework for Context Data from Business Processes

New business concepts such as Enterprise 2.0 foster the use of social software in enterprises. Social production in particular significantly increases the amount of data in the context of business processes. Unfortunately, these data are still an untapped treasure in many enterprises. Due to advances in data processing such as Big Data, the exploitation of context data becomes feasible. To provide a foundation for the methodical exploitation of context data, this paper introduces a classification based on two classes: intrinsic and extrinsic data.

Michael Möhring, Rainer Schmidt, Ralf-Christian Härting, Florian Bär, Alfred Zimmermann
Business Processes in Connected Communities

The notion of Connected Communities is evolving from the widespread use of social media and networks and represents an emerging paradigm for online connectivity, collaboration and information usage where the basis of interaction is between individual human participants with shared motives in a specific context. Traditional BPM practices operate on the basis of a common frame of reference with regard to overall business strategy and the constituent activities, and make similar assumptions with respect to human participants in those processes. This paper reviews the implications of digital connectedness between human actors in a process-oriented context, surveys potential community archetypes and outlines core characteristics of connected communities and their significance in a broader BPM context.

Nick Russell, Alistair Barros
Social-Software-Based Support for Enterprise Architecture Management Processes

Modern enterprises are reshaped and transformed continuously by a multitude of management processes with different perspectives, ranging from business process management to IT service management and the management of information systems. Enterprise Architecture (EA) Management seeks to provide such a perspective and to align the diverse management perspectives. To achieve and promote this alignment, EA Management cannot rely on hierarchical management processes designed in a Tayloristic manner. Instead, it has to apply bottom-up, information-centered coordination mechanisms to ensure that the different management processes are aligned with each other and with the enterprise strategy. Social software provides such a bottom-up mechanism for providing support within EAM processes. Consequently, challenges of EA management processes are investigated, and contributions of social software are presented. A cockpit provides interactive functions and visualization methods to cope with the resulting complexity and to enable the practical use of social software in enterprise architecture management processes.

Rainer Schmidt, Alfred Zimmermann, Michael Möhring, Dierk Jugel, Florian Bär, Christian M. Schweda
oBPM – An Opportunistic Approach to Business Process Modeling and Execution

Traditional workflow management systems are suited to automate well-structured processes based on a strict sequence of user tasks and a top-down approach to model creation. Such a conventional approach does not comply with the bottom-up philosophy of social software and therefore makes it difficult to incorporate its strengths in process modeling and automation. Opportunistic Business Process Modeling (oBPM) aims to overcome these limitations by introducing a new paradigm for modeling and executing business processes that is user- and document-centric and suited to bottom-up modeling, agile process modification, opportunistic task scheduling and process mining.

David Grünert, Elke Brucker-Kley, Thomas Keller

DeMiMoP 2014

Frontmatter
Constructing Probabilistic Process Models Based on Hidden Markov Models for Resource Allocation

A Hidden Markov Model (HMM) is a temporal statistical model which is widely utilized for various applications such as gene prediction, speech recognition and localization prediction. HMM represents the state of the process in a discrete variable, where the values are the possible observations of the world. For the purpose of process mining for resource allocation, HMM can be applied to discover a probabilistic workflow model from activities and identify the observations based on the resources utilized by each activity. In this paper, we introduce a process discovery method that combines an organizational perspective with a probabilistic approach to address the resource allocation and improve the productivity of resource management, maximizing the likelihood of the model using the Expectation-Maximization procedure.

Berny Carrera, Jae-Yoon Jung
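
For readers unfamiliar with HMMs, the forward algorithm below computes the likelihood of an observed resource sequence for a tiny two-activity model. The parameter values are illustrative placeholders; in the approach above they would be learned from the log with the Expectation-Maximization procedure.

```python
# Minimal forward-algorithm sketch with illustrative parameters (the paper
# learns them with Expectation-Maximization): hidden states are activities,
# the observations are the resources that executed them.
import numpy as np

pi = np.array([0.8, 0.2])                    # initial activity distribution
A  = np.array([[0.6, 0.4],                   # activity transition probabilities
               [0.3, 0.7]])
B  = np.array([[0.7, 0.2, 0.1],              # P(resource | activity)
               [0.1, 0.3, 0.6]])

def forward_likelihood(observations):
    """P(observation sequence | model) computed with the forward algorithm."""
    alpha = pi * B[:, observations[0]]
    for obs in observations[1:]:
        alpha = (alpha @ A) * B[:, obs]
    return alpha.sum()

# Likelihood of observing resources r0, r2, r1 in that order.
print(forward_likelihood([0, 2, 1]))
```
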
Business Rules: From SBVR to Information Systems

The ability to rapidly adapt to change is an important feature of any information system. Business rules play an important role in this dynamic, as they have a high probability of being changed as time goes by. To closely follow these constant changes, the information system must be very flexible. The Semantics of Business Vocabulary and Business Rules (SBVR) is a model designed to be used by business people to communicate their business rules in a standard and formal way. SBVR and most other business rules models do not define how to implement business rules in an information system in such a way that all flexibility needs are met. In this work we suggest an approach in which business rule implementations are confined to rule engines (independent systems designed to execute business rules) and kept separate from system executions. We start by explaining our method to identify which events in the SBVR model could lead to a violation of the defined business rules. Then we define a standard model to establish communication between business rule engines and all other information system elements. Our model uses the identified events from the first part to define when each business rule must be verified by the rule engines.

Jandisson Soares de Jesus, Ana Cristina Vieira de Melo
Integration of Business Processes with Visual Decision Modeling. Presentation of the HaDEs Toolchain

Business Rules (BRs) allow for defining statements that determine or constrain some aspects of the business and specify what can be done in a specific situation. Together with Business Processes (BPs) they constitute an efficient framework for business logic specification. In this paper, issues related to the integration of the visual design of BRs with BP modeling methods are considered. As a result of our previous research, the HaDEs framework for the visual design of rule bases was developed. The main goal of this paper is to present the HaDEs toolchain applications and show how they can be used for designing decision processes and how they can be effectively integrated with BP modeling methods.

Krzysztof Kluza, Krzysztof Kaczor, Grzegorz J. Nalepa
Generating Business Process Recommendations with a Population-Based Meta-Heuristic

In order to provide both guidance and flexibility to users during process execution, recommendation systems have been proposed. Existing recommendation systems mainly focus on offering recommendations according to process optimization goals (time, cost…). In this paper we offer a new approach that primarily focuses on maximizing flexibility during execution. This means that by following the recommendations, the user retains maximal flexibility to deviate from them later on. This makes it possible to handle (possibly unknown) emerging constraints during execution. The main contribution of this paper is an algorithm that uses a declarative process model to generate a set of imperative process models that can be used to generate recommendations.

Steven Mertens, Frederik Gailly, Geert Poels
Bidimensional Process Discovery for Mining BPMN Models

This paper presents “BPMN Miner”, a process discovery technique that uses BPMN as the representational language for the discovery result. The proposed approach is novel in the sense that it is able to represent control-flow with BPMN constructs, but also because it augments the control-flow perspective with an organizational dimension by discovering swimlanes that represent organizational roles in the business process. Additional advantages of the proposed mining approach can be summarized as follows: it provides intuitive and easy-to-use abstraction/specification functionality which makes it applicable to event logs with various complexity levels, it provides instant feedback about the conformance between the input log and the resulting model based on a dedicated fitness metric, it is robust to noise, and it can easily integrate with modeling and other BPM tools through export functionality based on the XPDL format. In this way, BPMN Miner takes process mining one step closer to being indispensable for business process reengineering, as discovered models are immediately available in the preferred language of a majority of practitioners, educators and researchers.

Jochen De Weerdt, Seppe K. L. M. vanden Broucke, Filip Caron
Designing and Evaluating an Interpretable Predictive Modeling Technique for Business Processes

Process mining is a field traditionally concerned with retrospective analysis of event logs, yet interest in applying it online to running process instances is increasing. In this paper, we design a predictive modeling technique that can be used to quantify probabilities of how a running process instance will behave based on the events that have been observed so far. To this end, we study the field of grammatical inference and identify suitable probabilistic modeling techniques for event log data. After tailoring one of these techniques to the domain of business process management, we derive a learning algorithm. By combining our predictive model with an established process discovery technique, we are able to visualize the significant parts of predictive models in the form of Petri nets. A preliminary evaluation demonstrates the effectiveness of our approach.

Dominic Breuker, Patrick Delfmann, Martin Matzner, Jörg Becker

Doctoral Consortium at BPM 2014

Frontmatter
Detecting, Assessing, and Mitigating Data Inaccuracy-Related Risks in Business Processes

Business process activities and their outcomes rely on data that is commonly stored in databases. If the stored data is not accurate, namely, it does not reflect the relevant real-world values, the process execution might be disrupted and the process might not be able to reach its goal. Detection of such cases and analysis of their causes may help redesign processes to reduce the potential risks. In this research, we aim to develop a semi-automated method that will enable detection, assessment, and mitigation of risks related to data inaccuracy in business processes. The method will be built on and evaluated with real cases from industry.

Arava Tsoury, Pnina Soffer, Iris Reinhartz-Berger
Adaptation of Business Process Management to Requirements of Small and Medium-Sized Enterprises in the Context of Strategic Flexibility

Business Process Management (BPM) has become a necessity for companies in a highly competitive business environment, as it is a powerful tool to enhance process and service performance. Unfortunately, BPM is primarily associated with the parameters of larger enterprises; no flexible and effective methods for practical application are available for the specific constraints of Small and Medium-sized Enterprises (SMEs). As a result, BPM for SMEs still remains largely atheoretical. Critical success factors in the literature are often case-specific or of a generic kind, and most of those papers fail to place their research within a theoretically grounded framework. This research focuses on maintaining strategic flexibility and explores the situation in SMEs as well as the possibilities for adapting an agile BPM within this type of company. The aim is to propose a model that links business process management and organizational behavior, and a framework for the adoption of BPM within this type of company.

Felix Reher
A Language for Process Map Design

Organizations are leaning towards becoming more process-oriented in order to better serve their customers. An approach that enables achieving such process orientation is business process management (BPM). In this context, business process modeling is used to graphically represent business processes. As a result, organizations are faced with large collections of process models. The process models are typically organized in a process architecture which comprises a number of levels. The topmost level is commonly the process map, where all processes of one organization and the relations between them are depicted in a very abstract manner. Whereas there are well-defined languages for modelling the details of individual processes (e.g. BPMN, EPC), such a language for supporting the design of process maps is still missing. As a result, we are faced with a vast variety of process map designs in practice, as practitioners typically rely on their own creativity when undertaking this task. This study addresses this gap by using various methods to develop a language for process map design which will support practitioners in designing their process maps in a standardized manner.

Monika Malinova
Graph-Based Process Model Matching

Nowadays, organizations accumulate multiple repositories of process specifications. Organization stakeholders such as business analysts and process designers need to access and retrieve such information, as it has been shown that adapting existing business processes to meet current business needs is more effective and less error-prone than developing them from scratch. This thesis concentrates on process retrieval and will propose a business process searching mechanism, taking advantage of and extending existing graph-based matching techniques, with the aim of exploiting the knowledge that already exists within an organization.

Christina Tsagkani
Service Analysis and Simulation in Process Mining

Modern business processes are supported by information systems that record process-related events into event logs.

Arik Senderovich
Process Discovery and Exploration

Process mining, and in particular process discovery, has gained traction as a technique for analysing actual process executions from event data recorded in event logs. Process discovery aims to automatically derive a model of the process. Current process discovery techniques either do not provide executable semantics, do not guarantee to return models without deadlocks, or do not achieve the right balance between quality criteria.

Sander J. J. Leemans
Backmatter
Metadata
Title
Business Process Management Workshops
Editors
Fabiana Fournier
Jan Mendling
Copyright Year
2015
Electronic ISBN
978-3-319-15895-2
Print ISBN
978-3-319-15894-5
DOI
https://doi.org/10.1007/978-3-319-15895-2
