
2014 | Book

Business Process Management

12th International Conference, BPM 2014, Haifa, Israel, September 7-11, 2014. Proceedings

Edited by: Shazia Sadiq, Pnina Soffer, Hagen Völzer

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science

About this Book

This book constitutes the proceedings of the 12th International Conference on Business Process Management, BPM 2014, held in Haifa, Israel, in September 2014. The 21 regular papers and 10 short papers included in this volume were carefully reviewed and selected from 123 submissions. The papers are organized in 9 topical sections on declarative processes, user-centered process approaches, process discovery, integrative BPM, resource and time management in BPM, process analytics, process enabled environments, discovery and monitoring, and industry papers.

Table of Contents

Frontmatter

Declarative Processes

Monitoring Business Metaconstraints Based on LTL and LDL for Finite Traces

Runtime monitoring is one of the central tasks in providing operational decision support to running business processes, checking on-the-fly whether they comply with constraints and rules. We study runtime monitoring of properties expressed in LTL on finite traces (LTLf) and in its extension LDLf. LDLf is a powerful logic that captures all of monadic second-order logic on finite traces; it is obtained by combining regular expressions with LTLf, adopting the syntax of propositional dynamic logic (PDL). Interestingly, in spite of its greater expressivity, LDLf has exactly the same computational complexity as LTLf. We show that LDLf is able to capture, in the logic itself, not only the constraints to be monitored, but also the de-facto standard RV-LTL monitors. This makes it possible to declaratively capture monitoring metaconstraints, i.e., constraints about the evolution of other constraints, and to check them by relying on the usual logical services for temporal logics instead of ad-hoc algorithms. This, in turn, makes it possible to flexibly monitor constraints depending on the monitoring state of other constraints, e.g., "compensation" constraints that are only checked when others are detected to be violated. In addition, we devise a direct translation of LDLf formulas into nondeterministic automata, avoiding the detour through Büchi or alternating automata, and we use it to implement a monitoring plug-in for the ProM suite.

Giuseppe De Giacomo, Riccardo De Masellis, Marco Grasso, Fabrizio Maria Maggi, Marco Montali
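
The following is an illustrative sketch only, not the LDLf-to-automaton construction of the paper: it shows how RV-LTL-style verdicts (the four truth values the abstract refers to) can be tracked for a single Declare "response" constraint over a finite trace. Activity names and verdict labels are placeholders.

```python
# Illustrative sketch (not the paper's LDLf-to-automaton construction):
# an RV-LTL-style monitor for the Declare "response(a, b)" constraint,
# i.e. every occurrence of activity a must eventually be followed by b.
# Verdicts follow the four RV-LTL truth values mentioned in the abstract.

def monitor_response(trace, a, b):
    """Return the RV-LTL verdict after each event of a finite trace."""
    verdicts = []
    pending = False          # is there an 'a' still waiting for a 'b'?
    for event in trace:
        if event == a:
            pending = True
        elif event == b and pending:
            pending = False
        # "temporarily violated": an obligation is open but could still be
        # fulfilled; "temporarily satisfied": no open obligation, but a
        # future 'a' could still violate the constraint.
        verdicts.append("temp_violated" if pending else "temp_satisfied")
    # At the end of the (finite) trace the verdict becomes permanent.
    final = "perm_violated" if pending else "perm_satisfied"
    return verdicts, final

if __name__ == "__main__":
    states, final = monitor_response(["a", "c", "b", "a"], "a", "b")
    print(states, final)
    # ['temp_violated', 'temp_violated', 'temp_satisfied', 'temp_violated'] perm_violated
```
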
Hierarchical Declarative Modelling with Refinement and Sub-processes

We present a new declarative model with composition and hierarchical definition of processes, featuring (a) incremental refinement, (b) adaptation of processes, and (c) dynamic creation of sub-processes. The approach is motivated and exemplified by a recent case management solution delivered by our industry partner Exformatics A/S. The approach is achieved by extending the Dynamic Condition Response (DCR) graph model with interfaces and composition along those interfaces. Both refinement and sub-processes are then constructed in terms of that composition. Sub-processes take the form of hierarchical (complex) events, which dynamically instantiate sub-processes. The extensions are realised and supported by a prototype simulation tool.

Søren Debois, Thomas Hildebrandt, Tijs Slaats
Discovering Target-Branched Declare Constraints

Process discovery is the task of generating models from event logs. Mining processes that operate in an environment of high variability is an ongoing research challenge because various algorithms tend to produce spaghetti-like models. This is particularly the case when procedural models are generated. A promising direction to tackle this challenge is the usage of declarative process modelling languages like Declare, which summarise complex behaviour in a compact set of behavioural constraints. However, Declare constraints with branching are expensive to compute. In addition, it is often the case that hundreds of branching Declare constraints are valid for the same log, thus making the discovery results, again, unreadable. In this paper, we address these problems from a theoretical angle. More specifically, we define the class of Target-Branched Declare constraints and investigate the formal properties it exhibits. Furthermore, we present a technique for the efficient discovery of compact Target-Branched Declare models. We discuss the merits of our work through an evaluation based on a prototypical implementation using both artificial and real-world event logs.

Claudio Di Ciccio, Fabrizio Maria Maggi, Jan Mendling
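
As a rough illustration of what a Target-Branched Declare constraint expresses (not the authors' efficient discovery technique), the sketch below counts the support of a branched Response(a, T) constraint, "after a, some activity from the target set T eventually occurs", over a toy event log.

```python
# Hypothetical sketch of the metric behind Target-Branched Declare discovery:
# support of Response(a, T) -- "after a, eventually some activity in the
# target set T occurs" -- counted over an event log. This illustrates the
# constraint semantics, not the paper's efficient discovery algorithm.

def response_support(log, a, targets):
    """Fraction of traces containing a in which every a is followed by some target."""
    targets = set(targets)
    relevant = satisfied = 0
    for trace in log:
        if a not in trace:
            continue
        relevant += 1
        ok = True
        for i, event in enumerate(trace):
            if event == a and not targets.intersection(trace[i + 1:]):
                ok = False
                break
        if ok:
            satisfied += 1
    return satisfied / relevant if relevant else 1.0

log = [["a", "x", "b"], ["a", "c"], ["x", "y"], ["a", "c", "a", "b"]]
print(response_support(log, "a", {"b", "c"}))  # 1.0: every 'a' is eventually followed by b or c
```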

User-Centered Process Approaches

Crowd-Based Mining of Reusable Process Model Patterns

Process mining is a domain where computers undoubtedly outperform humans. It is a mathematically complex and computationally demanding problem, and event logs are at too low a level of abstraction to be intelligible at large scale to humans. We demonstrate that if instead the data to mine from are models (not logs), datasets are small (in the order of dozens rather than thousands or millions), and the knowledge to be discovered is complex (reusable model patterns), humans outperform computers. We design, implement, run, and test a crowd-based pattern mining approach and demonstrate its viability compared to automated mining. We specifically mine mashup model patterns (we use them to provide interactive recommendations inside a mashup tool) and explain the analogies with mining business process models. The problem is relevant in that reusable model patterns encode valuable modeling and domain knowledge, such as best practices or organizational conventions, from which modelers can learn and benefit when designing their own models.

Carlos Rodríguez, Florian Daniel, Fabio Casati
A Recommender System for Process Discovery

Over the last decade, several algorithms for process discovery and process conformance have been proposed. Still, it is well accepted that there is no dominant algorithm in either of these two disciplines, and thus it is often difficult to apply them successfully. Most of these algorithms require close-to-expert knowledge in order to be applied satisfactorily. In this paper, we present a recommender system that uses portfolio-based algorithm selection strategies to face two problems: finding the best discovery algorithm for the data at hand, and bridging the gap between general users and process mining algorithms. Experiments performed with the developed tool demonstrate the usefulness of the approach for a variety of instances.

Joel Ribeiro, Josep Carmona, Mustafa Mısır, Michele Sebag
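
A minimal sketch of the portfolio-based selection idea, under the assumption that benchmark logs with a known best-performing discovery algorithm are available: simple log features feed a classifier that recommends an algorithm for a new log. The features, labels, and learner below are placeholders, not the tool's actual design.

```python
# Minimal sketch of portfolio-based algorithm selection for process discovery.
# Log features and algorithm labels below are hypothetical placeholders;
# the paper's actual feature set and learner may differ.
from sklearn.ensemble import RandomForestClassifier

def log_features(log):
    """A few simple descriptors of an event log (list of traces)."""
    activities = {e for trace in log for e in trace}
    variants = {tuple(trace) for trace in log}
    avg_len = sum(len(t) for t in log) / len(log)
    return [len(activities), len(variants) / len(log), avg_len]

# Training data: logs for which the best discovery algorithm is known
# (e.g. determined offline by fitness/precision benchmarks).
train_logs = [
    [["a", "b", "c"], ["a", "b", "c"]],
    [["a", "c", "b"], ["b", "a"], ["c", "a", "b"]],
]
best_algorithm = ["inductive_miner", "heuristics_miner"]   # hypothetical labels

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit([log_features(l) for l in train_logs], best_algorithm)

new_log = [["a", "b"], ["a", "c", "b"]]
print(model.predict([log_features(new_log)])[0])   # recommended discovery algorithm
```
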
Listen to Me: Improving Process Model Matching through User Feedback

Many use cases in business process management rely on the identification of correspondences between process models. However, the sparse information in process models makes matching a fundamentally hard problem. Consequently, existing approaches yield a matching quality that is too low to be useful in practice. Therefore, we investigate incorporating user feedback to improve matching quality. To this end, we examine which information is suitable for feedback analysis. On this basis, we design an approach that performs matching in an iterative, mixed-initiative fashion: we determine correspondences between two models automatically, let the user correct them, and analyze this input to adapt the matching algorithm. Then, we continue with matching the next two models, and so forth. This approach improves the matching quality, as shown by a comparative evaluation. From this study, we also derive strategies for maximizing quality while limiting the additional effort required from the user.

Christopher Klinkmüller, Henrik Leopold, Ingo Weber, Jan Mendling, André Ludwig
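
The sketch below illustrates the mixed-initiative loop in heavily simplified form: correspondences above a similarity threshold are proposed, the user's corrections are collected, and the matcher is adapted, here by merely re-tuning the threshold. The word-overlap similarity and the adaptation rule are stand-ins for the paper's feedback analysis.

```python
# Simplified sketch of iterative, feedback-driven process model matching.
# Label similarity here is plain word overlap and the "adaptation" only
# re-tunes the acceptance threshold from user feedback; the paper's
# approach analyses feedback in a richer way.
def similarity(label1, label2):
    w1, w2 = set(label1.lower().split()), set(label2.lower().split())
    return len(w1 & w2) / len(w1 | w2) if w1 | w2 else 0.0

def propose_matches(model_a, model_b, threshold):
    return {(a, b): similarity(a, b)
            for a in model_a for b in model_b
            if similarity(a, b) >= threshold}

def adapt_threshold(proposals, user_accepted):
    """Place the next threshold between rejected and accepted similarities (simplistic)."""
    rejected = [s for pair, s in proposals.items() if pair not in user_accepted]
    accepted = [s for pair, s in proposals.items() if pair in user_accepted]
    if not rejected or not accepted:
        return None  # nothing to learn from this iteration
    return (max(rejected) + min(accepted)) / 2

model_a = ["check invoice", "approve order"]
model_b = ["invoice check", "reject order"]
proposals = propose_matches(model_a, model_b, threshold=0.3)
accepted = {("check invoice", "invoice check")}            # user feedback
print(adapt_threshold(proposals, accepted))                # threshold for the next model pair
```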

Process Discovery

Beyond Tasks and Gateways: Discovering BPMN Models with Subprocesses, Boundary Events and Activity Markers

Existing techniques for automated discovery of process models from event logs generally produce flat process models. Thus, they fail to exploit the notion of subprocess, as well as error handling and repetition constructs provided by contemporary process modeling notations, such as the Business Process Model and Notation (BPMN). This paper presents a technique for automated discovery of BPMN models containing subprocesses, interrupting and non-interrupting boundary events and activity markers. The technique analyzes dependencies between data attributes attached to events in order to identify subprocesses and to extract their associated logs. Parent process and subprocess models are then discovered using existing techniques for flat process model discovery. Finally, the resulting models and logs are heuristically analyzed in order to identify boundary events and markers. A validation with one synthetic and two real-life logs shows that process models derived using the proposed technique are more accurate and less complex than those derived with flat process discovery techniques.

Raffaele Conforti, Marlon Dumas, Luciano García-Bañuelos, Marcello La Rosa
A Genetic Algorithm for Process Discovery Guided by Completeness, Precision and Simplicity

Several process discovery algorithms have been presented in recent years. These approaches look for complete, precise, and simple models. Nevertheless, none of the current proposals achieves a good balance between the three objectives and, therefore, the mined models differ from the real models. In this paper we present a genetic algorithm (ProDiGen) with a hierarchical fitness function that takes into account completeness, precision, and simplicity. Moreover, ProDiGen uses crossover and mutation operators that focus the search on those parts of the model that generate errors during the processing of the log. The proposal has been validated with 21 different logs. Furthermore, we have compared our approach with two state-of-the-art algorithms.

Borja Vázquez-Barreiros, Manuel Mucientes, Manuel Lama
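
The hierarchical fitness can be illustrated with a lexicographic comparison, completeness first, then precision, then simplicity; the sketch below uses placeholder metric values rather than ProDiGen's actual definitions or its genetic operators.

```python
# Sketch of a hierarchical (lexicographic) fitness for a process-discovery
# genetic algorithm: completeness dominates, precision breaks ties,
# simplicity breaks remaining ties. The metric values are placeholders,
# not ProDiGen's exact definitions.
from dataclasses import dataclass

@dataclass
class Candidate:
    model: str
    completeness: float   # share of log behaviour the model can replay
    precision: float      # share of modelled behaviour actually seen in the log
    simplicity: float     # e.g. inverse of model size

def hierarchical_fitness(c):
    # Python compares tuples lexicographically, which realises the hierarchy.
    return (c.completeness, c.precision, c.simplicity)

population = [
    Candidate("net A", completeness=0.95, precision=0.70, simplicity=0.8),
    Candidate("net B", completeness=0.95, precision=0.85, simplicity=0.5),
    Candidate("net C", completeness=0.90, precision=0.99, simplicity=0.9),
]
best = max(population, key=hierarchical_fitness)
print(best.model)   # "net B": ties with A on completeness, wins on precision
```
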
Constructs Competition Miner: Process Control-Flow Discovery of BP-Domain Constructs

Process discovery techniques help a business analyst to understand the actual processes deployed in an organization, i.e., based on a log of events, the actual activity workflow is discovered. In most cases their results conform to general-purpose representations like Petri nets or Causal nets, which are preferred by academic scholars but difficult for business analysts to comprehend. In this paper we propose an algorithm that follows a top-down approach to directly mine a process model which consists of common BP-domain constructs and represents the main behaviour of the process. The algorithm is designed to deal with noise and unsupported behaviour. This is achieved by letting the different supported constructs compete with each other for the most suitable solution, from top to bottom, using "soft" constraints and behaviour approximations. The key parts of the algorithm are formally described, and evaluation results are presented and discussed.

David Redlich, Thomas Molka, Wasif Gilani, Gordon Blair, Awais Rashid

Integrative BPM

Chopping Down Trees vs. Sharpening the Axe – Balancing the Development of BPM Capabilities with Process Improvement

The management and improvement of business processes is an evergreen topic of organizational design. With many techniques and tools for process modeling, execution, and improvement being available, research pays progressively more attention to the organizational impact of business process management (BPM) and the development of BPM capabilities. Despite knowledge about the capabilities required for successful BPM, there is a lack of guidance on how these BPM capabilities should be developed and balanced with the improvement of individual business processes. As a first step to address this research gap, we propose a decision model that enables valuating and selecting BPM roadmaps, i.e., portfolios of scheduled projects with different effects on business processes and BPM capabilities. The decision model is grounded in the literature related to project portfolio selection, process performance measurement, and value-based management. We also provide an extensive demonstration example to illustrate how the decision model can be applied.

Martin Lehnert, Alexander Linhart, Maximilian Röglinger
Implicit BPM: A Business Process Platform for Transparent Workflow Weaving

The integration of business processes into existing applications involves considerable development efforts and costs for IT departments. This precludes the pervasive implementation of BPM in organizations where important applications remain isolated from the existing workflows.

In this paper, we introduce a novel concept, Workflow Weaving, based on non-intrusive techniques, which achieves transparent integration of business processes into organizational applications. This concept relies on BPM standards, Aspect Oriented Programming, and Web patterns to transparently weave business models among current web applications. A prototype platform is presented, which includes our design of a distributed architecture, and a natural and expressive DSL.

Rubén Mondéjar, Pedro García-López, Carles Pairot, Enric Brull
Modeling Concepts for Internal Controls in Business Processes – An Empirically Grounded Extension of BPMN

With the increasing number and complexity of legal obligations, ensuring business process compliance presents a major challenge for today's organizations. In this regard, implementing a set of control means and regularly auditing their effectiveness are suitable measures to ensure a compliant design and enactment of a business process. However, common business process modeling languages (PMLs) do not provide appropriate concepts to comprehensively model such control means. Not surprisingly, common PMLs are not widely used in the audit domain. To address this gap, this paper presents an extension of a PML with modeling concepts for process-integrated control means. As it is based on previous empirical research with auditors, the extension especially considers their requirements. The results of a laboratory experiment with 78 participants demonstrate that the extension supports auditors in gaining a more comprehensive understanding of internal controls in a process model compared to current audit practice.

Martin Schultz, Michael Radloff

Resource and Time Management in BPM

Mining Resource Scheduling Protocols

In service processes, as found in the telecommunications, financial, or healthcare sector, customers compete for the scarce capacity of service providers. For such processes, performance analysis is important and it often targets the time that customers are delayed prior to service. However, this wait time cannot be fully explained by the load imposed on service providers. Indeed, it also depends on resource scheduling protocols, which determine the order of activities that a service provider decides to follow when serving customers. This work focuses on automatically learning resource decisions from events. We hypothesize that queueing information serves as an essential element in mining such protocols and hence, we utilize the queueing perspective of customers in the mining process. We propose two types of mining techniques: advanced classification methods from data mining that include queueing information in their explanatory features and heuristics that originate in queueing theory. Empirical evaluation shows that incorporating the queueing perspective into mining of scheduling protocols improves predictive power.

Arik Senderovich, Matthias Weidlich, Avigdor Gal, Avishai Mandelbaum
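
One queueing-inspired heuristic of the kind the abstract alludes to can be sketched as follows: given each customer's queue-entry and service-start timestamps, measure how often the resource picked the longest-waiting customer (FIFO). The record fields are hypothetical; the paper combines such heuristics with classification over richer queueing features.

```python
# Sketch of a queueing-inspired heuristic for mining a scheduling protocol:
# given, per customer, the time of joining the queue and the time service
# started, measure how often the resource picked the longest-waiting
# customer (FIFO). Field names are hypothetical.
def fifo_conformance(records):
    """records: list of dicts with 'enqueued' and 'service_start' timestamps."""
    by_start = sorted(records, key=lambda r: r["service_start"])
    fifo_picks = 0
    for i, rec in enumerate(by_start):
        # customers still waiting when this service started
        waiting = [r for r in by_start[i:] if r["enqueued"] <= rec["service_start"]]
        if rec == min(waiting, key=lambda r: r["enqueued"]):
            fifo_picks += 1
    return fifo_picks / len(records)

records = [
    {"enqueued": 0, "service_start": 5},
    {"enqueued": 1, "service_start": 3},   # served before an earlier arrival: not FIFO
    {"enqueued": 2, "service_start": 8},
]
print(fifo_conformance(records))   # 2/3: one scheduling decision deviates from FIFO
```
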
Dealing with Changes of Time-Aware Processes

The proper handling of temporal process constraints is crucial in many application domains. Contemporary process-aware information systems (PAIS), however, lack sophisticated support for time-aware processes. As a particular challenge, the execution of time-aware processes needs to be flexible, as time can neither be slowed down nor stopped. Hence, it should be possible to dynamically adapt time-aware process instances to cope with unforeseen events. In turn, when applying such dynamic changes, it must be re-ensured that the resulting process instances are temporally consistent, i.e., they can still be completed without violating any of their temporal constraints. This paper presents the ATAPIS framework, which extends well-established process change operations with temporal constraints. In particular, it provides pre- and post-conditions for these operations that guarantee the temporal consistency of the changed process instances. Furthermore, we analyze the effects a change has on the temporal properties of a process instance. In this context, we provide a means to significantly reduce the complexity when applying multiple change operations. Such optimizations will be crucial to properly support the temporal perspective in adaptive PAIS.

Andreas Lanz, Manfred Reichert
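
Temporal consistency, as used in the abstract, can be illustrated with a simple temporal network check: minimum and maximum time lags become weighted edges, and the instance is consistent iff the network has no negative cycle. The sketch below shows only this check, not the ATAPIS change operations or their pre- and post-conditions.

```python
# Minimal sketch of a temporal-consistency check on a time-aware process,
# encoded as a Simple Temporal Network: an edge u -> v with weight w means
# "time(v) - time(u) <= w". The instance is temporally consistent iff the
# network has no negative cycle (checked here with Floyd-Warshall).
def temporally_consistent(nodes, constraints):
    INF = float("inf")
    dist = {u: {v: (0 if u == v else INF) for v in nodes} for u in nodes}
    for u, v, w in constraints:
        dist[u][v] = min(dist[u][v], w)
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return all(dist[v][v] >= 0 for v in nodes)

nodes = ["start_A", "end_A", "start_B"]
constraints = [
    ("start_A", "end_A", 10),    # A takes at most 10 time units
    ("end_A", "start_A", -2),    # ... and at least 2
    ("end_A", "start_B", 5),     # B starts at most 5 units after A ends
    ("start_B", "start_A", -4),  # B starts at least 4 units after A starts
]
print(temporally_consistent(nodes, constraints))   # True: all constraints can be met
```
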
Temporal Anomaly Detection in Business Processes

The analysis of business processes is often challenging not only because of intricate dependencies between process activities but also because of various sources of faults within the activities. The automated detection of potential business process anomalies could immensely help business analysts and other process participants detect and understand the causes of process errors.

This work focuses on temporal anomalies, i.e., anomalies concerning the runtime of activities within a process. To detect such anomalies, we propose a Bayesian model that can be automatically inferred from the Petri net representation of a business process. Probabilistic inference on this model allows the detection of non-obvious and interdependent temporal anomalies.

Andreas Rogge-Solti, Gjergji Kasneci
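
A much-simplified illustration of temporal anomaly detection is to fit a duration distribution per activity and flag durations far in the tails; the paper's Bayesian model additionally captures dependencies derived from the Petri net, which the sketch below does not.

```python
# Simplified illustration of temporal anomaly detection: fit a normal
# distribution to historical durations of each activity and flag new
# durations that fall far in the tails. The paper goes further and infers a
# Bayesian model from the Petri net so that interdependent anomalies can be
# detected; that structure is not reproduced here.
from statistics import mean, stdev

def fit_duration_models(history):
    """history: dict activity -> list of observed durations."""
    return {act: (mean(durs), stdev(durs)) for act, durs in history.items()}

def temporal_anomalies(case_durations, models, z_threshold=3.0):
    anomalies = []
    for activity, duration in case_durations.items():
        mu, sigma = models[activity]
        z = (duration - mu) / sigma if sigma > 0 else 0.0
        if abs(z) > z_threshold:
            anomalies.append((activity, duration, round(z, 1)))
    return anomalies

history = {"check claim": [10, 12, 11, 9, 13], "pay out": [2, 3, 2, 4, 3]}
models = fit_duration_models(history)
print(temporal_anomalies({"check claim": 30, "pay out": 3}, models))
# [('check claim', 30, 12.0)]  -> the long claim check is flagged as anomalous
```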

Process Analytics

A General Framework for Correlating Business Process Characteristics

Process discovery techniques make it possible to automatically derive process models from event data. However, often one is not only interested in discovering the control-flow but also in answering questions like "What do the cases that are late have in common?", "What characterizes the workers that skip this check activity?", and "Do people work faster if they have more work?". Such questions can be answered by combining process mining with classification (e.g., decision tree analysis). Several authors have proposed ad-hoc solutions for specific questions, e.g., there is work on predicting the remaining processing time and on recommending activities to minimize particular risks. However, as shown in this paper, it is possible to unify these ideas and provide a general framework for deriving and correlating process characteristics. First, we show how the desired process characteristics can be derived and linked to events. Then, we show that we can derive the selected dependent characteristic from a set of independent characteristics for a selected set of events. This can be done for any process characteristic one can think of. The approach is highly generic and implemented as a plug-in for the ProM framework. Its applicability is demonstrated by using it to answer a wide range of questions put forward by the UWV (the Dutch Employee Insurance Agency).

Massimiliano de Leoni, Wil M. P. van der Aalst, Marcus Dees
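
The core idea, deriving a dependent characteristic from independent ones, can be sketched with a decision tree over invented per-case characteristics; the actual framework is a ProM plug-in operating on event data, so the column names below are purely illustrative.

```python
# Sketch of the core idea: derive per-case characteristics from event data
# and explain a chosen dependent characteristic (here: "case is late") from
# independent ones with a decision tree. Column names are invented for
# illustration; the actual framework is a ProM plug-in.
from sklearn.tree import DecisionTreeClassifier, export_text

# independent characteristics per case: [workload of handler, claim amount, #reworks]
X = [[ 5, 1000, 0],
     [20,  800, 2],
     [18, 5000, 3],
     [ 4, 1200, 0],
     [22,  900, 1]]
y = ["on time", "late", "late", "on time", "late"]   # dependent characteristic

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["workload", "amount", "reworks"]))
# The printed rules answer questions such as "what do the late cases have in
# common?", e.g. that late cases coincide with high handler workload and reworks.
```
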
Behavioral Comparison of Process Models Based on Canonically Reduced Event Structures

We address the problem of diagnosing behavioral differences between pairs of business process models. Specifically, given two process models, we seek to determine if they are behaviorally equivalent, and if not, we seek to describe their differences in terms of behavioral relations captured in one model but not in the other. The proposed solution is based on a translation from process models to Asymmetric Event Structures (AES). A naïve version of this translation suffers from two limitations. First, it produces redundant difference diagnostic statements because an AES may contain unnecessary event duplication. Second, it is not applicable to process models with cycles. To tackle the first limitation, we propose a technique to reduce event duplication in an AES while preserving canonicity. For the second limitation, we propose a notion of unfolding that captures all possible causes of each event in a cycle. From there we derive an AES where repeated events are distinguished from non-repeated ones and that allows us to diagnose differences in terms of repetition and causal relations in one model but not in the other.

Abel Armas-Cervantes, Paolo Baldan, Marlon Dumas, Luciano García-Bañuelos
Where Did I Go Wrong?
Explaining Errors in Business Process Models

Business process modeling is still a challenging task — especially since more and more aspects are added to the models, such as data lifecycles, security constraints, or compliance rules. At the same time, formal methods allow for a detection of errors in the early modeling phase. Detected errors are usually explained with a path from the initial to the error state. These paths can grow unmanageably and make the understanding and fixing of errors very time consuming. This paper addresses this issue and proposes a novel explanation of errors: Instead of listing the actions on the path to the error, only the decisions that lead to it are reported and highlighted in the original model. Furthermore, we exploit concurrency to create a compact artifact to explain errors.

Niels Lohmann, Dirk Fahland

Industry Papers

User-Friendly Property Specification and Process Verification – A Case Study with Vehicle-Commissioning Processes

Testing in the automotive industry is supposed to guarantee that vehicles are shipped without any flaw. The respective processes are complex, due to the variety of components and electronic devices in modern vehicles. To achieve error-free processes, their formal analysis is required. Specifying and maintaining the properties the processes must satisfy in a user-friendly way is a core requirement on any verification system. We have observed that there are a few property patterns that testing processes adhere to, and we describe these patterns. They depend on the context of the processes, e.g., the components of the vehicle or the testing stations. We have developed a framework that instantiates the property patterns at verification time and then verifies the process against these instances. Our empirical evaluation with our industrial partner has shown that our framework does detect property violations in processes. From expert interviews we conclude that our framework is user-friendly and well suited to operate in a real production environment.

Richard Mrasek, Jutta Mülle, Klemens Böhm, Michael Becker, Christian Allmann
Analysis of Operational Data for Expertise Aware Staffing

Knowledge-intensive business services, such as IT services, rely on the expertise of knowledge workers for performing the activities involved in the delivery of services. The activities performed range from simple, repetitive tasks to resolving more complex situations. The expertise of the task force also varies, from novices who cost less to advanced-skill workers and experts who are more expensive. Staffing of service systems relies largely on assumptions about the operational productivity of the workers. Research independently points to the impact of factors such as complexity of work and expertise of the worker on worker productivity. In this paper, we examine the joint impact of complexity of work, priority or importance of work, and expertise of the worker on the operational productivity of the worker. For our empirical analysis, we use data from a real-life engagement in the IT service management domain. Our findings indicate, not surprisingly, that experts are more suitable for complex or high-priority work with strict service levels. In the same setting, when experts are given simpler tasks of lower priority, they tend not to perform better than their less experienced counterparts. The operational productivity measures of experts and novices are further used as input to a discrete-event-simulation-based optimization framework that models a real-life service system to arrive at an optimal staffing. Our work demonstrates that data-driven techniques, similar to the one presented here, are useful for making more accurate staffing decisions by understanding worker efficiency derived from the analysis of operational data.

Renuka Sindhgatta, Gaargi Banerjee Dasgupta, Aditya Ghose
From a Family of State-Centric PAIS to a Configurable and Parameterized Business Process Architecture

The paper presents a solution to model and refine processes of long-living business objects. The proposed BPMN model describes the life-cycle of one business object, covering the states it passes through, the events that trigger state changes, and the activities that perform operations on the object and, as a result, lead to new states.

Starting from this state-centric operational design model, a configurable business process architecture is derived that is controlled by a state automaton and runs on a BPMS that supports BPMN 2.0. Architectural rules are provided to ensure the behavioral correctness of this architecture. The solution emerged from an industrial use case at the transition between two generations of a family of Process-Aware Information Systems (PAIS). As a proof of concept, the architecture of a platform for the management of delivery times for a wholesaler is described.

Andreas Rulle, Juliane Siegeris

Short Papers: Process Enabled Environments

DRain: An Engine for Quality-of-Result Driven Process-Based Data Analytics

The analysis of massive amounts of diverse data provided by large cities, combined with the requirements of multiple domain experts and users, is becoming a challenging trend. Although current process-based solutions are becoming increasingly data-aware, few approaches deal with Quality-of-Result (QoR) to assist data analytics in distributed data-intensive environments. In this paper, we present the fundamental building blocks of a framework for enabling process selection and configuration through user-defined QoR at runtime. These building blocks form the basis to support modeling, execution, and configuration of data-aware process variants in order to perform analytics. They can be integrated with different underlying APIs, promoting abstraction, QoR-driven data interaction, and configuration. Finally, we carry out a preliminary evaluation on the URBEM scenario, concluding that our framework spends little time on QoR-driven selection and configuration of data-aware processes.

Aitor Murguzur, Johannes M. Schleicher, Hong-Linh Truong, Salvador Trujillo, Schahram Dustdar
Use Your Best Device! Enabling Device Changes at Runtime

The usage of different computing devices, like desktop computers or smartphones, in our everyday lives increases continuously. Moreover, smart watches and other wearables are ready to accompany us in our daily habits. As a consequence, applications are developed for a variety of different computing devices, in order to give users the freedom to choose a device that really fits their current situation. If this situation changes, a different device may become more suitable than the chosen one. In most cases, applications do not support changing the executing device at runtime, since this is usually not considered at design time and would require the transferal of the current state. In this paper, we present an approach to define device changes for process-driven applications. To this end, we enrich process models with deployment information, which allows specifying where it should be possible to change the device while keeping the application's state. Additionally, we have adapted a process engine to support the execution of these enriched process models. Thereby, we take a further step towards human-centric BPM that enables users to use their most suitable device.

Dennis Bokermann, Christian Gerth, Gregor Engels
Specifying Flexible Human Behavior in Interaction-Intensive Process Environments

Fast-changing business environments characterized by unpredictable variations call for flexible process-aware systems. The BPM community has addressed this challenge through various approaches, but little focus has been placed on how to specify (respectively constrain) flexible human involvement: how human process participants may collaborate on a task, how they may obtain a joint decision that drives the process, or how they may communicate out-of-band to clarify task-vital information. Experience has shown that pure process languages are not necessarily the most appropriate technique for specifying such flexible behavior. Hence, selecting appropriate modeling languages and strategies needs thorough investigation. To this end, this paper juxtaposes the capabilities of two representative human-centric specification languages, hADL and Little-JIL, and demonstrates their joint applicability for modeling interaction-intensive processes.

Christoph Dorn, Schahram Dustdar, Leon J. Osterweil
Separating Execution and Data Management: A Key to Business-Process-as-a-Service (BPaaS)

In most business process management (BPM) systems, the interleaving of data management and business process (BP) execution makes it hard to provide "Business-Process-as-a-Service" (BPaaS), due to the enormous effort required to maintain both the engines and the data for the clients. In this paper we formulate the concept of a self-guided artifact, which extends artifact-centric BP models by capturing all needed data for a BP throughout its execution. Taking advantage of self-guided artifacts, the SeGA framework is presented to support the separation of data and BP execution.

Yutian Sun, Jianwen Su, Jian Yang
Assessing the Need for Visibility of Business Processes – A Process Visibility Fit Framework

Real-time visibility of relevant information during process execution is becoming increasingly feasible by leveraging advanced information technologies. However, it remains vague where organizations should exploit these new, but also cost-intensive, opportunities. Theoretically grounded in the Information Processing View (IPV), this paper proposes a decision framework that identifies business processes which need technology investments enabling real-time visibility. The framework considers both the visibility requirements of processes and the visibility capabilities of information technology.

Enrico Graupner, Martin Berner, Alexander Maedche, Harshavardhan Jegadeesan

Short Papers: Discovery and Monitoring

The Automated Discovery of Hybrid Processes

The declarative-procedural dichotomy is highly relevant when choosing the most suitable process modeling language to represent a discovered process. Less-structured processes with a high level of variability can be described in a more compact way using a declarative language. By contrast, procedural process modeling languages seem more suitable to describe structured and stable processes. However, in various cases, a process may incorporate parts that are better captured in a declarative fashion, while other parts are more suitable to be described procedurally. In this paper, we present a technique for discovering from an event log a so-called hybrid process model. A hybrid process model is hierarchical, where each of its sub-processes may be specified in a declarative or procedural fashion. We have implemented the proposed approach as a plug-in of the ProM platform. To evaluate the approach, we used our plug-in to mine a real-life log from a financial context.

Fabrizio Maria Maggi, Tijs Slaats, Hajo A. Reijers
Declarative Process Mining: Reducing Discovered Models Complexity by Pre-Processing Event Logs

The discovery of declarative process models by mining event logs aims to represent flexible or unstructured processes, making them visible to the business and improving their manageability. Although promising, the declarative perspective may still produce models that are hard to understand, both due to their size and due to the high number of restrictions on the process activities. This work presents an approach to reduce declarative model complexity by aggregating activities according to inclusion and hierarchy semantic relations. The approach was evaluated through a case study with an artificial event log, and its results showed a complexity reduction in the resulting hierarchical model.

Pedro H. Piccoli Richetti, Fernanda Araujo Baião, Flávia Maria Santoro
SECPI: Searching for Explanations for Clustered Process Instances

This paper presents SECPI (Search for Explanations of Clusters of Process Instances), a technique that assists users with understanding a trace clustering solution by finding a minimal set of control-flow characteristics whose absence would prevent a process instance from remaining in its current cluster. As such, the shortcoming of current trace clustering techniques regarding the provision of insight into the computation of a particular partitioning is addressed by learning concise individual rules that clearly explain why a certain instance is part of a cluster.

Jochen De Weerdt, Seppe vanden Broucke
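
The counterfactual flavour of SECPI can be sketched as follows: a classifier that mimics the clustering is probed by greedily switching off an instance's control-flow features until it is no longer assigned to its cluster. The paper derives such minimal sets from SVMs; the greedy probe, the toy data, and the feature names below are illustrative assumptions.

```python
# Sketch of the SECPI idea: explain why a process instance sits in its
# cluster by finding a small set of control-flow features whose absence
# would move it out of that cluster. A classifier mimicking the clustering
# is probed greedily here; treat this as an illustration only.
from sklearn.svm import SVC

features = ["has_rework", "skips_check", "uses_fast_track"]
X = [[1, 0, 0], [1, 1, 0], [0, 0, 1], [0, 1, 0]]   # binary feature vectors per instance
clusters = [0, 0, 1, 1]

clf = SVC(kernel="linear").fit(X, clusters)

def explain(instance, cluster):
    """Greedily drop active features until the predicted cluster changes."""
    current, dropped = list(instance), []
    for i, active in enumerate(instance):
        if not active:
            continue
        current[i] = 0
        dropped.append(features[i])
        if clf.predict([current])[0] != cluster:
            return dropped           # small set explaining cluster membership
    return dropped

print(explain([1, 1, 0], cluster=0))
# ['has_rework']: without rework behaviour the instance would leave cluster 0
```
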
Business Monitoring Framework for Process Discovery with Real-Life Logs

Business analysis with processes extracted from real-life system logs has recently become important for improving business performance. Since business users want to see the current situation of the business through visualized process models from various perspectives, we need an analysis platform that supports changes of viewpoint. We have developed a runtime monitoring framework for log analysis. Our framework can simultaneously extract process instances and derive appropriate metrics in a single pass through the logs. We tested our proposed framework with a real-life system log. The results for twenty days of data show synthesized process models along with an analysis axis; they were synthesized from the metric-annotated process instances generated by our framework.

Mari Abe, Michiharu Kudo
Predictive Task Monitoring for Business Processes

Information sources providing real-time status of physical objects have drastically increased in recent times. So far, research in business process monitoring has mainly focused on checking the completion of tasks. However, the availability of real-time information allows for a more detailed tracking of individual business tasks. This paper describes a framework for controlling the safe execution of tasks and signalling possible misbehaviours at runtime. It outlines a real use case on smart logistics and the preliminary results of its application.

Cristina Cabanillas, Claudio Di Ciccio, Jan Mendling, Anne Baumgrass
Backmatter
Metadata
Title
Business Process Management
Edited by
Shazia Sadiq
Pnina Soffer
Hagen Völzer
Copyright year
2014
Publisher
Springer International Publishing
Electronic ISBN
978-3-319-10172-9
Print ISBN
978-3-319-10171-2
DOI
https://doi.org/10.1007/978-3-319-10172-9