
2016 | Book

Advanced Information Systems Engineering

28th International Conference, CAiSE 2016, Ljubljana, Slovenia, June 13-17, 2016. Proceedings


About this Book

This book constitutes the proceedings of the 28th International Conference on Advanced Information Systems Engineering, CAiSE 2016, held in Ljubljana, Slovenia, in June 2016.

The 35 papers presented in this volume were carefully reviewed and selected from 211 submissions.

The program included the following paper sessions:

Collaboration; Business Process Modeling; Innovation, Gamification; Mining and Business Process Performance; Requirements Engineering; Process Mining; Conceptual Modeling; Mining and Decision Support; Cloud and Services; Variability and Configuration; Open Source Software; and Business Process Management.

Table of Contents

Frontmatter

Collaboration

Frontmatter
View-Based Near Real-Time Collaborative Modeling for Information Systems Engineering

Conceptual modeling is a creative, social process that is driven by the views of involved stakeholders. However, few systems offer view-based conceptual modeling on the Web using lock-free synchronous collaborative editing mechanisms. Based on a (meta-)modeling framework that supports near real-time collaborative modeling and metamodeling in the Web browser, this paper proposes an exploratory approach for collaboratively defining views and viewpoints on conceptual models. Viewpoints are defined on the metamodeling layer and instantiated as views within a model editor instance. The approach was successfully used for various conceptual modeling languages and it is based on user requirements for model-based creation and generation of next-generation community applications. An end-user evaluation showed the usefulness, usability and limitations of view-based collaborative modeling. We expect that Web-based collaborative modeling powered by view extensions will pave the way for a new generation of collaboratively and socially engineered information systems.

Petru Nicolaescu, Mario Rosenstengel, Michael Derntl, Ralf Klamma, Matthias Jarke
A Framework for Model-Driven Execution of Collaboration Structures

Human interaction-intensive process environments need collaboration support beyond traditional BPM approaches. Process primitives are ill suited to model and execute collaborations for shared artifact editing, chatting, or voting. To this end, this paper introduces a framework for specifying and executing such collaboration structures. The framework explicitly supports the required human autonomy in shaping the collaboration structure. We demonstrate the application of our framework to an exemplary collaboration-intensive hiring process.

Christoph Mayr-Dorn, Schahram Dustdar
Social Business Intelligence in Action

Social Business Intelligence (SBI) relies on user-generated content to let decision-makers analyze their business in the light of environmental trends. SBI projects come in a variety of shapes, with different demands. Hence, finding the right cost-benefit compromise, depending on the project goals, time horizon, and available resources, may be hard for the designer. In this paper we discuss the main factors that impact this compromise, with the aim of providing a guideline to the design team. First we list the main architectural options and their methodological impact. Then we discuss a case study focused on an SBI project in the area of politics, aimed at assessing the effectiveness and efficiency of these options and their methodological sustainability.

Matteo Francia, Enrico Gallinucci, Matteo Golfarelli, Stefano Rizzi

Business Process Modeling

Frontmatter
To Integrate or Not to Integrate – The Business Rules Question

Due to complex and fragmented enterprise systems and modelling landscapes, organizations struggle to cope with change propagation, compliance management and interoperability. Two aspects related to the above are business process models and business rules, both of which have a role to play in the enterprise setting. Redundancy and inconsistency between business rules and business process models are prevalent, highlighting the need to consider integrated modelling of the two. An important prerequisite of achieving integrated modelling is the ability to decide whether a rule should be integrated into a business process model or modelled independently. However, in the current literature, little guidance can be found that can help modellers to make such a decision. Accordingly, our aim is to empirically test factors that affect such decisions. In this paper, we describe 12 such factors and present the results of an empirical evaluation of their importance. Through our study, we identify seven factors that can provide guidance for integrated modelling.

Wei Wang, Marta Indulska, Shazia Sadiq
Micro-Benchmarking BPMN 2.0 Workflow Management Systems with Workflow Patterns

Although Workflow Management Systems (WfMSs) are a key component in workflow technology, research work for assessing and comparing their performance is limited. This work proposes the first micro-benchmark for WfMSs that can execute BPMN 2.0 workflows. To this end, we focus on studying the performance impact of well-known workflow patterns expressed in BPMN 2.0 with respect to three open source WfMSs. We executed all the experiments under a reliable environment and produced a set of meaningful metrics. This paper contributes to the area of workflow technology by defining building blocks for more complex BPMN 2.0 WfMS benchmarks. The results have shown bottlenecks on architectural design decisions, resource utilization, and limits on the load a WfMS can sustain, especially for the cases of complex and parallel structures. Experiments on a mix of workflow patterns indicated that there are no unexpected performance side effects when executing different workflow patterns concurrently, although the duration of the individual workflows that comprised the mix was increased.

Marigianna Skouradaki, Vincenzo Ferme, Cesare Pautasso, Frank Leymann, André van Hoorn
Improving Understandability of Declarative Process Models by Revealing Hidden Dependencies

Declarative process models have become a mature alternative to procedural ones. Instead of focusing on what has to happen, they follow an outside-in approach based on a rule base containing different types of constraints. The models are well capable of representing flexible behavior, as everything that is not forbidden by the constraints in the model is possible during execution. These models, however, are more difficult to comprehend and require a higher mental effort of both the modeler and the reader. Since constraints can be added freely to the model, the impact of combining them is often overlooked. This is often referred to as hidden dependencies. This paper proposes a methodology to make these dependencies explicit for the declarative process modeling language Declare by considering a Declare model as a graph and relying on the constraints’ characteristics. Moreover, this paper also contributes by empirically confirming that a tool that can visualize hidden dependency information on top of a Declare model has a significant positive impact on the understandability of Declare models.

Johannes De Smedt, Jochen De Weerdt, Estefanía Serral, Jan Vanthienen

Innovation, Gamification

Frontmatter
The Authentication Game - Secure User Authentication by Gamification?

Knowledge-based authentication with username and password is still the predominant authentication method in practice. As the number of online accounts increases, users need to remember more and more passwords, leading to the choice of more memorable but insecure passwords. It is therefore important to take the users’ behavior into account to improve IT security. While gamification has been proposed as a concept to influence users’ behavior in various domains, it has not yet been applied to user authentication methods. Therefore, in this paper an approach for a gamified authentication method is presented. Using a prototype implementation, a qualitative evaluation in an empirical study is performed. The results illustrate the general feasibility of the proposed approach.

Frank Ebbers, Philipp Brune
Improving the Length of Customer Relationships on the Mobile Computer Game Business

Long-lasting customer relationships have proven to be beneficial to the success of a company. The computer game business has traditionally been about developing products and then selling them to customers, but today’s games apply different marketing strategies, such as the free-to-play model, which changes the role of the customer. The Existence, Relatedness and Growth (ERG) theory provides a model to assess how the customer could be understood, and why game companies should implement features that support the growth of the customers’ presence and make them a critical component to the developer. In this article we compare five game companies to find out how they understand their customers, how they build their relationships, and how they let customers grow their online identity. The results show that growth is still a minor concern, but the companies have plans to improve this aspect.

Erno Vanhala, Jussi Kasurinen
ADInnov: An Intentional Method to Instil Innovation in Socio-Technical Ecosystems

This paper presents an intentional modelling method aimed at supporting the analysis, diagnosis, and innovation of socio-technical ecosystems. Understanding and improving socio-technical ecosystems is indeed still a major challenge in the information systems domain. Current information systems methods do not consider the particularities of socio-technical ecosystems, where breakthrough innovation is not always possible. The proposed method, called ADInnov, aims at guiding a continuous innovation cycle in socio-technical ecosystems by focusing on the resolution of their blocking points. It combines different user-centred techniques such as interviews, serious games, and storyboarding. The method, represented with the MAP formalism, results from the lessons learned in a healthcare domain project (InnoServ). Through an empirical study, project managers evaluated the method’s appropriateness.

Mario Cortes-Cornax, Agnès Front, Dominique Rieu, Christine Verdier, Fabrice Forest

Mining and Business Process Performance

Frontmatter
A Visual Approach to Spot Statistically-Significant Differences in Event Logs Based on Process Metrics

This paper addresses the problem of comparing different variants of the same process. We aim to detect relevant differences between processes based on what was recorded in event logs. We use transition systems to model behavior and to highlight differences. Transition systems are annotated with measurements, used to compare the behavior in the variants. The results are visualized as transition systems, which are colored to pinpoint the significant differences. The approach has been implemented in ProM, and the implementation is publicly available. We validated our approach by performing experiments using real-life event data. The results show how our technique is able to detect relevant differences undetected by previous approaches while it avoids detecting insignificant differences.

Alfredo Bolt, Massimiliano de Leoni, Wil M. P. van der Aalst
Business Process Performance Mining with Staged Process Flows

Existing business process performance mining tools offer various summary views of the performance of a process over a given period of time, allowing analysts to identify bottlenecks and their performance effects. However, these tools are not designed to help analysts understand how bottlenecks form and dissolve over time nor how the formation and dissolution of bottlenecks – and associated fluctuations in demand and capacity – affect the overall process performance. This paper presents an approach to analyze the evolution of process performance via a notion of Staged Process Flow (SPF). An SPF abstracts a business process as a series of queues corresponding to stages. The paper defines a number of stage characteristics and visualizations that collectively allow process performance evolution to be analyzed from multiple perspectives. It demonstrates the advantages of the SPF approach over state-of-the-art process performance mining tools using a real-life event log of a Dutch bank.

Hoang Nguyen, Marlon Dumas, Arthur H. M. ter Hofstede, Marcello La Rosa, Fabrizio Maria Maggi
Minimizing Overprocessing Waste in Business Processes via Predictive Activity Ordering

Overprocessing waste occurs in a business process when effort is spent in a way that does not add value to the customer nor to the business. Previous studies have identified a recurrent overprocessing pattern in business processes with so-called “knockout checks”, meaning activities that classify a case into “accepted” or “rejected”, such that if the case is accepted it proceeds forward, while if rejected, it is cancelled and all work performed in the case is considered unnecessary. Thus, when a knockout check rejects a case, the effort spent in other (previous) checks becomes overprocessing waste. Traditional process redesign methods propose to order knockout checks according to their mean effort and rejection rate. This paper presents a more fine-grained approach where knockout checks are ordered at runtime based on predictive machine learning models. Experiments on two real-life processes show that this predictive approach outperforms traditional methods while incurring minimal runtime overhead.

Ilya Verenich, Marlon Dumas, Marcello La Rosa, Fabrizio Maria Maggi, Chiara Di Francescomarino
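The traditional redesign heuristic mentioned in the abstract, ordering knockout checks by their mean effort and rejection rate, can be sketched as follows (a minimal illustration with made-up check names and numbers, not the authors' predictive approach):

```python
# Illustrative sketch of the *traditional* knockout-check ordering
# heuristic: run checks with the highest rejection rate per unit of
# effort first, so cheap checks that often reject knock out cases
# before expensive checks are performed.
def order_knockout_checks(checks):
    """checks: list of (name, mean_effort, rejection_rate) tuples."""
    return sorted(checks, key=lambda c: c[2] / c[1], reverse=True)

# Hypothetical checks for a loan-application style process.
checks = [
    ("credit_check", 10.0, 0.05),   # expensive, rarely rejects
    ("id_check", 1.0, 0.20),        # cheap, often rejects
    ("address_check", 2.0, 0.10),
]
ordered = order_knockout_checks(checks)
```

The paper's contribution replaces this static ordering with a per-case ordering decided at runtime by machine learning models.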

Requirements Engineering

Frontmatter
Formalizing and Modeling Enterprise Architecture (EA) Principles with Goal-Oriented Requirements Language (GRL)

Enterprise Architecture (EA) principles are normally written in natural language, which makes them informal and hard to evaluate, and complicates tracing them to the actual goals of the organization. In this paper, we present a set of requirements for improving the clarity of definitions and develop a framework to formalize EA principles with a semi-formal language, namely the Goal-oriented Requirements Language (GRL). We introduce an extension of the language with the required constructs and establish modeling rules and constraints. This allows us to automatically reason about the soundness, completeness and consistency of a set of EA principles. We demonstrate our methodology with a case study from a governmental organization. Moreover, we extend an Eclipse-based tool to support our framework.

Diana Marosin, Marc van Zee, Sepideh Ghanavati
Engineering Requirements with Desiree: An Empirical Evaluation

The requirements elicited from stakeholders suffer from various afflictions, including informality, vagueness, incompleteness, ambiguity, inconsistencies, and more. It is the task of the requirements engineering process to derive from these a formal specification that truly captures stakeholder needs. The Desiree requirements engineering framework supports a rich collection of refinement operators through which an engineer can iteratively transform stakeholder requirements into a specification. The framework includes an ontology, a formal representation for requirements, as well as a tool and a systematic process for conducting requirements engineering. This paper reports the results of a series of empirical studies intended to evaluate the effectiveness of Desiree. The studies consist of three controlled experiments, where students were invited to conduct requirements analysis using textbook techniques or our framework. The results of the experiments offer strong evidence that with sufficient training, our framework indeed helps users conduct more effective requirements analysis.

Feng-Lin Li, Jennifer Horkoff, Lin Liu, Alex Borgida, Giancarlo Guizzardi, John Mylopoulos
A Modelling Language for Transparency Requirements in Business Information Systems

Transparency is a requirement of businesses and their information systems. It is typically linked to positive ethical and economic attributes, such as trust and accountability. Despite its importance, transparency is often studied as a secondary concept and viewed through the lenses of adjacent concepts such as security, privacy and regulatory requirements. This has led to a reduced ability to manage transparency and deal with its peculiarities as a first-class requirement. Ad-hoc introduction of transparency may have adverse effects, such as information overload and reduced collaboration. We propose a modelling language for capturing and analysing transparency requirements amongst stakeholders in a business information system. Our language is based on four reference models which are, in turn, based on our extensive multi-disciplinary analysis of the literature on transparency. As a proof of concept, we apply our modelling language and the analysis enabled by it on a case study of marking exam papers.

Mahmood Hosseini, Alimohammad Shahri, Keith Phalp, Raian Ali

Process Mining

Frontmatter
The ROAD from Sensor Data to Process Instances via Interaction Mining

Process mining is a rapidly developing field that aims at automated modeling of business processes based on data coming from event logs. In recent years, advances in tracking technologies, e.g., Real-Time Locating Systems (RTLS), put forward the ability to log business process events as location sensor data. To apply process mining techniques to such sensor data, one needs to overcome an abstraction gap, because location data recordings do not relate to the process directly. In this work, we solve the problem of mapping sensor data to event logs based on process knowledge. Specifically, we propose interactions as an intermediate knowledge layer between the sensor data and the event log. We solve the mapping problem via optimal matching between interactions and process instances. An empirical evaluation of our approach shows its feasibility and provides insights into the relation between ambiguities and deviations from process knowledge, and accuracy of the resulting event log.

Arik Senderovich, Andreas Rogge-Solti, Avigdor Gal, Jan Mendling, Avishai Mandelbaum
Correlating Unlabeled Events from Cyclic Business Processes Execution

Event logs are invaluable sources of information about the actual execution of processes. Most process mining and post-mortem analysis techniques depend on such logs, and all of them require a case ID to correlate the events. Real-life logs, however, rarely originate from a centrally orchestrated process execution; the case ID is therefore missing, and such logs are known as unlabeled logs. Correlating unlabeled events is a challenging problem that has received little attention in the literature. Moreover, the few approaches addressing this challenge support acyclic business processes only. In this paper, we build on our previous work and propose an approach to deduce the case ID for unlabeled event logs produced from cyclic business processes. As a result, a set of ranked labeled logs is generated. We evaluate our approach using real-life logs.

Dina Bayomie, Ahmed Awad, Ehab Ezat
Efficient and Customisable Declarative Process Mining with SQL

Flexible business processes can often be modelled more easily using a declarative rather than a procedural modelling approach. Process mining aims at automating the discovery of business process models. Existing declarative process mining approaches either suffer from performance issues with real-life event logs or limit their expressiveness to a specific set of constraint types. Lately, RelationalXES, a relational database architecture for storing event log data, has been introduced. In this paper, we introduce a mining approach that directly works on relational event data by querying the log with conventional SQL. By leveraging database performance technology, the mining procedure is fast without limiting itself to detecting certain control-flow constraints. Queries can be customised and cover process perspectives beyond control flow, e.g., organisational aspects. We evaluated the performance and the capabilities of our approach with regard to several real-life event logs.

Stefan Schönig, Andreas Rogge-Solti, Cristina Cabanillas, Stefan Jablonski, Jan Mendling
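The general idea of querying relational event data with conventional SQL can be illustrated as follows (a simplified sketch with a hypothetical log schema and a single response-style constraint, not RelationalXES or the authors' actual queries):

```python
# Sketch: count cases violating a "response(A, B)" constraint, i.e.
# cases where activity A occurs but is never followed by B, using
# plain SQL over a relational event log.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (case_id TEXT, activity TEXT, pos INTEGER)")
conn.executemany(
    "INSERT INTO log VALUES (?, ?, ?)",
    [("c1", "A", 1), ("c1", "B", 2),   # A followed by B: satisfied
     ("c2", "A", 1), ("c2", "C", 2)],  # A never followed by B: violated
)
violations = conn.execute(
    """SELECT COUNT(DISTINCT a.case_id)
       FROM log a
       WHERE a.activity = 'A'
         AND NOT EXISTS (SELECT 1 FROM log b
                         WHERE b.case_id = a.case_id
                           AND b.activity = 'B'
                           AND b.pos > a.pos)"""
).fetchone()[0]
```

Because the constraint is expressed as an ordinary query, it can be customised freely, e.g. to join in resource or organisational attributes.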

Conceptual Modeling

Frontmatter
Using a Well-Founded Multi-level Theory to Support the Analysis and Representation of the Powertype Pattern in Conceptual Modeling

Multi-level conceptual modeling addresses the representation of subject domains dealing with multiple classification levels. In such domains, the occurrence of situations in which instances of a type are specializations of another type is recurrent. This recurrent phenomenon is known in the conceptual modeling community as the powertype pattern. The relevance of the powertype pattern has led to its adoption in many important modeling initiatives, including the UML. To address the challenge of multi-level modeling, we have proposed an axiomatic well-founded theory called MLT. In this paper, we demonstrate how MLT can be used as a reference theory for capturing a number of nuances related to the modeling of the powertype pattern in conceptual modeling. Moreover, we show how this theory can be used to analyze, expose limitations and redesign the UML support for modeling this pattern.

Victorio Albani Carvalho, João Paulo A. Almeida, Giancarlo Guizzardi
Mutation Operators for UML Class Diagrams

Mutation Testing is a well-established technique for assessing the quality of test cases by checking how well they detect faults injected into a software artefact (the mutant). Using this technique, the most critical activity is the adequate design of mutation operators so that they reflect typical defects of the artefact under test. This paper presents the design of a set of mutation operators for Conceptual Schemas (CS) based on UML Class Diagrams (CD). The operators are defined in accordance with an existing defect classification for UML CS and relevant elements identified from the UML CD meta-model. The operators are subsequently used to generate first-order mutants for a CS under test. Finally, in order to analyse the usefulness of the mutation operators, we measure some of their basic characteristics with three different CSs under test.

Maria Fernanda Granda, Nelly Condori-Fernández, Tanja E. J. Vos, Oscar Pastor
Automated Clustering of Metamodel Repositories

Over the last years, several model repositories have been proposed in response to the need of the MDE community for advanced systems supporting the reuse of modeling artifacts. Modelers can interact with MDE repositories with different intents, ranging from merely browsing the repository to searching for specific artifacts satisfying precise requirements. The organization and browsing facilities provided by current repositories are limited, since they do not produce structured overviews of the contained artifacts, and the categorization mechanisms (if any) are based on manual activities. When dealing with large numbers of modeling artifacts, such limitations increase the effort for managing and reusing artifacts stored in model repositories. By focusing on metamodel repositories, in this paper we propose the application of clustering techniques to automatically organize stored metamodels and to provide users with overviews of the application domains covered by the available metamodels. The approach has been implemented in the MDEForge repository.

Francesco Basciani, Juri Di Rocco, Davide Di Ruscio, Ludovico Iovino, Alfonso Pierantonio

Mining and Decision Support

Frontmatter
Predictive Business Process Monitoring Framework with Hyperparameter Optimization

Predictive business process monitoring exploits event logs to predict how ongoing (uncompleted) traces will unfold up to their completion. A predictive process monitoring framework collects a range of techniques that allow users to get accurate predictions about the achievement of a goal for a given ongoing trace. These techniques can be combined and their parameters configured in different framework instances. Unfortunately, a unique framework instance that is general enough to outperform others for every dataset, goal or type of prediction is elusive. Thus, the selection and configuration of a framework instance needs to be done for a given dataset. This paper presents a predictive process monitoring framework armed with a hyperparameter optimization method to select a suitable framework instance for a given dataset.

Chiara Di Francescomarino, Marlon Dumas, Marco Federici, Chiara Ghidini, Fabrizio Maria Maggi, Williams Rizzi
Decision Mining Revisited - Discovering Overlapping Rules

Decision mining enriches process models with rules underlying decisions in processes using historical process execution data. Choices between multiple activities are specified through rules defined over process data. Existing decision mining methods focus on discovering mutually exclusive rules, which only allow one out of multiple activities to be performed. These methods assume that decision making is fully deterministic, and that all factors influencing decisions are recorded. In case the underlying decision rules are overlapping due to non-determinism or incomplete information, the rules returned by existing methods do not fit the recorded data well. This paper proposes a new technique to discover overlapping decision rules, which fit the recorded data better at the expense of precision, using decision tree learning techniques. An evaluation of the method on two real-life data sets confirms this trade-off. Moreover, it shows that the method returns rules with better fitness and precision under certain conditions.

Felix Mannhardt, Massimiliano de Leoni, Hajo A. Reijers, Wil M. P. van der Aalst
An Adaptability-Driven Model and Tool for Analysis of Service Profitability

The profitability of adopting Software-as-a-Service (SaaS) solutions for existing applications is currently analyzed mostly in an informal way. Informal analysis is unreliable because of the many conflicting factors that affect the costs and benefits of offering applications on the cloud. We propose a quantitative economic model for evaluating the profitability of migrating to SaaS that enables potential service providers to evaluate the costs and benefits of various migration strategies and choices of target service architectures. In previous work, we presented a rudimentary conceptual SaaS economic model enumerating factors related to service profitability and defining qualitative relations among them. The quantitative economic model presented in this paper extends the conceptual model with equations that quantify these relations, enabling more precise reasoning about the profitability of various SaaS implementation strategies and helping potential service providers to select the most suitable strategy for their business situation.

Ouh Eng Lieh, Stan Jarzabek

Cloud and Services

Frontmatter
Optimizing Monitorability of Multi-cloud Applications

When adopting a multi-cloud strategy, selecting the cloud providers where VMs are deployed is a crucial task for ensuring the good behaviour of the developed application. This selection is usually based on general information about the performance and capabilities offered by the cloud providers. Less attention has been paid to monitoring services, although it is fundamental for the application developer to understand how the application behaves while it is running. In this paper we propose an approach based on a multi-objective mixed integer linear optimization problem for supporting the selection of the cloud providers able to satisfy constraints on monitoring dimensions associated with VMs. The balance between the quality of the monitored data and the cost of obtaining these data is considered, as well as the possibility for the cloud provider to enrich the set of monitored metrics through data analysis.

Edoardo Fadda, Pierluigi Plebani, Monica Vitali
CloudMap: A Visual Notation for Representing and Managing Cloud Resources

With the vast proliferation of cloud computing technologies, DevOps practitioners are inevitably faced with managing large amounts of complex cloud resource configurations. This involves being able to proficiently understand and analyze cloud resource attributes and relationships, and to make decisions on demand. However, the majority of cloud tools encode resource descriptions and monitoring and control scripts in tedious textual formats. This makes it complex and overwhelming for DevOps practitioners to manually read configurations and iteratively build a mental representation of them, especially when a large number of cloud resources is involved. To alleviate these frustrations we propose a model-driven notation to visually represent, monitor and control cloud resource configurations, managed underneath by existing cloud resource orchestration tools such as Docker. We propose a mindmap-based interface and a set of visualization patterns. We have grounded our design decisions in an extensive user study and validate our work through experimentation with real-world scenarios. The results show significant productivity and efficiency improvements.

Denis Weerasiri, Moshe Chai Barukh, Boualem Benatallah, Cao Jian
Keep Calm and Wait for the Spike! Insights on the Evolution of Amazon Services

Web services are black-box dependency magnets. Hence, studying how they evolve is both important and challenging. In this paper, we focus on one of the most successful stories of the service-oriented paradigm in industry, i.e., the Amazon services. We perform a principled empirical study that detects evolution patterns and regularities, based on Lehman’s laws of software evolution. Our findings indicate that service evolution comes with spikes of change, followed by calm periods where the service is internally enhanced. Although spikes come with unpredictable volume, developers can count on the near certainty of the calm periods following them to allow their absorption. As deletions rarely occur, both the complexity and the exported functionality of a service increase over time (in fact, predictably). Based on the above findings, we provide recommendations that can be used by the developers of Web service applications for service selection and application maintenance.

Apostolos V. Zarras, Panos Vassiliadis, Ioannis Dinos

Variability and Configuration

Frontmatter
Comprehensive Variability Analysis of Requirements and Testing Artifacts

Analyzing the variability of software artifacts is important for increasing reuse and improving the development of similar software products, as is the case in the area of Software Product Line Engineering (SPLE). Current approaches suggest analyzing the variability of certain types of artifacts, most notably requirements. However, as the specification of requirements may be incomplete or generalized, capturing the differences between the intended software behaviors may be limited, neglecting essential parts such as behavior preconditions. Thus, in this paper we suggest utilizing testing artifacts in order to comprehensively analyze the variability of the corresponding requirements. The suggested approach, named SOVA R-TC, is based on Bunge’s ontological model and uses the information stored and managed in Application Lifecycle Management (ALM) environments. It extracts the behavior transformations from the requirements and the test cases and presents them in the form of initial states (preconditions) and final states (post-conditions or expected results). It further compares the behavior transformations of different software products and proposes how to analyze their variability based on cross-phase artifacts.

Michal Steinberger, Iris Reinhartz-Berger
Comprehensibility of Variability in Model Fragments for Product Configuration

The ability to manage variability in software has become crucial to coping with the complexity and variety of systems. To this end, a comprehensible representation of variability is important. Nevertheless, previous works have reported difficulties in understanding variability in an industrial environment. Specifically, domain experts had difficulty understanding variability in model fragments when producing the software for their products. Hence, the aim of this paper is to further investigate these difficulties by conducting an experiment in which participants deal with variability in order to achieve their desired product configurations. Our results offer new insights into product configuration that suggest next steps for improving general variability modeling approaches, thereby promoting the adoption of these approaches in industry.

Jorge Echeverría, Francisca Pérez, Carlos Cetina, Óscar Pastor
Static Analysis of Dynamic Database Usage in Java Systems

Understanding the links between application programs and their database is useful in various contexts such as migrating information systems towards a new database platform, evolving the database schema, or assessing the overall system quality. In the case of Java systems, identifying which portion of the source code accesses which portion of the database may prove challenging. Indeed, Java programs typically access their database in a dynamic way. The queries they send to the database server are built at runtime, through string concatenations or Object-Relational Mapping frameworks like Hibernate and JPA. This paper presents a static analysis approach to recovering program-database links, specifically designed for Java systems. The approach allows developers to automatically identify the source code locations accessing given database tables and columns. It focuses on the combined analysis of JDBC, Hibernate and JPA invocations. We report on the use of our approach to analyse three real-life Java systems.

Loup Meurice, Csaba Nagy, Anthony Cleve
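To see why locating table accesses in dynamically built queries is hard, consider the crudest possible approximation: scanning string literals in Java source for table names. The sketch below is a deliberately naive stand-in for the paper's far more precise static analysis (the Java snippet and table names are invented); it already misses any query assembled from fragments that never mention the table in one literal:

```python
import re

def find_table_accesses(java_source, tables):
    """Report (line number, table) pairs where a string literal in the
    Java source mentions one of the given database tables."""
    hits = []
    for lineno, line in enumerate(java_source.splitlines(), start=1):
        for literal in re.findall(r'"([^"]*)"', line):
            for table in tables:
                if re.search(r'\b' + re.escape(table) + r'\b', literal, re.IGNORECASE):
                    hits.append((lineno, table))
    return hits

snippet = '''String q = "SELECT name FROM customer WHERE id = " + id;
stmt.executeQuery(q);
String q2 = "UPDATE orders SET status = ?";'''
print(find_table_accesses(snippet, ["customer", "orders"]))
# [(1, 'customer'), (3, 'orders')]
```

A real analysis must instead track how query strings flow through concatenations, variables, and ORM calls before reaching an execution point such as `executeQuery`.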

Open Source Software

Frontmatter
A Longitudinal Study of Community-Oriented Open Source Software Development

End-users are often argued to be the source of innovation in Open Source Software (OSS). However, most existing empirical studies of OSS projects have been restricted to developer sub-communities. In this paper, we address the question of whether, and under which conditions, requirements and ideas from end-users indeed influence OSS development processes. We present an approach for automated requirements-elicitation process discovery in OSS communities. The empirical basis comprises three large-scale interdisciplinary OSS projects in bioinformatics, focusing on communication in the mailing lists and on source code histories over ten years. Our study results in preliminary guidelines for the organization of community-oriented software development.

Kateryna Neulinger, Anna Hannemann, Ralf Klamma, Matthias Jarke
OSSAP – A Situational Method for Defining Open Source Software Adoption Processes

Organizations are increasingly becoming Open Source Software (OSS) adopters, either as a result of a strategic decision or simply as a consequence of technological choices. The strategy followed to adopt OSS shapes organizations’ businesses; therefore, methods to assess this impact are needed. In this paper, we propose OSSAP, a method for defining OSS Adoption business Processes, built using a Situational Method Engineering (SME) approach. We use SME to combine two well-known modelling methods, namely goal-oriented models (using i*) and business process models (using BPMN), with a pre-existing catalogue of goal-oriented OSS adoption strategy models. First, we define a repository of reusable method chunks, including guidelines for applying them. Then, we define OSSAP as a composition of those method chunks that helps organizations improve their business processes so as to integrate the best-fitting OSS adoption strategy. We illustrate the method with an example of its application in a telecommunications company.

Lidia López, Dolors Costal, Jolita Ralyté, Xavier Franch, Lucía Méndez, Maria Carmela Annosi

Business Process Management

Frontmatter
Narrowing the Business-IT Gap in Process Performance Measurement

To determine whether strategic goals are met, organizations must monitor how their business processes perform. Process Performance Indicators (PPIs) are used to specify relevant performance requirements. The formulation of PPIs is typically a managerial concern. Therefore, considerable effort has to be invested to relate PPIs, described by management, to the exact operational and technical characteristics of business processes. This work presents an approach to support this task, which would otherwise be a laborious and time-consuming endeavor. The presented approach can automatically establish links between PPIs, as formulated in natural language, and operational details, as described in process models. To do so, we employ machine learning and natural language processing techniques. A quantitative evaluation on a collection of 173 real-world PPIs demonstrates that the proposed approach works well.

Han van der Aa, Adela del-Río-Ortega, Manuel Resinas, Henrik Leopold, Antonio Ruiz-Cortés, Jan Mendling, Hajo A. Reijers
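The core linking task can be illustrated with a toy lexical baseline: match a natural-language PPI against process-model task labels by token overlap. This is far simpler than the machine learning and NLP pipeline the paper employs, and the indicator and labels below are invented:

```python
def jaccard(a, b):
    """Token-set Jaccard similarity between two phrases."""
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b)

def link_ppi(ppi, task_labels):
    """Return the process-model task label most similar to the PPI text."""
    return max(task_labels, key=lambda label: jaccard(ppi, label))

tasks = ["Register claim", "Assess claim severity", "Pay out claim"]
ppi = "Average time to assess the severity of a claim"
print(link_ppi(ppi, tasks))  # the label sharing the most tokens wins
```

A lexical baseline like this fails whenever managers and modelers use different vocabulary, which is precisely the business-IT gap the paper targets.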
A Configurable Resource Allocation for Multi-tenant Process Development in the Cloud

Cloud computing has become an important infrastructure for outsourcing service-based business processes in a multi-tenant fashion. Configurable process models enable the sharing of a reference process among different tenants, which can be customized according to specific needs. While concepts for specifying the control flow of such processes are well understood, there is a lack of support for cloud-specific resource configuration, where different allocation alternatives need to be explicitly defined. In this paper, we address this research gap by extending configurable process models with the required configurable cloud resource allocation. Our proposal allows different tenants to customize the selection of the needed resources taking into account two important properties: elasticity and shareability. Our prototypical implementation demonstrates the feasibility of the approach, and the results of our experiments highlight its effectiveness.

Emna Hachicha, Nour Assy, Walid Gaaloul, Jan Mendling
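Tenant-specific selection over a shared resource catalogue can be pictured as filtering on the two properties the abstract names, elasticity and shareability. The boolean encoding and the catalogue entries below are a simplifying assumption for illustration, not the paper's actual configuration language:

```python
def configure_resources(catalog, needs):
    """Select cloud resources satisfying a tenant's configuration choices.
    catalog entries: (name, elastic: bool, shareable: bool); a True in
    `needs` means the property is required, False means it is optional."""
    return [name for name, elastic, shareable in catalog
            if elastic >= needs["elastic"] and shareable >= needs["shareable"]]

catalog = [("vm-small", True, True), ("vm-dedicated", True, False),
           ("disk-fixed", False, True)]
print(configure_resources(catalog, {"elastic": True, "shareable": False}))
# ['vm-small', 'vm-dedicated']
```

Here a tenant requiring elastic resources but indifferent to sharing obtains both VM variants, while the non-elastic disk is excluded.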
Context-Aware Analysis of Past Process Executions to Aid Resource Allocation Decisions

The allocation of resources to process tasks can have a significant impact on the performance (such as cost and time) of those tasks, and hence of the overall process. Past resource allocation decisions, when correlated with process execution histories annotated with quality-of-service (or performance) measures, can be a rich source of knowledge about the best resource allocation decisions. The optimality of resource allocation decisions is determined not by the process instance alone, but also by the context in which these instances are executed. This phenomenon turns out to be even more compelling when the resources in question are human resources. Human workers with the same organizational role and capabilities can exhibit heterogeneous behavior depending on their operational context. In this work, we propose an approach to supporting resource allocation decisions by extracting information about the process context and process performance from past process executions. The information extracted is analyzed using exploratory data mining techniques to discover resource allocation decisions. The knowledge thus acquired can be used to guide resource allocations in new process instances. Experiments performed on synthetic and real-world execution logs demonstrate the effectiveness of the proposed approach.

Renuka Sindhgatta, Aditya Ghose, Hoa Khanh Dam
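The claim that workers with the same role behave differently across contexts can be illustrated with a minimal log analysis: group past executions by (resource, context) and pick the resource with the best mean performance for the context at hand. The log schema and data below are invented, and the aggregation is a toy stand-in for the paper's exploratory data mining:

```python
from collections import defaultdict

def best_resource(log, task, context):
    """Pick the resource with the lowest mean duration for a task in a given
    context, based on past executions.
    log: list of (task, resource, context, duration) tuples."""
    durations = defaultdict(list)
    for t, r, c, d in log:
        if t == task and c == context:
            durations[r].append(d)
    means = {r: sum(ds) / len(ds) for r, ds in durations.items()}
    return min(means, key=means.get)

log = [
    ("review", "alice", "peak",     5.0),
    ("review", "alice", "off-peak", 2.0),
    ("review", "bob",   "peak",     3.0),
    ("review", "bob",   "off-peak", 4.0),
]
print(best_resource(log, "review", "peak"))      # bob performs better under load
print(best_resource(log, "review", "off-peak"))  # alice performs better off-peak
```

Note that ignoring context here would average away exactly the heterogeneity the abstract emphasizes: neither worker dominates across both contexts.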
Backmatter
Metadata
Title
Advanced Information Systems Engineering
edited by
Selmin Nurcan
Pnina Soffer
Marko Bajec
Johann Eder
Copyright year
2016
Electronic ISBN
978-3-319-39696-5
Print ISBN
978-3-319-39695-8
DOI
https://doi.org/10.1007/978-3-319-39696-5