
About this Book

This book constitutes the refereed proceedings of the 24th International Conference on Advanced Information Systems Engineering, CAiSE 2012, held in Gdansk, Poland, in June 2012. The 42 revised full papers, 2 full-length invited papers and 4 short tutorial papers were carefully reviewed and selected from 297 submissions. The contributions have been grouped into the following topical sections: business process model analysis; service and component composition; language and models; system variants and configuration; process mining; ontologies; requirements and goal models; compliance; monitoring and prediction; services; case studies; business process design; feature models and product lines; and human factors.

Table of Contents

Frontmatter

Keynotes

The Future of Enterprise Systems in a Fully Networked Society

The industrialised countries are living through a crucial phase for their economies and social systems. Computers and digital technologies have traditionally played a central role in the development of socio-economic systems, and of enterprises in particular. But if the socio-economic scenario undergoes profound changes, enterprises need to change as well, and consequently enterprise computing. In the next decade, enterprise software systems cannot continue to evolve along the beaten paths; there is an urgent need for new directions in the ways enterprise software is conceived, built, deployed and evolved. In this paper we illustrate the main outcome of the Future Internet Enterprise Systems (FInES) Research Roadmap 2025, a study on the future research lines of enterprise systems, promoted by the FInES Cluster of the European Commission (DG Information Society and Media).

Michele Missikoff

Challenges for Future Platforms, Services and Networked Applications

The keynote will address various ICT aspects related to future platforms, applications and services that may impact and hopefully improve the way various business and enterprise processes will be organized in the future. New development tools, easy-to-program powerful mobile devices, interactive panels, etc. enable users to prototype new IT systems with much better interfaces, to quickly setup collaborative environments and easily deploy online applications. However, many people have not realized how quickly data, computational and networking requirements of these new information systems have increased and what constraints are behind existing underlying e-Infrastructures. Therefore, the main aim of this keynote will be to share with the audience example scenarios and lessons learned. Additionally, various challenges encountered in practice for future information systems and software development methods will be briefly discussed. The keynote will be summarized by an updated report on existing research e-Infrastructures in Poland and worldwide.

Krzysztof Kurowski

Business Process Model Analysis

Understanding Business Process Models: The Costs and Benefits of Structuredness

Previous research has put forward various metrics of business process models that are correlated with understandability. Two such metrics are size and degree of (block-)structuredness. What has not been sufficiently appreciated at this point is that these desirable properties may be at odds with one another. This paper presents the results of a two-pronged study aimed at exploring the trade-off between size and structuredness of process models. The first prong of the study is a comparative analysis of the complexity of a set of unstructured process models from industrial practice and of their corresponding structured versions. The second prong is an experiment wherein a cohort of students was exposed to semantically equivalent unstructured and structured process models. The key finding is that structuredness is not an absolute desideratum with respect to process model understandability. Instead, subtle trade-offs between structuredness and other model properties are at play.

Marlon Dumas, Marcello La Rosa, Jan Mendling, Raul Mäesalu, Hajo A. Reijers, Nataliia Semenenko

Aggregating Individual Models of Decision-Making Processes

When faced with a difficult decision, it would be nice to have access to a model that shows the essence of what others did in the same situation. Such a model should show what, and in which sequence, needs to be done so that alternatives can be correctly determined and criteria can be carefully considered. To make it trustworthy, the model should be mined from a large number of previous instances of similar decisions. Our decision-process mining framework aims to capture, in logs, the processes of large numbers of individuals and extract meaningful models from those logs. This paper shows how individual decision data models can be aggregated into a single model and how less frequent behavior can be removed from the aggregated model. We also argue that the main process mining algorithms perform poorly on decision logs.

Razvan Petrusel

Generating Natural Language Texts from Business Process Models

Process modeling is a widely used approach for understanding, documenting and also redesigning the operations of organizations. The validation and usage of process models is, however, affected by the fact that only business analysts fully understand them in detail. This is a particular problem because business analysts are typically not domain experts. In this paper, we investigate to what extent the concept of verbalization can be adapted from object-role modeling to process models. To this end, we define an approach which automatically transforms BPMN process models into natural language texts and combines different techniques from linguistics and graph decomposition in a flexible and accurate manner. The evaluation of the technique is based on a prototypical implementation and involves a test set of 53 BPMN process models, showing that natural language texts can be generated in a reliable fashion.

Henrik Leopold, Jan Mendling, Artem Polyvyanyy

Service and Component Composition

Towards Conflict-Free Composition of Non-functional Concerns

In component-based software development, applications are decomposed, e.g., into functional and non-functional components which have to be composed into a working system. The composition of non-functional behavior from different non-functional domains such as security, reliability, and performance is particularly complex. Finding a valid composition is challenging because there are different types of interdependencies between concerns, e.g. mutual exclusion, conflicts, and ordering restrictions, which should not be violated.

In this paper we formalize a set of interdependency types between non-functional actions realizing non-functional behavior. These interdependencies can either be specified explicitly or implicitly by taking action properties into account. This rich set of interdependencies can then be used to ease the task of action composition by validating compositions against interdependency constraints, proposing conflict resolution strategies, and by applying our guided composition procedure. This procedure proposes next valid modeling steps leading to conflict-free compositions.

Benjamin Schmeling, Anis Charfi, Marko Martin, Mira Mezini

Fuzzy Verification of Service Value Networks

Service Value Networks (SVNs) represent a flexible design for service suppliers to offer attractive value propositions to final customers. Furthermore, networked services can satisfy more complex customer needs than single services acting on their own. Although SVNs can cover complex needs, there is usually a mismatch between what the SVNs offer and what the customer needs. We present a framework to achieve SVN composition by means of the propose-critique-modify (PCM) problem-solving method and a Fuzzy Inference System (FIS). Whereas the PCM method composes alternative SVNs given some customer need, the FIS verifies the fitness of the composed SVNs for the given need. Our framework not only offers an interactive dialogue in which the customer can refine the composed SVNs but also visualizes the final composition by making use of e3-value models. Finally, the applicability of our approach is shown by means of a case study in the educational service sector.

Iván S. Razo-Zapata, Pieter De Leenheer, Jaap Gordijn, Hans Akkermans

Cooperative Service Composition

Traditional service composition approaches are top-down (using domain knowledge to break down the desired functionality) or bottom-up (using planning techniques). The former rely on available problem decomposition knowledge, whilst the latter rely on the availability of a known set of services; otherwise, automatic composition has been considered impossible. We address this by proposing a third approach: Cooperative Service Composition (CSC), inspired by the way organisations come together in consortia to deliver services. CSC considers each service provider as proactive in service composition, and provides a semantics-based mechanism allowing innovative service compositions to emerge as a result of providers’ interactions. The key challenges we resolve are how to determine if a contribution brings the composition closer to its goal, and how to limit the number of possible solutions. In this paper we describe the approach and the solutions to the two key challenges, and demonstrate their application to the composition of financial web services.

Nikolay Mehandjiev, Freddy Lécué, Martin Carpenter, Fethi A. Rabhi

Language and Models

Abstracting Modelling Languages: A Reutilization Approach

Model-Driven Engineering automates the development of information systems. This approach is based on the use of Domain-Specific Modelling Languages (DSMLs) for the description of the relevant aspects of the systems to be built. The increasing complexity of the target systems has raised the need for abstraction techniques able to produce simpler versions of the models, but retaining certain properties of interest. However, developing such abstractions for each DSML from scratch is a time and resource consuming activity.

Our solution to this situation is a number of techniques to build reusable abstractions that are defined once and can be reused over families of modelling languages sharing certain requirements. As a proof of concept, we present a catalogue of reusable abstractions, together with an implementation in the

MetaDepth

multi-level meta-modelling tool.

Juan de Lara, Esther Guerra, Jesús Sánchez-Cuadrado

Logical Invalidations of Semantic Annotations

Semantic annotations describe the semantics of artifacts like documents, web-pages, schemas, or web-services with concepts of a reference ontology. Application interoperability, semantic query processing, semantic web services, etc. rely on such a description of the semantics. Semantic annotations need to be created and maintained. We present a technique to detect logical errors in semantic annotations and provide information for their repair. In semantically rich ontology formalisms such as OWL-DL the identification of the cause of logical errors can be a complex task. We analyze how the underlying annotation method influences the types of invalidations and propose efficient algorithms to detect, localize and explain different types of logical invalidations in annotations.

Julius Köpke, Johann Eder

Uniform Access to Non-relational Database Systems: The SOS Platform

Non-relational databases (often termed NoSQL) have recently emerged and have generated both interest and criticism: interest because they address requirements that are very important in large-scale applications, criticism because of the comparison with well-known relational achievements. One of the major problems often mentioned is the heterogeneity of the languages and the interfaces they offer to developers and users. Different platforms and languages have been proposed, and applications developed for one system require significant effort to be migrated to another one. Here we propose a common programming interface to NoSQL systems (and also to relational ones) called SOS (Save Our Systems). Its goal is to support application development by hiding the specific details of the various systems. It is based on a metamodeling approach, in the sense that the specific interfaces of the individual systems are mapped to a common one. The tool provides interoperability as well, since a single application can interact with several systems at the same time.

Paolo Atzeni, Francesca Bugiotti, Luca Rossi

System Variants and Configuration

Variability as a Service: Outsourcing Variability Management in Multi-tenant SaaS Applications

In order to reduce overall application expenses and time to market, SaaS (Software as a Service) providers tend to outsource several parts of their IT resources to other service providers. Such outsourcing helps SaaS providers reduce costs and concentrate on their core competences: software domain expertise, business-process modeling, implementation technologies and frameworks, etc. However, when a SaaS provider offers a single application instance for multiple customers following the multi-tenant model, these customers’ (or tenants’) requirements may differ, generating an important variability management concern. We believe that variability management should also be outsourced and considered as a service. The novelty of our work is to introduce the new concept of the Variability as a Service (VaaS) model, which induces the appearance of VaaS providers. The objective is to relieve SaaS providers looking to adopt such an attractive multi-tenant solution from having to develop a completely new and expensive variability solution beforehand. We present in this paper the first stage of our work: the VaaS meta-model and the VariaS component.

Ali Ghaddar, Dalila Tamzalit, Ali Assaf, Abdalla Bitar

Configurable Process Models for the Swedish Public Sector

Process orientation and e-services have become essential in revitalizing local government. Although most municipalities offer similar services there is little reuse of e-services or underlying process models among municipalities. Configurable process models represent a promising solution to this challenge by integrating numerous variations of a process in one general model. In this study, design science is used to develop a configurable process model to capture the variability of a number of different processes. The results include a validated configurable process model for social services, a benefits analysis and directions for future development. Although the results are perceived useful by municipal officials, there are several challenges to be met before the benefits of configurable process models are fully utilized.

Carl-Mikael Lönn, Elin Uppström, Petia Wohed, Gustaf Juell-Skielse

Aligning Software Configuration with Business and IT Context

An important activity to maximize Business/IT alignment is selecting a software configuration that fits a given context. Feature models represent the space of software configurations in terms of distinguished characteristics (features). However, they fall short in representing the effect of context on the adoptability and operability of features and, thus, of configurations. Capturing this effect helps to minimize the dependency on analysts and domain experts when deriving software for a given business and IT environment. In this paper, we propose contextual feature models as a means to explicitly represent and reason about the interplay between the variability of both features and context. We devise a formal framework and automated analyses which enable the systematic derivation of products aligned with an organizational context. We also propose FM-Context, a support tool for modeling and analysis.

Fabiano Dalpiaz, Raian Ali, Paolo Giorgini

Process Mining

Mining Inter-organizational Business Process Models from EDI Messages: A Case Study from the Automotive Sector

Traditional standards for Electronic Data Interchange (EDI), such as EDIFACT and ANSI X12, have been employed in Business-to-Business (B2B) e-commerce for decades. Due to their wide industry coverage and long-standing establishment, they will presumably continue to play an important role for some time. EDI systems are typically not “process-aware”, i.e., messages are standardized but processes simply “emerge”. However, to improve performance and to enhance control, it is important to understand and analyze the “real” processes supported by these systems. In the case study presented in this paper we uncover the inter-organizational business processes of an automotive supplier company by analyzing the EDIFACT messages that it receives from its business partners. We start by transforming a set of observed messages into an event log, which requires that the individual messages be correlated to process instances. Thereby, we make use of the specific structure of EDIFACT messages. Then we apply process mining techniques to uncover the inter-organizational business processes. Our results show that inter-organizational business process models can be derived by analyzing EDI messages that are exchanged in a network of organizations.

Robert Engel, Wil M. P. van der Aalst, Marco Zapletal, Christian Pichler, Hannes Werthner

Data Transformation and Semantic Log Purging for Process Mining

Existing process mining approaches are able to tolerate a certain degree of noise in the process log. However, processes that contain infrequent paths, multiple (nested) parallel branches, or have been changed in an ad-hoc manner still pose major challenges. For such cases, process mining typically returns “spaghetti models” that are hardly usable even as a starting point for process (re-)design. In this paper, we address these challenges by introducing data transformation and pre-processing steps that improve and ensure the quality of mined models for existing process mining approaches. We propose the concept of semantic log purging, the cleaning of logs based on domain-specific constraints utilizing the semantic knowledge that typically accompanies processes. Furthermore, we demonstrate the feasibility and effectiveness of the approach based on a case study in the higher education domain. We think that semantic log purging will enable process mining to yield better results, thus giving process (re-)designers a valuable tool.

Linh Thao Ly, Conrad Indiono, Jürgen Mangler, Stefanie Rinderle-Ma

Improved Artificial Negative Event Generation to Enhance Process Event Logs

Process mining is the research area that is concerned with knowledge discovery from event logs. Process mining faces notable difficulties. One is that process mining is commonly limited to the harder setting of unsupervised learning, since negative information about state transitions that were prevented from taking place (i.e. negative events) is often unavailable in real-life event logs. We propose a method to enhance process event logs with artificially generated negative events, striving towards the induction of a set of negative examples that is both correct (containing no false negative events) and complete (containing all non-trivial negative events). Such generated sets of negative events can advantageously be applied for discovery and evaluation purposes, and in auditing and compliance settings.

Seppe K. L. M. vanden Broucke, Jochen De Weerdt, Bart Baesens, Jan Vanthienen

Efficient Discovery of Understandable Declarative Process Models from Event Logs

Process mining techniques often reveal that real-life processes are more variable than anticipated. Although declarative process models are more suitable for less structured processes, most discovery techniques generate conventional procedural models. In this paper, we focus on discovering Declare models based on event logs. A Declare model is composed of temporal constraints. Despite the suitability of declarative process models for less structured processes, their discovery is far from trivial. Even for smaller processes there are many potential constraints. Moreover, there may be many constraints that are trivially true and that do not characterize the process well. Naively checking all possible constraints is computationally intractable and may lead to models with an excessive number of constraints. Therefore, we have developed an Apriori algorithm to reduce the search space. Moreover, we use new metrics to prune the model. As a result, we can quickly generate understandable Declare models for real-life event logs.

Fabrizio M. Maggi, R. P. Jagadeesh Chandra Bose, Wil M. P. van der Aalst

Ontologies

OtO Matching System: A Multi-strategy Approach to Instance Matching

In this paper we describe an ontology-to-ontology (OtO) matching system which implements a novel instance matching algorithm. The proposed multi-strategy matching system is domain independent and fully customizable at any level. It optimizes the instance matching process by leveraging (a) the rich semantic knowledge from the schema matching results, (b) the implicit knowledge of the domain expert by capturing the identification power of the properties, and (c) the probability estimation of the result’s validity, in order to accurately detect the ontology instances that represent the same real-world entity. Furthermore, we evaluate the system with the ISLab Instance Matching Benchmark in the Ontology Alignment Evaluation Initiative 2009 campaign and report the results.

Evangelia Daskalaki, Dimitris Plexousakis

SCIMS: A Social Context Information Management System for Socially-Aware Applications

Social Context Information has been used with encouraging results in developing socially-aware applications in different domains. However, users’ social information is distributed over the web and managed by many different proprietary applications, which is a challenge for application developers as they must collect information from different sources and wade through a lot of irrelevant information to obtain the social context information of interest. Combining the social information from the diverse sources and incorporating richer semantics could greatly assist the developers and enrich the applications.

In this paper, we introduce SCIMS, a social context information management system. It includes the ability to acquire raw social data from multiple sources; an ontology-based model for classifying, inferring and storing social context information, in particular social relationships and status; an ontology-based policy model and language for owners to control access to their information; and a query interface for accessing and utilizing social context information. We evaluate the performance and scalability of SCIMS using real data from Facebook, LinkedIn, Twitter and Google calendar, and demonstrate its applicability through a socially-aware phone call application.

Muhammad Ashad Kabir, Jun Han, Jian Yu, Alan Colman

Ontological Meta-properties of Derived Object Types

In this paper, we revisit a number of classical formal meta-properties that have been used in the conceptual modeling and ontology engineering literature to provide finer-grained distinctions among the category of Object Types. These distinctions constitute an essential part of relevant existing approaches, in particular, the ontology-driven conceptual modeling language OntoUML, and the ontology and taxonomy evaluation methodology OntoClean. The idea in this paper is to investigate the interaction between these meta-properties and Derived Object Types, i.e., Object Types whose extensions are dynamically inferred via Derivation Rules. The contributions here are two-fold: firstly, we revisit two classical Derivation Patterns and prove a number of results that can be used to infer the modal meta-properties of Derived Types from those of the types participating in the associated derivation rules; secondly, we demonstrate how these results can be applied in the automated support for model construction in OntoUML.

Giancarlo Guizzardi

An Automatic Approach for Mapping Product Taxonomies in E-Commerce Systems

The recent explosion of Web shops has made the user task of finding the desired products an increasingly difficult one. One way to solve this problem is to offer integrated access to product information on the Web, for which an important component is the mapping of product taxonomies. In this paper, we introduce CMAP, an algorithm that can be used to map one product taxonomy to another. CMAP employs word sense disambiguation techniques and lexical and structural similarity measures in order to find the best matching categories. Its performance on precision, recall, and the F1-measure compares favourably with state-of-the-art algorithms for taxonomy mapping.

Lennart J. Nederstigt, Steven S. Aanen, Damir Vandić, Flavius Frăsincar

Requirements and Goal Models

Requirements-Driven Root Cause Analysis Using Markov Logic Networks

Root cause analysis for software systems is a challenging diagnostic task, due to the complexity emanating from the interactions between system components and the sheer size of logged data. This diagnostic task is usually assisted by human experts who create mental models of the system-at-hand, in order to generate hypotheses and conduct the analysis. In this paper, we propose a root cause analysis framework based on requirement goal models. We consequently use these models to generate a Markov Logic Network that serves as a diagnostic knowledge repository. The network can be trained and used to provide inferences as to why and how a particular failure observation may be explained by collected logged data. The proposed framework improves over existing approaches by handling uncertainty in observations, using natively generated log data, and by providing ranked diagnoses. The framework is illustrated using a test environment based on commercial off-the-shelf software components.

Hamzeh Zawawy, Kostas Kontogiannis, John Mylopoulos, Serge Mankovskii

Validation of User Intentions in Process Models

Goal models and business process models are complementary artifacts for capturing the requirements and their execution flow in software engineering. Usually, goal models serve as input for designing business process models, and this requires mappings between both types of models. Due to the large number of possible configurations of elements from both goal models and business process models, developers struggle with the challenge of maintaining consistent configurations of both models and their mappings. Managing these mappings manually is error-prone. In our work, we propose an automated solution that relies on Description Logics and automated reasoners for validating mappings that describe the realization of goals by activities in business process models. The results are the identification of two inconsistency patterns, strong inconsistency and potential inconsistency, and the development of the corresponding algorithms for detecting inconsistencies.

Gerd Gröner, Mohsen Asadi, Bardia Mohabbati, Dragan Gašević, Fernando Silva Parreiras, Marko Bošković

Agile Requirements Evolution via Paraconsistent Reasoning

Innovative companies need an agile approach for the engineering of their product requirements, to rapidly respond to and exploit changing conditions. The agile approach to requirements must nonetheless be systematic, especially with respect to accommodating legal and nonfunctional requirements. This paper examines how to support a combination of lightweight, agile requirements which can still be systematically modeled, analyzed and changed. We propose a framework, RE-KOMBINE, which is based on a propositional language for requirements modeling called Techne. We define operations on Techne models which tolerate the presence of inconsistencies in the requirements. This paraconsistent reasoning is vital for supporting delayed commitment to particular design solutions. We evaluate these operations with an industry case study using two well-known formal analysis tools. Our evaluations show that the proposed framework scales to industry-sized requirements models, while still retaining (via propositional logic) the informality that is so useful during early requirements analysis.

Neil A. Ernst, Alexander Borgida, John Mylopoulos, Ivan J. Jureta

Compliance

On Analyzing Process Compliance in Skin Cancer Treatment: An Experience Report from the Evidence-Based Medical Compliance Cluster (EBMC2)

Process mining has proven itself as a promising analysis technique for processes in the health care domain. The goal of the EBMC2 project is to analyze skin cancer treatment processes regarding their compliance with relevant guidelines. For this, first of all, the actual treatment processes have to be discovered from the available data sources. In general, the L* life cycle model has been suggested as a structured methodology for process mining projects. In this experience paper, we describe the challenges and lessons learned when realizing the L* life cycle model in the EBMC2 context. Specifically, we provide and discuss different approaches to empower data of low maturity levels, i.e., data that is not already available in temporally ordered event logs, including a prototype for structured data acquisition. Further, first results on how process mining techniques can be utilized for data screening are presented.

Michael Binder, Wolfgang Dorda, Georg Duftschmid, Reinhold Dunkl, Karl Anton Fröschl, Walter Gall, Wilfried Grossmann, Kaan Harmankaya, Milan Hronsky, Stefanie Rinderle-Ma, Christoph Rinner, Stefanie Weber

Compliance Evaluation Featuring Heat Maps (CE-HM): A Meta-Modeling-Based Approach

The recent global economic crisis and the growing reliance on information technology put increasing pressure on organizations to comply with regulations, legal rules and laws. Organizations tend to struggle with paperwork for the mere purpose of proving compliance. Compliance Management should be subject to existing management frameworks; otherwise, ineffective procedures will evolve, and the organization will fail to combine Compliance Management with continuous improvement measures. We advocate a generic compliance evaluation method, which builds upon existing enterprise modeling frameworks, for three reasons: First, synergies in data acquisition will arise. Second, reusing organization-wide accepted viewpoints will create trust among stakeholders and ease communication of the compliance status. Third, taking steps to improve compliance will be part of daily operations based on institutionalized processes geared to the established management frameworks. In addition, the prototypical implementation of the ‘Compliance Evaluation Featuring Heat Maps (CE-HM)’ method based on a meta-modeling platform is presented.

Dimitris Karagiannis, Christoph Moser, Arash Mostashari

A Compliance Management Ontology: Developing Shared Understanding through Models

Managing regulatory compliance is increasingly challenging and costly for organizations world-wide. Due to the diversity of stakeholders in compliance management initiatives, any effort towards providing compliance management solutions demands a common understanding of compliance management concepts and practice. This paper reports on research undertaken to develop an ontology to create a shared conceptualization of the compliance management domain, namely CoMOn (Compliance Management Ontology). The ontology concepts are extracted from interviews and surveys of compliance management experts and practitioners, and refined through synthesis with leading academic literature related to compliance management. A semiotic framework was utilized to conduct a rigorous evaluation of CoMOn through a series of eight case studies spanning a number of industry sectors. The consensus achieved through the evaluation has positioned CoMOn as a comprehensive domain ontology for Compliance Management.

Norris Syed Abdullah, Shazia Sadiq, Marta Indulska

Monitoring and Prediction

Patterns to Enable Mass-Customized Business Process Monitoring

Mass-customization challenges the one-size-fits-all assumption of mass production, allowing customers to specify the options that best fit their requirements when choosing a product or a service. In business process management, to achieve mass-customization, providers offer to their customers the opportunity to customize the way in which a process will be enacted. We focus on monitoring as a specific customization aspect. We propose a multi-dimensional classification of modeling patterns for customized monitoring infrastructures. Patterns enable the provider to offer a set of customizable options to customers and design a monitoring infrastructure that fits the preferences specified by customers on such options. An example in the online advertising industry demonstrates how our framework can improve the services currently offered by providers.

Marco Comuzzi, Samuil Angelov, Jochem Vonk

Predicting QoS in Scheduled Crowdsourcing

Crowdsourcing has emerged as a new paradigm for outsourcing tasks that are simple for humans yet hard to automate to an undefined network of people. Crowdsourcing platforms such as Amazon Mechanical Turk provide scalability and flexibility for customers who need many similar independent jobs done. However, such platforms do not provide guarantees for their services regarding expected job quality or processing time, although such guarantees are advantageous from the perspective of Business Process Management. In this paper, we consider an alternative architecture of a crowdsourcing platform, in which workers are assigned to tasks by the platform according to their availability and skills. We propose a technique for estimating accomplishable guarantees and negotiating Service Level Agreements in such an environment.

Roman Khazankin, Daniel Schall, Schahram Dustdar

Services

Towards Proactive Web Service Adaptation

With the rapid development of web service technology, the next generations of web service applications need to be able to predict problems, such as potential degradation scenarios, future erroneous behaviors, and deviations from expected behaviors, and move towards resolving those problems not just reactively but proactively, i.e., before the problems occur. Service-oriented applications are thus driven by requirements that take the concepts of decentralization, dynamism, adaptation, and automation to an extreme. In this paper, an approach is proposed that relies mainly on proactive adaptation, using reinforcement learning to achieve autonomous dynamic behavior of web service composition in the face of potential degradation and emerging changes in QoS values. Experimental results show the effectiveness of the proposed approach in dynamic web service composition environments.

Ahmed Moustafa, Minjie Zhang

Clouding Services for Linked Data Exploration

Exploration of linked data aims at providing tools and techniques that enable users to effectively explore a dataset through concepts, relationships, and properties by means of SPARQL endpoints and visual interfaces. In this paper, we present a set of clouding services for linked data exploration that enable the end-user to personalize and focus her/his exploration by interactively configuring high-level conceptual structures called inClouds.

Silvana Castano, Alfio Ferrara, Stefano Montanelli

Case Studies

Business Intelligence Modeling in Action: A Hospital Case Study

Business Intelligence (BI) projects are long and painful endeavors that employ a variety of design methodologies, inspired mostly by software engineering and project management lifecycle models. In recent BI research, new design methodologies are emerging, founded on conceptual business models that capture business objectives, strategies, and more. Their claim is that they facilitate the description of the problem at hand, its analysis towards a solution, and the implementation of that solution. The key question explored in this work is: are such models actually useful to BI design practitioners? To answer this question, we conducted an in situ empirical evaluation based on an ongoing BI project for a Toronto hospital. The lessons learned from the study include: confirmation that the BI implementation is well supported by models founded on business concepts; evidence that these models enhance communication within the project team and with business stakeholders; and evidence that there is a need for business modeling to capture BI requirements and, from those, derive and implement BI designs.

Daniele Barone, Thodoros Topaloglou, John Mylopoulos

RadioMarché: Distributed Voice- and Web-Interfaced Market Information Systems under Rural Conditions

Despite its tremendous success, the World Wide Web is still inaccessible to 4.5 billion people - mainly in developing countries - who lack a proper internet infrastructure, a reliable power supply, and often the ability to read and write. Hence, alternative or complementary technologies are needed to make the Web accessible to all, given these limiting conditions. These technologies must serve a large audience, who may then start contributing to the Web by creating content and services. In this paper we propose RadioMarché, a voice- and web-based market information system aimed at stimulating agricultural trade in Sahel countries. To overcome interfacing and infrastructural issues, RadioMarché has a mobile-voice interface and is easy to deploy. Furthermore, we show how data from regionally distributed instances of RadioMarché can be aggregated and exposed using Linked Data approaches, so that new opportunities for product and service innovation in agriculture and other domains can be unleashed.

Victor de Boer, Pieter De Leenheer, Anna Bon, Nana Baah Gyan, Chris van Aart, Christophe Guéret, Wendelien Tuyp, Stephane Boyera, Mary Allen, Hans Akkermans

Publication of Geodetic Documentation Center Resources on Internet

Geodetic Documentation Centers (GDC) collect geodetic and cartographic resources. These resources include spatial data and their metadata. The European Union INSPIRE directive obliges GDCs to publish selected data on the Internet. In this paper, an adequate form of publication is discussed on the basis of the iGeoMap application.

The Internet application iGeoMap merges data from various resources. Depending on the data to present, different types of resources are used. The application can publish spatial data from files (text or binary), from a database specialized in serving spatial data (PostgreSQL, Oracle), or from web services (Web Map Service, Web Feature Service). The utilization of various data sources by the application is presented in this paper.

As part of the subject, searches for the most popular data (parcels, address points, and control points) are discussed. The various data sources and search mechanisms involved in iGeoMap searches are presented in use cases.

Marcin Luckner, Waldemar Izdebski

Business Process Design

Business Process Design from Virtual Organization Intentional Models

Virtual Organizations (VO) have emerged as a new type of inter-organizational relationship for dealing with emerging challenges. The information system support for a VO poses new challenges to system design. Recent works define three levels of abstraction, namely an intentional level, an organizational level, and an operational level. For such a staged design, it is fundamental that artifacts defined on the different layers are consistent with each other. We address this problem based on a transformation approach. We illustrate the transformation from the intentional level towards the organizational level based on the 360° VisiOn and the BPMN process modeling language. The approach has been implemented in a prototype and validated using a case study from a regional stockbreeder union in Mexico.

Luz María Priego-Roche, Lucinéia Heloisa Thom, Agnès Front, Dominique Rieu, Jan Mendling

A Novel Approach to Modeling Context-Aware and Social Collaboration Processes

Companies strive to retain the knowledge about their business processes by modeling them. However, non-routine people-intensive processes, such as distributed collaboration, are hard to model due to their unpredictable nature. Often such processes involve advanced activities, such as discovery of socially coherent teams or unbiased experts, or complex coordination towards reaching a consensus. Modeling such activities requires an expressive formal representation of process context, i.e. related actors and artifacts. Existing modeling approaches do not provide the necessary level of expressiveness to capture it. We therefore propose a novel modeling approach and a graphical notation, demonstrate their applicability and expressivity via several use cases, and discuss their strengths and weaknesses.

Vitaliy Liptchinsky, Roman Khazankin, Hong-Linh Truong, Schahram Dustdar

Process Redesign for Liquidity Planning in Practice: An Empirical Assessment

The financial crisis has kept the world busy since 2007. The resulting difficulties in accessing liquidity and low interest rates on deposits have strengthened the importance of proper liquidity planning. These challenges are even greater for globally spread enterprises, in which currency-specific liquidity planning implies decentralized processes. These have to be coordinated within the local partitions such that proper and consistent overall financial planning is eventually ensured. Although extensive research has been conducted in the field of process redesign, most models lack applicability, either because of strict process restrictions or because they are too complex and, hence, hard to realize and communicate. To close this gap and to demonstrate the potential of business process redesign in practice, we (i) analyze the requirements of the financial planning domain to identify an appropriate redesign framework, and (ii) evaluate the impact of an industrially implemented process redesign with respect to process runtime and quality.

Jochen Martin, Tobias Conte, Athanasios Mazarakis

Feature Models and Product Lines

Building Information System Variants with Tailored Database Schemas Using Features

Database schemas are an integral part of many information systems (IS). New software-engineering methods, such as software product lines, allow engineers to create a large number of different programs tailored to customer needs from a common code base. Unfortunately, these engineering methods usually do not take the database schema into account. In particular, a tailored client program requires a tailored database schema as well to form a consistent IS. In this paper, we show the challenges of tailoring relational database schemas in software product lines. Furthermore, we present an approach to treat the client and database parts of an IS in the same way using a variable database schema. Additionally, we show the benefits and discuss the disadvantages of the approach during the evolution of an industrial case study covering a time span of more than a year.

Martin Schäler, Thomas Leich, Marko Rosenmüller, Gunter Saake

Evolutionary Search-Based Test Generation for Software Product Line Feature Models

Product line-based software engineering is a paradigm that models the commonalities and variabilities of different applications of a given domain of interest within a unique framework and enables rapid and low-cost development of new applications based on reuse engineering principles. Despite the numerous advantages of software product lines, it is quite challenging to test them comprehensively. This is due to the fact that a product line can potentially represent many different applications; therefore, testing a single product line requires testing its various applications. Theoretically, a product line with n software features can be a source for the development of 2^n applications, which would require the test of 2^n applications if a brute-force comprehensive testing strategy were adopted. In this paper, we propose an evolutionary testing approach based on Genetic Algorithms to explore the configuration space of a software product line feature model in order to automatically generate test suites. We show, through the use of several publicly available product line feature models, that the proposed approach is able to generate test suites of O(n) size complexity, as opposed to O(2^n), while at the same time striking a suitable tradeoff between error coverage and feature coverage in its generated test suites.

Faezeh Ensan, Ebrahim Bagheri, Dragan Gašević

Feature Model Differences

Feature models are a widespread means to represent commonality and variability in software product lines. As is the case for other kinds of models, computing and managing feature model differences is useful in various real-world situations. In this paper, we propose a set of novel differencing techniques that combine syntactic and semantic mechanisms, and automatically produce meaningful differences. Practitioners can exploit our results in various ways: to understand, manipulate, visualize and reason about differences. They can also combine them with existing feature model composition and decomposition operators. The proposed automations rely on satisfiability algorithms. They come with a dedicated language and a comprehensive environment. We illustrate and evaluate the practical usage of our techniques through a case study dealing with a configurable component framework.

Mathieu Acher, Patrick Heymans, Philippe Collet, Clément Quinton, Philippe Lahire, Philippe Merle

Human Factors

Wiki Refactoring as Mind Map Reshaping

Wikis’ organic growth inevitably leads to wiki degradation and the need for regular wiki refactoring. So far, wiki refactoring has been a manual, time-consuming and error-prone activity. We strive to ease wiki refactoring by using mind maps as a graphical representation of the wiki structure, and mind map manipulations as a way to express refactoring. This paper (i) defines the semantics of common refactoring operations based on Wikipedia best practices, (ii) advocates the use of mind maps as a visualization of wikis for refactoring, and (iii) introduces a DSL for wiki refactoring built on top of FreeMind, a mind mapping tool. Thus, wikis are depicted as FreeMind maps, and map manipulations are interpreted as refactoring operations over the wiki. The rationale for using a DSL rests not only on reliability grounds but also on facilitating end-user participation.

Gorka Puente, Oscar Díaz

Purpose Driven Competency Planning for Enterprise Modeling Projects

Much of the success of projects using Enterprise Modeling (EM) depends more on the quality of the modeling process than on the method used. One important influence on the quality of the modeling process is the competency level of the experts responsible for the EM approach. Each EM project is, however, specific, depending on the purpose of modeling, such as developing the business, ensuring the quality of business operations, or using EM as a problem-solving tool. The objective of this paper is to discuss the core competency needs of the EM practitioner and to relate those needs to different purposes of EM.

Janis Stirna, Anne Persson

Work Experience in PAIS – Concepts, Measurements and Potentials

Process-Aware Information Systems (PAIS) consider various characteristics of resources, such as capabilities, as a driver for allocating tasks to humans. Work experience has been discussed as a possible variable of history-based allocation. However, work experience has been considered only to a limited extent, reducing the perspective on measurement to single aspects such as years of working in an organization or the number of performed tasks. Furthermore, allocation has mainly been oriented towards the best possible fit of humans to the requirements of the task and the process. This contribution is a first step towards human-centric, work-experience-based allocation. It concentrates on the question of how the experience collected by individuals working with business processes may be measured in PAIS. A collection of work experience measurements at various organizational levels is provided. The collection resulted from a literature review of PAIS theory, selected psychological literature, and a qualitative analysis of job offers.

Sonja Kabicher-Fuchs, Stefanie Rinderle-Ma

Tutorials

Ontological Foundations for Conceptual Modeling with Applications

The main objective of this tutorial is to introduce researchers to the theory and practice of advanced conceptual modeling through the application of a new emerging discipline named Ontology-Driven Conceptual Modeling. In this discipline, theories coming from areas such as Formal Ontology in philosophy, but also Cognitive Science, Philosophical Logics and Linguistics, are employed to derive engineering tools (e.g., modeling languages, methodologies, design patterns, model compilers and simulators) for improving the theory and practice of Conceptual Modeling. In particular, the expressiveness and relevance of these theories and derived tools are demonstrated here through their application to solving some classical and recurrent conceptual modeling problems concerning the well-founded representation of: classification and taxonomic structures, part-whole relations, intrinsic and relational properties, formal and material associations, association specialization, attribute value spaces, and roles.

Giancarlo Guizzardi

Designing Technical Action Research and Generalizing from Real-World Cases

This tutorial presents a sound methodology for technical action research, which consists of testing a new artifact by using it to solve a real problem. Such a test would be useless if we could not generalize from it, so the tutorial introduces architectural inference as a way of supporting generalizations from technical action research.

Roel Wieringa

Improvisational Theater for Information Systems: An Agile, Experience-Based, Prototyping Technique

Collaborative creativity is key to innovative software development. It is, however, not easy to master, and few techniques have focused on these aspects. Games have recently started to receive serious attention to fill this gap. In this context, this tutorial offers participants a fun and refreshing learning experience. By actually playing improvisational theatre (improv) games in groups, participants will learn new ways to generate scenarios through a collaborative, cheap, rapid, experience-based design technique.

Martin Mahaux, Patrick Heymans

Full Model-Driven Practice: From Requirements to Code Generation

A crucial success factor in information systems development is the alignment of the system with business goals, business semantics and business processes. Developers should be freed from programming concerns and be able to concentrate on these alignment problems. Model-driven system development (MDD) does not only provide a structured and systematic approach to systems development, but also offers developers the possibility of using model-transformation technologies to derive models of a lower abstraction level that can be further refined, and even generating software code automatically. This tutorial shows how to successfully integrate business process modelling (BPM), requirements engineering (RE) and object-oriented conceptual modelling with the objective of leveraging MDD capabilities. Participants work with state of the art modelling methods and code generation tools to explore different ways to match an information system with business requirements.

Óscar Pastor, Sergio España

Backmatter
