2010 | Book

Advanced Information Systems Engineering

22nd International Conference, CAiSE 2010, Hammamet, Tunisia, June 7-9, 2010. Proceedings


Table of Contents

Frontmatter

Keynotes

Search Computing Systems

Search Computing defines a new class of applications, which enable end users to perform exploratory search processes over multi-domain data sources available on the Web. These applications exploit suitable software frameworks and models that make it possible for expert users to configure the data sources to be searched and the interfaces for query submission and result visualization. We describe some usage scenarios and the reference architecture for Search Computing systems.

Stefano Ceri, Marco Brambilla
The Influence of IT Systems on the Use of the Earth’s Resources

Information technology is so pervasive that estimates of its power usage show that data centers, locations where computing systems are densely packed, consume some 2% of available power in mature economies. Other estimates show that an additional 8% or so of power is consumed by IT systems which reside in offices and homes, while assessments show that the power used by consumer devices such as mobile phones and PDAs, combined with other household appliances, whether charging or not in use, can add another 6-8% of total power consumption. As computing power densities have increased from a few tens of watts per square foot to hundreds of watts per square foot, considerable attention has been paid to the management of power in IT systems.

Jurij Paraszczak
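
Summing the consumption bands quoted in this abstract gives a rough overall share for IT; the total below is simple arithmetic over the abstract's own figures, not a number reported by the author:

$$\underbrace{2\%}_{\text{data centers}} + \underbrace{8\%}_{\text{office/home IT}} + \underbrace{6\text{--}8\%}_{\text{consumer devices}} \;\approx\; 16\text{--}18\% \text{ of total power consumption}$$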

Session 1: Business Process Modeling

Design and Verification of Instantiable Compliance Rule Graphs in Process-Aware Information Systems

For enterprises it has become crucial to check compliance of their business processes with certain rules such as medical guidelines or financial regulations. When automating compliance checks on process models, existing approaches have mainly addressed process-specific compliance rules so far, i.e., rules that correspond to a particular process model. However, in practice, we will rather find process-independent compliance rules that are nevertheless to be checked over process models. Thus, in this paper, we present an approach that enables the instantiation and verification of process-independent compliance rules over process models using domain models. For this, we provide an intuitive visualization of compliance rules and compliance rule instances at user level and show how rules and instances can be formalized and verified at system level. The overall approach is validated by a pattern-based comparison to existing approaches and by means of a prototypical implementation.

Linh Thao Ly, Stefanie Rinderle-Ma, Peter Dadam
Success Factors of e-Collaboration in Business Process Modeling

We identify the success factors of collaborative modeling of business processes by a qualitative analysis of the experiences of participants in group modeling sessions. The factors and their relations form a preliminary theoretical model of collaboration in modeling that extends existing models. The insights from this analysis guided the improvement of a group modeling method and tool support, which are in turn relevant outcomes of the design part of this study. We show in field experiments that the new method outperforms the conventional one.

Peter Rittgen
Beyond Process Mining: From the Past to Present and Future

Traditionally, process mining has been used to extract models from event logs and to check or extend existing models. This has shown to be useful for improving processes and their IT support. Process mining techniques analyze historic information hidden in event logs to provide surprising insights for managers, system developers, auditors, and end users. However, thus far, process mining is mainly used in an offline fashion and not for operational decision support. While existing process mining techniques focus on the process as a whole, this paper focuses on individual process instances (cases) that have not yet completed. For these running cases, process mining can be used to check conformance, predict the future, and recommend appropriate actions. This paper presents a framework for operational support using process mining and details a coherent set of approaches that focuses on time information. Time-based operational support can be used to detect deadline violations, predict the remaining processing time, and recommend activities that minimize flow times. All of this has been implemented in ProM and initial experiences using this toolset are reported in this paper.

Wil M. P. van der Aalst, Maja Pesic, Minseok Song
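
The remaining-time prediction mentioned above lends itself to a compact illustration. The sketch below is a minimal assumed baseline, not ProM's actual implementation: it predicts the remaining processing time of a running case as the average remaining time observed in completed cases that share the same activity prefix.

```python
from collections import defaultdict

def build_predictor(completed_cases):
    """completed_cases: list of traces [(activity, timestamp_in_hours), ...]."""
    remaining = defaultdict(list)
    for case in completed_cases:
        end_time = case[-1][1]
        for i, (_, t) in enumerate(case):
            prefix = tuple(act for act, _ in case[: i + 1])
            remaining[prefix].append(end_time - t)  # time still to go from here
    return {p: sum(v) / len(v) for p, v in remaining.items()}

def predict_remaining(predictor, running_case):
    """Average historic remaining time for the running case's activity prefix."""
    return predictor.get(tuple(act for act, _ in running_case))

log = [
    [("register", 0), ("check", 2), ("decide", 5)],
    [("register", 0), ("check", 3), ("decide", 9)],
]
predictor = build_predictor(log)
print(predict_remaining(predictor, [("register", 0), ("check", 2)]))  # 4.5 hours
```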

Session 2: Information Systems Quality

Dependency Discovery in Data Quality

A conceptual framework for the automatic discovery of dependencies between data quality dimensions is described. Dependency discovery consists in recovering the dependency structure for a set of data quality dimensions measured on attributes of a database. This task is accomplished through a data mining methodology, by learning a Bayesian Network from a database. The Bayesian Network is used to analyze dependency between data quality dimensions associated with different attributes. The proposed framework is instantiated on a real-world database. The task of dependency discovery is presented for the case when the following data quality dimensions are considered: accuracy, completeness, and consistency. The Bayesian Network model shows how data quality can be improved while satisfying budget constraints.

Daniele Barone, Fabio Stella, Carlo Batini
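
As a toy stand-in for the dependency analysis described above (the paper learns a full Bayesian Network; the snippet below only scores pairwise dependency, a deliberate simplification), mutual information between per-record quality indicators can flag dimension pairs worth connecting:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """MI (in nats) between two equally long sequences of discrete labels."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(
        (c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

# 1 = record passes the quality check for that dimension, 0 = it fails.
accuracy     = [1, 1, 0, 1, 0, 1, 0, 0]
completeness = [1, 1, 0, 1, 0, 0, 0, 0]
print(mutual_information(accuracy, completeness))  # high value -> likely edge
```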
Rationality of Cross-System Data Duplication: A Case Study

Duplication of data across systems in an organization is a problem because it wastes effort and leads to inconsistencies. Researchers have proposed several technical solutions but duplication still occurs in practice. In this paper we report on a case study of how and why duplication occurs in a large organization, and discuss generalizable lessons learned from this. Our case study research questions are why data gets duplicated, what the size of the negative effects of duplication is, and why existing solutions are not used. We frame our findings in terms of design rationale and explain them by providing a causal model. Our findings suggest that next to technological factors, organizational and project factors have a large effect on duplication. We discuss the implications of our findings for technical solutions in general.

Wiebe Hordijk, Roel Wieringa
Probabilistic Models to Reconcile Complex Data from Inaccurate Data Sources

Several techniques have been developed to extract and integrate data from web sources. However, web data are inherently imprecise and uncertain. This paper addresses the issue of characterizing the uncertainty of data extracted from a number of inaccurate sources. We develop a probabilistic model to compute a probability distribution for the extracted values and the accuracy of the sources. Our model considers the presence of sources that copy their contents from other sources, and manages the misleading consensus produced by copiers. We extend the models previously proposed in the literature by working on several attributes at a time to better leverage all the available evidence. We also report the results of several experiments on both synthetic and real-life data to show the effectiveness of the proposed approach.

Lorenzo Blanco, Valter Crescenzi, Paolo Merialdo, Paolo Papotti
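
A heavily simplified version of the reconciliation idea can be written down in a few lines. The sketch below alternates accuracy-weighted voting with accuracy re-estimation; it deliberately omits the paper's probabilistic model and its copier detection, so treat it only as an assumed baseline:

```python
def reconcile(claims, rounds=10):
    """claims: {source: {object: value}} -> (estimated truth, source accuracy)."""
    accuracy = {s: 0.8 for s in claims}  # neutral starting accuracy
    truth = {}
    for _ in range(rounds):
        objects = {o for vals in claims.values() for o in vals}
        for o in objects:
            votes = {}
            for s, vals in claims.items():
                if o in vals:
                    votes[vals[o]] = votes.get(vals[o], 0.0) + accuracy[s]
            truth[o] = max(votes, key=votes.get)  # accuracy-weighted winner
        for s, vals in claims.items():
            hits = sum(truth[o] == v for o, v in vals.items())
            accuracy[s] = hits / len(vals) if vals else 0.5
    return truth, accuracy

claims = {
    "A": {"founded": 1998, "hq": "Rome"},
    "B": {"founded": 1998, "hq": "Rome"},
    "C": {"founded": 2001, "hq": "Rome"},
}
print(reconcile(claims))  # source C ends up with lower estimated accuracy
```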

Session 3: Service Modelling

Monitoring and Analyzing Service-Based Internet Systems through a Model-Aware Service Environment

As service-based Internet systems get increasingly complex they become harder to manage at design time as well as at runtime. Nowadays, many systems are described in terms of precisely specified models, e.g., in the context of model-driven development. By making the information in these models accessible at runtime, we provide better means for analyzing and monitoring the service-based systems. We propose a model-aware repository and service environment (MORSE) to support model access and evolution at both design time and runtime. MORSE focuses on enabling us to monitor, interpret, and analyze the monitored information. In an industrial case study, we demonstrate how compliance monitoring can benefit from MORSE to monitor violations at runtime and how MORSE can ease the root cause analysis of such violations. Performance and scalability evaluations show the applicability of our approach for the intended use cases and that models can be retrieved during execution at low cost.

Ta’id Holmes, Uwe Zdun, Florian Daniel, Schahram Dustdar
Modeling and Reasoning about Service-Oriented Applications via Goals and Commitments

Service-oriented applications facilitate the exchange of business services among participants. Existing modeling approaches either apply at a lower level of abstraction than required for such applications or fail to accommodate the autonomous and heterogeneous nature of the participants. We present a business-level conceptual model that addresses the above shortcomings. The model gives primacy to the participants in a service-oriented application. A key feature of the model is that it cleanly decouples the specification of an application's architecture from the specification of individual participants. We formalize the connection between the two: the reasoning that would help a participant decide if a specific application is suitable for their needs. We implement the reasoning in Datalog and apply it to a case study involving car insurance.

Amit K. Chopra, Fabiano Dalpiaz, Paolo Giorgini, John Mylopoulos
Conceptualizing a Bottom-Up Approach to Service Bundling

Offering service bundles to the market is a promising option for service providers to strengthen their competitive advantages, cope with dynamic market conditions, and deal with heterogeneous consumer demand. Although the expected positive effects of bundling strategies and pricing considerations for bundles are covered well by the available literature, limited guidance can be found regarding the identification of potential bundle candidates and the actual process of bundling. The contribution of this paper is the positioning of bundling based on insights from both business and computer science and the proposition of a structured bundling method, which guides organizations in the composition of bundles in practice.

Thomas Kohlborn, Christian Luebeck, Axel Korthaus, Erwin Fielt, Michael Rosemann, Christoph Riedl, Helmut Krcmar

Session 4: Security and Management

Dynamic Authorisation Policies for Event-Based Task Delegation

Task delegation is one of the leitmotifs of business process security. It defines a mechanism that bridges the gap between workflow and access control systems. There are two important issues relating to delegation, namely allowing task delegation to complete, and having a secure delegation within a workflow. Delegation completion and authorisation enforcement are specified under specific constraints. Constraints are defined from the delegation context, implying the presence of a fixed set of delegation events to control the delegation execution.

In this paper, we aim to reason about delegation events to specify delegation policies dynamically. To that end, we present an event-based task delegation model to monitor the delegation process. We then identify relevant events for authorisation enforcement to specify delegation policies. Moreover, we propose a technique that automates delegation policies using event calculus to control the delegation execution and to ensure that all delegation changes comply with the global policy.

Khaled Gaaloul, Ehtesham Zahoor, François Charoy, Claude Godart
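
To make the event-driven flavour of such a model concrete, here is a hypothetical delegation life cycle rendered as a small state machine; the states and events are illustrative assumptions, not the paper's event calculus formalization:

```python
# Allowed transitions: (current state, delegation event) -> next state.
TRANSITIONS = {
    ("initial", "delegate"): "delegated",
    ("delegated", "accept"): "accepted",
    ("delegated", "cancel"): "cancelled",
    ("accepted", "complete"): "completed",
    ("accepted", "revoke"): "revoked",
}

def step(state, event):
    """Apply one delegation event, rejecting anything the policy forbids."""
    nxt = TRANSITIONS.get((state, event))
    if nxt is None:
        raise ValueError(f"event '{event}' not permitted in state '{state}'")
    return nxt

state = "initial"
for event in ["delegate", "accept", "complete"]:
    state = step(state, event)
print(state)  # completed
```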
A New Approach for Pattern Problem Detection

Despite their advantages for design quality improvement and rapid software development, design patterns remain difficult to reuse for inexperienced designers. The main difficulty lies in recognizing the applicability of an appropriate design pattern for a particular application. Design problems can appear in a design in different shapes and are, unfortunately, often addressed through poor solutions. To deal with this situation, we propose an approach that recognizes pattern problems in a design and that assists in transforming them into their corresponding design patterns. Our approach adapts an XML document retrieval technique to detect situations necessitating a pattern usage. Unlike current approaches, ours accounts for both the structural and semantic aspects of a pattern problem. In addition, it tolerates design alterations of pattern problems.

Nadia Bouassida, Hanêne Ben-Abdallah
Comparing Safety Analysis Based on Sequence Diagrams and Textual Use Cases

Safety is of growing importance for information systems due to increased integration with embedded systems. Discovering potential hazards as early as possible in the development is key to avoiding costly redesign later. This implies that hazards should be identified based on the requirements, and it is then useful to compare various specification techniques to find out the strengths and weaknesses of each with respect to finding and documenting hazards. This paper reports on two experiments in hazard identification: one based on textual use cases and one based on system sequence diagrams. The comparison of the experimental results reveals that use cases are better for identifying hazards related to the operation of the system, while system sequence diagrams are better for the identification of hazards related to the system itself. The combination of these two techniques is therefore likely to uncover more hazards than one technique alone.

Tor Stålhane, Guttorm Sindre, Lydie du Bousquet

Session 5: Matching and Mining

Feature-Based Entity Matching: The FBEM Model, Implementation, Evaluation

Entity matching or resolution is at the heart of many integration tasks in modern information systems. As with any core functionality, good quality of results is vital to ensure that upper-level tasks perform as desired. In this paper we introduce the FBEM algorithm and illustrate its usefulness for general-purpose use cases. We analyze its result quality with a range of experiments on heterogeneous data sources, and show that the approach provides good results for entities of different types, such as persons, organizations or publications, while posing minimal requirements to input data formats and requiring no training.

Heiko Stoermer, Nataliya Rassadko, Nachiket Vaidya
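
The flavour of feature-based matching can be illustrated with a minimal scorer. This is not the FBEM algorithm itself, just an assumed toy variant: entities are bags of named features, and the score is the fraction of comparable features whose values agree.

```python
def match_score(a, b):
    """a, b: dicts mapping feature name -> string value."""
    comparable = set(a) & set(b)
    if not comparable:
        return 0.0
    agreeing = sum(a[f].strip().lower() == b[f].strip().lower() for f in comparable)
    return agreeing / len(comparable)

person1 = {"name": "j. smith", "affiliation": "MIT", "city": "Boston"}
person2 = {"name": "J. Smith", "affiliation": "MIT", "city": "Cambridge"}
print(match_score(person1, person2))  # 2/3 of comparable features agree
```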
Dealing with Matching Variability of Semantic Web Data Using Contexts

The goal of this paper is to propose a reference modeling framework to explicitly identify and formalize the different levels of variability that can arise along all the involved dimensions of a matching execution. The proposed framework is based on the notions of knowledge chunk, context, and mapping to abstract the variability levels and related operations along the source-dataset, the matching-dataset, and the mapping-set dimensions, respectively. An application of the proposed framework with instantiation in the HMatch 2.0 system is illustrated.

Silvana Castano, Alfio Ferrara, Stefano Montanelli
GRUVe: A Methodology for Complex Event Pattern Life Cycle Management

With the rising importance of recognizing a complex situation of interest in near real-time, many industrial applications have adopted Complex Event Processing as their backbone. In order to remain useful, it is important that a Complex Event Processing system evolves according to the changes in its business environment. However, today's tasks related to the management of complex event patterns in a Complex Event Processing system are performed purely manually, without any systematic methodology. This can be time-consuming and error-prone.

In this paper we present a methodology and an implementation for the complex event pattern life cycle management. The overall goal is the efficient generation, maintenance and evolution of complex event patterns. Our approach is based on a semantic representation of events and complex event patterns combined with complex event pattern execution statistics. This representation enables an improved definition of relationships between patterns using semantic descriptions of patterns and events.

Sinan Sen, Nenad Stojanovic
Supporting Semantic Search on Heterogeneous Semi-structured Documents

This paper presents SHIRI-Querying, an approach for semantic search on semi-structured documents. We propose a solution to tackle the incompleteness and imprecision of semantic annotations of semi-structured documents at querying time. We particularly introduce three elementary reformulations that rely on the notion of aggregation and on the document structure. We present the Dynamic Reformulation and Execution of Queries algorithm (DREQ), which combines these elementary transformations to construct reformulated queries w.r.t. a defined order relation. Experiments on two real datasets show that these reformulations greatly increase the recall and that returned answers are effectively ranked according to their precision.

Yassine Mrabet, Nacéra Bennacer, Nathalie Pernelle, Mouhamadou Thiam
Query Ranking in Information Integration

Given the growing number of structured and evolving online repositories, the need for lightweight information integration has increased in the past years. We have developed an integration approach which relies on partial mappings for query rewriting and combines them with a controlled way of relaxation. In this paper we propose a novel approach for ranking the results of such rewritten and relaxed queries over different sources, by penalizing lack of confidence in the mappings used for rewriting as well as higher degrees of controlled relaxation, and we present the performed evaluation.

Rodolfo Stecher, Stefania Costache, Claudia Niederée, Wolfgang Nejdl
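
The ranking intuition (punish low mapping confidence, punish relaxation) fits a simple multiplicative score. The penalty constant and the functional form below are illustrative assumptions; the abstract does not specify the actual model:

```python
def query_score(mapping_confidences, relaxations, penalty=0.7):
    """Multiply mapping confidences, then discount each relaxation step."""
    score = 1.0
    for confidence in mapping_confidences:
        score *= confidence            # low-confidence mappings hurt the rank
    return score * penalty ** relaxations  # each relaxation step hurts it more

print(query_score([0.9, 0.8], relaxations=0))  # 0.72
print(query_score([0.9, 0.8], relaxations=1))  # 0.504 -> ranked lower
```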

Session 6: Case Studies and Experiences

Validity of the Documentation Availability Model: Experimental Definition of Quality Interpretation

System and software documentation is a necessity when making selection and acquisition decisions, and when developing new and/or improving existing system and software functionality. Proper documentation becomes even more crucial for open source systems (OSS), where, typically, stakeholders from different communities are involved. However, there exists only limited or no methodology to assess documentation quality. In this paper we present a quality model and a comprehensive method to assess the quality of OSS documentation availability (DA). Our contribution is threefold. Firstly, based on the criteria defined by Kitchenham et al., we illustrate the theoretical validity of the DA model. Secondly, we execute the first step towards the empirical validity of the DA model. Finally, our work results in a comprehensive and empirically grounded interpretation model of documentation quality.

Raimundas Matulevičius, Naji Habra, Flora Kamseu
Emerging Challenges in Information Systems Research for Regulatory Compliance Management

Managing regulatory compliance is increasingly challenging and costly for organizations world-wide. While such efforts are often supported by information technology (IT) and information systems (IS) tools, there is evidence that the current solutions are inadequate and do not fully address the needs of organizations. Often such discrepancy stems from a lack of alignment between the needs of the industry and the focus of academic research efforts. In this paper, we present the results of an empirical study that investigates challenges in managing regulatory compliance, derived from expert professionals in the Australian compliance industry. The results provide insights into problematic areas within the compliance management domain, as related to regulatees, regulations and IT compliance management solutions. By relating the identified challenges to existing activity in IS research, this exploratory paper highlights the inadequacy of current research and presents the first industry-relevant compliance management research agenda for IS researchers.

Norris Syed Abdullah, Shazia Sadiq, Marta Indulska
Experience-Based Approach for Adoption of Agile Practices in Software Development Projects

The agile approach to software development has attracted a great deal of interest in both the academic and industry communities in the last decade. Notwithstanding the wide adoption of agile methods in an ever-growing number of software development projects, shifting the development process of an organization to an agile one is not straightforward. Certain considerations regarding the applicability of agile practices should be taken into account when this transition is performed. In this paper, an approach for situational engineering of agile methods is proposed. The approach is based on the experience gained in adopting agile practices in both internal and external projects of organizations. A knowledge base supporting the selection of agile practices that are suitable for a certain project is introduced. Automated generation of an appropriate software development process is included as well. A particular realization of the approach supported by SPEM-based tools is also presented in the paper.

Iva Krasteva, Sylvia Ilieva, Alexandar Dimov
Coordinating Global Virtual Teams: Building Theory from a Case Study of Software Development

Global Virtual Teams (GVTs) enable organizations to operate across national, economic and social, and cultural boundaries. However, this form of teamwork presents issues for traditional project management coordination mechanisms. There is a significant body of research on these challenges. However, relatively little attention has been paid to the specific impact these issues may have on coordination mechanisms in GVTs. This paper seeks to address this gap by applying a theoretical model drawn from extant research to explore the coordination mechanisms used by a software development GVT in a Fortune 100 telecommunications manufacturer. The study employs a mixed methodology grounded theory approach to examine the effect that specific virtual team issues have on the effectiveness of team coordination mechanisms. It then develops a refined conceptual model to guide future research on GVTs involved in software development. The findings also inform practice on the problems encountered in ensuring the effective coordination of such teams.

Gaye Kiely, Tom Butler, Patrick Finnegan
Information Systems Evolution over the Last 15 Years

The information systems we see around us today are at first sight very different from those that were developed 15 years ago and more. On the other hand, it seems that we are still struggling with many of the same problems. To understand how we can evolve future ISs, we should have a good understanding of the existing application portfolios. In this article we present selected data from survey investigations performed in 1993, 1998, 2003 and 2008 among Norwegian organizations on how they conduct information systems development and evolution. A major finding is that even if we witness large changes in the underlying implementation technology and the approaches used, a number of aspects are stable, such as the overall percentage of time used for maintaining and evolving systems in production compared to the time used for development, and this should be taken into account in the planning of information systems evolution for the future.

Magne Davidsen, John Krogstie

Session 7: Conceptual Modelling

From Web Data to Entities and Back

We present the Entity Name System (ENS), an enabling infrastructure which can host descriptions of named entities and provide unique identifiers at large scale. In this way, it opens new perspectives for realizing entity-oriented, rather than keyword-oriented, Web information systems. We describe the architecture and the functionality of the ENS, along with tools, which all contribute to realizing the Web of entities.

Zoltán Miklós, Nicolas Bonvin, Paolo Bouquet, Michele Catasta, Daniele Cordioli, Peter Fankhauser, Julien Gaugaz, Ekaterini Ioannou, Hristo Koshutanski, Antonio Maña, Claudia Niederée, Themis Palpanas, Heiko Stoermer
Transformation-Based Framework for the Evaluation and Improvement of Database Schemas

Data schemas are primary artefacts for the development and maintenance of data-intensive software systems. As with application code, one way to improve the quality of the models is to ensure that they comply with best design practices. In this paper, we redefine the process of schema quality evaluation as the identification of specific schema constructs and their comparison with best practices. We provide an overview of a framework based on the use of semantics-preserving transformations as a way to identify, compare and suggest improvements for the most significant best design practices. The validation and the automation of the framework are discussed and some clarifying examples are provided.

Jonathan Lemaitre, Jean-Luc Hainaut
Reverse Engineering User Interfaces for Interactive Database Conceptual Analysis

The first step of most database design methodologies consists in eliciting part of the user requirements from various sources such as user interviews and corporate documents. These requirements are formalized into a conceptual schema of the application domain, which has proved to be difficult to validate, especially since the visual representation of the ER model has shown understandability limitations from the end-users' standpoint. In contrast, we claim that prototypical user interfaces can be used as a two-way channel to efficiently express, capture and validate data requirements. Considering these interfaces as a possibly populated physical view on the database to be developed, reverse engineering techniques can be applied to derive their underlying conceptual schema. We present an interactive tool-supported approach to derive data requirements from user interfaces. This approach, based on intensive user involvement, addresses a significant subset of data requirements, especially when combined with other requirement elicitation techniques.

Ravi Ramdoyal, Anthony Cleve, Jean-Luc Hainaut
Towards Automated Inconsistency Handling in Design Models

The increasing adoption of MDE (Model Driven Engineering) has favored the use of large models of different types. It turns out that when the modeled system gets larger, simply computing a list of inconsistencies (as provided by existing techniques for inconsistency handling) gets less and less effective when it comes to actually fixing them. In fact, the inconsistency handling task (i.e. deciding what needs to be done in order to restore consistency) remains largely manual. This work is a step towards its automation. We propose a method for the generation of repair plans for an inconsistent model. In our approach, the depth of the explored search space is configurable in order to cope with the underlying combinatorial characteristic of this problem and to avoid overwhelming the designer with large plans that cannot be fully checked before being applied.

Marcos Aurélio Almeida da Silva, Alix Mougenot, Xavier Blanc, Reda Bendraou
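
A bounded-depth search is one natural way to realize the configurable exploration mentioned above. The sketch below illustrates that idea under toy assumptions (a dict-based model and hand-written repair actions); it is not the authors' plan generator:

```python
def repair_plans(model, actions, is_consistent, max_depth=2):
    """Enumerate action sequences (up to max_depth) that restore consistency."""
    frontier = [(model, [])]
    plans = []
    for _ in range(max_depth):
        next_frontier = []
        for m, plan in frontier:
            for name, apply_action in actions:
                m2 = apply_action(m)
                p2 = plan + [name]
                if is_consistent(m2):
                    plans.append(p2)          # consistency restored: keep plan
                else:
                    next_frontier.append((m2, p2))  # keep searching deeper
        frontier = next_frontier
    return plans

# Toy consistency rule: a class must have a name and at least one attribute.
model = {"name": None, "attributes": []}
actions = [
    ("set-name", lambda m: {**m, "name": "Order"}),
    ("add-attr", lambda m: {**m, "attributes": m["attributes"] + ["id"]}),
]
ok = lambda m: m["name"] is not None and m["attributes"]
print(repair_plans(model, actions, ok))
# [['set-name', 'add-attr'], ['add-attr', 'set-name']]
```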

Session 8: Adaptation

Dynamic Metamodel Extension Modules to Support Adaptive Data Management

Databases are now used in a wide variety of settings, resulting in requirements which may differ substantially from one application to another, even to the point of conflict. Consequently, there is no database product that can support all forms of information systems, ranging from enterprise applications to personal information systems running on mobile devices. Further, domains such as the Web have demonstrated the need to cope with rapidly evolving requirements. We define dynamic metamodel extension modules that support adaptive data management by evolving a system in the event of changing requirements and show how this technique was applied to cater for specific application settings.

Michael Grossniklaus, Stefania Leone, Alexandre de Spindler, Moira C. Norrie
Supporting Runtime System Evolution to Adapt to User Behaviour

Using a context-aware approach, we deal with the automation of user routines. To do this, these routines, or user behaviour patterns, are described using a context model and a context-adaptive task model, and are automated by an engine that executes the patterns as specified. However, user behaviour patterns defined at design time may become obsolete and useless since users' needs may change. To avoid this, it is essential that the system supports the evolution of these patterns. In this work, we focus on supporting this evolution by confronting an important challenge in evolution research: raising the level at which evolution is applied to the modelling level. We develop mechanisms to support pattern evolution by updating the models at runtime. Also, we provide end-users with a tool that allows them to carry out pattern evolution using user-friendly interfaces.

Estefanía Serral, Pedro Valderas, Vicente Pelechano
Interaction-Driven Self-adaptation of Service Ensembles

The emergence of large-scale online collaboration requires current information systems to be apprehended as service ensembles comprising human and software service entities. The software services in such systems cannot adapt to user needs based on autonomous principles alone. Instead system requirements need to reflect global interaction characteristics that arise from the overall collaborative effort. Interaction monitoring and analysis, therefore, must become a central aspect of system self-adaptation. We propose to dynamically evaluate and update system requirements based on interaction characteristics. Subsequent reconfiguration and replacement of services enables the ensemble to mature in parallel with the evolution of its user community. We evaluate our approach in a case study focusing on adaptive storage services.

Christoph Dorn, Schahram Dustdar

Session 9: Requirements

On the Semantics of the Extend Relationship in Use Case Models: Open-Closed Principle or Clairvoyance?

A use case is a description of the interactions of a system with the actors that use it. The Achilles' heel of use cases is their unclear UML semantics, in particular the definition of the extend relationship. This article is an attempt to clarify the semantics of the extension mechanism. In particular, we advocate the application of the open-closed principle, adding modification details in the extending use case instead of in the base case. A revision of the UML standard would be impractical, but a disciplined reinterpretation of the extend and extension point concepts could represent a great improvement. Textual and graphical approaches (based on the UML Behavior meta-model) are considered. Using these recommendations, the base use cases can be independently described, while the extending use cases will be self-contained.

Miguel A. Laguna, José M. Marqués, Yania Crespo
Situational Evaluation of Method Fragments: An Evidence-Based Goal-Oriented Approach

Despite advances in situational method engineering, many software organizations continue to adopt an ad-hoc mix of method fragments from well-known development methods such as Scrum or XP, based on their perceived suitability to project or organizational needs. With the increasing availability of empirical evidence on the success or failure of various software development methods and practices under different situational conditions, it now becomes feasible to make this evidence base systematically accessible to practitioners so that they can make informed decisions when creating situational methods for their organizations. This paper proposes a framework for evaluating the suitability of candidate method fragments prior to their adoption in software projects. The framework makes use of collected knowledge about how each method fragment can contribute to various project objectives, and what requisite conditions must be met for the fragment to be applicable. Pre-constructed goal models for the selected fragments are retrieved from a repository, merged, customized with situational factors, and then evaluated using a qualitative evaluation procedure adapted from goal-oriented requirements engineering.

Hesam Chiniforooshan Esfahani, Eric Yu, Jordi Cabot
Incorporating Modules into the i* Framework

When building large-scale goal-oriented models using the i* framework, the problem of scalability arises. One of the most important causes of this problem is the lack of modularity constructs in the language: just the concept of actor boundary allows grouping related model elements. In this paper, we present an approach that incorporates modules into the i* framework with the purpose of ameliorating the scalability problem. We explore the different types of modules that may be conceived in the framework, define them in terms of an i* metamodel, and introduce different model operators that support their application.

Xavier Franch
Ahab’s Leg: Exploring the Issues of Communicating Semi-formal Requirements to the Final Users

In this paper, we present our experience in using narrative scenarios as a tool to communicate and validate semi-formal requirements with the stakeholders in a large software project. The process of translating the semi-formal language of Tropos into the narrative form of scenarios is introduced and some unintended implications of this process are discussed. In particular, we define the notion of Ahab's leg to describe the necessity to introduce new constraints or features in a description when moving to a different representational language. Starting from the lessons learned with this specific case study, we derive some general implications concerning the issue of requirement translation for validation tasks and we propose some methodological guidelines to address the Ahab's leg dilemma.

Chiara Leonardi, Luca Sabatucci, Angelo Susi, Massimo Zancanaro
The Brave New World of Design Requirements: Four Key Principles

Despite its undoubted success, Requirements Engineering (RE) needs a better alignment between its research focus and its grounding in practical needs, as these needs have changed significantly in recent years. We explore changes in the environment, targets, and process of requirements engineering that influence the nature of fundamental RE questions. Based on these explorations we propose four key principles that underlie current requirements processes: (1) intertwining of requirements with implementation and organizational contexts, (2) dynamic evolution of requirements, (3) architectures as a critical stabilizing force, and (4) high levels of design complexity. Based on the review and analysis of these four key themes, we make recommendations to refocus the RE research agenda to meet new challenges. We note several managerial and practical implications.

Matthias Jarke, Pericles Loucopoulos, Kalle Lyytinen, John Mylopoulos, William Robinson

Session 10: Process Analysis

The ICoP Framework: Identification of Correspondences between Process Models

Business process models can be compared, for example, to determine their consistency. Any comparison between process models relies on a mapping that identifies which activity in one model corresponds to which activity in another. Tools that generate such mappings are called matchers. This paper presents the ICoP framework, which can be used to develop such matchers. It consists of an architecture and re-usable matcher components. The framework enables the creation of matchers from the re-usable components and, if desired, newly developed components. It focuses on matchers that also detect complex correspondences between groups of activities, whereas existing matchers focus on 1:1 correspondences. We evaluate the framework by applying it to find matches in process models from practice. We show that the framework can be used to develop matchers in a flexible and adaptable manner and that the resulting matchers can identify a significant number of complex correspondences.

Matthias Weidlich, Remco Dijkman, Jan Mendling
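
For contrast with ICoP's complex correspondences, a basic 1:1 matcher of the kind the framework generalizes can be sketched as greedy label matching; the snippet is a generic illustration and reproduces none of ICoP's actual components:

```python
from difflib import SequenceMatcher

def match_activities(model_a, model_b, threshold=0.6):
    """Greedy 1:1 matching of activity labels by string similarity."""
    candidates = sorted(
        ((SequenceMatcher(None, a.lower(), b.lower()).ratio(), a, b)
         for a in model_a for b in model_b),
        reverse=True,
    )
    pairs, used_a, used_b = [], set(), set()
    for score, a, b in candidates:
        if score >= threshold and a not in used_a and b not in used_b:
            pairs.append((a, b, round(score, 2)))
            used_a.add(a)
            used_b.add(b)
    return pairs

print(match_activities(["Check order", "Ship goods"],
                       ["Check the order", "Ship the goods"]))
```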
Process Compliance Measurement Based on Behavioural Profiles

Process compliance measurement is getting increasing attention in companies due to stricter legal requirements and market pressure for operational excellence. On the other hand, the metrics to quantify process compliance have only been defined recently. A major criticism points to the fact that existing measures appear to be unintuitive. In this paper, we trace back this problem to a more foundational question: which notion of behavioural equivalence is appropriate for discussing compliance? We present a quantification approach based on behavioural profiles, which is a process abstraction mechanism. Behavioural profiles can be regarded as weaker than existing equivalence notions like trace equivalence, and they can be calculated efficiently. As a validation, we present a respective implementation that measures compliance of logs against a normative process model. This implementation is being evaluated in a case study with an international service provider.

Matthias Weidlich, Artem Polyvyanyy, Nirmit Desai, Jan Mendling
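
One way to picture a behavioural-profile-style compliance degree (a simplification of the paper's metric, assumed here purely for illustration) is the share of required ordering constraints that a log trace respects:

```python
def compliance(trace, order_constraints):
    """Fraction of required 'a before b' constraints the trace satisfies."""
    pos = {act: i for i, act in enumerate(trace)}
    satisfied = sum(
        1 for a, b in order_constraints
        if a in pos and b in pos and pos[a] < pos[b]
    )
    return satisfied / len(order_constraints)

constraints = [("receive", "check"), ("check", "approve"), ("approve", "pay")]
print(compliance(["receive", "check", "pay", "approve"], constraints))
# 2 of 3 constraints hold -> ~0.67
```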
Business Trend Analysis by Simulation

Business processes are constantly affected by the environment in which they execute. The environment can change due to seasonal and financial trends. For organisations it is crucial to understand their processes and to be able to estimate the effects of these trends on them. Business process simulation is a way to investigate the performance of a business process and to analyse the process response to injected trends. However, existing simulation approaches assume a steady-state situation. Until now, correlations and dependencies in the process have not been considered in simulation models, which can lead to wrong estimations of the performance. In this work we define an adaptive simulation model with a history-dependent mechanism that can be used to propagate changes in the environment through the model. In addition we focus on the detection of dependencies in the process based on past executions. We demonstrate the application of adaptive simulation models by means of an experiment.

Helen Schonenberg, Jingxian Jian, Natalia Sidorova, Wil van der Aalst
Workflow Soundness Revisited: Checking Correctness in the Presence of Data While Staying Conceptual

A conceptual workflow model specifies the control flow of a workflow together with abstract data information. This model is later on refined to be executed on an information system. It is desirable that correctness properties of the conceptual workflow be transferable to its refinements. In this paper, we present classical workflow nets extended with data operations as a conceptual workflow model. For these nets we develop a novel technique to verify soundness. This technique allows us to conclude whether at least one or any refinement of a conceptual workflow model is sound.

Natalia Sidorova, Christian Stahl, Nikola Trčka

Panel

Octavian Panel on Intentional Perspectives on Information Systems Engineering

The Octavian panel is a round-table discussion with 8 (Octavian) people on the scene and an audience. Only the people on the scene can talk. Talk time is restricted to 3 minutes. Panelists may leave the scene at any time to join the audience, thus opening a place for a new panelist on the scene. Questions from outside can be forwarded through the moderator. The moderator cannot leave the scene. The moderator's contribution is limited to 1 minute. The first two rounds of panelists are pre-selected, which does not mean that this will be the order of making contributions.

Arne Sølvberg
Backmatter
Metadata
Title
Advanced Information Systems Engineering
Edited by
Barbara Pernici
Copyright Year
2010
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-13094-6
Print ISBN
978-3-642-13093-9
DOI
https://doi.org/10.1007/978-3-642-13094-6
