
2009 | Book

Advanced Information Systems Engineering

21st International Conference, CAiSE 2009, Amsterdam, The Netherlands, June 8-12, 2009. Proceedings

Editors: Pascal van Eck, Jaap Gordijn, Roel Wieringa

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 21st International Conference on Advanced Information Systems Engineering, CAiSE 2009, held in Amsterdam, The Netherlands, on June 8-12, 2009. The 36 papers presented in this book, together with 6 keynote papers, were carefully reviewed and selected from 230 submissions. The topics covered are model-driven engineering, conceptual modeling, quality and data integration, goal-oriented requirements engineering, requirements and architecture, service orientation, Web service orchestration, value-driven modeling, workflow, business process modeling, and requirements engineering.

Table of Contents

Frontmatter

Keynotes

The Science of the Web

Since its inception, the World Wide Web has changed the ways people communicate, collaborate, and educate. There is, however, a growing realization among many researchers that a clear research agenda aimed at understanding the current, evolving, and potential Web is needed. A comprehensive set of research questions is outlined, together with a sub-disciplinary breakdown, emphasising the multi-faceted nature of the Web, and the multi-disciplinary nature of its study and development. These questions and approaches together set out an agenda for Web Science — a science that seeks to develop, deploy, and understand distributed information systems, systems of humans and machines, operating on a global scale.

When we discuss an agenda for a science of the Web, we use the term “science” in two ways. Physical and biological science analyzes the natural world and tries to find microscopic laws that, extrapolated to the macroscopic realm, would generate the behaviour observed. Computer science, by contrast, though partly analytic, is principally synthetic: it is concerned with the construction of new languages and algorithms in order to produce novel desired computer behaviours. Web science is a combination of these two features. The Web is an engineered space created through formally specified languages and protocols. However, because humans are the creators of Web pages and links between them, their interactions form emergent patterns in the Web at a macroscopic scale. These human interactions are, in turn, governed by social conventions and laws. Web science, therefore, must be inherently interdisciplinary; its goal is both to understand the growth of the Web and to create approaches that allow new, powerful and more beneficial patterns to occur. Finally, the Web as a technology is essentially socially embedded; therefore, various issues and requirements for Web use and governance are also reviewed.

Nigel Shadbolt
TomTom for Business Process Management (TomTom4BPM)

Navigation systems have proven to be quite useful for many drivers. People increasingly rely on the devices of TomTom and other vendors and find it useful to get directions to go from A to B, know the expected arrival time, learn about traffic jams on the planned route, and be able to view maps that can be customized in various ways (zoom in/zoom out, show fuel stations, speed limits, etc.). However, when looking at business processes, such information is typically lacking. Good and accurate “maps” of business processes are often missing and, if they exist, they tend to be restrictive and provide little information. For example, very few business process management systems are able to predict when a case will complete. Therefore, we advocate more TomTom-like functionality for business process management (TomTom4BPM). Process mining will play an essential role in providing TomTom4BPM, as it allows for process discovery (generating accurate maps), conformance checking (comparing the real processes with the modeled processes), and extension (augmenting process models with additional/dynamic information).

Wil M. P. van der Aalst
Computer-Centric Business Operating Models vs. Network-Centric Ones

For the first time ever, the centricity of the computer-based application model is being challenged by a fast-emerging new model, fueled by six important changes that have now reached an intensity impossible to deny.

Mark de Simone
The IT Dilemma and the Unified Computing Framework

One of the main challenges of the IT industry is the portion of the IT budget spent on maintaining existing IT systems: for most companies, around 80-85% of the total budget. This leaves very little room for new functionality and innovation. One way to save money is to make more effective use of the underlying hardware, such as disks and processors. With the freed-up budget, the real IT issue can be addressed: the cost of application maintenance. With the use of publish-subscribe and agent models, changes in policies and business models can be supported more quickly, but this requires the right underlying infrastructure. I will discuss a Unified Computing framework that will enable these savings and will have the capabilities required to support model-based programming.

Edwin Paalvast
Tutorial: How to Value Software in a Business, and Where Might the Value Go?

This tutorial consists of two parts. In the first part, we motivate why businesses should determine the value of the Intellectual Property (IP) inherent in software and present a method to determine that value in a simple business model: enterprise software for sale. In the second part, we consider the same software as marketed in a service model. A service model imposes a maintenance obligation. Maintenance is often outsourced, so we also consider the risk to IP, especially when work is performed offshore. Finally, we present the consequences to the original creators when IP is segregated into a tax haven.

Gio Wiederhold
Towards the Next Generation of Service-Based Systems: The S-Cube Research Framework

Research challenges for the next generation of service-based systems often cut across the functional layers of the Service-Oriented Architecture (SOA). Providing solutions to those challenges requires coordinated research efforts from various disciplines, including service engineering, service composition, software engineering, business process management, and service infrastructure. The FP7 Network of Excellence on Software Services and Systems (S-Cube) performs cross-discipline research to develop solutions for those challenges. Research in S-Cube is organised around the S-Cube research framework, which we briefly introduce in this paper. Moreover, we outline the envisioned types of interactions between the key elements of the S-Cube research framework, which facilitate the specification and design as well as the operation and adaptation of future service-based systems.

Andreas Metzger, Klaus Pohl

Model Driven Engineering

An Extensible Aspect-Oriented Modeling Environment

AspectM is an aspect-oriented modeling language that provides not only basic modeling constructs but also an extension mechanism, called the metamodel access protocol (MMAP), that allows a modeler to modify the metamodel. This paper proposes a concrete implementation for constructing an extensible aspect-oriented modeling environment, introducing the notions of edit-time structural reflection and extensible model weaving.

Naoyasu Ubayashi, Genya Otsubo, Kazuhide Noda, Jun Yoshida
Incremental Detection of Model Inconsistencies Based on Model Operations

Due to the increasing use of models, and the inevitable model inconsistencies that arise during model-based software development and evolution, model inconsistency detection is gaining more and more attention. Inconsistency checkers typically analyze entire models to detect undesired structures as defined by inconsistency rules. The larger the models become, the more time the inconsistency detection process takes. Taking into account model evolution, one can significantly reduce this time by providing an incremental checker. In this article we propose an incremental inconsistency checker based on the idea of representing models as sequences of primitive construction operations. The impact of these operations on the inconsistency rules can be computed to analyze and reduce the number of rules that need to be re-checked during a model increment.

Xavier Blanc, Alix Mougenot, Isabelle Mounier, Tom Mens
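
A minimal sketch of the operation-based idea, under assumed names (ModelOp, Rule and incremental_check are ours, not the authors' API): a model is kept as a sequence of primitive construction operations, and after each increment only the rules whose scope overlaps the element types touched by the new operations are re-checked.

# Sketch: incremental inconsistency checking driven by construction operations.
# All names are illustrative; the paper's actual representation may differ.
from dataclasses import dataclass
from typing import Callable, FrozenSet

@dataclass(frozen=True)
class ModelOp:
    kind: str              # e.g. "create", "setProperty", "addReference"
    element_type: str      # metamodel type the operation touches

@dataclass(frozen=True)
class Rule:
    name: str
    scope: FrozenSet[str]  # element types whose changes can affect this rule
    check: Callable        # full check over the model; returns violations

def incremental_check(model, increment, rules):
    """Re-check only the rules impacted by the increment's operations."""
    touched = {op.element_type for op in increment}
    impacted = [r for r in rules if r.scope & touched]
    model.extend(increment)            # the model is a list of ModelOps
    return {r.name: r.check(model) for r in impacted}
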
Reasoning on UML Conceptual Schemas with Operations

A conceptual schema specifies the relevant information about the domain and how this information changes as a result of the execution of operations. The purpose of reasoning on a conceptual schema is to check whether the conceptual schema is correctly specified. This task is not fully formalizable, so it is desirable to provide the designer with tools that assist him or her in the validation process. To this end, we present a method to translate a conceptual schema with operations into logic, and then propose a set of validation tests that allow assessing the (un)correctness of the schema. These tests are formulated in such a way that a generic reasoning method can be used to check them. To show the feasibility of our approach, we use an implementation of an existing reasoning method.

Anna Queralt, Ernest Teniente
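
For intuition only (our own assumed example, not taken from the paper): a minimum-multiplicity constraint such as “every account is owned by at least one customer” translates into a first-order formula that a generic reasoning method can then test, for instance:

    \forall a\,\big(\mathit{Account}(a) \rightarrow \exists c\,(\mathit{Customer}(c) \land \mathit{Owns}(c,a))\big)

A validation test then amounts to asking the reasoner whether such formulas, together with the logic encoding of the operations' pre- and postconditions, are jointly satisfiable.
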

Conceptual Modelling 1

Towards the Industrialization of Data Migration: Concepts and Patterns for Standard Software Implementation Projects

When a bank replaces its core-banking information system, it must migrate data such as accounts from the old system into the new one. Migrating data is necessary but not a catalyst for new business opportunities. The consequence is cost pressure, to be addressed by an efficient software development process together with an industrialization of the development. Industrialization requires defining the deliverables; therefore, our data migration architecture extends the ETL process with migration objectives to be reached in each step. Industrialization also means standardizing the implementation, e.g. with patterns. We present data migration patterns describing the typical transformations found in the data migration application domain. Finally, testing is an important issue, because test-case-based testing cannot guarantee that not a single customer is lost. Reconciliation can do so by checking whether each object in the old and the new system has a counterpart in the other system.

Klaus Haller
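
As a hedged illustration of the reconciliation idea (a sketch, not the paper's implementation): rather than relying on sampled test cases, one checks bidirectionally that every object key in the old system has a counterpart in the new system, and vice versa.

# Sketch: reconciliation by bidirectional counterpart checking.
# Real migrations would compare normalized business keys
# (e.g. account numbers), not raw rows.
def reconcile(old_objects, new_objects, key):
    old_keys = {key(o) for o in old_objects}
    new_keys = {key(o) for o in new_objects}
    return {
        "lost_in_migration": old_keys - new_keys,   # only in the old system
        "unexpected_in_new": new_keys - old_keys,   # only in the new system
    }

old_accounts = [{"account_no": "A1"}, {"account_no": "A2"}]
new_accounts = [{"account_no": "A1"}]
report = reconcile(old_accounts, new_accounts, key=lambda a: a["account_no"])
print(report)  # {'lost_in_migration': {'A2'}, 'unexpected_in_new': set()}
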
Defining and Using Schematic Correspondences for Automatically Generating Schema Mappings

Mapping specification has been recognised as a critical bottleneck to the large-scale deployment of data integration systems. A mapping describes how data structured under one schema are transformed into data structured under a different schema, and is central to data integration and data exchange systems. In this paper, we argue that the classical approach of correspondence identification followed by (manual) mapping generation can be simplified through the removal of the second step by judicious refinement of the correspondences captured. As a step in this direction, we present a model for schematic correspondences that builds on and extends the classification proposed by Kim et al. to cater for the automatic derivation of mappings, and an algorithm that shows how correspondences specified in the model can be used for deriving schema mappings. The approach is illustrated using a case study from integration in proteomics.

Lu Mao, Khalid Belhajjame, Norman W. Paton, Alvaro A. A. Fernandes
The Problem of Transitivity of Part-Whole Relations in Conceptual Modeling Revisited

Parthood is a relation of fundamental importance in a number of disciplines including cognitive science, linguistics and conceptual modeling. However, one classical problem for conceptual modeling theories of parthood is deciding on the transitivity of these relations. This issue is of great importance since transitivity plays a fundamental role both conceptually (e.g., to afford inferences in problem-solving) and computationally (e.g., to afford propagations of properties and events in a transitive chain). In this article we address this problem by presenting a solution to the case of part-whole relations between functional complexes, which are the most common types of entities represented in conceptual models. This solution comes in two parts. Firstly, we present a formal theory founded on results from formal ontology and linguistics. Secondly, we use this theory to provide a number of visual patterns that can be used to isolate scopes of transitivity in part-whole relations represented in diagrams.

Giancarlo Guizzardi

Conceptual Modelling 2

Using UML as a Domain-Specific Modeling Language: A Proposal for Automatic Generation of UML Profiles

Nowadays, several MDD approaches have defined Domain-Specific Modeling Languages (DSMLs) oriented to representing their particular semantics. However, since UML is the standard language for software modeling, many of these MDD approaches are trying to integrate their semantics into UML in order to use UML as a DSML. The use of UML profiles is a recommended strategy to perform this integration, allowing, among other benefits, the use of existing UML modeling tools. However, no standardized UML profile generation process can be found in the literature on UML profile construction. Therefore, this paper presents a process that integrates a DSML into UML through the automatic generation of a UML profile. This process facilitates the correct use of UML in an MDD context and provides a solution that takes advantage of the benefits of both UML and DSMLs.

Giovanni Giachetti, Beatriz Marín, Oscar Pastor
Verifying Action Semantics Specifications in UML Behavioral Models

MDD and MDA approaches require capturing the behavior of UML models in sufficient detail so that the models can be automatically implemented/executed in the production environment. To this end, Action Semantics (AS) were added to the UML specification as the fundamental unit of behavior specification. Actions are the basis for defining the fine-grained behavior of operations, activity diagrams, interaction diagrams and state machines. Unfortunately, current proposals devoted to the verification of behavioral schemas tend to skip the analysis of the actions they may include. The main goal of this paper is to cover this gap by presenting several techniques aimed at verifying AS specifications. Our techniques are based on the static analysis of the dependencies between the different actions included in the behavioral schema. For incorrect specifications, our method returns meaningful feedback that helps repair the inconsistency.

Elena Planas, Jordi Cabot, Cristina Gómez
Using Macromodels to Manage Collections of Related Models

The creation and manipulation of multiple related models is common in software development; however, there are few tools that help to manage such collections of models. We propose a framework in which different types of model relationships – such as submodelOf and refinementOf – can be formally defined and used with a new type of model, called a macromodel, to express the required relationships between models at a high level of abstraction. Macromodels can be used to support the development, comprehension, consistency management and evolution of sets of related models. We illustrate the framework with a detailed example from the telecommunications industry and describe a prototype implementation.

Rick Salay, John Mylopoulos, Steve Easterbrook
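
A minimal sketch of the macromodel idea under assumed names (the Macromodel class and checker mapping are ours; the paper defines relationship types formally): a macromodel records models as nodes and required relationships as typed edges, which tooling can then check against the actual model contents.

# Sketch: a macromodel as a typed relationship graph over models.
class Macromodel:
    def __init__(self):
        self.models = {}        # name -> model (opaque here)
        self.relations = []     # (rel_type, source, target)

    def add_model(self, name, model):
        self.models[name] = model

    def require(self, rel_type, source, target):
        self.relations.append((rel_type, source, target))

    def check(self, checkers):
        """checkers: rel_type -> predicate(source_model, target_model)."""
        return [(t, s, d) for (t, s, d) in self.relations
                if not checkers[t](self.models[s], self.models[d])]

mm = Macromodel()
mm.add_model("core", {"A", "B", "C"})
mm.add_model("view", {"A", "B"})
mm.require("submodelOf", "view", "core")
violations = mm.check({"submodelOf": lambda sub, sup: sub <= sup})
print(violations)  # [] -- 'view' is indeed a submodel of 'core'
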

Quality and Data Integration

A Case Study of Defect Introduction Mechanisms

It is well known that software production organizations spend a sizeable amount of their project budget rectifying the defects introduced into software systems during the development process. An in-depth understanding of the mechanisms that give rise to defects is an essential step towards reducing defects in software systems. In line with this objective, we conducted a case study of defect introduction mechanisms on three major components of an industrial enterprise resource planning software system, and observed that external factors, including incomplete requirements specifications, the adoption of new and unfamiliar technologies, lack of requirements traceability, and the lack of proactive and explicit definition and enforcement of user interface consistency rules, account for 59% of the defects. These findings suggest areas where effort should be directed.

Arbi Ghazarian
Measuring and Comparing Effectiveness of Data Quality Techniques

Poor quality data may be detected and corrected by performing various quality assurance activities that rely on techniques with different efficacy and cost. In this paper, we propose a quantitative approach for measuring and comparing the effectiveness of these data quality (DQ) techniques. Our definitions of effectiveness are inspired by measures proposed in Information Retrieval. We show how the effectiveness of a DQ technique can be mathematically estimated in general cases, using formal techniques that are based on probabilistic assumptions. We then show how the resulting effectiveness formulas can be used to evaluate, compare and make choices involving DQ techniques.

Lei Jiang, Daniele Barone, Alex Borgida, John Mylopoulos
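
A small worked sketch of the IR-inspired framing (our own illustration, under the common assumption that effectiveness is measured precision/recall-style): treat the errors a technique flags as the “retrieved” items and the true errors as the “relevant” ones.

# Sketch: IR-style effectiveness of a data-quality technique.
# flagged = items the technique reports as erroneous; true_errors = ground truth.
def effectiveness(flagged, true_errors):
    tp = len(flagged & true_errors)
    precision = tp / len(flagged) if flagged else 1.0
    recall = tp / len(true_errors) if true_errors else 1.0
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1

print(effectiveness(flagged={1, 2, 3, 4}, true_errors={2, 3, 4, 5, 6}))
# (0.75, 0.6, 0.666...) -- 3 of 4 flags correct, 3 of 5 errors found
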
Improving Model Quality Using Diagram Coverage Criteria

Every model has a purpose, and the quality of a model ultimately measures its fitness relative to this purpose. In practice, models are created in a piecemeal fashion through the construction of many diagrams that structure a model into parts that together offer a coherent presentation of the content of the model. Each diagram also has a purpose – its role in the presentation of the model – and this determines what part of the model the diagram is intended to present. In this paper, we investigate what is involved in formally characterizing this intended content of diagrams as coverage criteria and show how doing this helps to improve model quality and support automation in the modeling process. We illustrate the approach and its benefits with a case study from the telecommunications industry.

Rick Salay, John Mylopoulos

Goal-Oriented Requirements Engineering

A Method for the Definition of Metrics over i* Models

The i* framework has been widely adopted by the information systems community for goal- and agent-oriented modeling and analysis. One of its potential benefits is the assessment of the properties of the modeled socio-technical system. In this respect, the definition and evaluation of metrics may play a fundamental role. We are interested in porting to the i* framework metrics that have already been defined and validated in other domains. After some experimentation with i* metrics in this context, the complexity inherent in their definition has driven us to build a method for defining them. In this paper, we present the resulting method, iMDFM, which is structured into four steps: domain analysis, domain metrics analysis, metrics formulation and framework update. We apply our approach to an existing suite of metrics for measuring business process performance and draw some observations from this experiment.

Xavier Franch
Preference Model Driven Services Selection

Service, as a computing and business paradigm, is attracting growing attention and is being recognized and adopted by more and more people. All involved players inevitably face service selection situations in which multiple quality-of-service criteria need to be taken into account, and in which complex interrelationships between different impact factors and actors need to be understood and traded off. In this paper, we propose using goal- and agent-based preference models, represented with the annotated NFR/i* framework, to drive these decision-making activities. In particular, we present how we enhance the modeling language with quantitative preference information based on input from domain experts and end users, how softgoal interrelationship graphs can be used to group impact factors with a common focus, and how actor dependency models can be used to represent and evaluate alternative service decisions. We illustrate the proposed approach with an example scenario of provider selection for logistics.

Wenting Ma, Lin Liu, Haihua Xie, Hongyu Zhang, Jinglei Yin
Secure Information Systems Engineering: Experiences and Lessons Learned from Two Health Care Projects

At CAiSE 2006, we presented a framework to support the development of secure information systems. The framework is based on the integration of two security-aware approaches: the Secure Tropos methodology, which provides an approach for security requirements elicitation, and the UMLsec approach, which allows one to include the security requirements in design models and offers tools for security analysis. In this paper we reflect on the usage of this framework and report our experiences of applying it to two different industrial case studies from the health care domain; due to lack of space, we describe only one of them. Our findings demonstrate that the framework's support for considering security issues from the early stages and throughout the development process can result in a substantial improvement in the security of the analysed systems.

Haralambos Mouratidis, Ali Sunyaev, Jan Jurjens

Requirements and Architecture

An Architecture for Requirements-Driven Self-reconfiguration

Self-reconfiguration is the capability of a system to autonomously switch from one configuration to a better one in response to failure or context change. There is growing demand for software systems able to self-reconfigure, and specifically for systems that can fulfill their requirements in dynamic environments. We propose a conceptual architecture that provides systems with self-reconfiguration capabilities, enacting a model-based adaptation process based on requirements models. We describe the logical view on our architecture for self-reconfiguration, then detail the main mechanisms to monitor for and diagnose failures. We present a case study where a self-reconfiguring system assists a patient in performing daily tasks, such as getting breakfast, within her home. The challenge for the system is to fulfill its mission regardless of the context, and to compensate for failures caused by patient inaction or other omissions in the system's environment.

Fabiano Dalpiaz, Paolo Giorgini, John Mylopoulos
Automated Context-Aware Service Selection for Collaborative Systems

Service-Oriented Architecture (SOA) can provide a paradigm for constructing context-aware collaboration systems. In particular, the promise of inexpensive context-aware collaboration devices and of context-awareness for selecting suitable services at run-time has prompted growing adoption of SOA in collaborative systems. In this paper, we introduce an approach for selecting the most suitable service within an SOA-based collaboration system, where suitability depends on the user's context. The approach includes context modelling, the generation of context-aware selection criteria and a suitable service selection methodology.

Hong Qing Yu, Stephan Reiff-Marganiec
Development Framework for Mobile Social Applications

Developments in mobile phone technologies have opened the way for a new generation of mobile social applications that allow users to interact and share information. However, current programming platforms for mobile phones provide limited support for information management and sharing, requiring developers to deal with low-level issues of data persistence, data exchange and vicinity sensing. We present a framework designed to support the requirements of mobile social applications based on a notion of P2P data collections and a flexible event model that controls how and when data is exchanged. We show how the framework can be used by describing the development of a mobile application for collaborative filtering based on opportunistic information sharing.

Alexandre de Spindler, Michael Grossniklaus, Moira C. Norrie

Service Orientation

Evolving Services from a Contractual Perspective

In an environment of constant change, driven by competition and innovation, a service can rarely remain stable, especially when it depends on other services to fulfill its functionality. However, uncontrolled changes can easily break the existing relationships between a service and its environment (its customers and providers). In this paper we present an approach that allows for the controlled evolution of a service by leveraging the loosely coupled nature of the SOA paradigm. More specifically, we formalize the notion of contracts between interacting services that enable their independent evolution, and we investigate under which criteria changes to a contract-bound service, or even to the contract itself, can be transparent to the environment of the service.

Vasilios Andrikopoulos, Salima Benbernou, Mike P. Papazoglou
Efficient IR-Style Search over Web Services

In service-based systems, one of the most important problems is how to discover desired web services. In this paper, we propose a novel IR-style mechanism for discovering and ranking web services automatically. In particular, we introduce the notion of preference degree for web services and define service relevance and service importance as two desired properties for measuring it. Furthermore, we give algorithms for computing the relevance and importance of services. Experimental results show that the proposed IR-style search strategy is efficient and practical.

Yanan Hao, Jinli Cao, Yanchun Zhang
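
To make the two-property framing concrete (a sketch with assumed toy scores and weighting, not the authors' algorithms): a service's preference degree can be computed as a weighted combination of query relevance and global importance, then used for ranking.

# Sketch: ranking services by a preference degree combining
# query relevance with service importance. Scores are toy values.
def preference_degree(relevance, importance, alpha=0.7):
    return alpha * relevance + (1 - alpha) * importance

services = {
    "WeatherWS":  {"relevance": 0.9, "importance": 0.4},
    "ForecastWS": {"relevance": 0.7, "importance": 0.9},
}
ranked = sorted(services,
                key=lambda s: preference_degree(**services[s]),
                reverse=True)
print(ranked)  # ['ForecastWS', 'WeatherWS'] -- importance tips the ranking
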
Towards a Sustainable Services Innovation in the Construction Sector

In this paper, we report on a business case in the construction sector in which we designed and prototyped an innovative Web-based distributed document management application. It supports various information exchange and sharing services among the different stakeholders involved in a construction project. The development of the application is based on a service-oriented architecture and follows a systematic model-driven engineering approach. Besides the application itself, the paper also reports on a Sustainable Services Innovation Process (S2IP) guiding our activities related to the valorization and successful technology transfer of a demonstrator into an innovative product. We illustrate how this innovation process has been applied to this business case, where a networked value constellation has been identified and realized with professionals of the construction sector (including a standardization body), software houses and our technology transfer centre.

Sylvain Kubicki, Eric Dubois, Gilles Halin, Annie Guerriero

Web Service Orchestration

P2S: A Methodology to Enable Inter-organizational Process Design through Web Services

With the advent of Service-Oriented Architecture, organizations have experienced services as a platform-independent technology to develop and use simple internal applications or to outsource activities by searching for external services, thus enabling inter-organizational interactions. In this scenario, services are units of work provided by service providers and offered to the other organizations involved in a collaborative business process. Collaboration should be facilitated by guaranteeing a homogeneous description of services at the right level of granularity. We propose a methodology to support the designer of a business process in the identification of the services that compose the process itself. The methodology should allow collaborative partners to standardize process modelling through component services, enabling effective inter-organizational service discovery. The methodology is presented by means of a running example in a real case scenario.

Devis Bianchini, Cinzia Cappiello, Valeria De Antonellis, Barbara Pernici
Composing Time-Aware Web Service Orchestrations

Workflow time management deals with the calculation of temporal thresholds for process activities, which allows forecasts about looming deadline violations. We present a novel approach to transform a web service orchestration into a time-aware orchestration that contains temporal assessment and intervention logic. During process execution, intervention strategies are triggered pro-actively to speed up a late process and to avoid upcoming violations of temporal constraints.

Horst Pichler, Michaela Wenger, Johann Eder
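
A minimal sketch of the threshold idea (assuming, for illustration, a linear sequence of activities with expected durations and one overall deadline): latest-allowed start times are computed backwards, and at run time a pro-active intervention fires as soon as an activity starts later than its threshold.

# Sketch: latest-allowed start times for a linear orchestration.
def latest_starts(durations, deadline):
    thresholds = {}
    t = deadline
    for activity, d in reversed(list(durations.items())):
        t -= d
        thresholds[activity] = t   # starting later than this risks the deadline
    return thresholds

def needs_intervention(activity, actual_start, thresholds):
    return actual_start > thresholds[activity]

durations = {"receive": 2, "check_credit": 5, "ship": 3}   # hours
print(latest_starts(durations, deadline=12))
# {'ship': 9, 'check_credit': 4, 'receive': 2}
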
Asynchronous Timed Web Service-Aware Choreography Analysis

Web services are the main pillar of the Service-Oriented Computing (SOC) paradigm, which enables application integration within and across business organizations. One of the important features of Web services is the choreography aspect, which allows collaborative processes involving multiple services to be captured. In this context, an important line of investigation is choreography compatibility analysis. By choreography compatibility we mean the capability of a set of Web services to actually interact by exchanging messages in a proper manner. Whether a set of services is compatible depends not only on their sequences of messages but also on quantitative properties such as timed properties. In this paper, we investigate an approach for checking the timed compatibility of a choreography in which the Web services support asynchronous timed communications.

Nawal Guermouche, Claude Godart

Value-Driven Modelling

Evaluation Patterns for Analyzing the Costs of Enterprise Information Systems

Introducing enterprise information systems (EIS) is usually associated with high costs. It is therefore crucial to understand the factors that determine or influence these costs. Existing cost analysis methods are difficult to apply; in particular, they are unable to cope with the dynamic interactions of the many technological, organizational and project-driven cost factors that arise specifically in the context of EIS. Picking up this problem, in previous work we introduced the EcoPOST framework to investigate the complex cost structures of EIS engineering projects through qualitative cost evaluation models. This paper extends the framework with a pattern-based approach enabling the reuse of EcoPOST evaluation models. Our patterns not only simplify the design of EcoPOST evaluation models but also improve the quality and comparability of cost evaluations. We thereby further strengthen the EcoPOST framework as a tool that supports EIS engineers in gaining a better understanding of the factors that determine the costs of EIS engineering projects.

Bela Mutschler, Manfred Reichert
Using the REA Ontology to Create Interoperability between E-Collaboration Modeling Standards

E-collaboration modeling standards like ISO/IEC 15944 and the UN/CEFACT Modeling Methodology (UMM) provide techniques, terms and reference models for modeling collaborative business processes. They offer a standardized approach for business partners to codify the business conventions, agreements and rules that govern business collaborations and to share business process information. Although effective in creating interoperability between organizations at the business process level, these standards require prospective business partners to commit to the same modeling standard. In this paper we show how the REA enterprise ontology can be used to semantically relate the ISO/IEC 15944 and UMM e-collaboration standards. Using the REA ontology as a shared business collaboration ontology, business partners can create interoperability between their respective business process models without having to use the same modeling standard.

Frederik Gailly, Geert Poels
Value-Based Service Modeling and Design: Toward a Unified View of Services

Service-oriented architectures are the upcoming business standard for realizing enterprise information systems, thus creating a need for analysis and design methods that are truly service-oriented. Most research on this topic so far takes a software engineering perspective. For a proper alignment between business and IT, a service perspective at the business level is needed as well. In this paper, a unified view of services is introduced by means of a service ontology, service classification and service layer architecture. On the basis of these service models, a service design method is proposed and applied to a case from the literature. The design method capitalizes on existing value modeling approaches.

Hans Weigand, Paul Johannesson, Birger Andersson, Maria Bergholtz

Workflow

Data-Flow Anti-patterns: Discovering Data-Flow Errors in Workflows

Despite the abundance of analysis techniques to discover control-flow errors in workflow designs, there is hardly any support for data-flow verification. Most techniques simply abstract from data, while data dependencies can be the source of all kinds of errors. This paper focuses on the discovery of data-flow errors in workflows. We present an analysis approach that uses so-called “anti-patterns” expressed in terms of a temporal logic. Typical errors include accessing a data element that is not yet available or updating a data element while it may be read in a parallel branch. Since the anti-patterns are expressed in terms of temporal logic, the well-known, stable, adaptable, and effective model-checking techniques can be used to discover data-flow errors. Moreover, our approach enables a seamless integration of control flow and data-flow verification.

Nikola Trčka, Wil M. P. van der Aalst, Natalia Sidorova
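
For intuition (our own illustrative formula, not necessarily the paper's exact encoding), the “read before available” anti-pattern for a data element d can be phrased in branching-time temporal logic over workflow states as: there exists an execution path on which d is read while it has never yet been written:

    \mathit{badRead}(d) \;\equiv\; \mathrm{E}\big[\,\neg\,\mathit{written}(d)\ \,\mathrm{U}\ \,\mathit{read}(d)\,\big]

A model checker that finds a witness for such a formula has thereby discovered a concrete erroneous execution of the workflow.
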
Process Algebra-Based Query Workflows

In this paper we combine ideas from workflow processing and database query answering. Tailoring process algebras like Milner's Calculus of Communicating Systems (CCS) to relational dataflow makes them a natural candidate for specifying data-oriented workflows in a declarative way. In addition to the classical evaluation of relational operator trees, the combination with the CCS control structures provides (guarded) alternatives and test-based iterations using recursive process fragment definitions. For the actual atomic constituents of the process, language concepts from the relational world, like queries, but also abstract datatypes, e.g. graphs, can be embedded.

We illustrate the advantages of the approach by an application scenario with remote, heterogeneous sources and Web Services that return their results asynchronously. The presented approach has been implemented in a prototype.

Thomas Hornung, Wolfgang May, Georg Lausen
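
For flavour (an assumed toy process in standard CCS notation, not the paper's concrete syntax): a workflow that issues a query and then either loads the result or retries is naturally written as a recursive definition with a guarded alternative:

    \mathit{Wf} \;\stackrel{\mathrm{def}}{=}\; \overline{\mathit{query}}.\big(\mathit{ok}.\mathit{Load} \;+\; \mathit{fail}.\mathit{Wf}\big)

Here the output action issues the query, the choice guards the two continuations on the outcome, and the recursive occurrence of Wf gives the test-based iteration.
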
ETL Workflow Analysis and Verification Using Backwards Constraint Propagation

One major contribution of data warehouses is to support better decision making by facilitating data analysis, and therefore data quality is of primary importance. ETL is the process that extracts, transforms, and ultimately loads data into target warehouses. Although ETL workflows can be designed with ETL tools, data exceptions are largely left to human analysis and are handled inadequately. Early detection of exceptions helps to improve the stability and efficiency of ETL workflows. To achieve this goal, a novel approach, Backwards Constraint Propagation (BCP), is proposed that automatically analyzes ETL workflows and verifies the target-end restrictions at their earliest points. BCP builds an ETL graph out of a given ETL workflow, encodes the target-end restrictions as integrity constraints, and propagates them backwards from target to sources through the ETL graph by applying constraint projection rules. We show that BCP supports most relational algebra operators and data transformation functions.

Jie Liu, Senlin Liang, Dan Ye, Jun Wei, Tao Huang
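
A toy sketch of the propagation idea (assuming a linear ETL chain and simple interval constraints; the paper's projection rules cover far more operators): a target-end constraint such as a value range is pushed backwards through each transformation by inverting it.

# Sketch: pushing a target-end range constraint backwards through
# a chain of invertible transformations toward the source.
# Each step maps a source value x forward; its inverse maps the
# constraint bounds back. Real BCP uses per-operator projection rules.
steps = [
    ("to_cents", lambda x: x * 100, lambda lo, hi: (lo / 100, hi / 100)),
    ("add_fee",  lambda x: x + 5,   lambda lo, hi: (lo - 5, hi - 5)),
]

def propagate_backwards(target_bounds, steps):
    lo, hi = target_bounds
    trace = [("target", (lo, hi))]
    for name, _fwd, inv in reversed(steps):
        lo, hi = inv(lo, hi)
        trace.append((f"before {name}", (lo, hi)))
    return trace

for where, bounds in propagate_backwards((0, 10_000), steps):
    print(where, bounds)
# target (0, 10000); before add_fee (-5, 9995); before to_cents (-0.05, 99.95)
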

Business Process Modelling

The Declarative Approach to Business Process Execution: An Empirical Test

Declarative approaches have been proposed to counter the limited flexibility of the traditional imperative modeling paradigm, but few empirical insights are available into their actual strengths and usage. In particular, it is unclear whether end-users are really capable of adjusting a particular plan to execute a business process when using a declarative approach. Our paper addresses this knowledge gap by describing the design, execution, and results of a controlled experiment in which varying levels of constraints are imposed on the way a group of subjects can execute a process. The results suggest that our subjects can effectively deal with increased levels of constraints when relying on a declarative approach. This outcome supports the viability of the approach, justifying its further development and application.

Barbara Weber, Hajo A. Reijers, Stefan Zugal, Werner Wild
Configurable Process Models: Experiences from a Municipality Case Study

Configurable process models integrate different variants of a business process into a single model. Through configuration, users of such models can then combine the variants to derive a process model that optimally fits their individual needs. While techniques for such models were suggested in previous research, this paper presents a case study in which these techniques were extensively tested on a real-world scenario. We gathered information from four Dutch municipalities on registration processes executed on a daily basis. For each process we identified variations among the municipalities and integrated them into a single, configurable process model, which can be executed in the YAWL workflow environment. We then evaluated the approach through interviews with organizations that support municipalities in organizing and executing their processes. The paper reports on both the feedback of the interviewed partners and our own observations during model creation.

Florian Gottschalk, Teun A. C. Wagemakers, Monique H. Jansen-Vullers, Wil M. P. van der Aalst, Marcello La Rosa
Business Process Modeling: Current Issues and Future Challenges

Business process modeling has undoubtedly emerged as a popular and relevant practice in Information Systems. Despite being an actively researched field, anecdotal evidence and experiences suggest that the focus of the research community is not always well aligned with the needs of industry. The main aim of this paper is, accordingly, to explore the current issues and the future challenges in business process modeling, as perceived by three key stakeholder groups (academics, practitioners, and tool vendors). We present the results of a global Delphi study with these three groups of stakeholders, and discuss the findings and their implications for research and practice. Our findings suggest that the critical areas of concern are standardization of modeling approaches, identification of the value proposition of business process modeling, and model-driven process execution. These areas are also expected to persist as business process modeling roadblocks in the future.

Marta Indulska, Jan Recker, Michael Rosemann, Peter Green

Requirements Engineering

Deriving Information Requirements from Responsibility Models

This paper describes research in understanding the requirements for complex information systems that are constructed from one or more generic COTS systems. We argue that, in these cases, behavioural requirements are largely defined by the underlying system and that the goal of the requirements engineering process is to understand the information requirements of system stakeholders. We discuss this notion of information requirements and propose that an understanding of how a socio-technical system is structured in terms of responsibilities is an effective way of discovering this type of requirement. We introduce the idea of responsibility modelling and show, using an example drawn from the domain of emergency planning, how a responsibility model can be used to derive information requirements for a system that coordinates the multiple agencies dealing with an emergency.

Ian Sommerville, Russell Lock, Tim Storer, John Dobson
Communication Analysis: A Requirements Engineering Method for Information Systems

Developing Information Systems (ISs) is a hard task for which Requirements Engineering (RE) offers a good starting point. ISs can be viewed as a support for organisational communication; therefore, we argue in favour of communication-oriented RE methods. This paper presents Communication Analysis, a method for IS development and computerisation, with a focus on requirements modelling techniques. Two novel techniques are described, namely the Communicative Event Diagram and Communication Structures. Both are based on sound theory, are accompanied by prescriptive guidelines (such as unity criteria), and are illustrated by means of a practical example.

Sergio España, Arturo González, Óscar Pastor
Spectrum Analysis for Quality Requirements by Using a Term-Characteristics Map

Quality requirements are scattered over a requirements specification, which makes it hard to measure and trace them when validating the specification against stakeholders’ needs. We have previously proposed a technique called “spectrum analysis for quality requirements”, which enables analysts to sort a requirements specification so as to measure and track the quality requirements it contains. However, current spectrum analysis depends largely on the expertise of each analyst; it therefore takes substantial effort to perform, and the experience gained is hard to reuse. We introduce domain knowledge called a term-characteristics map (TCM) to improve spectrum analysis for quality requirements. Through several experiments, we evaluated the improved spectrum analysis.

Haruhiko Kaiya, Masaaki Tanigawa, Shunichi Suzuki, Tomonori Sato, Kenji Kaijiri
Backmatter
Metadata
Title
Advanced Information Systems Engineering
Editors
Pascal van Eck
Jaap Gordijn
Roel Wieringa
Copyright Year
2009
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-02144-2
Print ISBN
978-3-642-02143-5
DOI
https://doi.org/10.1007/978-3-642-02144-2
