
Table of Contents

Frontmatter

Research Papers

Service and Business Process Modeling (1)

Business Process Model Abstraction Based on Behavioral Profiles

A variety of drivers for process modeling efforts, from low-level service orchestration to high-level decision support, results in many process models describing one business process. Depending on the modeling purpose, these models differ with respect to the model granularity. Business process model abstraction (BPMA) emerged as a technique that given a process model delivers a high-level process representation containing more coarse-grained activities and overall ordering constraints between them. Thereby, BPMA reduces the number of models capturing the same business process on different abstraction levels. In this paper, we present an abstraction approach that derives control flow dependencies for activities of an abstract model, once the groups of related activities are selected for aggregation. In contrast to the existing work, we allow for arbitrary activity groupings. To this end, we employ the behavioral profile notion that captures behavioral characteristics of a process model. Based on the original model and the activity grouping, we compute a new behavioral profile used for synthesis of the abstract process model.

Sergey Smirnov, Matthias Weidlich, Jan Mendling
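
For readers unfamiliar with behavioral profiles, the following is a minimal, hypothetical sketch of the relations they capture (strict order, exclusive, interleaving). It derives them from example traces rather than from a process model as in the paper, and all names are illustrative.

```python
# Hedged sketch: behavioral-profile-style relations computed from example traces.
# The original notion is defined over a process model; this trace-based variant
# is a simplification for illustration only.
from itertools import combinations

def weak_order(traces):
    """(a, b) is in the weak order if a occurs before b in at least one trace."""
    order = set()
    for trace in traces:
        for i, a in enumerate(trace):
            for b in trace[i + 1:]:
                order.add((a, b))
    return order

def behavioral_profile(traces):
    order = weak_order(traces)
    activities = {a for t in traces for a in t}
    profile = {}
    for a, b in combinations(sorted(activities), 2):
        if (a, b) in order and (b, a) in order:
            profile[(a, b)] = "interleaving"        # may occur in either order
        elif (a, b) in order:
            profile[(a, b)] = "strict order"        # a always before b
        elif (b, a) in order:
            profile[(a, b)] = "reverse strict order"
        else:
            profile[(a, b)] = "exclusive"           # never occur together
    return profile

print(behavioral_profile([["a", "b", "c"], ["a", "c"]]))
```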

Root-Cause Analysis of Design-Time Compliance Violations on the Basis of Property Patterns

Today's business environment demands a high degree of compliance of business processes with business rules, policies, regulations and laws. Compliance regulations, such as Sarbanes-Oxley, force enterprises to continuously review their business processes and service-enabled applications and ensure that they satisfy the set of relevant compliance constraints. Compliance management should be considered from the very early stages of business process design. In this paper, a taxonomy of compliance constraints for business processes is introduced based on property specification patterns, where patterns can be used to facilitate the formal specification of compliance constraints. This taxonomy serves as the backbone of the root-cause analysis, which is conducted to reason about and eventually resolve design-time compliance violations. Based on the root-cause analysis, appropriate guidelines and instructions can be provided as remedies to alleviate design-time compliance deviations in service-enabled business processes.

Amal Elgammal, Oktay Turetken, Willem-Jan van den Heuvel, Mike Papazoglou

Artifact-Centric Choreographies

Classical notations for service collaborations focus either on the control flow of participating services (interacting models) or the order in which messages are exchanged (interaction models). None of these approaches emphasizes the evolution of data involved in the collaboration. In contrast, artifact-centric models pursue the converse strategy and begin with a specification of data objects.

This paper extends existing concepts for artifact-centric business process models with the concepts of agents and locations. By making explicit who is accessing an artifact and where the artifact is located, we are able to automatically generate an interaction model that can serve as a contract between the agents and by construction makes sure that specified global goal states on the involved artifacts are reached.

Niels Lohmann, Karsten Wolf

Service and Business Process Modeling (2)

Resolving Business Process Interference via Dynamic Reconfiguration

For business processes supported by service-oriented information systems, concurrent execution of business processes may still yield undesired business outcomes as a result of process interference. For instance, concurrent processes may partially depend on a semantically identical process variable, causing inconsistencies during process execution. Current design-time verification of service-based processes is not always sufficient to identify these issues. To identify and resolve potentially erroneous situations, run-time handling of interference is required. In this paper, dependency scopes are defined to represent the dependencies between processes and data sources. In addition, intervention patterns are developed to repair inconsistencies using dynamic reconfiguration during execution of the process. These concepts are implemented on top of a BPMS platform and tested on a real case study based on the implementation of a Dutch law in e-Government.

Nick R. T. P. van Beest, Pavel Bulanov, Hans Wortmann, Alexander Lazovik

Linked Data and Service Orientation

Linked Data has become a popular term and method for exposing structured data on the Web. There currently are two schools of thought when it comes to defining what Linked Data actually is, with one school defining it more narrowly as a set of principles describing how to publish data based on Semantic Web technologies, whereas the other school more generally defines it as any form of properly linked data that follows the Representational State Transfer (REST) architectural style of the Web. In this paper, we describe and compare these two schools of thought with a particular emphasis on how well they support principles of service orientation.

Erik Wilde

Risk Sensitive Value of Changed Information for Selective Querying of Web Services

A key challenge associated with compositions is that they must often function in volatile environments, where the parameters of the component Web services may change during execution. Failure to adapt to such changes may result in sub-optimal compositions. Value of changed information (VOC) offers a principled and recognized approach for selectively querying component services for their revised information. It does so in a rational (risk neutral) way. However, risk preferences often constitute an important part of the organization’s decision analysis cycle and determine its desired business goals. We show how VOC may be generalized to consider preferences such as risk seeking and aversion using a utility based approach. Importantly, considerations of risk preferences lead to different services being used in the compositions and selected for querying for revised information. This is intuitive and provides evidence toward the validity of our approach for modeling risk preferences in VOC.

John Harney, Prashant Doshi
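
As a rough illustration of the risk-preference idea (not the authors' VOC formulation), the sketch below contrasts a risk-neutral expected value with an exponential, risk-averse utility; the payoff numbers and the risk_tolerance parameter are invented.

```python
# Hypothetical sketch: risk-neutral vs. risk-averse valuation of service outcomes.
import math

def exponential_utility(payoff, risk_tolerance):
    """Concave (risk-averse) utility; large risk_tolerance approaches risk neutrality."""
    return 1.0 - math.exp(-payoff / risk_tolerance)

def expected_value(outcomes):
    """Risk-neutral expectation over (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

def expected_utility(outcomes, risk_tolerance):
    """Risk-sensitive expectation: apply the utility before averaging."""
    return sum(p * exponential_utility(x, risk_tolerance) for p, x in outcomes)

# A volatile service with the same mean payoff as a stable one, but higher variance.
stable   = [(1.0, 50.0)]
volatile = [(0.5, 0.0), (0.5, 100.0)]

print(expected_value(stable), expected_value(volatile))              # 50.0 50.0
print(expected_utility(stable, 40.0), expected_utility(volatile, 40.0))
# A risk-averse agent prefers the stable service even though the means are equal.
```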

Service Management (1)

Adaptive Service Composition Based on Reinforcement Learning

The services on the Internet are evolving. The various properties of the services, such as their prices and performance, keep changing. To ensure user satisfaction in the long run, it is desirable that a service composition can automatically adapt to these changes. To this end, we propose a mechanism for adaptive service composition. The mechanism requires no prior knowledge about services’ quality, while being able to achieve the optimal composition solution by leveraging the technology of reinforcement learning. In addition, it allows a composite service to dynamically adjust itself to fit a varying environment, where the properties of the component services continue changing. We present the design of our mechanism, and demonstrate its effectiveness through an extensive experimental evaluation.

Hongbing Wang, Xuan Zhou, Xiang Zhou, Weihong Liu, Wenya Li, Athman Bouguettaya
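
A minimal sketch of the general idea of learning service bindings from observed QoS, assuming a tabular Q-learning formulation for a sequential workflow; the reward function, hyper-parameters and service names are illustrative and not taken from the paper.

```python
# Hedged sketch: tabular Q-learning over (task, service) pairs.
import random

def q_learning_composition(tasks, candidates, reward_fn,
                           episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    # Q[(task, service)] estimates the long-run value of binding service to task.
    Q = {(t, s): 0.0 for t in tasks for s in candidates[t]}
    for _ in range(episodes):
        for i, task in enumerate(tasks):
            options = candidates[task]
            if random.random() < epsilon:                      # explore
                service = random.choice(options)
            else:                                              # exploit
                service = max(options, key=lambda s: Q[(task, s)])
            reward = reward_fn(task, service)                  # observed QoS-based reward
            nxt = 0.0
            if i + 1 < len(tasks):
                nxt = max(Q[(tasks[i + 1], s)] for s in candidates[tasks[i + 1]])
            Q[(task, service)] += alpha * (reward + gamma * nxt - Q[(task, service)])
    return {t: max(candidates[t], key=lambda s: Q[(t, s)]) for t in tasks}

# Illustrative usage with an invented reward signal.
tasks = ["book_flight", "book_hotel"]
candidates = {"book_flight": ["f1", "f2"], "book_hotel": ["h1", "h2"]}
policy = q_learning_composition(tasks, candidates,
                                reward_fn=lambda t, s: 1.0 if s.endswith("1") else 0.6)
print(policy)
```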

A Service Execution Control Framework for Policy Enforcement

Service-oriented collective intelligence, which creates new value by combining various programs and data as services, requires many participants. Therefore it is crucial for an infrastructure for service-oriented collective intelligence to satisfy various policies of service providers. Some previous works have proposed methods for service selection and adaptation which are required to satisfy service providers’ policies. However, they do not show how to check if the selected services and adaptation processes certainly satisfy service providers’ policies. In this paper, we propose an execution control framework which realizes service selection and adaptation in order to satisfy service providers’ policies. On the framework, the behaviors of composite services are verified against service providers’ policies based on model checking. We also formally defined the effect of the proposed execution control APIs. This enabled us to update models for verification at runtime and reduce the search space for verification.

Masahiro Tanaka, Yohei Murakami, Donghui Lin

An Integrated Solution for Runtime Compliance Governance in SOA

In response to recent financial scandals (e.g. those involving Enron, Fortis, Parmalat), new regulations for protecting society from the financial and operational risks of companies have been introduced. Therefore, companies are required to assure compliance of their operations with those new regulations as well as those already in place. Regulations are only one example of the compliance sources modern organizations deal with every day. Other sources of compliance include licenses of business partners and other contracts, internal policies, and international standards. The diversity of compliance sources introduces the problem of compliance governance in an organization. In this paper, we propose an integrated solution for runtime compliance governance in Service-Oriented Architectures (SOAs). We show how the proposed solution supports the whole cycle of compliance management: from modeling compliance requirements in domain-specific languages, through monitoring them during process execution, to displaying information about the current state of compliance in dashboards. We focus on the runtime part of the proposed solution and describe it in detail. We apply the developed framework in a real case study coming from the EU FP7 project COMPAS, and this case study is used throughout the paper to illustrate our solution.

Aliaksandr Birukou, Vincenzo D’Andrea, Frank Leymann, Jacek Serafinski, Patricia Silveira, Steve Strauch, Marek Tluczek

Service Management (2)

A Differentiation-Aware Fault-Tolerant Framework for Web Services

Late binding to services in business-to-business operations poses a serious problem for dependable system operation and trust. If third-party services are to be trusted they need to be dependable. One way to address the problem is by adding fault tolerance (FT) support to service-oriented systems. However, FT techniques are yet to be adopted in a systematic way within service-oriented computing. Current FT frameworks for service-oriented computing are largely protocol-specific, have poor service quality differentiation and poor support for the FT process model. This paper describes a service-differentiation-aware FT framework based on the FT process model that can be used to support service-oriented computing.

Gerald Kotonya, Stephen Hall

Repair vs. Recomposition for Broken Service Compositions

Service composition supports the automatic construction of value-added distributed applications. However, this is nowadays mainly a static affair, with compositions being built once and for all. Moving from a static to a dynamic world, where both available services and needs may change, requires automated techniques to correct broken compositions. Recomposition is a working solution, but it requires rebuilding composition models from scratch. With graph planning as the service composition framework, we propose repair as an alternative to recomposition. Rather than discarding broken compositions, repair reuses and corrects them to quickly generate new service compositions. Our approach is completely tool-supported. This enables us to compare repair and recomposition using both a case study and a data set from a service composition benchmark framework.

Yuhong Yan, Pascal Poizat, Ludeng Zhao

Interoperation, Composition and Simulation of Services at Home

Pervasive computing environments such as our future homes are the prototypical example of a dynamic, complex system where Service-Oriented Computing techniques will play an important role. A home equipped with heterogeneous devices, whose services and location constantly change, needs to behave as a coherent system supporting its inhabitants. In this paper, we present a fully implemented architecture for domotic applications which uses the concept of a service as its fundamental abstraction. The architecture distinguishes between a pervasive layer where devices and their basic internetworking live, and a composition layer where services can be dynamically composed as a reaction to user desires or home events. Next to the architecture, we also illustrate a visualization and simulation environment to test home coordination scenarios. From the technical point of view, the implementation uses UPnP as the basic device connection protocol and techniques from Artificial Intelligence planning for composing services at runtime.

Eirini Kaldeli, Ehsan Ullah Warriach, Jaap Bresser, Alexander Lazovik, Marco Aiello

Quality of Service

Efficient QoS-Aware Service Composition with a Probabilistic Service Selection Policy

Service-Oriented Architecture enables the composition of loosely coupled services provided with varying Quality of Service (QoS) levels. Given a composition, finding the set of services that optimizes some QoS attributes under given QoS constraints has been shown to be NP-hard. Until now the problem has been considered only for a single execution, choosing a single service for each workflow element. This contrasts with reality, where services are often executed hundreds and thousands of times. Therefore, we modify the problem to consider repeated executions of services in the long term. We also allow choosing multiple services for the same workflow element according to a probabilistic selection policy. We model this modified problem with Linear Programming, allowing us to solve it optimally in polynomial time. We discuss and evaluate the different applications of our approach, show in which cases it yields the biggest utility gains, and compare it to the original problem.

Adrian Klein, Fuyuki Ishikawa, Shinichi Honiden
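
To illustrate the flavor of a probabilistic selection policy (under simplifying assumptions, not the paper's exact formulation), the following sketch computes routing probabilities for a single workflow element with SciPy's linear programming solver; all cost and response-time values are invented.

```python
# Hedged sketch: choose invocation probabilities over candidate services so that
# expected cost is minimal under an average response-time bound.
from scipy.optimize import linprog

cost      = [4.0, 7.0, 10.0]     # per-invocation cost of candidates s1, s2, s3
resp_time = [900., 400., 150.]   # average response time (ms) of each candidate
max_avg_resp = 500.0             # long-term average response-time bound

# Variables x_i = probability of routing an execution to candidate i:
#   minimize  cost . x
#   s.t.      resp_time . x <= max_avg_resp   (expected response time)
#             sum(x) = 1, 0 <= x_i <= 1
res = linprog(c=cost,
              A_ub=[resp_time], b_ub=[max_avg_resp],
              A_eq=[[1.0, 1.0, 1.0]], b_eq=[1.0],
              bounds=[(0.0, 1.0)] * 3)
print(res.x)   # a probabilistic mix of candidates rather than a single service
```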

Using Real-Time Scheduling Principles in Web Service Clusters to Achieve Predictability of Service Execution

Real-time scheduling algorithms enable applications to achieve predictability in request execution. This paper proposes several request dispatching algorithms based on real-time scheduling principles that enable clusters hosting web services to achieve predictability in service execution. Dispatching decisions are based on request properties (such as deadline, task size and laxity) and they are scheduled to achieve designated deadlines. All algorithms follow three important steps to achieve a high level of predictability. Firstly, requests are scheduled based on their hard deadlines. Secondly, requests are selected for execution based on their laxity. Thirdly, the underlying software infrastructure provides means of achieving predictability with high precision operations. The algorithms use various techniques to increase the number of deadlines met. One decreases the variance of task sizes at each executor while another increases the variance of laxity at an executor. The algorithms are implemented in a real-life cluster using real-time enabled Apache Synapse as the dispatcher and services hosted in real-time aware Apache Axis2 instances. The algorithms are compared with common algorithms used in clusters such as Round-Robin and Class-based dispatching. The empirical results show the proposed algorithms outperform the others by meeting at least 95% of the deadlines compared to less than 10% by the others.

Vidura Gamini Abhaya, Zahir Tari, Peter Bertok
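
A simplified, hypothetical sketch of the laxity-based dispatching idea described above; the admission rule and data fields are assumptions for illustration, not the paper's exact algorithms.

```python
# Hedged sketch: least-laxity-first dispatching with a simple admission check.
import time
from dataclasses import dataclass

@dataclass
class Request:
    name: str
    deadline: float     # absolute time by which the response is due
    exec_time: float    # estimated task size (seconds)

def laxity(req, now):
    """Slack remaining if the request were started right now."""
    return req.deadline - now - req.exec_time

def dispatch(queue):
    """Pick the admissible request with the least laxity; drop hopeless ones."""
    now = time.time()
    admissible = [r for r in queue if laxity(r, now) >= 0.0]
    if not admissible:
        return None                      # nothing can still meet its deadline
    return min(admissible, key=lambda r: laxity(r, now))
```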

Aggregate Quality of Service Computation for Composite Services

This paper addresses the problem of computing the aggregate QoS of a composite service given the QoS of the services participating in the composition. Previous solutions to this problem are restricted to composite services with well-structured orchestration models. Yet, in existing languages such as WS-BPEL and BPMN, orchestration models may be unstructured. This paper lifts this limitation by providing equations to compute the aggregate QoS for general types of irreducible unstructured regions in orchestration models. In conjunction with existing algorithms for decomposing business process models into single-entry-single-exit regions, these functions allow us to cover a larger set of orchestration models than existing QoS aggregation techniques.

Marlon Dumas, Luciano García-Bañuelos, Artem Polyvyanyy, Yong Yang, Liang Zhang
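
For context, the sketch below shows the textbook aggregation rules for well-structured regions (sequence, parallel, exclusive choice) that approaches like the one above generalize to unstructured regions; the two-metric QoS model and branch probabilities are simplifying assumptions.

```python
# Hedged sketch: standard QoS aggregation for structured blocks
# over response time ("rt") and availability ("av").
def seq(blocks):
    """Sequence: response times add up, availabilities multiply."""
    av = 1.0
    for b in blocks:
        av *= b["av"]
    return {"rt": sum(b["rt"] for b in blocks), "av": av}

def par(blocks):
    """Parallel (AND) block: wait for the slowest branch; all branches must succeed."""
    av = 1.0
    for b in blocks:
        av *= b["av"]
    return {"rt": max(b["rt"] for b in blocks), "av": av}

def xor(blocks, probs):
    """Exclusive choice: expected values weighted by branch probabilities."""
    return {"rt": sum(p * b["rt"] for p, b in zip(probs, blocks)),
            "av": sum(p * b["av"] for p, b in zip(probs, blocks))}
```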

Service Science and Design

Creating Context-Adaptive Business Processes

As the dynamicity of today's business environments keeps increasing, there is a need to continuously adapt business processes to respond to the changes in those environments and remain competitive. By using complex event processing, we can discover information that is relevant to our organization, which is usually hidden among the data generated in the environment, and use it to adapt the processes accordingly so as to respond to the changing conditions in an optimal way. Unfortunately, the static nature of business process definitions makes it impossible to adapt them at runtime, and the redeployment of a modified process is required. By using a component-based approach, we can transform existing business processes into dynamically bound components, adding the flexibility needed to adapt the processes at runtime. In this paper we present CEVICHE, a framework that combines the strengths of complex event processing and dynamic business process adaptation, making it possible to respond to the needs of a rapidly changing environment, together with its adaptation language, SBPL, an extension of BPEL that adds flexibility to business processes.

Gabriel Hermosillo, Lionel Seinturier, Laurence Duchien

Statistical Quality Control for Human-Based Electronic Services

Crowdsourcing in the form of human-based electronic services (people services) provides a powerful way of outsourcing tasks to a large crowd of remote workers over the Internet. Research has shown that multiple redundant results delivered by different workers can be aggregated in order to achieve a reliable result. However, existing implementations of this approach are rather inefficient as they multiply the effort for task execution and are not able to guarantee a certain quality level. As a starting point towards an integrated approach for quality management of people services, we have developed a quality management model that combines elements of statistical quality control (SQC) with group decision theory. The contributions of the workers are tracked and weighted individually in order to minimize the quality management effort while guaranteeing a well-defined level of overall result quality. A quantitative analysis of the approach based on an optical character recognition (OCR) scenario confirms the efficiency and reach of the approach.

Robert Kern, Hans Thies, Gerhard Satzger
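
As a rough illustration of combining redundant worker results with individually tracked weights (not the paper's exact SQC model), consider the following sketch; the weighting scheme and stopping rule are assumptions.

```python
# Hedged sketch: accuracy-weighted voting over redundant worker answers,
# requesting more answers until an (illustrative) confidence target is reached.
from collections import defaultdict

def aggregate(answers, accuracy, target_confidence=0.95):
    """answers: list of (worker_id, answer); accuracy: worker_id -> estimated accuracy."""
    score = defaultdict(float)
    for worker, answer in answers:
        score[answer] += accuracy.get(worker, 0.5)   # unknown workers get a neutral weight
    best = max(score, key=score.get)
    confidence = score[best] / sum(score.values())
    done = confidence >= target_confidence
    return best, confidence, done   # if not done, request another redundant answer
```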

A Requirement-Centric Approach to Web Service Modeling, Discovery, and Selection

Service-Oriented Computing (SOC) has gained considerable popularity for implementing Service-Based Applications (SBAs) in a flexible and effective manner. The basic idea of SOC is to understand users' requirements for SBAs first, and then discover and select relevant services (i.e., services that closely fit the functional requirements) offering a high Quality of Service (QoS). Understanding users' requirements is already achieved by existing requirement engineering approaches (e.g., TROPOS, KAOS, and MAP) which model SBAs in a requirement-driven manner. However, discovering and selecting relevant and high-QoS services are still challenging tasks that require time and effort due to the increasing number of available Web services. In this paper, we propose a requirement-centric approach which allows: (i) modeling users' requirements for SBAs with the MAP formalism and specifying required services using an Intentional Service Model (ISM); (ii) discovering services by querying the Web service search engine Service-Finder and using keywords extracted from the specifications provided by the ISM; and (iii) automatically selecting relevant and high-QoS services by applying Formal Concept Analysis (FCA). We validate our approach by performing experiments on an e-books application. The experimental results show that our approach allows the selection of relevant and high-QoS services with a high accuracy (the average precision is 89.41%) and efficiency (the average recall is 95.43%).

Maha Driss, Naouel Moha, Yassine Jamoussi, Jean-Marc Jézéquel, Henda Hajjami Ben Ghézala

Service Development and Run-Time Management

Spreadsheet as a Generic Purpose Mashup Development Environment

Mashup development is done using purposely created tools. Because each tool offers a different paradigm and syntax for wiring mashup components, users need to learn different tools for different tasks. We believe that there is a need for a generic purpose mashup environment catering for a wider range of mashup applications. In this paper we introduce MashSheet - a spreadsheet-based, generic purpose mashup tool. Using MashSheet, mashups can be built using spreadsheet paradigms that many users are already familiar with. We use a generic data model (XML-based) to represent mashup components and data produced by them, which enables the reuse of intermediate mashup results. We support three classes of mashup operations: data, process and visualization.

Dat Dac Hoang, Hye-Young Paik, Anne H. H. Ngu

Combining Enforcement Strategies in Service Oriented Architectures

Business regulations on enterprise applications cover both the infrastructure and orchestration levels of the Service-Oriented Architecture (SOA) environment. Thus, for a correct and efficient enforcement of such requirements, full integration among different enforcement middleware is necessary. Based on previous work [1], we make a comparison between enforcement capabilities at the business and infrastructure levels. Our contribution is to make a first step towards a policy enforcement model that combines the strengths of the orchestration-level enforcement mechanisms with those of the message bus. The advantages of such a model are (1) that infrastructure and orchestration requirements are enforced by the most appropriate mechanisms, and (2) the ability to enforce regulations that would otherwise be impossible to enforce by a single mechanism. We present the architecture and a first prototype of such a model to show its feasibility.

Gabriela Gheorghe, Bruno Crispo, Daniel Schleicher, Tobias Anstett, Frank Leymann, Ralph Mietzner, Ganna Monakova

Fault Handling in the Web Service Stack

The Web services platform architecture consists of different layers for exchanging messages. Faults may occur at each layer during the message exchange. First, the paper presents current standards employed in the different layers and shows their interrelation, with a focus on fault handling strategies. Second, current service middleware is reviewed as to whether and how it follows these fault handling strategies.

Oliver Kopp, Frank Leymann, Daniel Wutke

High-Level Description Languages

Conjunctive Artifact-Centric Services

Artifact-centric services are stateful service descriptions centered around “business artifacts”, which contain both a data schema holding all the data of interest for the service, and a lifecycle schema, which specifies the process that the service enacts. In this paper, the data schemas are full-fledged relational databases, and the lifecycle schemas are specified as sets of condition-action rules, where conditions are evaluated against the current snapshot of the artifact, and where actions are suitable updates to the database. The main characteristic of this work is that conditions and actions are based on conjunctive queries. In particular, we exploit recent results in data exchange to specify the effects of actions through tuple-generating dependencies (tgds). On this basis we develop sound and complete verification procedures which, in spite of the fact that the number of states of an artifact-centric service can be infinite, reduce to the finite case through a suitable use of the homomorphisms induced by the conjunctive queries.

Piero Cangialosi, Giuseppe De Giacomo, Riccardo De Masellis, Riccardo Rosati

Diagnosis of Service Failures by Trace Analysis with Partial Knowledge

The identification of the source of a fault (“diagnosis”) of orchestrated Web service process executions is a task of growing importance, in particular in automated service composition scenarios. If executions fail because activities of the process do not behave as intended, repair mechanisms are envisioned that will try re-executing some activities to recover from the failure. We present a diagnosis method for identifying incorrect activities in service process executions. Our method is novel both in that it does not require exact behavioral models for the activities and that its accuracy improves upon dependency-based methods. Observations obtained from partial executions and re-executions of a process are exploited. We formally characterize the diagnosis problem and develop a symbolic encoding that can be solved using constraint solvers. Our evaluation demonstrates that the framework yields superior accuracy to classic dependency-based debugging methods on realistically-sized examples.

Wolfgang Mayer, Gerhard Friedrich, Markus Stumptner

Automatic Fragment Identification in Workflows Based on Sharing Analysis

In Service-Oriented Computing (SOC), fragmentation and merging of workflows are motivated by a number of concerns, among which we can cite design issues, performance, and privacy. Fragmentation emphasizes the application of design and runtime methods for clustering workflow activities into fragments and for checking the correctness of such fragment identification w.r.t. some predefined policy. We present a fragment identification approach based on sharing analysis and we show how it can be applied to abstract workflow representations that may include descriptions of data operations, logical link dependencies based on logical formulas, and complex control flow constructs, such as loops and branches. Activities are assigned to fragments (to infer how these fragments are made up or to check their well-formedness) by interpreting the sharing information obtained from the analysis according to a set of predefined policy constraints.

Dragan Ivanović, Manuel Carro, Manuel Hermenegildo

Service Level Agreements

Preventing SLA Violations in Service Compositions Using Aspect-Based Fragment Substitution

In this paper we show how the application of the aspect-oriented programming paradigm to runtime adaptation of service compositions can be used to prevent SLA violations. Adaptations are triggered by predicted violations, and are implemented as substitutions of fragments in the service composition. Fragments are full-fledged standalone compositions, and are linked into the original composition via special activities, which we refer to as virtual activities. Before substitution we evaluate fragments with respect to their expected impact on the performance of the composition, and choose those fragments which are best suited to prevent a predicted violation. We show how our approach can be implemented using Windows Workflow Foundation technology, and discuss our work based on an illustrative case study.

Philipp Leitner, Branimir Wetzstein, Dimka Karastoyanova, Waldemar Hummer, Schahram Dustdar, Frank Leymann

Adaptive Management of Composite Services under Percentile-Based Service Level Agreements

We present a brokering service for the adaptive management of composite services. The goal of this broker is to dynamically adapt the composite service configuration at runtime, to fulfill the Service Level Agreements (SLAs) negotiated with different classes of requestors, despite variations in the operating environment. Unlike most current approaches, where the performance guarantees are characterized only in terms of bounds on average QoS metrics, we consider SLAs that also specify upper bounds on the percentile of the service response time, which are expected to better capture user-perceived QoS. The adaptive composite service management is based on a service selection scheme that minimizes the service broker's cost while guaranteeing the negotiated QoS to the different service classes. The optimal service selection is determined by means of a linear programming problem that can be efficiently solved. As a result, the proposed approach is scalable and lends itself to an efficient implementation.

Valeria Cardellini, Emiliano Casalicchio, Vincenzo Grassi, Francesco Lo Presti

BPMN Modelling of Services with Dynamically Reconfigurable Transactions

We promote the use of transactional attributes for modelling business processes in service-oriented scenarios. Transactional attributes have been introduced in Enterprise JavaBeans (EJB) to decorate the methods published in Java containers. Attributes describe “modalities” that discipline the reconfiguration of transactional scopes (i.e., of caller and callee) upon method invocation.

We define and study modelling and programming mechanisms to control dynamically reconfigurable transactional scopes in Service-Oriented Computing (SOC). On the one hand, we give evidence of the suitability of transactional attributes for modelling and programming SOC transactions. As a proof of concept, we show how BPMN can be enriched with a few annotations for transactional attributes. On the other hand, we show how the results of a theoretical framework enable more effective development of transactional service-oriented applications.

Laura Bocchi, Roberto Guanciale, Daniele Strollo, Emilio Tuosto

Service Engineering Methodologies

Programmable Fault Injection Testbeds for Complex SOA

The modularity of Service-oriented Architectures (SOA) makes it possible to establish complex distributed systems comprising, e.g., services, clients, brokers, and workflow engines. Growing complexity, however, automatically increases the number of potential fault sources, which have effects on the whole SOA. Fault handling mechanisms must be applied in order to achieve a certain level of robustness. In this paper we do not deal with fault-tolerance itself but regard the problem from a different perspective: how can fault-tolerance be evaluated? We argue that this can best be done by testing the system at runtime and observing its reaction to occurring faults. However, engineers face the problem of how to perform such tests in a realistic manner in order to get meaningful results. As our contribution to this issue, we present an approach for generating fault injection testbeds for SOA. Our framework allows engineers to model testbeds and program their behavior, to generate running instances from these models, and to inject diverse types of faults. The strength of our approach lies in the customizability of the testbeds and the ability to program the fault-injecting mechanisms in a convenient manner.

Lukasz Juszczyk, Schahram Dustdar

Abstracting and Applying Business Modeling Patterns from RosettaNet

RosettaNet is a leading industry effort that creates standards for business interactions among the participants in a supply chain. The RosettaNet standard defines over 100 Partner Interface Processes (PIPs) through which the participants can exchange business documents necessary to enact a supply chain. However, each PIP specifies the business interactions at a syntactic level and fails to capture the business meaning of the interactions to which it applies.

In contrast, this paper takes as its point of departure a commitment-based approach for business modeling that gives central position to interactions captured in terms of their meaning. This paper defines commitment-based business patterns abstracted from RosettaNet PIPs. Doing so yields models that are clearer, more flexible to changing requirements, and potentially enacted through multiple operationalizations. This paper validates the patterns by applying them to model the Order-to-Cash business process from the RosettaNet eBusiness Process Scenario Library.

Pankaj R. Telang, Munindar P. Singh

Heuristic Approaches for QoS-Based Service Selection

In a Service Oriented Architecture (SOA) business processes are commonly implemented as orchestrations of web services, using the Web Services Business Process Execution Language (WS-BPEL). Business processes not only have to provide the required functionality, they also need to comply with certain Quality-of-Service (QoS) constraints which are part of a service-level agreement between the service provider and the client. Different service providers may offer services with the same functionality but different QoS properties, and clients can select from a large number of service offerings. However, choosing an optimal collection of services for the composition is known to be an NP-hard problem.

We present two different approaches for the selection of services within orchestrations required to satisfy certain QoS requirements. We developed two algorithms, OPTIM_HWeight and OPTIM_PRO, which perform a heuristic search on the candidate services. The OPTIM_HWeight algorithm is based on weight factors and the OPTIM_PRO algorithm is based on priority factors. We evaluate and compare the two algorithms with each other and also with a genetic algorithm.

Diana Comes, Harun Baraki, Roland Reichle, Michael Zapf, Kurt Geihs

Service Security, Privacy, and Trust

From Quality to Utility: Adaptive Service Selection Framework

We consider an approach to service selection wherein service consumers choose services with desired nonfunctional properties to maximize their utility. A consumer’s utility from using a service clearly depends upon the qualities offered by the service. Many existing service selection approaches support agents estimating trustworthiness of services based on their quality of service. However, existing approaches do not emphasize the relationship between a consumer’s interests and the utility the consumer draws from a service. Further, they do not properly support consumers being able to compose services with desired quality (and utility) profiles.

We propose an adaptive service selection framework that offers three major benefits. First, our approach enables consumers to select services based on their individual utility functions, which reflect their preferences, and learn the providers’ quality distributions. Second, our approach guides consumers to construct service compositions that satisfy their quality requirements. Third, an extension of our approach with contracts approximates Pareto optimality without the use of a market mechanism.

Chung-Wei Hang, Munindar P. Singh

Trust Assessment for Web Services under Uncertainty

We introduce a model for assessing the trust of providers in a service-oriented environment. Our model is cooperative in nature, such that Web services share their experiences of the service providers with their peers through ratings. The different ratings are aggregated using the “statistical cloud model” defined for uncertain situations. The model can uniformly describe the concepts of randomness, fuzziness, and their relationship in quantitative terms. By incorporating the credibility values of service raters in the model, we can assess a service provider’s trust. Experimental results show that our proposed model performs in a fairly accurate manner.

Zaki Malik, Brahim Medjahed

Incorporating Expectations as a Basis for Business Service Selection

The collaborative creation of value is the central tenet of services science. In particular, then, the quality of a service encounter would depend on the mutual expectations of the participants. Specifically, the quality of experience that a consumer derives from a service encounter would depend on how the consumer’s expectations are refined and how well they are met by the provider during the encounter. We postulate that incorporating expectations ought therefore be a crucial element of business service selection.

Unfortunately, today’s technical approaches to service selection disregard the above. They emphasize reputation measured via numeric ratings that consumers provide about service providers. Such ratings are easy to process computationally, but beg the question of what the raters’ frames of reference, i.e., expectations, are. When the frames of reference are not modeled, the resulting reputation scores are often not sufficiently predictive of a consumer’s satisfaction.

We investigate the notion of expectations from a computational perspective. We claim that (1) expectations, despite being subjective, are a well-formed, reliably computable notion and (2) we can compute expectations and use them as a basis for improving the effectiveness of service selection. Our approach is as follows. First, we mine textual assessments of service encounters given by consumers to build a model of each consumer’s expectations along with a model of each provider’s ability to satisfy such expectations. Second, we apply expectations to predict a consumer’s satisfaction for engaging a particular provider. We validate our claims based on real data obtained from eBay.

Adel M. ElMessiry, Xibin Gao, Munindar P. Singh

Industry Papers

Enhancing Collaboration with IBM’s Rational Jazz™

This paper describes our experience with IBM’s Rational Jazz™ platform for collaboration and for coordinating software development in the context of a medium-sized service research and development project. We discuss the observed advantages of Jazz in systematizing the development process, especially when we are operating with extreme agility and the team is widely distributed around the world. We cover both narrative observations and quantitative measurements of Jazz usage. We demonstrate an objective measure of the value of such a software development management system, and we study the extent to which Jazz interfaces can replace ad hoc communication. While Jazz provides sufficient structure to replace all other communication within a geographically distributed research and development team, we conclude that redundant team communication in the form of email and telephone meetings is necessary to maintain team motivation.

Laura Anderson, Bala Jegadeesan, Kenneth Johns, Mario Lichtsinn, Priti Mullan, James Rhodes, Akhilesh Sharma, Ray Strong, Ruoyi Zhou

Discovering Business Process Similarities: An Empirical Study with SAP Best Practice Business Processes

Large organizations tend to have hundreds of business processes. Discovering and understanding the similarities among these business processes are useful to organizations for a number of reasons: (a) business processes can be managed and maintained more efficiently, (b) business processes can be reused in new or changed implementations, and (c) investment guidance on which aspects of business processes to improve can be obtained. In this empirical paper, we present the results of our study on over five thousand business processes obtained from SAP’s standardized business process repository divided up into two groups: Industry-specific and Cross-industry. The results are encouraging. We found that 39% of cross-industry processes and 43% of SAP-industry processes have commonalities. Additionally, we found that 20% of all processes studied have at least 50% similarity with other processes. We use the notion of semantic similarity on process and process activity labels to determine similarity. These results indicate that there is enough similarity among business processes in organizations to take advantage of. While this is anecdotally stated, to our knowledge, this is the first attempt to empirically validate this hypothesis using real-world business processes of this size. We present the implications and future research directions on this topic and call for further empirical studies in this area.

Rama Akkiraju, Anca Ivan

A Scalable and Highly Available Brokering Service for SLA-Based Composite Services

The introduction of self-adaptation and self-management techniques in a service-oriented system can allow it to meet, in a changing environment, the service levels formally agreed with the system users in a Service Level Agreement (SLA). However, a self-adaptive SOA system has to be carefully designed in order not to compromise the system's scalability and availability. In this paper we present the design and performance evaluation of a brokering service that supports at runtime the self-adaptation of composite services offered to several concurrent users with different service levels. To evaluate the performance of the brokering service, we have carried out an extensive set of experiments on different implementations of the system architecture using workload generators that are based on open and closed system models. The experimental results demonstrate the effectiveness of the brokering service design in achieving scalability and high availability.

Alessandro Bellucci, Valeria Cardellini, Valerio Di Valerio, Stefano Iannucci

Short Papers

Business Service Modeling

Business Artifacts Discovery and Modeling

Changes in business conditions have forced enterprises to continuously re-engineer their business processes. Traditional business process modeling approaches, being activity-centric, have proven to be inadequate for handling this re-engineering. Recent research has focused on developing data-centric business process modeling approaches based on (business) artifacts. However, formal approaches for deriving artifacts out of business requirements currently do not exist. This paper describes a method for artifact discovery and modeling. The method is illustrated with an example in the purchase order domain.

Zakaria Maamar, Youakim Badr, Nanjangud C. Narendra

Carbon-Aware Business Process Design in Abnoba

A key element of any approach to meeting the climate change challenge is the ability to improve operational efficiency in a pervasive fashion. The notion of a business process is a particularly useful unit of analysis in this context. This article describes a subset of the Abnoba framework for green business process management and shows how an algebraic framework can be leveraged to enable an environmental assessment on multiple heterogeneous dimensions (of qualitative or quantitative nature). Furthermore, a machinery for process improvement is outlined.

Konstantin Hoesch-Klohe, Aditya Ghose

On Predicting Impacts of Customizations to Standard Business Processes

Adopting standard business processes and then customizing them to suit specific business requirements is a common business practice. However, organizations often do not fully know the impact of their customizations until after the processes are implemented. In this paper, we present an algorithm for predicting the impact of customizations made to standard business processes by leveraging a repository of similar customizations made to the same standard processes. For a customized process whose impact needs to be predicted, similar impact trees are located in the repository using the notion of impact nodes. The algorithm returns a ranked list of impacts predicted for the customizations.

Pietro Mazzoleni, Aubrey Rembert, Rama Akkiraju, Rong (Emily) Liu

Extended WS-Agreement Protocol to Support Multi-round Negotiations and Renegotiations

WS-Agreement is a well-established and widely adopted protocol that helps service providers and consumers to agree on constraints under which a service is made available. However, the original protocol is limited to a simple interaction pattern for establishing agreements: the requester suggests the Quality of Service (QoS) details, the responder either accepts or declines. This is no longer sufficient when several rounds of negotiations are needed before both parties agree on the QoS level to be provided, or when an already established agreement needs to be changed based on mutual consent (renegotiation). This paper presents an extension to WS-Agreement which jointly addresses these limitations.

Christoph Langguth, Heiko Schuldt

Run-Time Service Management

Event-Driven Virtual Machine for Business Integration Middleware

Business integration middleware uses a variety of programming models to enable business process automation, business activity monitoring, business object state management, service mediation, etc. Different kinds of engines have been developed in support of these programming models. At their core however, all of these engines implement the same kind of behavior: formatted messages (or events) are received, processed in the context of managed objects, and new messages are emitted. These messages can represent service invocations and responses, tracking events, notifications, or point-to-point messages between applications. The managed objects can represent process instances, state machines, monitors, or service mediations. Developing separate engines for each programming model results in redundant implementation efforts, and may even cause an "integration problem" for the integration middleware itself. To address these issues, we propose to use an event-driven virtual machine that implements the fundamental behavior of all business integration middleware as the sole execution platform, and provide compilers for higher level programming models. Conceptually, this is similar to passing from CISC to RISC architecture in CPU design: efficiently implement a small instruction set, and support higher level languages via compilers.

Joachim H. Frank, Liangzhao Zeng, Henry Chang

Consistent Integration of Selection and Replacement Methods under Different Expectations in Service Composition and Partner Management Life-Cycle

Active efforts on Service-Oriented Computing have involved a variety of proposals on service selection and replacement methods to achieve quality assurance with adaptability in service composition. However, each method has specific expectations on when it is activated and how it affects service composition. Blind use of such methods can thus lead to inconsistency. This paper proposes a framework to integrate selection and replacement methods. The framework supports clarifying and analyzing expectations in the service composition and partner management life-cycle, as well as constructing a consistent implementation accordingly. The proposed framework facilitates testing, introducing and replacing service selection and replacement methods according to the environment and its changes, for a variety of application domains.

Fuyuki Ishikawa

Optimizing the Configuration of Web Service Monitors

Service monitoring is required for meeting regulatory requirements and verifying compliance to Service Level Agreements (SLAs). As such, monitoring is an essential part of web service-based systems. However, service monitoring comes with a cost, including an impact on the quality of monitored services and systems. To deliver the best value to a service provider, it is important to balance meeting monitoring requirements and reducing monitoring impacts. We introduce a novel approach to configuring the web service monitors deployed in a system so that they provide an adequate level of monitoring but with minimized quality impacts, delivering the best value proposition in terms of monitoring benefits and costs. We use a prototype system to demonstrate that by optimizing a web service monitoring system, we can reduce the impact of a set of deployed web service monitors by up to two thirds.

Garth Heward, Jun Han, Ingo Müller, Jean-Guy Schneider, Steve Versteeg

Formal Methods

A Soft Constraint-Based Approach to QoS-Aware Service Selection

Service-based systems should be able to dynamically seek replacements for faulty or underperforming services, thus performing self-healing. It may however be the case that available services do not match all requirements, leading the system to grind to a halt. In similar situations it would be better to choose alternative candidates which, while not fulfilling all the constraints, allow the system to proceed.

Soft constraints, instead of the traditional crisp constraints, can help naturally model and solve replacement problems of this sort. In this work we apply soft constraints to model SLAs and to decide how to rebuild compositions which may not satisfy all the requirements, in order not to completely stop running systems.

Mohamed Anis Zemni, Salima Benbernou, Manuel Carro

Timed Conversational Protocol Based Approach for Web Services Analysis

Choreography is one of the most important features of Web services. It makes it possible to capture collaborative processes involving multiple services. In this paper, we are interested in analyzing, using a model-checking-based approach, the interoperability of Web services that support asynchronous communications constrained by data and timed constraints. In particular, we deal with the compatibility problem. To do so, we have developed a set of abstractions and transformations on which we propose a set of primitives characterizing a set of compatibility classes of Web services. This paper presents the specification and implementation of this approach using the UPPAAL model checker.

Nawal Guermouche, Claude Godart

Service Discovery Using Communication Fingerprints

A request to a service registry must be answered with a service that fits in several regards, including semantic compatibility, non-functional compatibility, and interface compatibility. In the case of stateful services, there is the additional need to check behavioral (i.e. protocol) compatibility. This paper is concerned with the latter aspect. For speeding up compatibility checks which need to be performed on many candidate services, we propose an abstraction of the behavior of each published service that we call communication fingerprint. The technique is based on linear programming and is thus extremely efficient. We validate our approach on a large set of services that we cut out of real world business processes.

Olivia Oanea, Jan Sürmeli, Karsten Wolf

Quantifying Service Compatibility: A Step beyond the Boolean Approaches

Checking the compatibility of service interfaces allows one to avoid erroneous executions when composing services together. In this paper, we propose a flooding-based approach for measuring the compatibility degree of service interfaces specified using interaction protocols. This proposal is fully automated by a prototype tool we have implemented.

Meriem Ouederni, Gwen Salaün, Ernesto Pimentel

Quality of Service

Consistency Benchmarking: Evaluating the Consistency Behavior of Middleware Services in the Cloud

Cloud service providers such as Amazon Web Services offer a set of next-generation storage and messaging middleware services that can be utilized on-demand over the Internet. Outsourcing software into the cloud, however, confronts application developers with the challenge of understanding the behavior of distributed systems, which are out of their control. This work proposes an approach to benchmark the consistency behavior of services using the example of Amazon Simple Queue Service (SQS), a hosted, Web-scale, distributed message queue that is exposed as a Web service. The data of our consistency benchmarking tests are evaluated with the metric harvest as described by Fox and Brewer (1999). Our tests with SQS indicate that the client-service interaction intensity has an influence on harvest.

Markus Klems, Michael Menzel, Robin Fischer

Service Composition with Pareto-Optimality of Time-Dependent QoS Attributes

Quality of Service (QoS) plays an essential role in realizing user tasks by service composition. Most QoS-aware service composition approaches have ignored the fact that QoS values can depend on the time of execution. Common QoS attributes such as response time may depend, for instance, on the time of day, due to access tendencies or conditional Service Level Agreements. Application-specific QoS attributes often have tight relationships with the current state of resources, such as the availability of hotel rooms. In response to these problems, this paper proposes an integrated multi-objective approach to QoS-aware service composition and selection.

Benjamin Klöpper, Fuyuki Ishikawa, Shinichi Honiden

QoS-Based Optimization of Service Compositions for Complex Workflows

In Service-oriented Architectures, business processes can be realized by composing loosely coupled services. If services in the Internet of Services with comparable functionalities but varying quality levels are available at different costs on service marketplaces, service requesters can decide which services from which service providers to select. The work at hand addresses computing an optimal solution to this service selection problem considering complex workflow patterns. For this, a linear optimization problem is formulated, which can be solved by applying integer linear programming techniques.

Dieter Schuller, André Miede, Julian Eckert, Ulrich Lampe, Apostolos Papageorgiou, Ralf Steinmetz
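
For a tiny instance, the NP-hard selection problem can be illustrated by brute force as below; real formulations use integer linear programming as the abstract states, and all data values here are invented.

```python
# Hedged sketch: exhaustive service selection for a two-task sequential workflow,
# minimizing cost under an end-to-end response-time bound.
from itertools import product

tasks = {
    "check_credit": [{"name": "c1", "cost": 2, "rt": 300},
                     {"name": "c2", "cost": 5, "rt": 120}],
    "ship_order":   [{"name": "s1", "cost": 3, "rt": 400},
                     {"name": "s2", "cost": 8, "rt": 150}],
}
MAX_RT = 500   # end-to-end response-time bound (ms) for the sequence

best = None
for combo in product(*tasks.values()):
    rt = sum(s["rt"] for s in combo)
    cost = sum(s["cost"] for s in combo)
    if rt <= MAX_RT and (best is None or cost < best[0]):
        best = (cost, [s["name"] for s in combo])

print(best)   # cheapest feasible binding, here (10, ['c1', 's2'])
```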

Privacy-Aware Device Identifier through a Trusted Web Service

Device identifiers can be used to enhance authentication mechanisms for conducting online business. However, personal computers (PCs) today are not equipped with standardized, privacy-aware, hardware-based device identifiers for general use. This paper describes the implementation of privacy-aware device identifiers using the capabilities of the Trusted Platform Module (TPM) and extending the trust boundary of the device using a Web Service. It also describes a case study based on a device reputation service.

Marcelo da Cruz Pinto, Ricardo Morin, Maria Emilia Torino, Danny Varner

Service Applications

Towards Mitigating Human Errors in IT Change Management Process

IT service delivery is heavily dependent on skilled labor. This opens up scope for errors due to human mistakes. We propose a framework for minimizing errors due to human mistakes in the Change Management process, focusing on change preparation and change execution. We developed a tool that brings better structure to the change plan, as well as helps the change plan creator develop a plan faster through the use of a knowledge base and automatic guidance. At the change execution phase, we designed and implemented an architecture that intercepts and validates operator actions, thereby significantly reducing operator mistakes. The system can be tuned to vary the involvement of the operator. We have tested the system in a large IT delivery environment and report potential benefits.

Venkateswara R. Madduri, Manish Gupta, Pradipta De, Vishal Anand

A Service-Based Architecture for Multi-domain Search on the Web

Current search engines lack support for multi-domain queries, i.e., queries that can be answered by combining information from two or more knowledge domains. Questions such as “Find a theater close to Times Square, NYC, showing a recent thriller movie, close to a pizza restaurant” have no answer unless the user individually queries different vertical search engines for each domain and then manually combines the results. Therefore, the need arises for a special class of search applications that combine different search services. In this paper we propose an architecture aimed at answering multi-domain queries through the composition of search services, and we provide facilities for the execution of multi-domain queries and the visualization of their results, with the purpose of simplifying access to the information. We describe our service-based architecture and the implemented optimization and distribution options, and we evaluate the feasibility and performance of our approach.

Alessandro Bozzon, Marco Brambilla, Francesco Corcoglioniti, Salvatore Vadacca

Natural Language Service Composition with Request Disambiguation

The aim of our research is to create a service composition system that is able to detect and deal with the imperfections of a natural language user request while keeping it as unrestricted as possible. Our solution consists of three steps: first, service prototypes are generated based on grammatical relations; second, semantic matching is used to discover actual services; third, the composed service is generated in the form of an executable plan. Experimental results have shown that inaccurately requested services can be matched in more than 95% of user queries. When missing service inputs are detected, the user is asked to provide more details.

Florin-Claudiu Pop, Marcel Cremene, Mircea Vaida, Michel Riveill

Posters

Families of SOA Migration

Migration of legacy systems to SOA constitutes a key challenge of service-oriented system engineering. Despite the many works on such migration, there is still little conceptual characterization of what SOA migration entails. To address this problem, we conducted a systematic literature review that extracts the main categories of SOA migration, called SOA migration families, from the approaches proposed by the research community. Based on the results of the systematic review, we describe eight distinct families along with their characteristics and goals.

Maryam Razavian, Patricia Lago

An Ontology Based Approach for Cloud Services Catalog Management

As more and more service providers choose to deliver services on common Cloud infrastructures, it becomes important to formally represent knowledge in a services catalog to enable automatic answering of user requests and sharing of building blocks across service offerings. In this paper, we propose an ontology-driven methodology for formal modeling of the service offerings and associated processes.

Yu Deng, Michael R. Head, Andrzej Kochut, Jonathan Munson, Anca Sailer, Hidayatullah Shaikh

A Scalable Cloud-Based Queueing Service with Improved Consistency Levels

Queuing, an asynchronous messaging paradigm, is used to connect loosely coupled components to form large-scale, highly distributed, and fault-tolerant applications. As cloud computing continues to gain traction, a number of vendors currently operate cloud-based shared queuing services. These services provide high availability and network partition tolerance with reduced consistency: at-least-once delivery (no message loss) but no guarantee of message order.

Han Chen, Fan Ye, Minkyong Kim, Hui Lei
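As a rough, hedged illustration of what the reduced consistency model above implies for application code (not taken from the paper; the message structure and field names are hypothetical), a consumer of an at-least-once, unordered queue typically deduplicates by message id and avoids relying on arrival order:

# Minimal sketch: consuming from a queue with at-least-once, unordered delivery.
# The message format and ids are illustrative assumptions, not a vendor API.
processed_ids = set()

def do_work(body):
    print("processing", body)

def handle(message):
    msg_id, body = message["id"], message["body"]
    if msg_id in processed_ids:
        return                    # duplicate redelivery: already handled, safe to skip
    do_work(body)                 # must not assume any ordering across messages
    processed_ids.add(msg_id)

# Duplicates and reordering are tolerated:
for m in [{"id": 1, "body": "a"}, {"id": 1, "body": "a"}, {"id": 3, "body": "c"}]:
    handle(m)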

Exploring Simulation-Based Configuration Decisions

As service compositions grow larger and more complex, so does the challenge of configuring the underlying hardware infrastructure on which the component services are deployed. With more configuration options (virtualized systems, cloud-based systems, etc.), the challenge grows more difficult. Configuring service-oriented systems involves balancing a competing set of priorities and choosing trade-offs to achieve high-priority goals. We describe a simulation-based methodology for supporting administrators in making these decisions by providing them with relevant information obtained using inexpensive simulation-generated data. From our existing services-aware simulation framework, we generated millions of performance metrics for a given system in varying configurations. We describe how we structured our simulation experiments to answer specific questions such as optimal service distribution across multiple servers; we relate a general methodology for assisting administrators in balancing trade-offs; and we present results establishing benchmarks for the cost and performance improvements we can expect from run-time configuration adaptation for this application.

Michael Smit, Eleni Stroulia

PhD Symposium Posters

Efficient, Failure-Resilient Semantic Web Service Planning

Over the past years, service-oriented architectures have been widely adopted by stakeholders from research and industry. Since the number of services increases rapidly, effective methods are required to automatically discover and compose services according to user requirements. For this purpose, machine-understandable semantic annotations have to be applied in order to enable logical reasoning on the functional aspects of services. However, current approaches are not capable of composing workflows in reasonable time, except for planning tools that require domain-dependent heuristics or constrain the expressiveness of the description language. In addition, these tools neglect alternative plans, which harbors the danger of creating a workflow with insufficient reliability. Therefore, we propose an approach to efficiently pre-cluster similar services according to their parameters. In this way, the search space is limited and vulnerable intermediate steps in the workflow can be effectively avoided.

Florian Wagner

Integrated Service Process Adaptation

With more automation in inter-organizational supply chains and the proliferation of Web services technology, the need for organizations to link their business services and processes is becoming increasingly important. Using adapters to reconcile incompatibilities between multiple interacting business processes is an efficient and low-cost way to enable automatic and friction-free supply chain collaboration in an IT-enabled environment. My dissertation proposes a new framework and novel techniques for integrated service process adaptation. For control flow adaptation, we propose an algorithm based on a Message Interaction Graph to create an optimal adapter. For message adaptation, we identify a set of extensible message adaptation patterns that solve typical message mismatches. In addition, new message adapters can be generated on the fly so as to integrate control flow considerations into message adaptation. Finally, we design another algorithm that integrates individual message adaptation patterns with control flow adapters to create a full adapter for multiple processes. We implement all these algorithms in a Java-based prototype system and show the advantages of our methods through performance experiments and a case study.

Zhe Shan

Contract Based, Non-invasive, Black-Box Testing of Web Services

The Web service standard represents a prominent realization of the SOA paradigm that is increasingly used in practice. The (not so new) technical aspects, in combination with the practices introduced by the Web, lead to new challenges in testing Web services that are often stated in the literature. This paper introduces a non-invasive functional testing approach for Web services based on the Design by Contract (DbC) paradigm. By using formal semantic specifications in a consistent manner, we present a generic testing approach that enables, in a practicable way, quality metric measurements that were not previously viable in traditional testing. We present the results of our first basic study at Schweizer Bundesbahn (SBB) Informatik in Bern.

Michael Averstegge

Demonstration Papers

Managing Process Model Collections with AProMoRe

As organizations reach higher levels of Business Process Management maturity, they tend to collect numerous business process models. Such models may be linked with each other or mutually overlap, supersede one another and evolve over time. Moreover, they may be represented at different abstraction levels depending on the target audience and modeling purpose, and may be available in multiple languages (e.g. due to company mergers). Thus, it is common that organizations struggle with keeping track of their process models. This demonstration introduces AProMoRe (Advanced Process Model Repository) which aims to facilitate the management of (large) process model collections.

M. C. Fauvet, M. La Rosa, M. Sadegh, A. Alshareef, R. M. Dijkman, Luciano García-Bañuelos, H. A. Reijers, W. M. P. van der Aalst, Marlon Dumas, Jan Mendling

Managing Long-Tail Processes Using FormSys

Efforts and tools aiming to automate business processes promise the highest potential gains on business processes with a well-defined structure and high degree of repetition [1]. Despite successes in this area, the reality is that today many processes are in fact not automated. This is because, among other reasons, Business Process Management Suites (BPMSs) are not well suited for ad-hoc and human-centric processes [2], and automating processes demands high cost and skills. This affects primarily the “long tail of processes” [3], i.e. processes that are less structured, or that do not affect many people uniformly, or that are not considered critical to an organization: those are rarely automated. One of the consequences of this state is that still today organisations rely on templates and paper-based forms to manage long-tail processes.

Ingo Weber, Hye-Young Paik, Boualem Benatallah, Corren Vorwerk, Zifei Gong, Liangliang Zheng, Sung Wook Kim

Managing Web Services: An Application in Bioinformatics

We propose to treat Web services as first-class objects that can be manipulated as if they were data in DBMS. We provide an integrated way for service users to query, compose and evaluate Web services. The resulting system is called Web Service Management System (WSMS). We deployed WSMS as the backbone of a bioinformatics experiment framework named Genome Tracker. In this demonstration, we show how WSMS helps biologists conduct experiments easily and effectively using a service-oriented architecture.

Athman Bouguettaya, Shiping Chen, Lily Li, Dongxi Liu, Qing Liu, Surya Nepal, Wanita Sherchan, Jemma Wu, Xuan Zhou

An Integrated Solution for Runtime Compliance Governance in SOA

Compliance governance in organizations has been recently gaining importance because of new regulations and the diversity of compliance sources. In this demo we will show an integrated solution for runtime compliance governance in Service-Oriented Architectures (SOAs). The proposed solution supports the whole cycle of compliance management and has been tested in a real world case study.

Aliaksandr Birukou, Agnieszka Betkowska Cavalcante, Fabio Casati, Soudip Roy Chowdhury, Vincenzo D’Andrea, Frank Leymann, Ernst Oberortner, Jacek Serafinski, Patricia Silveira, Steve Strauch, Marek Tluczek

Event-Driven Privacy Aware Infrastructure for Social and Health Systems Interoperability: CSS Platform

Assistive processes in the healthcare and socio-assistive domains typically span multiple institutions, which usually communicate manually through the exchange of documents. Despite the need for cooperation, it is difficult to provide an integrated solution that improves data exchange and allows comprehensive monitoring of the processes, due to the complexity of the domains and the privacy issues arising from the use of sensitive data. In this demo we show how we approached the problem in designing and deploying a platform for the interoperability and monitoring of multi-organization healthcare processes in Italy. Our solution provides an event-based platform that ensures privacy enforcement with fine-grained control over the data that is distributed, and minimizes the effort required to join the platform by providing components that automate the data exchange.

Giampaolo Armellin, Dario Betti, Fabio Casati, Annamaria Chiasera, Gloria Martinez, Jovan Stevovic, Tefo James Toai

Mashups with Mashlight

Mashlight is a framework for creating process mashups out of Web 2.0 widgets. In our demo we show how Mashlight can be used to simplify patient management in a hospital. We illustrate the design-time tools for creating mashups, a desktop execution environment, and mobile environments for iPhones and Android smartphones.

Luciano Baresi, Sam Guinea

A Service Mashup Tool for Open Document Collaboration

Mashup technologies enable end-users to compose situational applications from Web-based services. A particular problem is to find high-level service composition models that a) are intuitive and expressive enough to be easily used by end-users and b) allow for efficient runtime support and integration on the Web. We propose a novel approach that leverages a metaphor of document collaboration: end-users declare the coordination and aggregation of peer contributions into a joint document. Peers provide contributions as Web-based services that allow a) integrating any Web-accessible resource and b) orchestrating the collaboration process. Thus, collaborative document mashups enable lightweight, situational collaboration that is not addressed by most BPM or CSCW systems. In this demo we present our document service infrastructure and collaboration RIA, which allows collaborators to declare and participate in document collaborations in an interactive, intuitive and dynamic way.

Nelly Schuster, Raffael Stein, Christian Zirpins, Stefan Tai

Panta Rhei: Optimized and Ranked Data Processing over Heterogeneous Sources

In the era of digital information, the value of data resides not only in its volume and quality, but also in the additional information that can be inferred from the combination (aggregation, comparison and join) of such data. There is a concrete need for data processing solutions that combine distributed and heterogeneous data sources, such as Web services, relational databases, and even search engines, all of which can be modeled as services. In this demonstration, we show how our Panta Rhei model addresses the challenge of processing data over heterogeneous sources to provide feasible and ranked combinations of these services.

Daniele Braga, Francesco Corcoglioniti, Michael Grossniklaus, Salvatore Vadacca

RnR: A System for Extracting Rationale from Online Reviews and Ratings

In this paper we present a web-based system as well as a web-service-based application to summarise and extract the rationale that underpins online ratings and reviews. The web-based version of the RnR system is available for testing at http://rnrsystem.com/RnRSystem. The RnR system web service is available at http://rnrsystem.com/axis2/services/RnRData?wsdl.

Dwi A. P. Rahayu, Shonali Krishnaswamy, Cyril Labbe, Oshadi Alhakoon

Liquid Course Artifacts Software Platform

The Liquid Course Artifacts Software Platform aims to improve social productivity and enhance the interactive experience of teaching and collaborating by using supplementary materials such as slides, exercises, audio, videos and books.

Marcos Baez, Boualem Benatallah, Fabio Casati, Van M. Chhieng, Alejandro Mussi, Qamal Kosim Satyaputra

A Programmable Fault Injection Testbed Generator for SOA

In this demo paper we present the prototype of our fault injection testbed generator. Our tool empowers engineers to generate emulated SOA environments and to program fault injection behavior at diverse levels: at the network layer, at the execution level, and at the message layer. Engineers can specify the structure of testbeds via scripts, attach fault models, and generate running testbed instances from them. As a result, our tool facilitates the setup of customizable testbeds for evaluating the fault-handling mechanisms of SOA-based systems.

Lukasz Juszczyk, Schahram Dustdar

BPEL’n’Aspects&Compensation: Adapted Service Orchestration Logic and Its Compensation Using Aspects

One of the main weaknesses of workflow management systems is their inflexibility regarding process changes. To address this drawback, in our work on the BPEL’n’Aspects approach we developed a standards-based mechanism to adapt the control flow of BPEL processes [1]. It uses AOP techniques to non-intrusively weave Web service invocations, in terms of aspects, into BPEL processes. Aspects can be inserted before, instead of, or after BPEL elements, and in that way adaptation of running processes is enabled. In this work we present a novel extension of the BPEL’n’Aspects prototype that deals with the compensation of woven-in aspects in a straightforward manner. The extension greatly improves the applicability of the approach in real-world scenarios: processes in production need the means to compensate behavior that was inserted into the process in the course of adaptation steps. The ability to compensate woven-in aspects distinguishes our approach from other existing concepts that introduce AOP techniques to business processes.

Mirko Sonntag, Dimka Karastoyanova

A Tool for Integrating Pervasive Services and Simulating Their Composition

As computation and services are pervading our working and living environments, it is important for researchers and developers to have tools to simulate and visualize possible executions of the services and their compositions. The major challenge for such tools is to integrate highly heterogeneous components and to provide a link with the physical environment. We extend our previous work on the RuG ViSi tool [4] in a number of ways: first, we provide a customizable and interactive middleware based on open standards (UPnP and OSGi) [3]; second, we allow any composition engine to guide the simulation and visualization (not only predefined compositions using BPEL) [3]; third, the interaction with simulated or physical devices is modular and bidirectional, i.e., a device can change the state of the simulation. In the demo, we use an AI planner to guide the simulation, a number of simulated UPnP devices, a real device running Java, and a two-room apartment. The related video is available at http://www.youtube.com/watch?v=2w_UIwRqtBY.

Ehsan Ullah Warriach, Eirini Kaldeli, Jaap Bresser, Alexander Lazovik, Marco Aiello

BPEL4Pegasus: Combining Business and Scientific Workflows

Business and scientific workflow management systems (WfMSs) offer different features to their users because they are developed for different application areas with different requirements. Research is currently being done to extend business WfMSs with functionality that meets the requirements of scientists and scientific applications. The idea is to bring the strengths of business WfMSs to e-Science, but this entails great effort in re-implementing features already offered by scientific WfMSs. In our work, we investigated another approach, namely combining business and scientific workflows and thus harnessing the advantages of both. We demonstrate a prototype that implements this idea with BPEL as the business workflow language and Pegasus as the scientific WfMS. Our motivation is the fact that the manual work needed to correctly install and configure Pegasus can be supervised by a BPEL workflow to minimize sources of failure and automate the overall process of scientific experimenting.

Mirko Sonntag, Dimka Karastoyanova, Ewa Deelman

Tutorial Abstracts

Multidisciplinary Views of Business Contracts

Several major trends in the services industry drive toward an increasing importance of contracts. These include the formalization of business processes across the client and the provider organizations; resource administration in cloud computing environments; service-level agreements as they arise in infrastructure and networking services; and services viewed from the perspective of real-life engagements.

Munindar P. Singh, Nirmit Desai

Quantitative Service Analysis

Service orientation has become popular due to dynamic market conditions and changing customer needs. A successful service-oriented architecture implementation requires the right identification of services from business process models. Service identification is considered to be the main activity in the modeling of a service-oriented solution, as errors made during service identification flow down through the detailed design and implementation activities.

Though service orientation has been an important milestone in many enterprise transformation initiatives, there has not been much work on the identification of services. Services have been identified and are used in day-to-day transactions, but they are limited to the exchange of information between partners (two different organizations) or are infrastructure-related. Functionalities that are widely used across all applications, such as security and auditing, have been considered for servicification. In other cases, business processes have been treated as simple orchestrated sets of web services, with each activity mapping to a single web service.

Adopting any service identification approach for service orientation without verification would be impractical, for the simple reason that no common notion of service can be established among stakeholders. It is essential to assert whether all identified services provide the necessary value and exhibit acceptable technical health (flexibility, reuse, etc.). To be more effective, there is a need for a methodology that can quantitatively measure the candidature of services with respect to business process models. With such automation, a platform can be provided to bootstrap service analysis, where stakeholders can continually model and refine services based on predefined criteria.

This tutorial is intended for researchers and industry practitioners who are interested in Service-Oriented Architecture and service analysis. The tutorial gives a deeper insight into service analysis and service identification methodologies. Though our methodology follows the prescribed top-down approach, recognizing the importance of starting with business models for service identification, it stands apart in that it is based on a mathematical model rather than heuristics or questionnaires. Our method adopts a quantitative way of grouping sets of business activities and measuring the service candidacy of those groups based on well-defined principles. It also demonstrates an automated tool for service analysis.

Naveen Kulkarni, Deepti Parachuri, Shashank Trivedi

Scalable Services: Understanding Architecture Trade-off

Creating Internet-scale services is a critical challenge for many organizations today. Data storage is a key component and factor in scalability, and data partitioning and replication, along with loose coupling and simple service interfaces, have become successful architectural guidelines for preventing scalability issues.

Partitioning realizes incremental scalability by splitting up large data sets and distributing smaller data shards across multiple servers. Replication and loose coupling help tolerate server failures and improve service availability. Replica synchronization, however, can produce substantial volumes of server-to-server traffic and delay service response time.

Distribution of storage infrastructure, consequently, provokes fundamental trade-off challenges, known as the strong CAP principle: only two out of the three properties of a distributed system, strong consistency (C), high availability (A), and partition-tolerance (P), can be achieved at the same time. For example, a transactional database on a single node provides CA without P, a distributed database system with pessimistic locking provides CP without A, and the Domain Name System provides AP without C. The weak CAP principle generalizes the strong CAP principle by characterizing the trade-off as a continuum instead of binary choices. In particular, relaxing consistency requirements and trading consistency for higher availability has become a successful modus operandi for Internet-scale systems.

Key-value data stores provide storage capabilities for a wide range of applications and services, from Amazon’s shopping carts to Zynga’s social gaming engine. We explore common mechanisms employed by Internet-scale key-value data stores, such as Dynamo, Cassandra, and Membase, and discuss how key-value data stores are used in support of representative Internet applications and services.
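As a small, hedged illustration of the consistency/availability trade-off in Dynamo-style key-value stores (a textbook quorum rule, not a claim about any particular system's defaults): with N replicas, a read quorum R, and a write quorum W, reads are guaranteed to observe the latest acknowledged write only if the quorums overlap.

def quorums_overlap(n, r, w):
    # Dynamo-style rule: if R + W > N, every read quorum intersects every
    # write quorum, so each read contacts at least one replica holding the
    # latest acknowledged write.
    return r + w > n

# N = 3 replicas
print(quorums_overlap(3, 2, 2))  # True  -> stronger consistency, higher latency
print(quorums_overlap(3, 1, 1))  # False -> relaxed consistency, higher availability

Choosing smaller quorums is one concrete way systems trade consistency for availability and latency, which is the relaxation discussed above.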

To evaluate and compare eventually consistent data stores, metrics and benchmarking tools are needed. We review metrics proposed by the distributed systems community and argue for a novel consistency benchmarking model as a systematic approach to measure relaxed consistency.

Markus Klems, Stefan Tai

Crowd-Driven Processes: State of the Art and Research Challenges

Over the past few years the crowdsourcing paradigm has evolved from its humble beginnings as isolated, purpose-built initiatives, such as Wikipedia, Elance, and Mechanical Turk, into a growth industry employing over 2 million knowledge workers and contributing over half a billion dollars to the digital economy. Web 2.0 provides the technological foundations upon which the crowdsourcing paradigm evolves and operates, enabling networked experts to work collaboratively to complete a specific task.

Crowdsourcing has the potential to significantly transform business processes by incorporating the knowledge and skills of globally distributed experts to drive business objectives at shorter cycles and lower cost. Many interesting and successful examples exist, such as GoldCorp, TopCoder, and Threadless. However, for enterprises to fully adopt this mechanism and benefit from its appealing value propositions in terms of reduced time-to-value, a set of challenges remains: enterprises must retain their brand, achieve high-quality contributions, and deploy crowdsourcing at minimum cost.

Enterprise crowdsourcing poses interesting challenges for both academic and industrial research along the social, legal, and technological dimensions. In this tutorial we present a landscape of existing crowdsourcing applications targeted at the enterprise domain. We describe the challenges that researchers and practitioners face when thinking about various aspects of enterprise crowdsourcing. First, to establish technological foundations, what are the interaction models and protocols between the enterprise and the crowd (including different types of crowd, such as internal, external and hybrid models)? Second, how will crowdsourcing face the challenges of quality assurance, enabling enterprises to optimally leverage the scalable workforce? Third, what are the novel (Web) applications enabled by enterprise crowdsourcing, and how can existing business processes be transformed for crowd consumption?

Maja Vukovic, Claudio Bartolini

Backmatter
