
About This Book

This book constitutes the refereed proceedings of the 11th International Conference on Service-Oriented Computing, ICSOC 2013, held in Berlin, Germany, in December 2013. The 29 full papers and 27 short papers presented were carefully reviewed and selected from 205 submissions. The papers are organized in topical sections on service engineering; service operations and management; services in the cloud; and service applications and implementations.




Data-Centricity and Services Interoperation

This position paper highlights three core areas in which persistent data will be crucial to the management of interoperating services, and surveys selected research and challenges in the area. Incorporating the data-centric perspective holds the promise of providing formal foundations for service interoperation that address issues such as a syntax-independent meta-model and semantics, and the faithful modeling of parallel interactions between multiple parties.

Richard Hull

Research Track

QoS-Aware Cloud Service Composition Using Time Series

Cloud service composition is typically long-term and economically driven. We propose to use multi-dimensional time series to represent the economic models during composition. The cloud service composition problem is then modeled as a similarity search problem, and a novel correlation-based search algorithm is proposed. Finally, experiments and their results are presented to show the performance of the proposed composition approach.

Zhen Ye, Athman Bouguettaya, Xiaofang Zhou
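The correlation-based similarity search at the heart of this approach can be grounded in a standard Pearson correlation between time series. The following is only an illustrative sketch; the provider data, function names, and one-dimensional series are invented and are not the authors' algorithm:

```python
def pearson(x, y):
    """Pearson correlation between two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def best_match(demand, providers):
    """Pick the provider whose economic series correlates most with demand."""
    return max(providers, key=lambda p: pearson(demand, providers[p]))

# Toy economic models represented as one-dimensional time series.
demand = [1.0, 2.0, 3.0, 4.0]
providers = {"A": [2.0, 4.0, 6.0, 8.0],   # moves with demand
             "B": [4.0, 3.0, 2.0, 1.0]}   # moves against demand
```

A real composition would compare multi-dimensional series (e.g., cost, throughput, reputation) and index them for efficient similarity search.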

QoS Analysis in Heterogeneous Choreography Interactions

With an increasing number of services and devices interacting in a decentralized manner, choreographies are an active area of investigation. The heterogeneous nature of interacting systems leads to choreographies that may include not only conventional services, but also sensor-actuator networks, databases and service feeds. Their middleware behavior within choreographies is captured through abstract interaction paradigms such as tuple space. In this paper, we study these heterogeneous interaction paradigms, connected through an eXtensible Service Bus proposed in a recent research project. As the functioning of such choreographies depends on the Quality of Service (QoS) performance of participating entities, a detailed analysis of interaction paradigms and their effect on QoS metrics is needed. We study the composition of QoS metrics in heterogeneous choreographies, and the subsequent tradeoffs. This produces useful insights, such as the selection of a particular system and its middleware at design time, or end-to-end QoS expectations/guarantees at runtime. Non-parametric hypothesis tests are applied to systems in which QoS-dependent services may be replaced at runtime to prevent deterioration in performance.

Ajay Kattepur, Nikolaos Georgantas, Valérie Issarny

Improving Interaction with Services via Probabilistic Piggybacking

Modern service-oriented applications increasingly include publicly released services that impose novel and compelling requirements in terms of scalability and support for clients with limited capabilities, such as mobile applications. To meet these requirements, service-oriented applications require careful optimisation of their provisioning mechanisms. In this paper we investigate a novel technique, called probabilistic piggybacking, that optimises the interactions between providers and clients. In our approach we automatically infer a probabilistic model that captures the behaviour of clients and predicts future service requests. The provider exploits this information by piggybacking onto each message toward a client the response to the predicted next request, minimizing both the amount of exchanged messages and the client latency. The paper focuses on REST services and illustrates the technique with a case study based on a publicly available service currently in use.

Carlo Ghezzi, Mauro Pezzè, Giordano Tamburrelli
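In its simplest form, the probabilistic client model can be approximated by a first-order Markov chain over observed request sequences. The sketch below is illustrative only; the endpoint names are invented and the paper's actual model may be richer:

```python
from collections import Counter, defaultdict

class NextRequestPredictor:
    """First-order Markov model of client behaviour."""
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, sequence):
        """Record the transitions of one observed client session."""
        for cur, nxt in zip(sequence, sequence[1:]):
            self.transitions[cur][nxt] += 1

    def predict(self, current):
        """Most likely next request, or None for an unseen state."""
        counts = self.transitions.get(current)
        return counts.most_common(1)[0][0] if counts else None

p = NextRequestPredictor()
p.observe(["GET /cart", "GET /checkout", "POST /pay"])
p.observe(["GET /cart", "GET /checkout", "GET /cart"])
```

The provider would then piggyback the response to `p.predict(last_request)` onto the current response message.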

Runtime Enforcement of First-Order LTL Properties on Data-Aware Business Processes

This paper studies the following problem: given a relational data schema, a temporal property over the schema, and a process that modifies the data instances, how can we enforce the property during each step of the process execution? Temporal properties are defined using a first-order future-time LTL (FO-LTL) and are evaluated under finite and fixed domain assumptions. Under such restrictions, existing techniques for monitoring propositional formulas can be used, but they would require exponential space in the size of the domain. Our approach is based on the construction of a first-order automaton that is able to perform the monitoring incrementally, using exponential space in the size of the property. Technically, we show that our mechanism captures the semantics of FO-LTL on finite but progressing sequences of instances, and that it reports satisfaction or violation of the property at the earliest possible time.

Riccardo De Masellis, Jianwen Su

QoS-Aware Service VM Provisioning in Clouds: Experiences, Models, and Cost Analysis

Recent studies show that service systems hosted in clouds can elastically scale the provisioning of pre-configured virtual machines (VMs) with workload demands, but suffer from performance variability, particularly varying response times. Service management in clouds is further complicated when aiming to strike an optimal trade-off between cost (i.e., proportional to the number and types of VM instances) and the fulfillment of quality-of-service (QoS) properties (e.g., a system should serve at least 30 requests per second for more than 90% of the time). In this paper, we develop a QoS-aware VM provisioning policy for service systems in clouds with high capacity variability, using experimental as well as modeling approaches. Using a wiki service hosted in a private cloud, we empirically quantify the QoS variability of a single VM with different configurations in terms of capacity. We develop a Markovian framework that explicitly models the capacity variability of a service cluster and derives a probability distribution of QoS fulfillment. To achieve the guaranteed QoS at minimal cost, we construct theoretical and numerical cost analyses, which facilitate the search for an optimal size of a given VM configuration and additionally support the comparison between VM configurations.

Mathias Björkqvist, Sebastiano Spicuglia, Lydia Chen, Walter Binder
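The core question of such a cost analysis, the probability that a cluster meets a throughput target despite per-VM capacity variability, can be sketched with a simple two-state binomial model. All capacities and probabilities below are illustrative, not the paper's measurements or its Markovian framework:

```python
from math import comb

def fulfillment_prob(n, p_hi, c_hi, c_lo, target):
    """P(cluster of n VMs serves >= target req/s), assuming each VM
    independently runs at c_hi req/s with probability p_hi, else at c_lo."""
    return sum(comb(n, k) * p_hi**k * (1 - p_hi)**(n - k)
               for k in range(n + 1)
               if k * c_hi + (n - k) * c_lo >= target)

def min_cluster_size(p_hi, c_hi, c_lo, target, qos=0.9, n_max=100):
    """Smallest cluster size meeting the QoS level, e.g. '>= target req/s
    for more than 90% of the time'; None if n_max is insufficient."""
    for n in range(1, n_max + 1):
        if fulfillment_prob(n, p_hi, c_hi, c_lo, target) >= qos:
            return n
    return None
```

Coupled with a per-VM price, `min_cluster_size` over several configurations yields the cost comparison the abstract describes.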

Personalized Quality Prediction for Dynamic Service Management Based on Invocation Patterns

Recent service management needs, e.g., in the cloud, require services to be managed dynamically: services might need to be selected or replaced at runtime. For services with similar functionality, one approach is to identify the most suitable services for a user based on an evaluation of the quality of service (QoS) of these services. In environments like the cloud, further personalisation is also paramount. We propose a personalized QoS prediction method which considers the impact of the network, the server environment and user input. It analyses previous user behaviour, extracts invocation patterns from monitored QoS data through pattern mining, and predicts QoS based on these invocation patterns and user invocation features. Experimental results show that the proposed method can significantly improve the accuracy of QoS prediction.

Li Zhang, Bin Zhang, Claus Pahl, Lei Xu, Zhiliang Zhu

Open Source versus Proprietary Software in Service-Orientation: The Case of BPEL Engines

It is a long-standing debate whether software developed as open source is generally of higher quality than proprietary software. Although the open source community has grown immensely during the last decade, there is still no clear answer. Service-oriented software and middleware tends to rely on highly complex and interrelated standards and frameworks. Thus, it is questionable whether small and loosely coupled teams, as are typical in open source software development, can compete with major vendors. Here, we focus on a central part of service-oriented software systems, namely process engines for service orchestration, and compare open source and proprietary solutions. We use the Web Services Business Process Execution Language (BPEL) and compare standard conformance and its impact on language expressiveness, in terms of workflow pattern support, across eight engines. The results show that, although the top open source engines are on par with their proprietary counterparts, proprietary engines generally perform better.

Simon Harrer, Jörg Lenhard, Guido Wirtz

Detection of SOA Patterns

The rapid increase of communications combined with the deployment of large-scale information systems leads to the democratization of Service Oriented Architectures (SOA). However, systems based on these architectures (called SOA systems) evolve rapidly due to the addition of new functionalities, the modification of execution contexts and the integration of legacy systems. This evolution may hinder the maintenance of these systems, and thus increase the cost of their development. To ease the evolution and maintenance of SOA systems, they should satisfy good design quality criteria, possibly expressed using patterns. By patterns, we mean good practices to solve known and common problems when designing software systems. The goal of this study is to detect patterns in SOA systems to assess their design and their Quality of Service (QoS). We propose a three-step approach called SODOP (Service Oriented Detection Of Patterns), which is based on our previous work for the detection of antipatterns. As a first step, we define five SOA patterns extracted from the literature. We specify these patterns using “rule cards”, which are sets of rules that combine various metrics, static or dynamic, using a formal grammar. The second step consists in automatically generating detection algorithms from the rule cards. The last step consists in concretely applying these algorithms to detect patterns in SOA systems at runtime. We validate SODOP on two SOA systems that contain 13 and 91 services, respectively. This validation demonstrates that our proposed approach is precise and efficient.

Anthony Demange, Naouel Moha, Guy Tremblay
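A rule card, in this spirit, is essentially a named combination of metric thresholds. A toy rendering follows; the pattern name, metrics and thresholds are invented, and the paper uses a formal grammar rather than plain predicates:

```python
# A "rule card" maps a pattern name to a set of metric-based rules that
# must all hold for a service to be flagged as an occurrence of the pattern.
RULE_CARDS = {
    "Facade": [lambda m: m["incoming_calls"] >= 10,   # many consumers
               lambda m: m["outgoing_calls"] >= 3],   # delegates to several services
}

def detect(card_name, services):
    """Return the services whose metrics satisfy every rule of the card."""
    rules = RULE_CARDS[card_name]
    return [name for name, metrics in services.items()
            if all(rule(metrics) for rule in rules)]

services = {"Gateway": {"incoming_calls": 12, "outgoing_calls": 5},
            "Logger":  {"incoming_calls": 2,  "outgoing_calls": 0}}
```

Generating such detectors automatically from the rule-card grammar is the second step of the SODOP approach.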

Optimal Strategy for Proactive Service Delivery Management Using Inter-KPI Influence Relationships

Service interactions now account for a major source of revenue and employment in many modern economies, yet service operations management remains extremely complex. To lower risks, every Service Delivery (SD) environment needs to define its own key performance indicators (KPIs) to evaluate the present state of operations and its business outcomes. Due to the over-use of performance measurement systems, a large number of KPIs have been defined, but their influence on each other is unknown. It is thus important to adopt data-driven approaches to demystify the inter-relationships among service delivery KPIs and establish the critical ones that have a stronger influence on business outcomes. Given a set of operational KPIs and SD outcomes, we focus on the problems of (a) extracting inter-relationships and impact delays among KPIs and outcomes, and building a regression-based KPI influence model to estimate the SD outcomes as functions of KPIs; (b) proposing, based on the model, a schedule of action plans to transform the current service delivery system state; and (c) building a visualization tool that enables validation of the extracted KPI influence model and supports “what-if” analysis.

Gargi B. Dasgupta, Yedendra Shrinivasan, Tapan K. Nayak, Jayan Nallacherry
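The first step, estimating an outcome as a lagged function of a KPI, reduces in its simplest form to ordinary least squares over shifted series. The sketch below uses synthetic data and a single KPI; it is a stand-in for the idea, not the paper's model:

```python
def lagged_fit(kpi, outcome, lag):
    """Fit outcome[t] ~ a * kpi[t - lag] + b by ordinary least squares."""
    xs, ys = kpi[:len(kpi) - lag], outcome[lag:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def impact_delay(kpi, outcome, max_lag):
    """Estimate the impact delay as the lag with the steepest fitted slope."""
    return max(range(1, max_lag + 1),
               key=lambda lag: abs(lagged_fit(kpi, outcome, lag)[0]))

# Synthetic example: the outcome follows the KPI with a delay of 2 periods.
kpi = [1, 3, 2, 5, 4, 6]
outcome = [0, 0, 2, 6, 4, 10]
```

With many KPIs, the same idea generalizes to multivariate regression over a matrix of lagged KPI columns.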

On-the-Fly Adaptation of Dynamic Service-Based Systems: Incrementality, Reduction and Reuse

On-the-fly adaptation means that adaptation activities are not explicitly represented at design time but are discovered and managed at run time, considering all aspects of the execution environment. In this paper we present a comprehensive framework for the on-the-fly adaptation of highly dynamic service-based systems. The framework relies on advanced context-aware adaptation techniques that allow for i) incremental handling of complex adaptation problems by interleaving problem solving and solution execution, ii) reduction of the complexity of each adaptation problem by minimizing the search space according to the specific execution context, and iii) reuse of adaptation solutions by learning from past executions. We evaluate the applicability of the proposed approach on a real-world scenario based on the operation of the Bremen sea port.

Antonio Bucchiarone, Annapaola Marconi, Claudio Antares Mezzina, Marco Pistore, Heorhi Raik

WT-LDA: User Tagging Augmented LDA for Web Service Clustering

Clustering Web services, i.e., grouping together services with similar functionality, helps improve both the accuracy and the efficiency of Web service search engines. An important limitation of existing Web service clustering approaches is that they focus solely on WSDL (Web Service Description Language) documents. There has been a recent trend of using user-contributed tagging data to improve the performance of service clustering. Nonetheless, these approaches fail to fully leverage the information carried by the tagging data and hence only marginally improve clustering performance. In this paper, we propose a novel approach that seamlessly integrates tagging data and WSDL documents through an augmented Latent Dirichlet Allocation (LDA) model. We also develop three strategies to preprocess tagging data before they are integrated into the LDA framework for clustering. Comprehensive experiments based on real data and the implementation of a Web service search engine demonstrate the effectiveness of the proposed LDA-based service clustering approach.

Liang Chen, Yilun Wang, Qi Yu, Zibin Zheng, Jian Wu

Does One-Size-Fit-All Suffice for Service Delivery Clients?

The traditional mode of delivering IT services has been through customer-specific teams: a dedicated team is assigned to address all (and only those) requirements that are specific to a customer. However, this way of organizing service delivery leads to inefficiencies, due to the inability to use expertise and available resources across teams in a flexible manner. To address some of these challenges, there has recently been interest in shared delivery of services, where instead of customer-specific teams working in silos, there are cross-customer teams (shared resource pools) that can potentially service more than one customer. This gives rise to the question of the best way of grouping shared resources across customers. Especially, with the large variations in the technical and domain skills required to address customer requirements, what should be the service delivery model for diverse customer workloads? Should it be customer-focused, business domain-focused, or technology-focused? This paper simulates different delivery models in the face of complex customer workloads, diverse customer profiles, stringent service contracts, and evolving skills, with the goal of scientifically deriving principles of decision making for a suitable delivery model. Results show that workload arrival patterns, customer work profile combinations and domain skills all play a significant role in the choice of delivery model. Specifically, the complementary nature of work arrivals and the degree of overlapping skill requirements among customers play a crucial role in the choice of models. Interestingly, the impact of the skill expertise level of resources is overshadowed by these two factors.

Shivali Agarwal, Renuka Sindhgatta, Gargi B. Dasgupta

Runtime Evolution of Service-Based Multi-tenant SaaS Applications

The Single-Instance Multi-Tenancy (SIMT) model for service delivery enables a SaaS provider to achieve economies of scale via the reuse and runtime sharing of software assets between tenants. However, evolving such an application at runtime to cope with the changing requirements from its different stakeholders is challenging. In this paper, we propose an approach to evolving service-based SIMT SaaS applications that are developed based on Dynamic Software Product Lines (DSPL) with runtime sharing and variation among tenants. We first identify the different kinds of changes to a service-based SaaS application, and the consequential impacts of those changes. We then discuss how to realize and manage each change and its resultant impacts in the DSPL. A software engineer declaratively specifies changes in a script, and realizes the changes to the runtime model of the DSPL using the script. We demonstrate the feasibility of our approach with a case study.

Indika Kumara, Jun Han, Alan Colman, Malinda Kapuruge

Critical Path-Based Iterative Heuristic for Workflow Scheduling in Utility and Cloud Computing

This paper considers the workflow scheduling problem in utility and cloud computing. It deals with the allocation of tasks to suitable resources so as to minimize the total rental cost of all resources while maintaining precedence constraints on one hand and meeting workflow deadlines on the other. A mixed integer linear programming (MILP) model is developed to solve small problem instances. In view of the problem's NP-hard nature, a Critical Path-based Iterative (CPI) heuristic is developed to find approximate solutions to large problem instances, in which multiple complete critical paths are iteratively constructed by dynamic programming according to the service assignments of scheduled activities and the longest (cheapest) services of the unscheduled ones. Each critical path optimization problem is relaxed to a Multi-stage Decision Process (MDP) problem and optimized by the proposed dynamic-programming-based Pareto method. The results of the scheduled critical path are used to construct the next critical path. The iterative process stops as soon as the total duration of the newly found critical path is no more than the deadline of all tasks in the workflow. Extensive experimental results show that the proposed CPI heuristic outperforms existing state-of-the-art algorithms on most problem instances. For example, compared with an existing PCP (partial critical path based) algorithm, the proposed CPI heuristic achieves a 20.7% decrease in the average normalized resource renting cost for instances with 1,000 activities.

Zhicheng Cai, Xiaoping Li, Jatinder N. D. Gupta
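The backbone of the CPI heuristic, finding a complete critical path in a task DAG, is a classic dynamic program. A minimal sketch over a hypothetical four-task workflow follows; task names and durations are invented:

```python
def critical_path(durations, preds):
    """Longest-duration (critical) path in a task DAG via dynamic programming.

    durations: task -> duration, listed in topological order.
    preds:     task -> list of predecessor tasks.
    Returns (total duration, list of tasks on one critical path)."""
    finish, back = {}, {}
    for t in durations:
        best = max(preds.get(t, []), key=lambda p: finish[p], default=None)
        finish[t] = durations[t] + (finish[best] if best else 0)
        back[t] = best
    end = max(finish, key=finish.get)
    path, cur = [], end
    while cur is not None:
        path.append(cur)
        cur = back[cur]
    return finish[end], path[::-1]
```

In the CPI heuristic, each such path would then be optimized as a multi-stage decision process before the next path is constructed.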

REFlex: An Efficient Web Service Orchestrator for Declarative Business Processes

Declarative business process modeling is a flexible approach to business process management in which participants can decide the order in which activities are performed. Business rules are employed to determine restrictions and obligations that must be satisfied during execution time. In this way, complex control-flows are simplified and participants have more flexibility to handle unpredicted situations. Current implementations of declarative business process engines focus only on manual activities. Automatic communication with external applications to exchange data and reuse functionality is barely supported. Such automation opportunities could be better exploited by a declarative engine that integrates with existing SOA technologies. In this paper, we introduce an engine that fills this gap. REFlex is an efficient, data-aware declarative web services orchestrator. It enables participants to call external web services to perform automated tasks. Unlike related work, the REFlex algorithm does not depend on the generation of all reachable states, which makes it well suited to model large and complex business processes. Moreover, REFlex is capable of modeling data-dependent business rules, which provides unprecedented context awareness and modeling power to the declarative paradigm.

Natália Cabral Silva, Renata Medeiros de Carvalho, César Augusto Lins Oliveira, Ricardo Massa Ferreira Lima

Task Scheduling Optimization in Cloud Computing Applying Multi-Objective Particle Swarm Optimization

Optimizing the scheduling of tasks in a distributed heterogeneous computing environment is a nonlinear multi-objective NP-hard problem that plays an important role in optimizing cloud utilization and Quality of Service (QoS). In this paper, we develop a comprehensive multi-objective model for optimizing task scheduling to minimize task execution time, task transferring time, and task execution cost. However, the objective functions in this model are in conflict with one another. Considering this fact and the supremacy of the Particle Swarm Optimization (PSO) algorithm in speed and accuracy, we design a multi-objective algorithm based on the multi-objective PSO (MOPSO) method to provide an optimal solution for the proposed model. To implement and evaluate the proposed model, we extend the Jswarm package to a multi-objective Jswarm (MO-Jswarm) package. We also extend the Cloudsim toolkit, applying MO-Jswarm as its task scheduling algorithm. MO-Jswarm in Cloudsim determines the optimal task arrangement among VMs according to the MOPSO algorithm. The simulation results show that the proposed method has the ability to find optimal trade-off solutions for multi-objective task scheduling problems that represent the best possible compromises among the conflicting objectives, and significantly increases the QoS.

Fahimeh Ramezani, Jie Lu, Farookh Hussain
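The notion of a trade-off solution here is Pareto optimality: a schedule is kept only if no other schedule is at least as good on every objective and strictly better on at least one. A minimal filter over invented objective vectors, assuming all objectives are minimized:

```python
def dominates(a, b):
    """True if a is no worse than b in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Non-dominated subset of a list of objective-value tuples."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]

# (execution time, transferring time, cost) of four candidate schedules
candidates = [(4, 2, 9), (5, 3, 9), (3, 4, 7), (6, 1, 8)]
```

MOPSO maintains an archive of exactly such non-dominated particles while the swarm explores the schedule space.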

Verification of Artifact-Centric Systems: Decidability and Modeling Issues

Artifact-centric business processes have recently emerged as an approach in which processes are centred around the evolution of business entities, called artifacts, giving equal importance to control-flow and data. The recent Guard-State-Milestone (GSM) framework provides means for specifying business artifact lifecycles in a declarative manner, using constructs that match how executive-level stakeholders think about their business. However, it turns out that formal verification of GSM is undecidable even for very simple propositional temporal properties. We attack this challenging problem by translating GSM into a well-studied formal framework. We exploit this translation to isolate an interesting class of “state-bounded” GSM models for which verification of sophisticated temporal properties is decidable. We then introduce some guidelines for turning an arbitrary GSM model into a state-bounded, verifiable model.

Dmitry Solomakhin, Marco Montali, Sergio Tessaris, Riccardo De Masellis

Automatically Composing Services by Mining Process Knowledge from the Web

Current approaches in Service-Oriented Architecture (SOA) make it challenging for users to get involved in service composition, due to the in-depth knowledge of SOA standards and techniques required. To shield users from this complexity, we automatically generate composed services for end-users using process knowledge available on the Web. Our approach uses natural language processing techniques to extract tasks and automatically identifies the services required to accomplish them. We represent the extracted tasks in a task model to find the services, and then generate a user interface (UI) for a user to perform the tasks. Our case study shows that our approach can extract tasks from how-to instruction Web pages with high precision (about 90%). The generated task model helps to discover services and compose the found services to perform a task. Our case study also shows that our approach can reach more than 90% accuracy in service composition by identifying accurate data flow relations between services.

Bipin Upadhyaya, Ying Zou, Shaohua Wang, Joanna Ng

Batch Activities in Process Modeling and Execution

In today’s process engines, instances of a process usually run independently of each other. However, in certain situations a synchronized execution of a group of instances of the same process is necessary, especially to allow the comparison of business cases or to improve process performance. In this paper, we introduce the concept of batch activities in process modeling and execution. We provide the possibility to assign a batch model to an activity, making it a batch activity. As opposed to related approaches, the batch model has several parameters with which the process designer can individually configure the batch execution. A rule-based batch activation is used to enable flexible batch handling. Our approach allows several batches to run in parallel in the case of multiple resources. The applicability of the approach is illustrated in a case study.

Luise Pufahl, Mathias Weske
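A rule-based batch activation of the kind described can be sketched as a size-or-timeout rule: collect instances until either a batch size or a waiting-time bound is reached. The parameter names below are invented for illustration and do not mirror the paper's batch model:

```python
class BatchActivity:
    """Batch activity that fires once max_size instances have queued,
    or timeout time units after the first instance arrived."""
    def __init__(self, max_size, timeout):
        self.max_size, self.timeout = max_size, timeout
        self.pending, self.first_arrival = [], None

    def arrive(self, instance, now):
        """Queue an instance; return the executed batch if the rule fires."""
        if not self.pending:
            self.first_arrival = now
        self.pending.append(instance)
        return self._maybe_fire(now)

    def tick(self, now):
        """Clock update; fires a partially filled batch on timeout."""
        return self._maybe_fire(now)

    def _maybe_fire(self, now):
        if self.pending and (len(self.pending) >= self.max_size
                             or now - self.first_arrival >= self.timeout):
            batch, self.pending = self.pending, []
            return batch
        return None
```

With multiple resources, several such batch collectors can run in parallel, one per resource.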

Multi-Objective Service Composition Using Reinforcement Learning

Web services have the potential to offer enterprises the ability to compose internal and external business services in order to accomplish complex processes. Service composition becomes an increasingly challenging issue when complex and critical applications are built upon services with different QoS criteria. However, most of the existing QoS-aware compositions are simply based on the assumption that multiple criteria, whether conflicting or not, can be combined into a single criterion to be optimized according to some utility function. In practice, this can be very difficult, as utility functions or weights are not well known a priori. In this paper, a novel multi-objective approach is proposed to handle QoS-aware Web service composition with conflicting objectives and various restrictions on quality metrics. The proposed approach uses reinforcement learning to deal with the uncertainty inherent in open and decentralized environments. Experimental results reveal the ability of the proposed approach to find a set of Pareto optimal solutions of equivalent quality that satisfy multiple QoS objectives under different user preferences.

Ahmed Moustafa, Minjie Zhang
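The reinforcement learning machinery underlying such an approach is, at its core, the tabular value update. The sketch below shows a single-objective Q-learning step; the paper's multi-objective extension, which maintains a set of such policies, is not reproduced here, and the state and action names are invented:

```python
def q_update(Q, state, action, reward, next_state, actions,
             alpha=0.5, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

# One learning step after invoking candidate service "svcA" in state "start".
Q = {}
q_update(Q, "start", "svcA", reward=1.0, next_state="shipped",
         actions=["svcA", "svcB"])
```

In a multi-objective setting, the scalar reward would be replaced by a reward vector, with non-dominated value estimates kept per state-action pair.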

Provisioning Quality-Aware Social Compute Units in the Cloud

To date, on-demand provisioning models of human-based services in the cloud are mainly used to deal with simple human tasks solvable by individual compute units (ICU). In this paper, we propose a framework allowing the provisioning of a group of people as an execution service unit, a so-called Social Compute Unit (SCU), by utilizing clouds of ICUs. Our model allows service consumers to specify quality requirements, which contain constraints and objectives with respect to skills, connectedness, response time, and cost. We propose a solution model for tackling the problem in quality-aware SCUs provisioning and employ some metaheuristic techniques to solve the problem. A prototype of the framework is implemented, and experiments using data from simulated clouds and consumers are conducted to evaluate the model.

Muhammad Z. C. Candra, Hong-Linh Truong, Schahram Dustdar

Process Discovery Using Prior Knowledge

In this paper, we describe a process discovery algorithm that leverages prior knowledge and process execution data to learn a control-flow model. Most process discovery algorithms are not able to exploit prior knowledge supplied by a domain expert. Our algorithm incorporates prior knowledge using ideas from Bayesian statistics. We demonstrate that our algorithm is able to recover a control-flow model in the presence of noisy process execution data, and uncertain prior knowledge.

Aubrey J. Rembert, Amos Omokpo, Pietro Mazzoleni, Richard T. Goodwin

Mirror, Mirror, on the Web, Which Is the Most Reputable Service of Them All?

A Domain-Aware and Reputation-Aware Method for Service Recommendation

With the wide adoption of service and cloud computing, we nowadays observe a rapidly increasing number of services and their compositions, resulting in a complex and evolving service ecosystem. Facing a huge number of services with similar functionalities, identifying the core services in different domains and recommending trustworthy ones to developers is an important issue for the promotion of the service ecosystem. In this paper, we present a heterogeneous network model, and a unified reputation propagation (URP) framework is introduced to calculate the global reputation of entities in the ecosystem. Furthermore, a topic model based on Latent Dirichlet Allocation (LDA) is used to cluster the services into specific domains. Combining URP with the topic model, we re-rank services’ reputations to distinguish the core services, so as to recommend trustworthy domain-aware services. Experiments on ProgrammableWeb data show that, by fusing the heterogeneous network model and the topic model, we gain a 66.67% improvement in top-20 precision and a 20% to 30% improvement in long-tail (top 200 to top 500) precision. Furthermore, the reputation- and domain-aware recommendation method gains a 118.54% improvement in top-10 precision.

Keman Huang, Jinhui Yao, Yushun Fan, Wei Tan, Surya Nepal, Yayu Ni, Shiping Chen
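Global reputation propagation over a heterogeneous service network is, in its simplest form, a PageRank-style fixed-point computation. The following sketch uses an invented invocation graph and is a generic stand-in, not the URP framework itself:

```python
def propagate_reputation(links, damping=0.85, iters=50):
    """PageRank-style global reputation over a directed graph.
    links: entity -> list of entities it invokes/endorses."""
    nodes = set(links) | {t for ts in links.values() for t in ts}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1 - damping) / len(nodes) for n in nodes}
        dangling = sum(rank[n] for n in nodes if not links.get(n))
        for src, targets in links.items():
            for t in targets:
                nxt[t] += damping * rank[src] / len(targets)
        for n in nodes:            # spread dangling mass uniformly
            nxt[n] += damping * dangling / len(nodes)
        rank = nxt
    return rank

# Two mashups both build on service "mapsAPI"; reputation flows toward it.
rank = propagate_reputation({"mashup1": ["mapsAPI"],
                             "mashup2": ["mapsAPI"],
                             "mapsAPI": []})
```

Combining such global scores with per-domain topic clusters yields the domain-aware re-ranking the abstract describes.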

Service Discovery from Observed Behavior while Guaranteeing Deadlock Freedom in Collaborations

Process discovery techniques can be used to derive a process model from observed example behavior (i.e., an event log). As the observed behavior is inherently incomplete and models may serve different purposes, four competing quality dimensions—fitness, precision, simplicity, and generalization—have to be balanced to produce a process model of high quality.

In this paper, we investigate the discovery of processes that are specified as services. Given a service S and the observed behavior of a service R interacting with S, we discover a service model of R. Our algorithm balances the four quality dimensions based on user preferences. Moreover, unlike existing discovery approaches, we guarantee that the composition of S and R is deadlock free. The service discovery technique has been implemented in ProM, and experiments using service models of industrial size demonstrate the scalability of our approach.

Richard Müller, Christian Stahl, Wil M. P. van der Aalst, Michael Westergaard

Priority-Based Human Resource Allocation in Business Processes

In Business Process Management Systems, human resource management typically covers two steps: resource assignment at design time and resource allocation at run time. Although concepts like role-based assignment often yield several potential performers for an activity, there is a lack of mechanisms for prioritizing them, e.g., according to their skills or current workload. In this paper, we address this research gap. More specifically, we introduce an approach to define resource preferences, grounded on a validated, generic user preference model initially developed for semantic web services. Furthermore, we show an implementation of the approach, demonstrating its feasibility.

Cristina Cabanillas, José María García, Manuel Resinas, David Ruiz, Jan Mendling, Antonio Ruiz-Cortés

Prediction of Remaining Service Execution Time Using Stochastic Petri Nets with Arbitrary Firing Delays

Companies realize their services by business processes to stay competitive in a dynamic market environment. In particular, they track the current state of the process to detect undesired deviations, to provide customers with predicted remaining durations, and to improve the ability to schedule resources accordingly. In this setting, we propose an approach to predict remaining process execution time, taking into account the time passed since the last observed event. While existing approaches update predictions only upon event arrival and subtract elapsed time from the latest predictions, our method also considers expected events that have not yet occurred, resulting in better prediction quality. Moreover, the prediction approach is based on the Petri net formalism and is able to model concurrency appropriately. We present the algorithm and its implementation in ProM and compare its predictive performance to state-of-the-art approaches in simulated experiments and in an industry case study.

Andreas Rogge-Solti, Mathias Weske
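The core idea that a prediction should change even while no new event arrives can be illustrated with a small Monte Carlo sketch. This is our own illustration, not the paper's estimator, which operates on stochastic Petri nets rather than a single delay distribution; the function name is ours.

```python
import numpy as np

def conditional_remaining(samples, elapsed):
    """Expected remaining delay given that `elapsed` time units have
    already passed without the transition firing (truncated conditional mean)."""
    remaining = samples[samples > elapsed] - elapsed
    if remaining.size == 0:
        return 0.0
    return remaining.mean()

rng = np.random.default_rng(42)
# A non-memoryless delay: uniform between 5 and 15 time units.
uniform_delays = rng.uniform(5.0, 15.0, 100_000)

# Freshly enabled transition: conditional expectation is near the mean (~10).
print(conditional_remaining(uniform_delays, 0.0))
# After 8 units without an event, the conditional expectation shrinks (~3.5).
print(conditional_remaining(uniform_delays, 8.0))
```

For a memoryless (exponential) delay the conditional remaining time would stay constant as time passes; for the non-memoryless uniform delay above it shrinks, which is exactly the information a subtract-elapsed-time approach discards.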

Research Track Short Paper

Entity-Centric Search for Enterprise Services

The consumption of APIs, such as Enterprise Services (ESs) in an enterprise Service-Oriented Architecture (eSOA), has largely been a task for experienced developers. With the rapidly growing number of such (Web) APIs, users with little or no experience in a given API, e.g., mashup developers, face the problem of finding relevant API operations. However, building an effective search has been a challenge: Information Retrieval (IR) methods struggle with the brevity of text in API descriptions, whereas semantic search technologies require domain ontologies and formal queries. Motivated by the search behavior of users, we propose an iterative keyword search based on entities. The entities are part of a knowledge base whose content stems from model-driven engineering. We implemented our approach and conducted a user study showing significant improvements in search effectiveness.

Marcus Roy, Ingo Weber, Boualem Benatallah

Tactical Service Selection with Runtime Aspects

The quality of service (QoS) of a service composition is addressed by QoS-aware service selection. In the presence of sophisticated service charging models, a cost-minimized service selection can be obtained for the number of requests expected for a service composition in a predefined planning horizon. A service selection that is used to execute requests throughout an entire planning horizon is called tactical. The majority of service selection models assume a deterministic service execution, thereby neglecting the need for runtime adaptations of a service composition to react to service failures or deviating QoS values. The challenge addressed in this paper is to develop a tactical service selection approach that anticipates runtime adaptations of a service composition. It is shown that tactical service selection can be efficiently combined with an existing service reconfiguration method to achieve both runtime-related goals and tactical objectives.

Rene Ramacher, Lars Mönch

Online Reliability Time Series Prediction for Service-Oriented System of Systems

A Service-Oriented System of Systems (SoS) considers a system as a service and constructs a value-added SoS by outsourcing external systems through service composition. To cope with the dynamic and uncertain running environment and to assure the overall Quality of Service (QoS), online reliability prediction for SoS arises as a grand challenge in SoS research. In this paper, we propose a novel approach for component-level online reliability time series prediction based on Probabilistic Graphical Models (PGMs). We assess the proposed approach via invocation records collected from widely used real web services, and the experimental results demonstrate the effectiveness of our approach.

Lei Wang, Hongbing Wang, Qi Yu, Haixia Sun, Athman Bouguettaya

Multi-level Elasticity Control of Cloud Services

Fine-grained elasticity control of cloud services has to deal with multiple elasticity perspectives (quality, cost, and resources). We propose a cloud service elasticity control mechanism that considers the service structure for controlling cloud service elasticity at multiple levels. First, we define an abstract composition model for cloud services that enables multi-level elasticity control. Second, we define mechanisms for resolving conflicting elasticity requirements and generating action plans for elasticity control. Using the defined concepts and mechanisms, we develop a runtime system supporting multiple levels of elasticity control and validate the resulting prototype through experiments.

Georgiana Copil, Daniel Moldovan, Hong-Linh Truong, Schahram Dustdar

Reasoning on UML Data-Centric Business Process Models

Verifying the correctness of data-centric business process models is important to prevent errors from reaching the service that is offered to the customer. Although the semantic correctness of these models has been studied in detail, existing works deal with models defined in low-level languages (e.g. logic), which are complex and difficult to understand. This paper provides a way to reason semantically on data-centric business process models specified from a high-level and technology-independent perspective using UML.

Montserrat Estañol, Maria-Ribera Sancho, Ernest Teniente

QoS-Aware Multi-granularity Service Composition Based on Generalized Component Services

QoS-aware service composition aims to maximize the overall QoS values of the resulting composite service. Traditional methods only consider service instances that implement one abstract service in the composite service as candidates, and neglect those that fulfill multiple abstract services. To overcome this shortcoming, we present the concept of generalized component services to expand the selection scope and achieve a better solution. The problem of QoS-aware multi-granularity service composition is then formulated, and how to discover candidates for each generalized component service is elaborated. A genetic algorithm based approach is proposed to optimize the resulting composite service instance. Finally, empirical studies are performed.

Quanwang Wu, Qingsheng Zhu, Xing Jian

Evaluating Cloud Services Using a Multiple Criteria Decision Analysis Approach

The potential of Cloud services for cost reduction and other benefits has been capturing the attention of organizations. However, a difficult decision arises when an IT manager has to select a Cloud services provider because there are no established guidelines to help make that decision. In order to address this problem, we propose a multi-criteria model to evaluate Cloud services using the MACBETH method. The proposed method was demonstrated in a City Council in Portugal to evaluate and compare two Cloud services: Google Apps and Microsoft Office 365.

Pedro Costa, João Carlos Lourenço, Miguel Mira da Silva
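A multi-criteria evaluation of this kind ultimately reduces to a weighted additive value model once MACBETH's pairwise qualitative judgments have been turned into weights and value scores. The sketch below shows only that final aggregation step; the weights and scores are entirely made-up assumptions, not figures from the case study.

```python
# Hypothetical criteria weights and 0-100 value scores. MACBETH derives
# weights and scores from pairwise qualitative judgments; here we simply
# assume plausible numbers to illustrate the aggregation.
WEIGHTS = {"cost": 0.40, "security": 0.35, "usability": 0.25}

def overall_score(scores):
    """Weighted additive value model over the evaluation criteria."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

google_apps = {"cost": 80, "security": 60, "usability": 70}
office_365 = {"cost": 60, "security": 85, "usability": 75}

print(overall_score(google_apps))
print(overall_score(office_365))
```

With these assumed numbers the model would rank Office 365 slightly ahead; changing the weights can flip the ranking, which is why the weight-elicitation step is the heart of the method.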

An Approach for Compliance-Aware Service Selection with Genetic Algorithms

Genetic algorithms are popular for service selection as they deliver good results in a short time. However, current approaches do not consider compliance rules for single tasks in a process model. To address this issue, we present an approach for compliance-aware service selection with genetic algorithms. Our approach employs the notion of compliance distance to detect and recover from violations and can be integrated into existing genetic algorithms by means of a repair operation. As a proof of concept, we present a genetic algorithm incorporating our approach and compare it with related state-of-the-art genetic algorithms lacking this kind of check-and-recovery mechanism for compliance.

Fatih Karatas, Dogan Kesdogan
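The repair-operation idea can be sketched as follows: a candidate composition is a vector of selected services, the compliance distance counts rule violations, and the repair operator replaces violating genes with compliant ones. All names, rules, and the exact distance definition here are hypothetical simplifications, not the paper's model.

```python
import random

# Hypothetical setup: three tasks, candidate services 0..4 each; compliance
# rules restrict which services may implement which task.
ALLOWED = {0: {0, 1, 2, 3, 4}, 1: {1, 3}, 2: {0, 2, 4}}

def compliance_distance(chromosome):
    """Number of tasks whose selected service violates a compliance rule."""
    return sum(1 for task, svc in enumerate(chromosome) if svc not in ALLOWED[task])

def repair(chromosome, rng=random):
    """Repair operation: replace each violating gene with a random
    compliant candidate, driving the compliance distance to zero."""
    return [svc if svc in ALLOWED[task] else rng.choice(sorted(ALLOWED[task]))
            for task, svc in enumerate(chromosome)]

individual = [4, 0, 1]                   # tasks 1 and 2 violate their rules
print(compliance_distance(individual))   # 2
print(compliance_distance(repair(individual)))  # 0
```

Plugging such a repair step in after crossover and mutation keeps the population feasible without discarding otherwise fit individuals.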

Decomposing Ratings in Service Compositions

An important challenge for service-based systems is to be able to select services based on feedback from service consumers and, therefore, to be able to distinguish between good and bad services. However, ratings are normally provided for a service as a whole, without taking into consideration that services are normally formed by a composition of other services. In this paper we propose an approach to support the decomposition of ratings provided for a service composition into ratings for the services participating in the composition. The approach takes into consideration the rating provided for the service composition as a whole, past trust values of the services participating in the composition, and expected and observed QoS aspects of the services. A prototype tool has been implemented to illustrate and evaluate the work. Results of an experimental evaluation of the approach are also reported in the paper.

Icamaan da Silva, Andrea Zisman
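One way such a decomposition can work is sketched below: each component service's individual rating blends the composite rating with that service's own observed-versus-expected QoS ratio. This is an illustrative toy model only; the paper's approach additionally incorporates past trust values, and the blending formula here is our own assumption.

```python
def decompose_rating(composite, qos_ratio, alpha=0.5):
    """Derive a per-service rating from a composite-service rating.

    Each component service's rating blends the overall rating with its own
    observed/expected QoS ratio, so a service that underperformed drags its
    individual rating below the composite one. Ratings are capped at 5.0.
    Illustrative sketch; not the paper's actual model.
    """
    return {s: min(5.0, composite * (alpha + (1.0 - alpha) * r))
            for s, r in qos_ratio.items()}

# Composite rating 4.0; service "b" delivered only half its expected QoS.
ratings = decompose_rating(4.0, {"a": 1.0, "b": 0.5})
print(ratings)  # {'a': 4.0, 'b': 3.0}
```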

Automatic Generation of Test Models for Web Services Using WSDL and OCL

Web services are a very popular solution to integrate components when building a software system, or to allow communication between a system and third-party users, providing a flexible, reusable mechanism to access its functionalities.

To ensure these properties, though, intensive testing of web services is a key activity: we need to verify their behaviour and ensure their quality as much as possible, as efficiently as possible. In practice, the compromise between effort and cost too often leads to smaller and less exhaustive testing than would be desirable.

In this paper we present a framework to test web services based on their WSDL specification and certain constraints written in OCL, following a black-box approach and using property-based testing. This combination of strategies allows us to face the problem of generating good quality test suites and test cases by automatically deriving those from the web service formal description. To illustrate the use of our framework, we present an industrial case study: a distributed system which serves media contents to customers’ TV screens.

Macías López, Henrique Ferreiro, Miguel A. Francisco, Laura M. Castro

An Incentive Mechanism for Game-Based QoS-Aware Service Selection

QoS-aware service selection deals with choosing service providers from the candidates that are discovered to fulfill a requirement, while meeting specific QoS constraints. In fact, the requester and its candidate service providers are usually autonomous and self-interested. In this case, service selection becomes a private-information game between a requester and its candidate providers. An ideal solution of the game is that the requester selects, and reaches an agreement about the interest allocation with, the high-QoS and low-cost service providers. This paper proposes an approach to design a novel incentive mechanism that obtains the ideal solution of the game. The incentive mechanism design is solved as a constrained optimization problem. Finally, experiments are performed to show the effectiveness of the incentive mechanism.

Puwei Wang, Xiaoyong Du

Goal Oriented Variability Modeling in Service-Based Business Processes

In any organization, business processes are designed to adhere to specified business goals. On many occasions, however, in order to accommodate differing usage contexts, multiple variants of the same business process may need to be designed, all of which should adhere to the same goal. For business processes modeled as compositions of services, automated generation of such goal-preserving process variants is a challenge. To that end, we present our approach for generating all goal-preserving variants of a service-based business process. Our approach leverages our earlier works on semantic annotations of business processes and service variability modeling. Throughout the paper, we illustrate our ideas with a realistic running example, and also present a proof-of-concept prototype.

Karthikeyan Ponnalagu, Nanjangud C. Narendra, Aditya Ghose, Neeraj Chiktey, Srikanth Tamilselvam

A Cooperative Management Model for Volunteer Infrastructure as a Service in P2P Cloud

The IaaS model in Cloud Computing provides infrastructure services to users. However, the provider of such a centralized Cloud requires notable investments to maintain the infrastructure. A P2P Cloud, whose infrastructure is provided by multiple volunteer nodes in a P2P network, offers a low-cost option for the provision of Cloud Computing. In this paper, a decentralized P2P infrastructure cooperative management model is proposed to offer autonomic infrastructure management and on-demand resource allocation as a service. The model supports nodes in managing complex and varied computational resources in a P2P infrastructure. An overlay self-configuration service is proposed to dynamically configure the connectivity of nodes in decentralized environments. A task assignment service is designed to allocate resources to run tasks submitted by individual users. Moreover, an on-demand resource aggregation mechanism provides a service of resource aggregation under user-defined criteria.

Jiangfeng Li, Chenxi Zhang

Process Refinement Validation and Explanation with Ontology Reasoning

In process engineering, processes can be refined from simple ones into more and more complex ones through the decomposition and restructuring of activities. The validation of these refinements and the explanation of invalid refinements are non-trivial tasks. This paper formally defines process refinement validation based on execution set semantics and presents a suite of refinement reduction techniques and an ontological representation of process refinement to enable reasoning for the validation and explanation of process refinements. Results show that our approach significantly improves the efficiency, quality and productivity of process engineering.

Yuan Ren, Gerd Gröner, Jens Lemcke, Tirdad Rahmani, Andreas Friesen, Yuting Zhao, Jeff Z. Pan, Steffen Staab

Automated Service Composition for on-the-Fly SOAs

In the service-oriented computing domain, the number of available software services has steadily increased in recent years, favored by the rise of cloud computing with its attached delivery models such as Software-as-a-Service (SaaS). To fully leverage the opportunities provided by these services for developing highly flexible and aligned SOAs, the integration of new services as well as the substitution of existing services must be simplified. As a consequence, approaches for automated and accurate service discovery and composition are needed. In this paper, we propose an automatic service composition approach as an extension to our earlier work on automatic service discovery. To ensure accurate results, it matches service requests and available offers based on their structural as well as behavioral aspects. Afterwards, possible service compositions are determined by composing service protocols through a composition strategy based on labeled transition systems.

Zille Huma, Christian Gerth, Gregor Engels, Oliver Juwig

Deriving Business Process Data Architectures from Process Model Collections

The focus in BPM is shifting from single processes to process interactions. Business process architectures were established as a convenient way to model and analyze such interactions on an abstract level, focusing on message and trigger relations. Shared data objects are often a means of interrelating processes. In this paper, we extract hidden data dependencies between processes from process models with data annotations and their object life cycles. This information is used to construct a business process architecture, thus enabling analysis with existing methods. We describe and validate our approach on an extract from a case study that demonstrates its applicability to real-world use cases.

Rami-Habib Eid-Sabbagh, Marcin Hewelt, Andreas Meyer, Mathias Weske

A Case Based Approach to Serve Information Needs in Knowledge Intensive Processes

Case workers who are involved in knowledge-intensive business processes have critical information needs. When dealing with a case, they often need to check how similar cases were handled and what best practices, methods and tools proved useful. In this paper, we present our Solution Information Management (SIM) system, developed to assist case workers by retrieving and offering targeted and contextual content recommendations. In particular, we present a novel method for intelligently weighing the different fields in a case when they are used as context to derive recommendations. Experimental results indicate that our approach can yield recommendations that are approximately 15% more precise than those obtained through a baseline approach where the fields in the context have equal weights. SIM is being actively used by case workers in a large IT services company.

Debdoot Mukherjee, Jeanette Blomberg, Rama Akkiraju, Dinesh Raghu, Monika Gupta, Sugata Ghosal, Mu Qiao, Taiga Nakamura

Patience-Aware Scheduling for Cloud Services: Freeing Users from the Chains of Boredom

Scheduling of service requests in Cloud computing has traditionally focused on the reduction of pre-service wait, generally termed waiting time. Under certain conditions such as peak load, however, it is not always possible to give reasonable response times to all users. This work explores the fact that different users may have their own levels of tolerance, or patience, with response delays. We introduce scheduling strategies that produce better assignment plans by prioritising requests from users who expect to receive results earlier and by postponing servicing jobs from those who are more tolerant to response delays. Our analytical results show that the behaviour of users’ patience plays a key role in the evaluation of scheduling techniques, and our computational evaluation demonstrates that, under peak load, the new algorithms typically provide a better user experience than the traditional FIFO strategy.

Carlos Cardonha, Marcos D. Assunção, Marco A. S. Netto, Renato L. F. Cunha, Carlos Queiroz
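The prioritisation strategy can be sketched as a queue ordered by patience rather than arrival time. This illustrates the general idea only, with made-up request data, not the paper's actual scheduling algorithm:

```python
import heapq

def patience_aware_order(requests):
    """Serve the least-patient users first, FIFO among equal patience.
    `requests` is a list of (arrival_index, patience, name) tuples."""
    heap = [(patience, arrival, name) for arrival, patience, name in requests]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, name = heapq.heappop(heap)
        order.append(name)
    return order

# Under FIFO, "tolerant" would be served first despite not needing to be.
reqs = [(0, 5.0, "tolerant"), (1, 1.0, "impatient"), (2, 3.0, "average")]
print(patience_aware_order(reqs))  # ['impatient', 'average', 'tolerant']
```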

MaxInsTx: A Best-Effort Failure Recovery Approach for Artifact-Centric Business Processes

Process instances may overlap and interweave with each other, which significantly complicates the failure recovery issue. Most existing mechanisms assume a one-to-one relationship between process instances, which causes unnecessary recovery in such a context. Artifact-centric business process models give equal consideration to both data and the control flow of activities, and thus facilitate addressing this issue. In this paper, we propose a best-effort failure recovery approach, MaxInsTx: a transactional artifact-centric business process model in which complex cardinality relationships and correlations are considered, and a recovery mechanism that resolves the impact of a failed process on concurrent processes while protecting the maximal number of instances involved in failures from the failure impact.

Haihuan Qin, Guosheng Kang, Lipeng Guo

Extending WS-Agreement to Support Automated Conformity Check on Transport and Logistics Service Agreements

Checking whether the agreed service quality attributes are fulfilled or maintained during the service life-cycle is a very important task for SLA (Service Level Agreement) enforcement. In this paper, we leverage conformance checking techniques developed for computational services to automate the conformity checking of transport & logistics services. Our solution extends the WS-Agreement metamodel to support the definition of frame and specific SLAs. With this extension, we define a new validation operation for the conformity check of transport & logistics SLAs based on CSP solvers. The key contribution of our work is that, as far as we know, it is the first definition of an automated conformity check solution for long-term agreements in the transport & logistics domain. Nonetheless, other domains in which similar SLAs are defined can also benefit from our solution.

Antonio Manuel Gutiérrez, Clarissa Cassales Marquezan, Manuel Resinas, Andreas Metzger, Antonio Ruiz-Cortés, Klaus Pohl

Automatic Composition of Form-Based Services in a Context-Aware Personal Information Space

Personal Information Spaces (PIS) help in structuring, storing, and retrieving personal information. Still, it is the users’ duty to sequence the basic steps in different online procedures, and to fill out the corresponding forms with personal information, in order to fulfill some objectives. We propose an extension for PIS that assists users with this duty. We perform a composition of form-based services in order to reach objectives expressed as a workflow of capabilities. Further, we take into account that user personal information can be contextual and that the user may have personal information privacy policies. Our solution is based on graph planning and is fully tool-supported.

Rania Khéfifi, Pascal Poizat, Fatiha Saïs

Synthesizing Cost-Minimal Partners for Services

Adapter synthesis bridges incompatibilities between loosely coupled, stateful services. Formally, adapter synthesis reduces to partner synthesis. Besides an adapter, a partner could be a configurator or serve as an ingredient in solutions for discovery and substitution. We synthesize a cost-minimal partner for a given service based on additional behavioral constraints. We consider the worst-case total costs, specifying individual transition costs as natural numbers. In this paper, we sketch our formal approach and briefly discuss our implementation.

Jan Sürmeli, Marvin Triebel

An Architecture to Provide Quality of Service in OGC SWE Context

The aim of this paper is to describe an architecture named SWARCH (Sensor Web Architecture) that provides quality of service in the context of the Sensor Web Enablement (SWE) standards. Sensor Web Enablement is a set of standards proposed by the OGC (Open Geospatial Consortium). These standards provide a transparent and interoperable way to access data measured by sensors. SWARCH adds to the features of the SWE standards ways of service selection that meet several quality requirements, such as response time, availability of sensors, and measurement reliability, among others. Quality requirements are defined by users and a broker in the architecture. This broker allows appropriate selection of the sensor network that matches the QoS parameters. To validate our results, a case study showing reductions of up to 50% and 25% in access times to SOS and SES services is presented.

Thiago Caproni Tavares, Regina Helenna Carlucci Santana, Marcos José Santana, Júlio Cezar Estrella

Verification of Semantically-Enhanced Artifact Systems

Artifact-Centric systems have emerged in recent years as a suitable framework to model business-relevant entities by combining their static and dynamic aspects. In particular, the Guard-Stage-Milestone (GSM) approach has recently been proposed to model artifacts and their lifecycle in a declarative way. In this paper, we enhance GSM with a Semantic Layer, constituted by a full-fledged OWL 2 QL ontology linked to the artifact information models through mapping specifications. The ontology provides a conceptual view of the domain under study, and allows one to understand the evolution of the artifact system at a higher level of abstraction. In this setting, we present a technique to specify temporal properties expressed over the Semantic Layer, and verify them according to the evolution of the underlying GSM model. This technique has been implemented in a tool that exploits state-of-the-art ontology-based data access technologies to manipulate the temporal properties according to the ontology and the mappings, and that relies on the GSMC model checker for verification.

Babak Bagheri Hariri, Diego Calvanese, Marco Montali, Ario Santoso, Dmitry Solomakhin

A Framework for Cross Account Analysis

A key challenge of Strategic Outsourcing (SO) from a service delivery perspective is understanding one key question: why do two SO accounts that seemingly look the same have very different cost structures? In this article we present a parameterized framework for the modeling and analysis of cross-account behavior. We abstract certain key account features as parameters and construct models for answering questions about the behavioral characteristics of SO accounts. We use spectral graph clustering for detecting similar accounts, and also develop parameterized clustering for detecting coherent behavior of accounts. We have implemented a prototype of the approach and discuss some preliminary empirical results of cross-account analysis.

Vugranam C. Sreedhar

DataSheets: A Spreadsheet-Based Data-Flow Language

We are surrounded by data, a vast amount of data that has brought about an increasing need for combining and analyzing it in order to extract information and generate knowledge. This need is not exclusive to big software companies with expert programmers; from scientists to bloggers, many end-user programmers currently demand data management tools to generate information at their discretion. However, data is usually distributed among multiple sources and hence needs to be integrated, and unfortunately this process is still accessible only to professional developers. In this paper we propose DataSheets, a novel approach to make data-flow specification accessible and its representation comprehensible to end-user programmers. The approach consists of a spreadsheet-based data-flow language that has been tested and evaluated in a service-centric composition framework.

Angel Lagares Lemos, Moshe Chai Barukh, Boualem Benatallah

Industry Track

Decision Making in Enterprise Crowdsourcing Services

Enterprises are increasingly employing crowdsourcing to engage employees and the public as part of their business processes, given its promise of low-cost access to a scalable online workforce. Common examples include harnessing crowd expertise for enterprise knowledge discovery, software development, product support and innovation. Crowdsourcing tasks vary in their complexity, required level of business support and investment, and most importantly the quality of outcome. As such, not every step in a business process can successfully lend itself to crowdsourcing. In this paper, we present a decision-making and execution service, called CrowdArb, operating on crowdsourcing tasks in a large global enterprise. The system employs decision-theoretic methodology to assess whether or not to crowdsource a selected step of the knowledge discovery process. The system addresses the challenges of the trade-off between the quality and time of the crowdsourcing responses, as well as the trade-off between the cost of crowdsourcing experts and the time required to complete the entire campaign. We present evaluation results from simulations of CrowdArb in an enterprise crowdsourcing campaign that engaged over 560 client representatives to obtain actionable insights. We discuss how the proposed solution addresses the opportunity to close the gap of semi-automated task coordination in crowdsourcing environments.

Maja Vukovic, Rajarshi Das

Towards Optimal Risk-Aware Security Compliance of a Large IT System

A modern information technology (IT) system may consist of thousands of servers, software components and other devices. Operational security of such a system is usually measured by the compliance of the system with a group of security policies. However, there is no generally accepted method of assessing the risk-aware compliance of an IT system with a given set of security policies. The current practice is to state the fraction of non-compliant systems, regardless of the varying levels of risk associated with violations of the policies and their exposure time windows. We propose a new metric that takes into account the risk of non-compliance, along with the number and duration of violations. This metric affords a risk-aware compliance posture in a single number. It is used to determine a course of remediation, returning the system to an acceptable level of risk while minimizing the cost of remediation and observing the physical constraints on the system and the limited human labor available. This metric may also be used during the normal operation of the IT system, alerting the operators to potential security breaches in a timely manner.

Daniel Coffman, Bhavna Agrawal, Frank Schaffa

Behavioral Analysis of Service Delivery Models

Enterprises and IT service providers are increasingly challenged with the goal of improving quality of service while reducing the cost of delivery. The effective distribution of complex customer workloads among delivery teams served by diverse personnel under strict service agreements is a serious management challenge. Challenges become more pronounced when organizations adopt ad-hoc measures to reduce operational costs and mandate unscientific transformations. This paper simulates different delivery models in the face of complex customer workloads, stringent service contracts, and evolving skills, with the goal of scientifically deriving design principles for delivery organizations. Results show that while Collaborative models are beneficial for the highest-priority work, Integrated models work best for volume-intensive work, through up-skilling the population with additional skills. In repetitive work environments where expertise can be gained, these training costs are compensated by higher throughput. This return on investment is highest when people have at most two skills. Decoupled models work well for simple workloads and relaxed service contracts.

Gargi B. Dasgupta, Renuka Sindhgatta, Shivali Agarwal

Industry Track Short Paper

A Novel Service Composition Approach for Application Migration to Cloud

Migrating business applications to the cloud can be costly, labor-intensive, and error-prone due to the complexity of business applications, the constraints of the clouds, and the limitations of existing migration techniques provided by migration service vendors. However, the emerging software-as-a-service offering model of migration services makes it possible to combine multiple migration services for a single migration task. In this paper, we propose a novel migration service composition approach to achieve a cost-effective migration solution. In particular, we first formalize the migration service composition problem as an optimization model. Then, we present an algorithm to determine the optimal composition solution for a given migration task. Finally, using synthetic trace-driven simulations, we validate the effectiveness and efficiency of the proposed optimization model and algorithm.

Xianzhi Wang, Xuejun Zhuo, Bo Yang, Fan Jing Meng, Pu Jin, Woody Huang, Christopher C. Young, Catherine Zhang, Jing Min Xu, Michael Montinarelli

Demo Track

PPINOT Tool Suite: A Performance Management Solution for Process-Oriented Organisations

A key aspect in any process-oriented organisation is the measurement of process performance for the achievement of its strategic and operational goals. Process Performance Indicators (PPIs) are a key asset to carry out this evaluation, and, therefore, the management of these PPIs throughout the whole BP lifecycle is crucial. In this demo we present PPINOT Tool Suite, a set of tools aimed at facilitating and automating the PPI management. The support includes their definition using either a graphical or a template-based textual notation, their automated analysis at design-time, and their automated computation based on the instrumentation of a Business Process Management System.

Adela del-Río-Ortega, Cristina Cabanillas, Manuel Resinas, Antonio Ruiz-Cortés

SYBL+MELA: Specifying, Monitoring, and Controlling Elasticity of Cloud Services

One of the major challenges in cloud computing is to simplify the monitoring and control of elasticity. On the one hand, the user should be able to specify complex elasticity requirements in a simple way and to monitor and analyze elasticity behavior based on his/her requirements. On the other hand, supporting tools for controlling and monitoring elasticity must be able to capture and control complex factors influencing the elasticity behavior of cloud services. To date, we lack tools supporting the specification and control of elasticity at multiple levels of cloud services and multiple elasticity metrics. In this demonstration, we will showcase a system facilitating the multi-level and cross-layer monitoring, analysis and control of cloud service elasticity.

Georgiana Copil, Daniel Moldovan, Hong-Linh Truong, Schahram Dustdar

Modeling and Monitoring Business Process Execution

The growing adoption of IT systems to support business activities has made available huge amounts of data that can be used to monitor the actual execution of business processes. However, in many real settings, due to the different degrees of abstraction between the business and technological layers and to information hiding, the potential of this data cannot be fully exploited. The PROMO tool, grounded on reasoning services, aims at reconciling the technical and the business layer, in order to enable the effective monitoring and analysis of business process instances in the face of the above-mentioned issues.

Piergiorgio Bertoli, Mauro Dragoni, Chiara Ghidini, Emanuele Martufi, Michele Nori, Marco Pistore, Chiara Di Francescomarino

A Tool for Business Process Architecture Analysis

Business Process Architectures (BPAs) are used for structuring and managing process collections. Optimising business processes requires a high-level view of their interdependencies. BPAs capture message and trigger flow relations between processes and their multiple process instances within a process collection. However, tools that analyse BPAs, rather than merely visualize them, do not exist. This contribution presents a novel tool to model a BPA and to analyse its correctness by transforming it into open nets, translating the correctness criteria into CTL formulas, and model checking them using LoLA.
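As an illustration of the kind of correctness criterion involved (an illustrative example, not taken from the tool itself), a standard property for open nets is weak termination: from every reachable state, some final state remains reachable. In CTL this reads:

```
AG EF final
```

where `final` is an atomic proposition holding in the net's final markings. Model checkers such as LoLA verify properties of this shape by exploring the net's state space.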

Rami-Habib Eid-Sabbagh, Marcin Hewelt, Mathias Weske

OpenTOSCA – A Runtime for TOSCA-Based Cloud Applications

TOSCA is a new standard facilitating the platform-independent description of Cloud applications. OpenTOSCA is a runtime for TOSCA-based Cloud applications. The runtime enables fully automated plan-based deployment and management of applications defined in CSAR, the OASIS TOSCA packaging format. This paper outlines the core concepts of TOSCA and provides a system overview of OpenTOSCA by describing its modular and extensible architecture, as well as presenting our prototypical implementation. We demonstrate the use of OpenTOSCA by deploying and instantiating the school management and learning application Moodle.
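To give a flavour of what such an application description looks like, the following is a minimal, hand-written sketch of a TOSCA Definitions document. The element names follow the TOSCA v1.0 XML schema; the ids, names, and types are illustrative, not taken from the actual Moodle CSAR:

```xml
<Definitions xmlns="http://docs.oasis-open.org/tosca/ns/2011/12"
             xmlns:t="http://example.org/tosca/types"
             id="MoodleDefinitions"
             targetNamespace="http://example.org/tosca/moodle">
  <ServiceTemplate id="MoodleServiceTemplate">
    <TopologyTemplate>
      <!-- Illustrative node templates: the application and its host -->
      <NodeTemplate id="Moodle" name="Moodle" type="t:MoodleApplication"/>
      <NodeTemplate id="Apache" name="Apache Web Server" type="t:ApacheWebServer"/>
      <!-- The application is hosted on the web server -->
      <RelationshipTemplate id="MoodleHostedOnApache" type="t:HostedOn">
        <SourceElement ref="Moodle"/>
        <TargetElement ref="Apache"/>
      </RelationshipTemplate>
    </TopologyTemplate>
  </ServiceTemplate>
</Definitions>
```

A CSAR packages such a Definitions document together with the plans and artifacts (e.g., installation scripts) it refers to; a TOSCA runtime consumes the archive and executes the contained build plan to provision the application.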

Tobias Binz, Uwe Breitenbücher, Florian Haupt, Oliver Kopp, Frank Leymann, Alexander Nowak, Sebastian Wagner

iAgree Studio: A Platform to Edit and Validate WS–Agreement Documents

The widespread use of SLA-regulated Cloud services, in which the violation of SLA terms may imply a penalty for the parties, has increased the importance and complexity of systems supporting the SLA lifecycle. Although these systems can be very different from each other, ranging from service monitoring platforms to SLA-driven auto-scaling solutions, they all share the need for machine-processable and semantically valid SLAs. In this paper we present iAgree Studio, to our knowledge the first application able to edit and semantically validate agreement documents compliant with the WS–Agreement specification, checking properties such as consistency and the compliance between templates and agreement offers. In addition, it reports explanations when documents are not valid. Moreover, it allows users to combine the validation and explanation operations by means of a scenario developer.
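For reference, a WS–Agreement template has roughly the following shape. The namespace is the one defined by the WS–Agreement specification; the names and the service-level expression are illustrative:

```xml
<wsag:Template xmlns:wsag="http://schemas.ggf.org/graap/2007/03/ws-agreement"
               wsag:TemplateId="example-template">
  <wsag:Name>ExampleSLA</wsag:Name>
  <wsag:Context><!-- parties, expiration time, ... --></wsag:Context>
  <wsag:Terms>
    <wsag:All>
      <!-- What service is offered -->
      <wsag:ServiceDescriptionTerm wsag:Name="SDT1" wsag:ServiceName="ReportService">
        <!-- domain-specific service description goes here -->
      </wsag:ServiceDescriptionTerm>
      <!-- What is guaranteed about it -->
      <wsag:GuaranteeTerm wsag:Name="G1">
        <wsag:ServiceLevelObjective>
          <wsag:KPITarget>
            <wsag:KPIName>ResponseTime</wsag:KPIName>
            <wsag:CustomServiceLevel>ResponseTime &lt; 2 s</wsag:CustomServiceLevel>
          </wsag:KPITarget>
        </wsag:ServiceLevelObjective>
      </wsag:GuaranteeTerm>
    </wsag:All>
  </wsag:Terms>
</wsag:Template>
```

Consistency checking asks, for instance, whether all terms can be satisfied simultaneously; template–offer compliance asks whether an agreement offer stays within the constraints its template imposes.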

Carlos Müller, Antonio Manuel Gutiérrez, Manuel Resinas, Pablo Fernández, Antonio Ruiz-Cortés

Winery – A Modeling Tool for TOSCA-Based Cloud Applications

TOSCA is a new OASIS standard to describe composite applications and their management. The structure of an application is described by a topology, whereas management plans describe the application's management functionalities, e.g., provisioning or migration. Winery is a tool offering an HTML5-based environment for the graph-based modeling of application topologies and for defining reusable component and relationship types. It uses TOSCA as its internal storage, import, and export format. This demonstration shows how Winery supports the modeling of TOSCA-based applications. We use the school management software Moodle as a running example throughout the paper.

Oliver Kopp, Tobias Binz, Uwe Breitenbücher, Frank Leymann

Barcelona: A Design and Runtime Environment for Declarative Artifact-Centric BPM

A promising approach to managing business operations is based on business artifacts, a.k.a. business entities (with lifecycles) [8, 6]. These are key conceptual entities that are central to guiding the operations of a business, and whose content changes as they move through those operations. A business artifact type is modeled using (a) an information model, which is intended to hold all business-relevant data about entities of this type, and (b) a lifecycle model, which is intended to capture the possible ways that an entity of this type might progress through the business. In 2010 a declarative style of business artifact lifecycles, called Guard-Stage-Milestone (GSM), was introduced [4, 5]. GSM has since been adopted [7] to form the conceptual basis of the OMG Case Management Model and Notation (CMMN) standard [1]. The Barcelona component of the recently open-sourced [2] ArtiFact system supports both design-time and run-time environments for GSM. Both of these will be illustrated in the proposed demo.

Fenno (Terry) Heath, David Boaz, Manmohan Gupta, Roman Vaculín, Yutian Sun, Richard Hull, Lior Limonad

