2014 | Book

Service-Oriented Computing – ICSOC 2013 Workshops

CCSA, CSB, PASCEB, SWESE, WESOA, and PhD Symposium, Berlin, Germany, December 2-5, 2013. Revised Selected Papers

Editors: Alessio R. Lomuscio, Surya Nepal, Fabio Patrizi, Boualem Benatallah, Ivona Brandić

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science

About this book

This book constitutes the revised selected papers of the workshops of the 11th International Conference on Service-Oriented Computing (ICSOC 2013), held in Berlin, Germany, in December 2013. The conference hosted the following five workshops: 3rd International Workshop on Cloud Computing and Scientific Applications (CCSA'13); 1st International Workshop on Cloud Service Brokerage (CSB'13); 1st International Workshop on Pervasive Analytical Service Clouds for the Enterprise and Beyond (PASCEB'13); 9th International Workshop on Semantic Web Enabled Software Engineering (SWESE'13); 9th International Workshop on Engineering Service-Oriented Applications (WESOA'13); and a PhD Symposium, with best papers also being included in this book. The 54 papers included in this volume were carefully reviewed and selected from numerous submissions. They address various topics in the service-oriented computing domain and its emerging applications.

Table of Contents

Frontmatter

Engineering Service-Oriented Applications WESOA 2013

Introduction to the 9th International Workshop on Engineering Service-Oriented Applications (WESOA’13)

The Workshop on Engineering Service Oriented Applications (WESOA'13) focuses on core service software engineering issues while keeping pace with new developments, such as methods for the engineering of cloud services. Over the past nine years the WESOA workshop has attracted high-quality contributions across a range of service engineering topics, with recent proceedings published by Springer in the LNCS series. The ninth edition was held in Berlin, Germany on 2 December 2013. We received twenty-four submissions and, following review of each paper by at least three reviewers, accepted ten papers for presentation at the workshop and publication in the ICSOC 2013 Workshop Proceedings. The workshop included an excellent keynote presentation by Tom Baeyens, the CEO of Effektif.com, titled "A decade of open API's", followed by ten papers organized into three sessions. The first session focused on Business Processes and I.T. Services Management and included papers titled:

From Process Models to Business Process Architectures: Connecting the Layers

by Rami-Habib Eid-Sabbagh and Mathias Weske,

Integrating Service Release Management with Service Solutioning Processes

by Heiko Ludwig, Juan Cappi, Valeria Becker, Bairbre Stewart and Susan Meade, and

Practical Compiler-based User Support during the Development of Business Processes

by Thomas M. Prinz and Wolfram Amme. The second session focused on Automating Process Discovery and Composition and included papers titled:

Towards Automating the Detection of Event Sources

by Nico Herzberg, Oleh Khovalko, Anne Baumgrass, and Mathias Weske,

Discovering Pattern-Based Mediator Services from Communication Logs

by Christian Gierds and Dirk Fahland, and

Goal-driven Composition of Business Process Models

by Benjamin Nagel, Christian Gerth, and Gregor Engels. The final session included papers on Modelling Service-Oriented and Adaptive Systems:

Model Checking GSM-Based Multi-Agent Systems

by Pavel Gonzalez, Andreas Griesmayer, and Alessio Lomuscio,

Towards Modelling and Execution of Collective Adaptive Systems

by Vasilios Andrikopoulos, Antonio Bucchiarone, Santiago Gomez Saez, Dimka Karastoyanova, and Claudio Antares Mezzina,

A Requirements-based Model for Effort Estimation in Service-oriented Systems

by Bertrand Verlaine, Ivan J. Jureta, and Stephane Faulkner, and

Augmenting Complex Problem Solving with Hybrid Compute Units

by Hong-Linh Truong, Hoa Khanh Dam, Aditya Ghose and Schahram Dustdar. The workshop provided an effective platform for exchange of ideas and extensive discussion of topics covered by paper presentations.

George Feuerlicht, Winfried Lamersdorf, Guadalupe Ortiz, Christian Zirpins
From Process Models to Business Process Architectures: Connecting the Layers

Business process management has become a standard commodity to manage and improve business operations in organisations. Large process model collections have emerged, and managing and maintaining them has become a major area of research. Business process architectures (BPAs) have been introduced to support this task, focusing on interdependencies between processes. The process and BPA layers are often modeled independently, creating inconsistencies between them. However, a consistent overview of process interdependencies at the BPA level is of high importance, especially with regard to assessing the impact of change when optimising business process collaborations. In this paper, we propose a formal approach to extract BPAs from process model collections, connecting the process layer and the BPA layer to assure consistency between them. Interdependencies between process models are reflected in trigger and message flows at the BPA level, giving a high-level overview of process collaboration as well as allowing its formal verification with existing approaches. We show the extraction of BPAs from process model collections on a running example modeled in BPMN.

Rami-Habib Eid-Sabbagh, Mathias Weske
Goal-Driven Composition of Business Process Models

Goal-driven requirements engineering is a well-known approach for the systematic elicitation and specification of strategic business goals in early phases of software engineering processes. From these goals, concrete operations can be derived that are composed in terms of a business process model. Lacking consistency between goal models and derived business processes, especially with respect to the dependencies between goals, can result in an implementation that is not in line with the actual business objectives. Hence, constraints induced by these dependencies need to be considered in the derivation of business process models. In previous work, we introduced the extended goal modeling language Kaos4SOA that provides comprehensive modeling capabilities for temporal and logical dependencies among goals. Further, we presented an approach to validate the consistency between goal models and business process models regarding these dependencies. Extending the previous work, this paper presents a constructive approach for the derivation of consistent business processes from goal models. We introduce an algorithm that calculates logically encapsulated business process fragments from a given goal model and describe how these fragments can be composed into a business process model that fulfills the given temporal constraints.

Benjamin Nagel, Christian Gerth, Gregor Engels
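
To make the fragment-composition step concrete, here is a minimal Python sketch (not the paper's Kaos4SOA algorithm) that orders derived process fragments under temporal goal dependencies via a topological sort; the fragment names and the plain precedence-pair encoding are illustrative assumptions.

```python
from collections import defaultdict, deque

def order_fragments(fragments, before):
    """Order process fragments so that every temporal dependency
    (a, b) in `before` (a must precede b) is respected. Returns one
    admissible activity sequence, or None if the dependencies are
    cyclic, i.e., no consistent process model exists."""
    indegree = {f: 0 for f in fragments}
    successors = defaultdict(list)
    for a, b in before:
        successors[a].append(b)
        indegree[b] += 1
    queue = deque(f for f in fragments if indegree[f] == 0)
    sequence = []
    while queue:
        f = queue.popleft()
        sequence.append(f)
        for g in successors[f]:
            indegree[g] -= 1
            if indegree[g] == 0:
                queue.append(g)
    return sequence if len(sequence) == len(fragments) else None

# Hypothetical fragments: risk assessment must precede approval,
# which must precede notification.
print(order_fragments(
    ["assess risk", "approve", "notify"],
    [("assess risk", "approve"), ("approve", "notify")]))
```
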
Integrating Service Release Management with Service Solution Design

Web-delivered services such as Web or Cloud services are often made available to users in a fast cadence of releases, taking advantage of the single deployment environment of a centrally controlled service. This enables organizations to bring service enhancements to customers in a timely way and respond quickly to market demands. Organizations use multiple Web-delivered services by one or multiple vendors to compose complex solutions to their business problems in conjunction with standard applications and custom implementation and delivery services. Designing these complex solutions often takes considerable time and multiple new releases of a service and a changed service roadmap may have influence on a customer’s solution design. Existing IT service management and software development best practices do not consider the relationship between service release management and service design sufficiently to address frequent releases and changes to a service roadmap. This paper discusses the relationship from both the point of view of the service provider and the service customer and proposes an approach to manage those interdependencies between service design and release management.

Heiko Ludwig, Juan Cappi, Valeria Becker, Bairbre Stewart, Susan Meade
Practical Compiler-Based User Support during the Development of Business Processes

An erroneous execution of business processes causes high costs and can damage the reputation of the providing company. Therefore, validating the correctness of business processes is essential. Business processes are generally described with Petri net semantics, although, to date, this kind of description admits only verification algorithms with poor processing times and uninformative failure reports.

In this paper, we describe new compiler-based techniques that can be used instead of Petri net algorithms for the verification of business processes. The basic idea of our approach is to start analyses at different points of workflow graphs in order to find potential structural errors. The developed techniques improve on other known approaches, as they guarantee a precise visualization and explanation of all detected structural errors, which substantially supports the development of business processes.

Thomas M. Prinz, Wolfram Amme
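
The abstract describes finding structural errors directly on workflow graphs. As a rough illustration of one such error (and emphatically not the authors' algorithm), the following Python sketch flags a classic deadlock pattern: an AND-join that synchronises two branches of the same XOR-split can never fire, because only one branch is ever taken. The node-type encoding and the reachability heuristic are simplifying assumptions.

```python
from collections import defaultdict

def reachable(start, succ):
    """All nodes reachable from `start` by following edges."""
    seen, stack = set(), [start]
    while stack:
        n = stack.pop()
        for m in succ[n]:
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return seen

def potential_deadlocks(node_type, edges):
    """Heuristic structural check: flag an AND-join when two or more
    of its direct predecessors lie on paths from one XOR-split, a
    classic deadlock pattern (may over-approximate if the branches
    re-merge earlier)."""
    succ, pred = defaultdict(list), defaultdict(list)
    for a, b in edges:
        succ[a].append(b)
        pred[b].append(a)
    errors = []
    for join, t in node_type.items():
        if t != "and-join" or len(pred[join]) < 2:
            continue
        for split, ts in node_type.items():
            if ts != "xor-split":
                continue
            reach = reachable(split, succ)
            hit = [p for p in pred[join] if p in reach or p == split]
            if len(hit) >= 2:
                errors.append((split, join, hit))
    return errors

# XOR-split s branches to tasks a and b, which an AND-join j merges.
types = {"s": "xor-split", "a": "task", "b": "task", "j": "and-join"}
print(potential_deadlocks(types, [("s", "a"), ("s", "b"),
                                  ("a", "j"), ("b", "j")]))
```
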
Model Checking GSM-Based Multi-Agent Systems

Artifact systems are a novel paradigm for implementing service-oriented computing. Business artifacts include both data and process descriptions at the interface level, thereby providing more sophisticated and powerful service inter-operation capabilities. In this paper we put forward a technique for the practical verification of business artifacts in the context of multi-agent systems. We extend GSM, a modelling language for artifact systems, to multi-agent systems and map it into a variant of AC-MAS, a semantics for reasoning about artifact systems. We introduce a symbolic model checker for verifying GSM-based multi-agent systems. We evaluate the tool on a scenario from the service community.

Pavel Gonzalez, Andreas Griesmayer, Alessio Lomuscio
Towards Modeling and Execution of Collective Adaptive Systems

Collective Adaptive Systems comprise large numbers of heterogeneous entities that can join and leave the system at any time depending on their own objectives. In the scope of pervasive computing, both physical and virtual entities may exist, e.g., buses and their passengers using mobile devices, as well as city-wide traffic coordination systems. In this paper we introduce a novel conceptual framework that enables Collective Adaptive Systems based on well-founded and widely accepted paradigms and technologies like service orientation, distributed systems, context-aware computing and adaptation of composite systems. Toward achieving this goal, we also present an architecture that underpins the envisioned framework, discuss the current state of our implementation effort, and outline the open issues and challenges in the field.

Vasilios Andrikopoulos, Antonio Bucchiarone, Santiago Gómez Sáez, Dimka Karastoyanova, Claudio Antares Mezzina
A Requirements-Based Model for Effort Estimation in Service-Oriented Systems

Assessing the development costs of an application remains an arduous task for many project managers. Using new technologies and specific software architectures makes this job even more complicated. In order to help people in charge of this kind of work, we propose a model for estimating the effort required to implement a service-oriented system. Its starting point lies in the requirements and the specifications of the system-to-be. It is able to provide an estimate of the development effort needed. The latter is expressed in a temporal measurement unit, easily convertible into a monetary value. The model proposed takes into account the three types of system complexity, i.e., the structural, the conceptual and the computational complexity.

Bertrand Verlaine, Ivan J. Jureta, Stéphane Faulkner
Augmenting Complex Problem Solving with Hybrid Compute Units

Combining software-based and human-based services is crucial for several complex problems that cannot be solved using software-based services alone. In this paper, we present novel methods for modeling and developing hybrid compute units of software-based and human-based services. We discuss high-level programming elements for different types of software- and human-based service units and their relationships. In particular, we focus on novel programming elements reflecting hybridity, collectiveness and adaptiveness properties, such as elasticity and social connection dependencies, and on-demand and pay-per-use economic properties, such as cost, quality and benefits, for complex problem solving. Based on these programming elements, we present programming constructs and patterns for building complex applications using hybrid services.

Hong-Linh Truong, Hoa Khanh Dam, Aditya Ghose, Schahram Dustdar
Towards Automating the Detection of Event Sources

During business process execution, various systems and services produce a variety of data, messages, and events that are valuable for gaining insights about business processes, e.g., to ensure a business process is executed as expected. However, these data, messages, and events usually originate from different kinds of sources, each specified by different kinds of descriptions. This variety makes it difficult to automate the detection of relevant event sources for business process monitoring. In this paper, we present a course of action to automatically associate different event sources with the event object types required for business process monitoring. In particular, in a three-step approach we determine the similarity of event sources to event object types, rank those results, and derive a mapping between their attributes. Thus, relevant event sources and their bindings to specified event object types of business processes can be automatically identified. The approach is implemented and evaluated using schema matching techniques for a specific use case that is aligned with real-world energy processes, data, messages, and events.

Nico Herzberg, Oleh Khovalko, Anne Baumgrass, Mathias Weske
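
A hedged sketch of the three-step idea (similarity, ranking, attribute mapping), with plain string similarity from Python's standard difflib standing in for the paper's schema matching techniques; the attribute names, the event object type, and the 0.6 threshold are invented for illustration.

```python
from difflib import SequenceMatcher

def name_sim(a, b):
    """String similarity in [0, 1] between two attribute names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def rank_sources(event_type_attrs, sources):
    """Steps 1-2: score each candidate event source against the event
    object type's attributes, then rank sources by mean best-match
    similarity."""
    scored = []
    for name, attrs in sources.items():
        best = [max(name_sim(t, s) for s in attrs) for t in event_type_attrs]
        scored.append((sum(best) / len(best), name))
    return sorted(scored, reverse=True)

def map_attributes(event_type_attrs, source_attrs, threshold=0.6):
    """Step 3: derive an attribute binding from a chosen source,
    keeping only matches above the confidence threshold."""
    mapping = {}
    for t in event_type_attrs:
        score, best = max((name_sim(t, s), s) for s in source_attrs)
        if score >= threshold:
            mapping[t] = best
    return mapping

event_type = ["order_id", "timestamp", "customer"]
sources = {"erp_events": ["OrderID", "ts", "CustomerName"],
           "mail_log": ["sender", "subject", "received_at"]}
ranking = rank_sources(event_type, sources)
print(ranking)                                   # best source first
print(map_attributes(event_type, sources[ranking[0][1]]))
```
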
Discovering Pattern-Based Mediator Services from Communication Logs

Process discovery is a technique for deriving a conceptual high-level process model from the execution logs of a running implementation. The technique is particularly useful when no high-level model is available or in case of significant gaps between process documentation and implementation. The discovered model makes the implementation accessible to various kinds of analysis for functional and non-functional properties. In this paper we extend process discovery to mediator services (or adapters), which adapt the messaging protocols of two or more otherwise incompatible services. We propose a technique that takes as input logs of communication behaviors (one log for each service connected to the adapter) and a library of high-level data transformation rules relevant for the domain of the adapter, and then returns an operational adapter model describing the control flow and the data flow of the adapter in terms of Coloured Petri Nets, if such a model exists. We discuss benefits and limitations of this idea and evaluate it with a prototype implementation on industrial-size models.

Christian Gierds, Dirk Fahland

Cloud Service Brokerage CSB 2013

Cloud Service Brokerage - 2013: Methods and Mechanisms

In the future, the Cloud will evolve into a rich ecosystem of service providers and consumers, each building upon the offerings of others. Cloud service brokers will play an important role, mediating between providers and consumers. As well as providing vertical integration and value-added aggregation of services, brokers will play an increased role in continuous quality assurance and optimization. This may range from setting common standards for service specification, providing mechanisms for lifecycle governance and service certification, to automatic arbitrage respecting consumer preferences, continuous optimization of service delivery, failure prevention and recovery at runtime. This workshop introduces some of these anticipated methods and investigates some of the mechanisms envisaged in future Cloud service brokerage.

Gregoris Mentzas, Anthony J. H. Simons, Iraklis Paraskakis
A Comparison Framework and Review of Service Brokerage Solutions for Cloud Architectures

Cloud service brokerage has been identified as a key concern for future cloud technology development and research. We compare service brokerage solutions with respect to a range of specific concerns such as architecture, programming and quality, applying a two-pronged classification and comparison framework. Based on an identification of cloud broker architecture concerns and technical requirements for service brokerage solutions, we identify challenges and wider research objectives. We also discuss complex cloud architecture concerns such as the commoditisation and federation of integrated, vertical cloud stacks.

Frank Fowley, Claus Pahl, Li Zhang
Brokerage for Quality Assurance and Optimisation of Cloud Services: An Analysis of Key Requirements

As the number of cloud service providers grows and the requirements of cloud service consumers become more complex, the latter will come to depend more and more on the intermediation services of cloud service brokers. Continuous quality assurance and optimisation of services is becoming a mission-critical objective that many consumers will find difficult to address without help from cloud service intermediaries. The Broker@Cloud project envisages a software framework that will make it easier for cloud service intermediaries to address this need, and this paper provides an analysis of key requirements for this framework. We discuss the methodology that we followed to capture these requirements, which involved defining a conceptual service lifecycle model, carrying out a series of Design Thinking workshops, and formalising requirements based on an agile requirements information model. Then, we present the key requirements identified through this process in the form of summarised results.

Dimitrios Kourtesis, Konstantinos Bratanis, Andreas Friesen, Yiannis Verginadis, Anthony J. H. Simons, Alessandro Rossini, Antonia Schwichtenberg, Panagiotis Gouvas
Towards Value-Driven Business Modelling Based on Service Brokerage

Service engineering is an emerging interdisciplinary subject which crosscuts business modeling, knowledge management and economic analysis. To satisfy service providers' profit goals, service system modeling needs to take care of both short-run and long-run customer satisfaction. The ideology of value-driven design fits this need well. We propose to work towards value-driven design by introducing a form of service design patterns we call service value brokers (SVB), with the aim of shortening the distance between economic analysis and IT implementation and increasing the value added on all sides. SVBs allow us not only to study the value added in terms of functional and business aspects, but also to reason about the need for brokerage across various domains. In this paper, we model the basis of SVB and its network-based organization architecture against the background of the Cloud.

Yucong Duan, Keman Huang, Ajay Kattepur, Wencai Du
Introducing Policy-Driven Governance and Service Level Failure Mitigation in Cloud Service Brokers: Challenges Ahead

Cloud service brokerage represents a novel operational model in the scope of cloud computing. A cloud broker acts as an intermediary between a service provider and a service consumer with the goal of adding as much value as possible to the service being provisioned and consumed. Continuous quality assurance is a type of brokerage capability having high value to both providers and consumers of cloud services. At the same time, it can be among the most challenging kinds of capability for cloud service brokers to realise. In this paper we focus on two specific themes within this scope. We present a motivating scenario and outline key research challenges associated with introducing policy-driven governance and service level failure mitigation capabilities in brokers.

Konstantinos Bratanis, Dimitrios Kourtesis
Model-Based Testing in Cloud Brokerage Scenarios

In future Cloud ecosystems, brokers will mediate between service providers and consumers, playing an increased role in quality assurance, checking services for functional compliance to agreed standards, among other aspects. To date, most Software-as-a-Service (SaaS) testing has been performed manually, requiring duplicated effort at the development, certification and deployment stages of the service lifecycle. This paper presents a strategy for achieving automated testing for certification and re-certification of SaaS applications, based on the adoption of simple state-based and functional specifications. High-level test suites are generated from specifications, by algorithms that provide the necessary and sufficient coverage. The high-level tests must be grounded for each implementation technology, whether SOAP, REST or rich-client. Two examples of grounding are presented, one into SOAP for a traditional web service and the other into Selenium for a SAP HANA rich-client application. The results demonstrate good test coverage. Further work is required to fully automate the grounding.

Mariam Kiran, Andreas Friesen, Anthony J. H. Simons, Wolfgang K. R. Schwach
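
To illustrate test generation from a simple state-based specification, here is a small Python sketch that derives a transition-covering test suite from a finite state machine using breadth-first search; the toy SaaS account lifecycle is a made-up example, and the paper's own generation algorithms and grounding steps are considerably richer.

```python
from collections import deque

def transition_cover(fsm, start):
    """Generate one input sequence per transition so that the suite
    achieves transition coverage: the shortest input path to the
    transition's source state (found by BFS) followed by the
    transition's own input."""
    # fsm: {state: {input: next_state}}
    paths = {start: []}
    queue = deque([start])
    while queue:                      # BFS: shortest input path to each state
        s = queue.popleft()
        for inp, t in fsm[s].items():
            if t not in paths:
                paths[t] = paths[s] + [inp]
                queue.append(t)
    suite = []
    for s, trans in fsm.items():
        for inp in trans:
            if s in paths:            # unreachable states yield no tests
                suite.append(paths[s] + [inp])
    return suite

# Hypothetical SaaS account lifecycle: created -> active -> suspended.
fsm = {"created": {"activate": "active"},
       "active": {"suspend": "suspended", "use": "active"},
       "suspended": {"reactivate": "active"}}
for test in transition_cover(fsm, "created"):
    print(test)
```

Each generated sequence would still need grounding, e.g. into SOAP calls or Selenium actions, as the abstract describes.
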
Value-Added Modelling and Analysis in Service Value Brokerage

In our previous work, we introduced various Service Value Broker (SVB) patterns which integrate business modeling, knowledge management and economic analysis. We have identified that value added is a main driving force for the adoption and application of SVB by different stakeholders, including providers, customers and public administrators. Based on an e-tourism platform, we analyze the sources of value added that SVB application could give rise to, from the perspective of various stakeholders. We model situations of value-added balancing and tradeoff against the background of long-run and short-run economic goals. Experiments and simulations are developed for demonstration purposes.

Yucong Duan, Yongzhi Wang, Jinpeng Wei, Ajay Kattepur, Wencai Du

Semantic Web Enabled Software Engineering SWESE 2013

Introduction to the Proceedings of the 9th International Workshop on Semantic Web Enabled Software Engineering (SWESE) 2013

The 9th International Workshop on Semantic Web Enabled Software Engineering (SWESE) was held in conjunction with the 11th International Conference on Service Oriented Computing (ICSOC 2013) in Berlin, Germany. This workshop builds on prior events that have begun to explore and evaluate the potential of Semantic Web technologies in software, system and service engineering. Semantic Web technologies provide understandable modeling formalisms and tractable reasoning services with widely established tool support. In this workshop series, we are interested in applying Semantic Web technologies to support, improve and ease both the process and product of software and service development.

Gerd Gröner, Jeff Z. Pan, Yuting Zhao, Elisa F. Kendall, Ljiljana Stojanovic
Management of Variability in Modular Ontology Development

The field of variability management deals with the formalization of mandatory, alternative and optional domain concepts in product line engineering. Ontologies, in turn, describe domain knowledge in the form of predicates, subjects and constraints of various forms. Based on existing ontology mapping approaches, we developed a method to organize a set of modular ontologies using the concepts of variability management (MOVO). This ontology-driven variability model can be stepwise adapted to the needs of a business-driven one, resulting in a variability model that fits the needs of the business and makes modular ontologies reusable in a simple manner. In order to avoid a technological break and to benefit from the opportunities that ontologies offer, the resulting variability model is itself expressed as an ontology. The approach is evaluated in a case study with enterprise architecture ontologies.

Melanie Langermeier, Peter Rosina, Heiner Oberkampf, Thomas Driessen, Bernhard Bauer
Towards Automated Service Matchmaking and Planning for Multi-Agent Systems with OWL-S – Approach and Challenges

In the past, the demand for modular, distributed and dynamic computer systems has increased rapidly. In the field of multi-agent systems (MAS), many current approaches try to account for these requirements. In this paper we discuss the shortcomings of the semantic service selection component SeMa², propose improvements, and describe a concept for its integration into a multi-agent framework. Further, we illustrate how this system can be extended by an automated service composition component using methods from the AI planning community.

Johannes Fähndrich, Nils Masuch, Hilmi Yildirim, Sahin Albayrak
Re-engineering the ISO 15926 Data Model: A Multi-level Metamodel Perspective

The ISO 15926 standard was developed to facilitate the integration of life-cycle data of process plants. The core of the standard is a highly generic and extensible data model trying to capture a holistic view of the world. We investigated the standard from a software modelling point of view and identified some challenges in terminology, circular definitions and inconsistencies in relationships during the mapping from concepts specified in the standard to an object-oriented model. This makes the standard difficult to understand and more challenging to implement. In this paper we look at mapping the ISO 15926 data model to a multilevel metamodel, and aim to formalise critical aspects of the data model, which will simplify the model and ease the adoption process.

Andreas Jordan, Matt Selway, Georg Grossmann, Wolfgang Mayer, Markus Stumptner
Fluent Calculus-Based Semantic Web Service Composition and Verification Using WSSL

We propose a composition and verification framework for Semantic Web Services specified using WSSL, a novel service specification language based on the fluent calculus, that addresses issues related to the frame, ramification and qualification problems. These deal with the succinct and flexible representation of non-effects, indirect effects and preconditions, respectively. The framework exploits the unique features of WSSL, allowing, among others, for: compositions that take into account ramifications of services; determining the feasibility of a composition a priori; and considering exogenous qualifications during the verification process. The framework is implemented using FLUX-based planning, supporting compositions with fundamental control constructs, including nondeterministic ones such as conditionals and loops. Performance is evaluated with regard to termination and execution time for increasingly complex synthetic compositions.

George Baryannis, Dimitris Plexousakis
Template-Based Ontology Population for Smart Environments Configuration

Smart Environments is one of several domains in which Semantic Web technologies are applied nowadays. Ontologies, in particular, are used as core modeling languages for representing devices, systems and environments. Developing such ontologies, which typically involve several device descriptions (individuals) and related information, i.e., individuals of classes contributing to the device model, is often a manual, time-consuming, and error-prone process.

This paper presents a template-based approach that increases the accuracy, ease of use, and time-effectiveness of the ontology population process by reducing the amount of user-provided information by about an order of magnitude with respect to the fully manual approach. The information required from users pertains only to device features (e.g., name, location, etc.) and never implies knowledge of Semantic Web technologies, thus enabling end-user configuration of smart homes and buildings. Experimental results with a prototypical implementation confirm the viability of the approach on a real-world use case.

Sebastián Aced López, Dario Bonino, Fulvio Corno
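
A minimal sketch of the template idea: the user supplies only device features, and a template expands them into ontology individuals. The Turtle skeleton and the dog: namespace are illustrative assumptions, not the paper's actual templates.

```python
# Schematic Turtle template for one device; placeholders in braces
# are the only user-supplied values (the dog: namespace is invented).
DEVICE_TEMPLATE = """\
:{name} a dog:{dclass} ;
    dog:isIn :{location} ;
    dog:hasState :{name}_state .
:{name}_state a dog:OnOffState ."""

def populate(devices):
    """Instantiate one template per device description; the user only
    supplies name, class and location, never raw ontology axioms."""
    return "\n\n".join(DEVICE_TEMPLATE.format(**d) for d in devices)

print(populate([
    {"name": "lamp1", "dclass": "Lamp", "location": "kitchen"},
    {"name": "plug3", "dclass": "SmartPlug", "location": "office"},
]))
```
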

Cloud Computing and Scientific Applications CCSA 2013

Introduction to the 3rd International Workshop on Cloud Computing and Scientific Applications (CCSA’13)

The CCSA workshop was formed to promote research and development activities focused on enabling and scaling scientific applications using distributed computing paradigms, such as cluster, grid, and cloud computing. With the rapid emergence of virtualized environments for accessing software systems and solutions, the volume of users and their data is growing exponentially. According to IDC, by 2020, when the ICT industry reaches $5 trillion – $1.7 trillion larger than it is today – at least 80% of the industry's growth will be driven by 3rd platform technologies, such as cloud services and big data analytics. Existing computing infrastructure, software system designs, and use cases will have to take into account the enormous volume of requests, size of data, computing load, locality and type of users, and the ever-growing needs of all applications. Cloud computing promises reliable services delivered through next-generation data centers built on compute and storage virtualization technologies. Users will be able to access applications and data from a Cloud anywhere in the world on demand; in other words, the Cloud appears as a single point of access for all the computing needs of users, and users are assured that the Cloud infrastructure is robust and always available. To address the growing needs of both applications and the cloud computing paradigm, CCSA brings together researchers and practitioners from around the world to share their experiences and to focus on modeling, executing, and monitoring scientific applications on Clouds.
The workshop received 20 submissions, of which the committee accepted 7 papers. The program also included an invited keynote talk.

Suraj Pandey, Surya Nepal
SLA-Aware Load Balancing in a Web-Based Cloud System over OpenStack

This paper focuses on the scalability problem in cloud-based systems under changing computing requirements, that is, when there is a high degree of variability in requested services in cloud-computing environments. We study a specific scenario for a web-based application deployed in a cloud system, where the number of requests can change over time. This paper deals with guaranteeing the SLA (Service-Level Agreement) in scalable clouds with web-based load variability.

We present an architecture able to balance the load (mainly web-browser applications) between different computing virtual machines. This is accomplished by monitoring the system in order to determine when to create or terminate virtual machines. A novel scheduling policy to manage the requested cloud services based on the presented architecture is also proposed.

The good results obtained by implementing the proposed architecture in a real cloud framework demonstrate the applicability of our proposal for guaranteeing SLAs.

Jordi Vilaplana, Francesc Solsona, Jordi Mateo, Ivan Teixido
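
As a rough illustration of a monitoring-driven scaling decision of the kind described above (not the paper's actual scheduling policy), here is a threshold rule in Python; the response-time limits and scale factors are invented for the example.

```python
def scaling_decision(queue_len, vms, max_rt, avg_rt,
                     scale_out_factor=0.8, scale_in_factor=0.3):
    """Threshold autoscaler: request another VM when the observed
    average response time approaches the SLA limit; release one when
    the system is clearly over-provisioned and the queue is empty."""
    if avg_rt > scale_out_factor * max_rt:
        return "create_vm"
    if avg_rt < scale_in_factor * max_rt and vms > 1 and queue_len == 0:
        return "terminate_vm"
    return "steady"

print(scaling_decision(queue_len=40, vms=3, max_rt=2.0, avg_rt=1.8))  # create_vm
print(scaling_decision(queue_len=0, vms=3, max_rt=2.0, avg_rt=0.4))   # terminate_vm
```
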
Are Public Clouds Elastic Enough for Scientific Computing?

Elasticity can be seen as the ability of a system to increase or decrease its allocated computing resources in a dynamic and on-demand way. It is an important feature provided by cloud computing that has been widely used in web applications and is also gaining attention in the scientific community. Considering the possibilities of using elasticity in this context, a question arises: "Are the available public cloud solutions suitable to support elastic scientific applications?" To answer the question, we present a review of some solutions proposed by public cloud providers and point out the open issues and challenges in providing elasticity for scientific applications. We also present some initiatives that are being developed in order to overcome the current problems. In our opinion, current computational clouds have not yet reached the maturity level necessary to meet all the elasticity requirements of scientific applications.

Guilherme Galante, Luis Carlos Erpen De Bona, Antonio Roberto Mury, Bruno Schulze
A Light-Weight Framework for Bridge-Building from Desktop to Cloud

A significant trend in science research for at least the past decade has been the increasing uptake of computational techniques (modelling) for in-silico experimentation, which is trickling down from the grand challenges that require capability computing to smaller-scale problems suited to capacity computing. Such virtual experiments also establish an opportunity for collaboration at a distance. At the same time, the development of web service and cloud technology is providing a potential platform to support these activities. The problem on which we focus is the technical hurdles for users without detailed knowledge of such mechanisms – in a word, 'accessibility' – specifically: (i) the heavy weight and diversity of infrastructures, which inhibits shareability and collaboration between services, (ii) the relatively complicated processes associated with the deployment and management of web services for non-specialists, and (iii) the technical difficulty of packaging the legacy software that encapsulates key discipline knowledge for web-service environments. In this paper, we describe a light-weight framework based on cloud and REST to address these issues. The framework provides a model that allows users to deploy REST services from the desktop onto computing infrastructure without modification or recompilation, utilizing legacy applications developed for the command line. A behind-the-scenes facility provides asynchronous distributed staging of data (built directly on HTTP and REST). We describe the framework, comprising the service factory, data staging services and the desktop file manager overlay for service deployment, and present experimental results regarding: (i) the improvement in turnaround time from the data staging service, and (ii) the evaluation of the usefulness and usability of the framework through case studies in image processing and in multi-disciplinary optimization.

Kewei Duan, Julian Padget, H. Alicia Kim
Planning and Scheduling Data Processing Workflows in the Cloud with Quality-of-Data Constraints

Data-intensive and long-lasting applications running in the form of workflows are increasingly being dispatched to cloud computing systems. Current scheduling approaches for graphs of dependencies fail to deliver high resource efficiency while keeping computation costs low, especially for continuous data processing workflows, where the scheduler does not perform any reasoning about the impact new input data may have on the workflow's final output. To face this challenge, we introduce a new scheduling criterion, Quality-of-Data (QoD), which describes the requirements on data that make the triggering of tasks in a workflow worthwhile. Based on the QoD notion, we propose a novel service-oriented scheduler planner for continuous data processing workflows that is capable of enforcing QoD constraints and guiding the scheduling to attain resource efficiency, overall controlled performance, and task prioritization. To contrast the advantages of our scheduling model against others, we developed WaaS (Workflow-as-a-Service), a workflow coordinator system for the Cloud where data is shared among tasks via a cloud columnar database.

Sérgio Esteves, Luís Veiga
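
One way to read the QoD criterion is as a multi-dimensional trigger guard: a task fires only when the accumulated input changes are worth it. A hedged Python sketch with invented dimensions (item count, staleness, cumulative value drift):

```python
def qod_met(pending_updates, max_items, max_seconds, max_error,
            now, last_run):
    """Return True when a downstream task should be (re)triggered:
    too many new items, too much elapsed time, or too large a
    cumulative value drift since the last execution."""
    items = len(pending_updates)
    drift = sum(abs(u) for u in pending_updates)
    return (items >= max_items
            or (now - last_run) >= max_seconds
            or drift >= max_error)

# Three small updates, 40 s since the last run: not yet worth it.
print(qod_met([0.1, -0.2, 0.05], max_items=10, max_seconds=60,
              max_error=5.0, now=100.0, last_run=60.0))  # False
```
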
Galaxy + Hadoop: Toward a Collaborative and Scalable Image Processing Toolbox in Cloud

With the emergence and adoption of cloud computing, the cloud has become an effective collaboration platform for integrating various software tools and delivering them as services. In this paper, we present a cloud-based image processing toolbox that integrates Galaxy, Hadoop and our proprietary image processing tools. This toolbox allows users to easily design and execute complex image processing tasks by sharing various advanced image processing tools and scalable cloud computation capacity. The paper provides the integration architecture and technical details of the whole system. In particular, we present our investigations into using Hadoop to handle massive image processing jobs in the system. A number of real image processing examples are used to demonstrate the usefulness and scalability of the system for this class of data-intensive applications.

Shiping Chen, Tomasz Bednarz, Piotr Szul, Dadong Wang, Yulia Arzhaeva, Neil Burdett, Alex Khassapov, John Zic, Surya Nepal, Tim Gurevey, John Taylor
SciLightning: A Cloud Provenance-Based Event Notification for Parallel Workflows

Conducting scientific experiments modeled as workflows is a challenging task due to the complex management of several (often inter-related) computer-based simulations. Many of these scientific workflows are compute intensive and demand High Performance Computing environments to run, such as virtual parallel machines in a cloud computing environment. These workflows commonly present long-term "black-box" executions (i.e., several days or weeks), thus making it very difficult for scientists to monitor their execution course. We present a workflow event notification mechanism based on runtime monitoring of provenance data produced by parallel scientific workflow systems in clouds. This mechanism queries provenance data generated at runtime to identify preconfigured events and notify scientists using technologies such as Android devices and message services in social networks such as Twitter. The proposed mechanism, named SciLightning, was evaluated by monitoring SciPhy, a large-scale parallel execution of a bioinformatics phylogenetic analysis workflow. SciPhy took six days to complete its execution in the Amazon AWS cloud environment using a cloud parallel workflow engine called SciCumulus. The evaluation showed that the proposed approach is effective with respect to monitoring and notifying preconfigured events.

Julliano Trindade Pintas, Daniel de Oliveira, Kary A. C. S. Ocaña, Eduardo Ogasawara, Marta Mattoso
Energy Savings on a Cloud-Based Opportunistic Infrastructure

In this paper, we address energy savings on a Cloud-based opportunistic infrastructure. The infrastructure implements opportunistic design concepts to provide basic services, such as virtual CPUs, RAM and Disk while profiting from unused capabilities of desktop computer laboratories in a non-intrusive way.

We consider the problem of virtual machine consolidation on opportunistic cloud computing resources. We investigate four workload packing algorithms that place a set of virtual machines on the least number of physical machines, to increase resource utilization and to transition parts of the unused resources into lower power states or switch them off. We empirically evaluate these heuristics on real workload traces collected from our experimental opportunistic cloud, called UnaCloud. The final aim is to implement the best strategy on UnaCloud. The results show that a consolidation algorithm implementing a policy that takes into account the features and constraints of the opportunistic cloud saves more than 40% more energy than related consolidation heuristics, beyond the savings already achieved by the opportunistic environment.

Johnatan E. Pecero, Cesar O. Diaz, Harold Castro, Mario Villamizar, Germán Sotelo, Pascal Bouvry
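
The abstract compares workload packing algorithms without naming them. As a generic reference point (not necessarily one of the paper's four heuristics), here is first-fit decreasing, a standard bin-packing heuristic for VM consolidation, in Python; the capacities and demands are invented for the example.

```python
def first_fit_decreasing(vms, host_capacity):
    """Pack VMs (by CPU demand) onto as few hosts as possible: place
    each VM, largest first, on the first host with enough remaining
    capacity; hosts that stay empty can then be powered down."""
    hosts = []          # remaining capacity per opened host
    placement = {}
    for vm, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(hosts):
            if demand <= free:
                hosts[i] -= demand
                placement[vm] = i
                break
        else:
            hosts.append(host_capacity - demand)
            placement[vm] = len(hosts) - 1
    return placement, len(hosts)

vms = {"vm1": 4, "vm2": 3, "vm3": 2, "vm4": 2, "vm5": 1}
print(first_fit_decreasing(vms, host_capacity=6))
# 12 units of demand fit on 2 hosts of capacity 6 each
```
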

Pervasive Analytical Service Clouds for the Enterprise and Beyond PASCEB 2013

Introduction to the Proceedings of the Workshop on Pervasive Analytical Service Clouds for the Enterprise and Beyond (PASCEB) 2013

The First Workshop on Pervasive Analytical Service Clouds for the Enterprise and Beyond (PASCEB) 2013 was held in conjunction with the ICSOC'13 conference in Berlin, Germany. The workshop focused on an emerging area addressing the gap of how to design socio-technical, dependable and secure cloud-service ecosystems for commercial collaboration. To establish separation of concerns, the addressed gap has different aspects in terms of concepts, frameworks and technologies that facilitate the management and coordination of the large-scale Internet of Things (IoT), Internet of Services (IoS) and Internet of People (IoP) that comprise a service ecosystem collaborating towards a common goal, for which we envision a lifecycle. Integral to this is the analysis of large volumes of heterogeneous, high-speed data sets, i.e., big data. With the latter, the quality of socio-technical collaboration can improve.

A. Norta, Weishan Zhang, C. M. Chituc, R. Vaculin
Towards a Formal Model for Cloud Computing

The use of formal methods is an effective means to improve the reliability and quality of complex systems. In this context, we adopt one of these methods to formalize cloud computing concepts, focusing on modeling the interactions between cloud services and customers. Based on Bigraphical Reactive Systems, the formalization process is realized via the definition of a Cloud General Bigraph (CGB), obtained by first associating a Cloud Customers Bigraph (CCB) with cloud customers. Then, a Cloud Services Bigraph (CSB) is proposed to formally specify the structure of cloud services. Finally, juxtaposing these two bigraphs (CSB and CCB) gives rise to the desired CGB. In addition, a natural specification of cloud deployment models is given. The paper also addresses cloud service dynamics by defining a set of reaction rules on bigraphs, making the designed cloud system amenable to reconfiguration.

Zakaria Benzadri, Faiza Belala, Chafia Bouanaka
An Optimized Strategy for Data Service Response with Template-Based Caching and Compression

A data service is a specialization of a Web service; end-users can synthesize cross-organizational data by composing data services. As composition schemes overlap each other, some primitive data services may be called repeatedly by composite data services, aggravating response delay and server load. In this paper, an optimized strategy for data service response with template-based caching and compression is proposed. Firstly, the strategy uses a message template to extract the application-relevant values from SOAP messages. Secondly, the strategy holds objects built from the application-relevant values rather than XML representations, to decrease the overhead of SOAP message parsing. Thirdly, the strategy uses the XMill compression algorithm to decrease the volume of data transmitted. Extensive experiments based on the Spring-WS-Test benchmark demonstrate that the strategy is an effective approach to reducing response latency and server load compared with non-caching techniques.

Zhang Peng, Xu Kefu, Li Yan, Guo Li
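
A toy sketch of the caching half of the strategy: responses keyed by the application-relevant values extracted via the message template, and stored compressed. Python's zlib stands in for the paper's XMill compressor, and the key layout is an assumption.

```python
import zlib

class TemplateCache:
    """Cache data-service responses keyed by the application-relevant
    values extracted from the request via its message template; store
    entries compressed to cut memory and transfer volume."""
    def __init__(self):
        self._store = {}

    @staticmethod
    def _key(values):
        # Order-independent key over the extracted template values.
        return tuple(sorted(values.items()))

    def get(self, values):
        blob = self._store.get(self._key(values))
        return zlib.decompress(blob).decode() if blob else None

    def put(self, values, response):
        self._store[self._key(values)] = zlib.compress(response.encode())

cache = TemplateCache()
cache.put({"customer": "42", "year": "2013"}, "<orders>...</orders>")
print(cache.get({"year": "2013", "customer": "42"}))  # hit, order-independent
```
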
Model-Driven Event Query Generation for Business Process Monitoring

While executing business processes, a variety of events is produced that is valuable for gaining insights about the process execution. Specifically, these events can be processed by Complex Event Processing (CEP) engines to deliver a base for business process monitoring. Mobile, flexible, and distributed business processes challenge existing process monitoring techniques, especially if process execution is partially done manually. Thus, it is not trivial to decide where the required business process execution information can be found, how this information can be extracted, and to which point in the process it belongs. Tackling these challenges, we present a model-driven approach to support the automated creation of CEP queries for process monitoring. For this purpose, we decompose a process model that includes monitoring information into its structural components. These are transformed into CEP queries to monitor business process execution based on events. For illustration, we show an implementation for Business Process Model and Notation (BPMN) and describe possible applications.

Michael Backmann, Anne Baumgrass, Nico Herzberg, Andreas Meyer, Mathias Weske
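
To give a flavour of query generation from process structure, here is a hedged sketch that turns a sequence flow between two monitored tasks into an Esper-style CEP pattern string; the EPL-like syntax, event names, and correlation attribute are illustrative assumptions, and the paper's transformation covers BPMN components far more generally.

```python
def sequence_violation_query(first, second, key, window="1 hour"):
    """Build an Esper-style pattern that reports a violation when the
    event of the first task is not followed by the event of the
    second task (same correlation key) within the time window."""
    return (f"select a.{key} from pattern [every a={first} -> "
            f"(timer:interval({window}) and not {second}({key}=a.{key}))]")

# Hypothetical monitored sequence flow: OrderReceived -> OrderShipped.
print(sequence_violation_query("OrderReceived", "OrderShipped", "order_id"))
```
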
Enabling Semantic Complex Event Processing in the Domain of Logistics

During the execution of business processes, companies generate vast amounts of events, which makes it hard to detect meaningful process information that could be used for process analysis and improvement. Complex event processing (CEP) can help in this matter by providing techniques for the continuous analysis of events. The consideration of domain knowledge can increase the performance of reasoning tasks, but it differs for each domain and depends on that domain's requirements. In this paper, an existing approach combining CEP and ontological knowledge is applied to the domain of logistics. We show the benefits of semantic complex event processing (SCEP) for logistics processes along the specific use case of tracking and tracing goods and processing related events. In particular, we provide a novel domain-specific function that allows meaningful events for a transportation route to be detected. For the demonstration, a prototypical implementation of a system enabling SCEP queries is introduced and analyzed in an experiment.

Tobias Metzke, Andreas Rogge-Solti, Anne Baumgrass, Jan Mendling, Mathias Weske
Towards Self-adaptation Planning for Complex Service-Based Systems

A complex service-based system (CSBS), which comprises a multi-layer structure possibly spanning multiple organizations, operates in a highly dynamic and heterogeneous environment. At run time the quality of service provided by a CSBS may suddenly change, so that violations of the Service Level Agreements (SLAs) established within and across the boundaries of organizations can occur. Hence, a key management choice is to design the CSBS as a self-adaptive system, so that it can properly plan adaptation decisions to maintain the overall quality defined in the SLAs. However, the challenge in planning CSBS adaptation is the uncertain effect of adaptation actions, which can variously affect the multiple layers of the CSBS. In a dynamic and constantly evolving environment, there is no guarantee that an adaptation action taken at a given layer has an overall positive effect. Furthermore, the complexity of the cross-layer interactions makes the decision-making process a non-trivial task. In this paper, we address the problem by proposing a multi-layer adaptation planning approach with local and global adaptation managers. The local manager is associated with a single planning model, while the global manager is associated with a multiple planning model. Both planning models are based on Markov Decision Processes (MDPs), which provide a suitable technique for modeling decisions under uncertainty. We present an example scenario to show the practicality of the proposed approach.

Azlan Ismail, Valeria Cardellini
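
Since both planning models are MDP-based, a compact value-iteration sketch in Python may help; the two-state QoS example, the transition probabilities, and the rewards are all invented for illustration and are not the paper's models.

```python
def value_iteration(states, actions, P, R, gamma=0.9, eps=1e-6):
    """Plain value iteration: returns the value of each system state
    and the adaptation action maximising expected long-run quality.
    P[s][a] is a list of (probability, next_state); R[s][a] a reward."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(R[s][a] + gamma * sum(p * V[t] for p, t in P[s][a])
                       for a in actions[s])
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            break
    policy = {s: max(actions[s],
                     key=lambda a: R[s][a] +
                     gamma * sum(p * V[t] for p, t in P[s][a]))
              for s in states}
    return V, policy

# Two QoS states; 'adapt' is costly but likely restores compliance.
states = ["ok", "violating"]
actions = {"ok": ["wait"], "violating": ["wait", "adapt"]}
P = {"ok": {"wait": [(0.9, "ok"), (0.1, "violating")]},
     "violating": {"wait": [(0.8, "violating"), (0.2, "ok")],
                   "adapt": [(0.7, "ok"), (0.3, "violating")]}}
R = {"ok": {"wait": 1.0}, "violating": {"wait": -1.0, "adapt": -0.5}}
V, policy = value_iteration(states, actions, P, R)
print(policy)   # expected: {'ok': 'wait', 'violating': 'adapt'}
```
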
Towards an Integration Platform for Bioinformatics Services

Performing in-silico experiments, which involves intensive access to distributed services and information resources through the Internet, is nowadays one of the main activities in Bioinformatics. Although existing tools facilitate the implementation of workflow-oriented applications, they lack the capabilities to integrate services beyond low-scale applications, particularly services with heterogeneous interaction patterns and at a larger scale, ideally based on a Platform-as-a-Service paradigm. On the other hand, such integration mechanisms are provided by middleware products like Enterprise Service Buses (ESB). This paper proposes an integration platform, based on enterprise middleware, to integrate Bioinformatics services. It presents a multi-level reference architecture and focuses on ESB-based mechanisms to provide asynchronous communications, event-based interactions and data transformation capabilities.

Guzmán Llambías, Laura González, Raúl Ruggia
Requirements to Pervasive System Continuous Deployment

Pervasive applications present stringent requirements that make their deployment especially challenging. The unknown and fluctuating environment in which pervasive applications are executed makes traditional approaches unsuitable. In addition, the current trend of building applications out of separate components and services makes the deployment process inherently continuous and dynamic. In recent years, we have developed several industrial pervasive platforms and applications. From these experiences, we identified ten requirements vital to supporting the continuous deployment of pervasive systems. In this paper we present these requirements and the associated challenges.

Clément Escoffier, Ozan Günalp, Philippe Lalanda
Towards Structure-Based Quality Awareness in Software Ecosystem Use

Software ecosystems – a group of actors, one or more business models that serve these actors in a possibly wider sense than direct revenues, one or more software platforms that the business models are built upon, and the relationships among the actors and business models – are gaining importance in software development as a way of increasing software innovation, decreasing internal development cost, and spreading software platforms. Software quality, not only of individual applications or components but also of software ecosystems as a whole, is important but has not received much attention so far. We aim here to explore to what extent the composition of components from a software ecosystem influences software quality, in order to provide groundwork for application awareness of software quality in a software ecosystem context.

We ran the same Maven build tasks in 15 simultaneous releases (including associated service releases) of Eclipse and measured time, energy, and memory performance. Based on an analysis of the plugins installed with the versions of Eclipse, we then derived the structure of the subset of the Eclipse software ecosystem that was used in each version. The performance measurements and computed structure were then analyzed and compared. We found that performance and structure changed considerably across versions of Eclipse. While we found no direct correlation between the evolution of the two, our exploratory study warrants further study.

Klaus Marius Hansen, Weishan Zhang
An Adaptive Enterprise Service Bus Infrastructure for Service Based Systems

Service-based systems (SBS) increasingly need adaptation capabilities to respond agilely to unexpected changes (e.g. regarding quality of service). The Enterprise Service Bus (ESB), a recognized infrastructure to support the development of SBS, provides native mediation capabilities (e.g. message transformation) which can be used to perform adaptation actions. However, the configuration of these capabilities usually cannot be performed at runtime. To deal with this limitation, Adaptive ESB Infrastructures have been proposed, which leverage these mediation capabilities to deal with adaptation requirements in SBS in an automatic and dynamic way at runtime. This paper presents a JBossESB-based implementation of an Adaptive ESB Infrastructure and demonstrates its operation by describing its main functionalities. The paper also presents an evaluation of the implemented solution.

Laura González, Jorge Luis Laborde, Matías Galnares, Mauricio Fenoglio, Raúl Ruggia
Dynamic Adaptation of Business Process Based on Context Changes: A Rule-Oriented Approach

In a dynamic environment, business processes need to be adjusted and evolved in response to changing internal policies and the external environment. However, redesigning the process model and re-executing process instances is time-consuming and laborious. In this paper, we propose a rule-oriented approach to dynamically generate business processes according to the current context at runtime. To enable dynamic and context-aware adaptation, the relationships between services and context are described as rules, which are then used to generate the solution with a mapping mechanism. Two algorithms are designed to generate the activity sequence at runtime, which constitutes the solution of process adaptation. In order to achieve preference selection, a process assessment strategy is proposed to constrain the generated activity sequence. Simulation experiments have been conducted to demonstrate the efficiency of our approach.

Guangchang Hu, Budan Wu, Junliang Chen
Flexible Component Migration in an OSGi Based Pervasive Cloud Infrastructure

Task and service migration is an important feature of mobile cloud computing for improving the capabilities of small devices. However, the flexible management of component migration between small devices themselves and more powerful nodes in between remains a critical challenge for enabling mobile clouds. In this paper, we present a solution using the OSGi component model, based on the OSGi pervasive cloud (OSGi-PC) infrastructure we have developed. We have evaluated the component migration in different scenarios in terms of performance and power consumption to show the usability of our approach.

Weishan Zhang, Licheng Chen, Qinghua Lu, Peiying Zhang, Su Yang
Hybrid Emotion Recognition Using Semantic Similarity

It is challenging to know the emotion status of people at run time, as emotion can be influenced by many factors. Speech content and heart rate are used in our former work on a hybrid emotion recognition approach. However, a fixed set of emotional keywords is not enough, as unknown and new keywords may arise. Therefore, we add semantic similarity to emotional keyword recognition. After obtaining the user's speech content, even if the emotional keywords are not in the knowledge base, we calculate the similarity between the keywords in the user's talk and the words in the knowledge base. A hybrid similarity calculation algorithm is proposed to alleviate the problem that some words do not exist in the HowNet knowledge base, by combining it with the similarity calculation method for Tongyici Cilin. If the similarity is greater than a threshold, a corresponding emotion status is recognized together with the user's heart rate. The advantage of using semantic similarity is that it is much more flexible than relying only on fixed emotional keywords in the knowledge base, and it yields a higher recognition accuracy than before.

Zhanshan Zhang, Xin Meng, Peiying Zhang, Weishan Zhang, Qinghua Lu
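
A minimal sketch of the threshold idea: map an unseen keyword to the emotion of its most similar lexicon entry. Here difflib string similarity stands in for the paper's HowNet/Tongyici Cilin hybrid measure, and the lexicon, words, and threshold are invented.

```python
from difflib import SequenceMatcher

EMOTION_LEXICON = {"happy": "joy", "furious": "anger", "gloomy": "sadness"}

def similarity(a, b):
    """Stand-in for the HowNet/Tongyici-Cilin hybrid measure: any
    word-similarity function returning a score in [0, 1] fits here."""
    return SequenceMatcher(None, a, b).ratio()

def recognise(word, threshold=0.7):
    """Map an unseen keyword to the emotion of its most similar
    lexicon entry, but only above the confidence threshold."""
    score, best = max((similarity(word, k), k) for k in EMOTION_LEXICON)
    return EMOTION_LEXICON[best] if score >= threshold else None

print(recognise("gloomier"))   # close to 'gloomy' -> 'sadness'
print(recognise("table"))      # below threshold -> None
```
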

PhD Symposium

ICSOC PhD Symposium 2013

The 9th edition of the ICSOC PhD Symposium was held on December 2, 2013, in Berlin, as a satellite event of the 11th International Conference on Service Oriented Computing (ICSOC 2013). The aim of the PhD Symposium series is to bring together Ph.D. students and established researchers in the field of service oriented computing, to give students the opportunity to present their research, share ideas and experiences, and stimulate a constructive discussion involving experienced researchers. The Symposium is intended for active students whose research is still ongoing, where a problem has been clearly identified but whose solution is not fully developed or still needs major improvements.

Fabio Patrizi, Boualem Benatallah, Ivona Brandic
Towards the Automated Synthesis of Data Dependent Service Controllers

The treatment of data is a crucial step in service composition, but it is currently done manually and informally. This makes the development process time-consuming, expensive, and error-prone, which is serious in safety-critical domains like the medical area. To overcome this problem, we present a novel approach for synthesizing data-dependent service controllers automatically, based on composition and analysis methods for algebraic Petri nets. Consequently, our approach allows the automated, fast, and cost-efficient synthesis of controllers that are correct with regard to data-dependent functional and safety-critical properties, which enables reliable interoperability of medical devices.

Franziska Bathelt-Tok, Sabine Glesner
Multi-agent Approach for Managing Workflows in an Inter-Cloud Environment

Despite the several attractive features that cloud technology offers, managing and controlling processes and resources are among the serious obstacles that cloud service providers need to overcome. These issues increase when cloud providers intend to exploit services from several distributed platforms to satisfy clients' requests and requirements. At that point, they need to deal with critical problems like heterogeneity, collaboration, coordination and communication between different types of participants.

On the other side, the best-known properties of an agent are autonomy, pro-activity, cooperation and mobility. These features are attractive and of great importance for designing and implementing software systems that operate in distributed and open environments such as clouds and grids. Our main goal in this thesis is to propose an approach and architectures that permit the integration of cloud/grid and multi-agent systems concepts and technologies for managing workflows in distributed service-oriented environments, specifically in an Inter-Cloud environment.

Sofiane Bendoukha
An Information-Centric System for Building the Web of Things

In recent years, common-use devices have undergone a leap in terms of equipped technology, introducing the so-called "smart things" to the consumer market. This technological and societal revolution has underpinned the realization of the Internet of Things. To take full advantage of the opportunities arising from connectivity capabilities, smart things have approached the application realm, bringing the novel Web of Things vision to life. The Web, as a collaborative global space of information, is a critical asset for creating value-added services. However, such promising potential entails a number of challenges, including data interoperability, data integration, information reuse, and collaboration. This Ph.D. work focuses on a novel approach to take a smart thing to the Web by representing it as a graph of granular and individually addressable information called an IDN-Document. IDN-Documents are simply structured web resources that can be aggregated, linked, reused, and combined to build collaboration-oriented, value-added services. IDN-Documents are managed by the InterDataNet middleware, leveraging Linked Data and REST.

Stefano Turchi
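
The graph-of-addressable-information idea described above can be pictured with a minimal Python sketch. The node structure, field names, and example URIs below are editorial assumptions, not the InterDataNet schema.

# Rough sketch of an IDN-Document as a graph of individually addressable
# information nodes; field names are invented, not the InterDataNet schema.

from dataclasses import dataclass, field

@dataclass
class IDNNode:
    uri: str                                     # each piece of information is addressable
    data: dict                                   # granular payload
    links: list = field(default_factory=list)    # references to other nodes

# A smart thermostat exposed on the Web as linked, reusable resources.
reading = IDNNode("http://example.org/things/thermo1/reading",
                  {"temperature": 21.5, "unit": "C"})
thing = IDNNode("http://example.org/things/thermo1",
                {"type": "thermostat"}, links=[reading.uri])

def aggregate(*nodes):
    # Compose nodes (possibly from different documents) into a new service view.
    return {n.uri: n.data for n in nodes}

print(aggregate(thing, reading))
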
Testing of Distributed Service-Oriented Systems

We are experiencing exponential growth in devices connected to the Internet and services offered through the web. Today, we are just a few mobile clicks away from using services that enormously simplify our lives. Just think of how we pay our bills, recharge our mobile pre-paid accounts, or buy tickets for the events we want to attend: it is all done through web services. This increasing reliance on distributed service-oriented systems provided through the web places high expectations on their reliability. To keep up with this growing trend, which embraces change on a daily basis, the development of these services has to be rapid while leaving little room for software errors or failures.

Faris Nizamic
Automation of the SLA Life Cycle in Cloud Computing

Cloud computing has emerged as a popular paradigm for scalable infrastructure solutions and services. The need for automated management of Service Level Agreements (SLAs) between the cloud service provider and the cloud user has increased, in order to minimize user interaction with the computing environment. Thus, effective SLA negotiation, monitoring, and timely detection of possible SLA violations represent challenging research issues. A big gap exists between a manual or semi-automated SLA life cycle and a fully automated one. This gap can be bridged by formalizing the natural-language SLAs that commonly exist today. Algorithms and strategies for SLA monitoring, management, and violation detection depend directly on a complete formalization of SLAs. The goal of this thesis is to analyze currently existing SLA description languages, to identify their shortcomings, and to develop a complete SLA description language. As a next step, we plan to develop distributed algorithms for automated SLA negotiation, monitoring, integration, and timely detection of SLA violations in cloud computing.

Waheed Aslam Ghumman
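
A formalized SLA of the kind this abstract calls for can be sketched very simply. The Python fragment below is an editorial illustration with invented fields and thresholds; it is not the SLA description language the thesis proposes, only a minimal example of a machine-readable SLA with an automated violation check.

# Minimal sketch of a machine-readable SLA and a violation check; the
# fields and thresholds are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class SLO:
    metric: str         # e.g. "availability" or "response_time_ms"
    comparator: str     # "min" or "max"
    threshold: float

@dataclass
class SLA:
    provider: str
    consumer: str
    objectives: list

def violations(sla, measurements):
    # Compare monitored values against each objective and report breaches.
    broken = []
    for slo in sla.objectives:
        value = measurements[slo.metric]
        if slo.comparator == "min" and value < slo.threshold:
            broken.append((slo.metric, value))
        if slo.comparator == "max" and value > slo.threshold:
            broken.append((slo.metric, value))
    return broken

sla = SLA("cloud-provider", "customer-42",
          [SLO("availability", "min", 99.9), SLO("response_time_ms", "max", 200)])
print(violations(sla, {"availability": 99.5, "response_time_ms": 180}))
# -> [("availability", 99.5)]
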
Towards a Dynamic Declarative Service Workflow Reference Model

Functional, non-functional, and just-in-time approaches to composing web services span the sub-disciplines of software engineering, data management, and artificial intelligence. Our research addresses the process that must occur once the composition has completed and stakeholders must investigate historical and online operations and data flow to reengineer the process, either off-line or in real time. This research introduces an effective reference model for assessing the message flow of long-running service workflows.

We examine dynamic Bayesian networks (DBNs), a data-driven modeling technique employed in machine learning, to create service workflow reference models. Unlike other reference models, this method is not limited by static assumptions; we achieve this by including trend and time-varying variables in the model. We demonstrate the method using a flight dataset collected from various airlines.

Damian Clarke
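
The abstract names dynamic Bayesian networks; the Python fragment below shows only their simplest special case, a first-order Markov model over message types, as an editorial illustration of how a learned reference model can score an observed workflow's message flow. The message names and data are invented; the thesis's actual DBNs also include trend and time-varying variables, which are omitted here.

# Editorial illustration: the simplest special case of a DBN-style
# reference model, a first-order Markov chain over workflow message types.

from collections import defaultdict

def learn_reference(flows):
    # Estimate transition probabilities from historical message flows.
    counts = defaultdict(lambda: defaultdict(int))
    for flow in flows:
        for a, b in zip(flow, flow[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def flow_likelihood(model, flow):
    # Multiply transition probabilities; unseen transitions get probability 0.
    p = 1.0
    for a, b in zip(flow, flow[1:]):
        p *= model.get(a, {}).get(b, 0.0)
    return p

history = [["request", "quote", "book", "confirm"],
           ["request", "quote", "cancel"]]
model = learn_reference(history)
print(flow_likelihood(model, ["request", "quote", "book", "confirm"]))  # 0.5
print(flow_likelihood(model, ["request", "book"]))                      # 0.0, deviates
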
A Context-Aware Access Control Framework for Software Services

Context-awareness is an important aspect of today's dynamic environments, and the different types of dynamic context information bring new challenges to access control systems. Therefore, the need for new access control frameworks that link their decision-making abilities with context-awareness capabilities has become increasingly significant. The main goal of this research is to develop a new access control framework capable of providing secure access to information resources or software services in a context-aware manner. Towards this goal, we propose a new semantic policy framework that extends the basic role-based access control (RBAC) approach with dynamic user-role and role-service associations. We also introduce a context model for representing the basic and high-level context information relevant to access control. In addition, a situation can be determined on the fly, combining the relevant states of the entities with the user's purpose or intention in accessing the services; for this, we propose a situation model for capturing purpose-oriented situations. Finally, we need a policy model that lets users access resources or services when certain dynamically changing conditions, expressed over context and situation information, are satisfied.

A. S. M. Kayes, Jun Han, Alan Colman
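
The flavor of such a context-aware extension of RBAC can be conveyed with a short sketch. The roles, context attributes, and rule in the Python fragment below are invented examples, not the framework's policy language; it only illustrates gating a role-based permission on dynamically evaluated context and situation conditions.

# Hedged sketch of a context-aware RBAC check; roles, context attributes,
# and the policy rule are invented examples.

def context_aware_check(user, service, context, policies):
    # Grant access only if some policy matches the user's role, the requested
    # service, and the dynamically evaluated context/situation condition.
    return any(p["role"] in user["roles"]
               and p["service"] == service
               and p["condition"](context)
               for p in policies)

policies = [{
    "role": "nurse",
    "service": "patient_record",
    # Situation: on duty, inside the ward, and acting for treatment purposes.
    "condition": lambda c: c["on_duty"] and c["location"] == "ward"
                           and c["purpose"] == "treatment",
}]

ctx = {"on_duty": True, "location": "ward", "purpose": "treatment"}
print(context_aware_check({"roles": ["nurse"]}, "patient_record", ctx, policies))
# -> True; the same request off-site or off-duty would be denied.
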
Description and Composition of Services towards the Web-Telecom Convergence

Current research trends within Next Generation Networks (NGNs) are investigating the benefits and feasibility of developing integrated services that converge the Telco and Web worlds. These trends respond to the need to integrate features offered by heterogeneous parties in order to provide innovative value-added services to end users on any device equipped with a web browser. This PhD work focuses on the study of service description models and mechanisms that facilitate and automate the interoperation and composition of heterogeneous services (Web and Telecom) within an NGN. The objectives of this research are: first, creating a model for abstract and concrete service interface specifications for each service type and interaction model; second, defining a service creation environment (SCE) that uses an orchestration language to compose heterogeneous services; and third, developing a convergent platform for the orchestration and composition of heterogeneous services from different domain environments.

Terence Ambra
Backmatter
Metadata
Title
Service-Oriented Computing – ICSOC 2013 Workshops
Editors
Alessio R. Lomuscio
Surya Nepal
Fabio Patrizi
Boualem Benatallah
Ivona Brandić
Copyright Year
2014
Publisher
Springer International Publishing
Electronic ISBN
978-3-319-06859-6
Print ISBN
978-3-319-06858-9
DOI
https://doi.org/10.1007/978-3-319-06859-6
