
About this Book

This book constitutes revised selected papers from the Australasian Symposium on Service Research and Innovation (ASSRI), held in Sydney, Australia. The 11 full papers presented from ASSRI 2017, which took place during October 19-20, 2017, were carefully reviewed and selected from 26 submissions. The volume also contains 3 papers from ASSRI 2015, which took place during November 2-3, 2015, and one invited paper on software development processes. The papers are organized in topical sections named: invited talk; modelling; design; quality; social; and application.



Invited Talk


Big Data Analytics Has Little to Do with Analytics

As big data analytics is adopted across a multitude of domains and applications, there is a need for new platforms and architectures that support analytic solution engineering as a lean and iterative process. In this paper, we discuss how different software development processes can be adapted to data analytic process engineering, incorporating service-oriented architecture, scientific workflows, model-driven engineering and semantic technology. Based on the experience obtained through the ADAGE framework [1] and the findings of a survey on how semantic modelling is used for data analytic solution engineering [6], we propose two research directions: a big data analytics development lifecycle, and data analytic knowledge management for lean and flexible data analytic platforms.
Fethi Rabhi, Madhushi Bandara, Anahita Namvar, Onur Demirors



Accommodating Information Priority Model in Cloudlet Environment

Massive amounts of data generated during disaster situations require timely collection and analysis so that the emergency team can mitigate the impact of the disaster under challenging socio-technical conditions. The absence of Internet connectivity in disaster areas, or its intermittent and bandwidth-constrained nature, may disrupt the data collection process and prevent some vital information from reaching the control room in time for an immediate response. Despite the scarce connectivity in the disaster area, there is a need to group information acquired during the response into a specific information model that accommodates different information priority levels, so that a proper mechanism exists for transmitting higher-priority information to the control room before other information. The purpose of this paper is to propose an information priority model and system architectures for data collection under challenging conditions in disaster areas.
Teuku Aulia Geumpana, Fethi Rabhi, Liming Zhu

Learning Planning Model for Semantic Process Compensation

Recent advancements in business process conformance analysis have shown that non-conformant states can be detected by discovering inconsistencies between process models and their historical execution logs. A key challenge in managing business processes is compensating for non-conformant states. This work concentrates on the hardest aspect of the challenge, where a process may be structurally conformant yet fail to achieve the effect required by its design. We propose a learning and planning model to address the compensation of semantically non-conformant states. Our work departs from the integration of two well-known AI paradigms, Machine Learning (ML) and Automated Planning (AP). The learning component is divided into two models that address two planning problems: a predictive model that gives the planner the ability to respond to violation points during the execution of the process model, and an instance-based learning model that provides the planner with a compensation based on the nearest class when no compensation perfectly fits the violation point.
Ahmad Alelaimat, Metta Santipuri, Yingzhi Gou, Aditya Ghose



Information Systems as a Service (ISaaS): Consumer Co-creation of Value

The exchange of goods and services is an essential and intertwined aspect of human activity. Consumer co-creation of value is an important premise of Service-Dominant Logic (SDL). An interesting contrast is that Socio-Technical Design (STD) is shifting its focus towards the incompletion of design, where the consumer complements the design in use. Furthermore, eminent scholars reflecting on the trajectory of the IS field propose Complex Adaptive Systems (CAS) and co-evolution as promising avenues to provide a conceptual basis for IS. Ultimately, service is what a consumer seeks from an IS. What distinguishes an IS from other technologies is its dual nature in fulfilling human needs: as a direct service provider or as a mediator for an end-technology. In summary, this research posits that great synergies can be accrued by considering IS as a Service (ISaaS). This discussion, while highlighting consumer co-creation of value, strengthens our theoretical understanding of both the IS and Service Science disciplines.
Saradhi Motamarri

Scalable Architecture for Personalized Healthcare Service Recommendation Using Big Data Lake

Personalized healthcare services utilize relational patient data and big data analytics to tailor medication recommendations. However, most healthcare data are in unstructured form, and pulling them into relational form takes considerable time and effort. This study proposes a novel data lake architecture to reduce data ingestion time and improve the precision of healthcare analytics. It also removes data silos and enhances analytics by allowing connectivity to third-party data providers (such as clinical lab results, chemists, insurance companies, etc.). The data lake architecture uses the Hadoop Distributed File System (HDFS) to provide storage for both structured and unstructured data. This study uses the K-means clustering algorithm to find clusters of patients with similar health conditions. Subsequently, it employs a support vector machine to find the most successful healthcare recommendations for each cluster. Our experimental results demonstrate the ability of the data lake to reduce the time taken to ingest data from various data vendors regardless of format. Moreover, the data lake has the potential to generate clusters of patients more precisely than existing approaches. The data lake provides a unified storage location for data in its native format, and can improve personalized medication recommendations by removing data silos.
Sarathkumar Rangarajan, Huai Liu, Hua Wang, Chuan-Long Wang
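The two-stage pipeline the abstract describes (K-means to group patients, then a support vector machine per cluster) could be sketched roughly as follows. Everything here is illustrative: the feature vectors and labels are synthetic, and the cluster count and model parameters are placeholder assumptions, not the paper's actual configuration.

```python
# Hypothetical sketch: cluster patients with K-means, then train one
# SVM per cluster to predict a recommendation outcome for that group.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))        # synthetic patient feature vectors
y = rng.integers(0, 2, size=200)     # synthetic outcome labels (0/1)

# Stage 1: group patients with similar (synthetic) health profiles.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Stage 2: fit a separate SVM on the patients in each cluster.
models = {}
for c in range(3):
    mask = kmeans.labels_ == c
    models[c] = SVC().fit(X[mask], y[mask])

# Recommending for a new patient: route to the nearest cluster's SVM.
x_new = rng.normal(size=(1, 5))
cluster = kmeans.predict(x_new)[0]
pred = models[cluster].predict(x_new)[0]
```

The per-cluster models capture the abstract's idea that recommendations successful for one patient group need not transfer to another.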

Declarative Approaches for Compliance by Design

The interest of scholars in devising automated methods to describe and analyse business processes has increased in recent decades due to the strong interest of organisations in achieving their business objectives while remaining compliant with the relevant normative system. Adhering to norms and policies not only helps to avoid severe sanctions but also results in greater consumer confidence and prestige for the organisation. Defining processes through the paradigm of declarative specifications is gaining momentum due to its intrinsic ability to capture business as well as normative specifications within the same framework. We describe some state-of-the-art techniques in the field of Business Process Compliance, focusing on the pros and cons of such techniques, and advance future lines of research.
Francesco Olivieri, Guido Governatori, Nick van Beest, Nina Ghanbari Ghooshchi



Auction-Based Models for Composite Service Selection: A Design Framework

Composite service selection refers to the process of selecting an optimal set of web services out of a pool of available candidates based on their quality of service and price. The goal is to logically compose these atomic web services and create value-added composite services which in turn can be used to develop service-based systems. Existing approaches to composite service selection are mostly based on optimization and negotiation techniques. In this paper, we study an emerging trend of composite service selection approaches based on auction models. These techniques benefit from the dynamic pricing of auction models compared to a fixed pricing approach and have the potential to incorporate the dependencies that exist between services constituting a composition. We propose a design framework that introduces two components which need to be addressed when developing an auction-based model for composite service selection: the elements in an auction-based model and a set of design decisions associated with those elements.
Mahboobeh Moghaddam, Joseph G. Davis

A Game-Theoretic Approach to Quality Improvement in Crowdsourcing Tasks

Together with the rise of social media and mobile computing, crowdsourcing is increasingly being relied on as a popular source of information. Crowdsourcing techniques can be employed to solve a wide range of problems, mainly Human Intelligence Tasks (HITs), which are easy for humans but difficult or even impossible for computers. However, the quality of crowdsourced information has always been an issue. Several methods have been proposed to increase the chance of receiving high-quality contributions from the crowd. In this paper, we propose a novel approach to improving the quality of contributions in crowdsourcing tasks. We employ game theory to motivate people towards providing information of higher quality. We also take into account players' quality factors, such as reputation score, expertise and the level of agreement between players of the game, to ensure that the problem owner receives an outcome of an acceptable quality level. Simulation results demonstrate the efficacy of our proposed approach in improving the quality of contributions as well as the chance of successful completion of the games, in comparison with state-of-the-art methods.
Mohammad Allahbakhsh, Haleh Amintoosi, Salil S. Kanhere

Investigating Performance Metrics for Evaluation of Content Delivery Networks

Content Delivery Networks (CDNs) are one of the most common services for overcoming performance problems caused by massive data requests in popular web applications. CDNs improve clients' perceived quality of service by placing replica servers around the globe and redirecting users to closer servers. While a CDN's ultimate goal is to improve the performance of data delivery, its own efficiency can also be worth investigating. Due to the complexity of these services, many factors can affect the performance of CDNs, so their efficiency can be measured using various metrics. In this paper we review some of the well-known performance metrics in the literature for evaluating CDNs. We also present additional measures, including Fairness and Content Travel. To attain an overall insight into a CDN, a Cost Function is also presented which incorporates most of the metrics in a single formula.
Seyed Jalal Jafari, HamidReza Naji, Masoumeh Jannatifar



Toward Unified Cloud Service Discovery for Enhanced Service Identification

Nowadays, cloud services are increasingly used by professionals. A wide variety of cloud services are introduced every day, each designed to serve a set of specific purposes. Currently, no cloud-service-specific search engine or comprehensive directory is available online. Therefore, cloud service customers mainly select cloud services based on word of mouth, which is of low accuracy and lacks expressiveness. In this paper, we propose a comprehensive cloud service search engine that enables users to perform personalized search based on criteria including their own intended use, cost and the features provided. Specifically, our cloud service search engine focuses on: (1) extracting and identifying cloud services automatically from the Web; (2) building a unified model to represent cloud service features; and (3) prototyping a search engine for online cloud services. To this end, we propose a novel Service Detection and Tracking (SDT) model for modeling cloud services. Based on the SDT model, a cloud service search engine (CSSE) is implemented to help users effectively discover cloud services, relevant service features and the service costs offered by cloud service providers.
Abdullah Alfazi, Quan Z. Sheng, Ali Babar, Wenjie Ruan, Yongrui Qin

Predicting Issues for Resolving in the Next Release

Deciding which features or requirements (commonly referred to as issues) to implement for the next release is an important and integral part of any type of incremental development. Existing approaches treat the next release problem as a single- or multi-objective optimization problem (over customer value and implementation cost) and thus adopt evolutionary search-based techniques to address it. In this paper, we propose a novel approach to the next release problem: mining historical releases to build a predictive model that recommends whether a requirement should be implemented in the next release. Results from our experiments on a dataset of 22,400 issues in five large open source projects demonstrate the effectiveness of our approach.
Shien Wee Ng, Hoa Khanh Dam, Morakot Choetkiertikul, Aditya Ghose
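The predictive framing the abstract describes, learning from historical releases whether an issue should go into the next one, amounts to binary classification over issue features. A minimal sketch follows; the feature set, the toy labelling rule, and the choice of a random forest are all assumptions for illustration, not the paper's actual model or data.

```python
# Hypothetical sketch: next-release selection as binary classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Synthetic issue features, e.g. [priority, num_comments, num_watchers, days_open].
X_hist = rng.normal(size=(500, 4))
# Toy labelling rule standing in for "was this issue resolved in its release?".
y_hist = (X_hist[:, 0] + 0.5 * X_hist[:, 1] > 0).astype(int)

# Train on historical releases.
clf = RandomForestClassifier(n_estimators=50, random_state=1).fit(X_hist, y_hist)

# Score the currently open issues; 1 = recommend for the next release.
X_next = rng.normal(size=(10, 4))
recommend = clf.predict(X_next)
```

Unlike the search-based formulations the abstract contrasts with, this framing produces a per-issue recommendation rather than an optimized issue subset.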

Trust and Privacy Challenges in Social Participatory Networks

Trust and privacy in social participatory sensing systems have always been challenging issues. The two are interconnected and interdependent concepts, and solutions that take both parameters into account simultaneously will result in better evaluation of people in the context of social participatory networks. In this paper, we propose a trust- and privacy-aware framework for recruiting workers in social participatory networks which controls and adjusts the privacy and trustworthiness of workers accordingly. The proposed method employs the reputation scores gained by a worker to adjust the privacy settings from which the worker can benefit. This interdependency helps requesters find more suitable workers. The simulation results show the promising behavior of the proposed framework.
Haleh Amintoosi, Mohammad Allahbakhsh, Salil S. Kanhere, Aleksandar Ignjatovic



Relating SOA Governance to IT Governance and EA Governance

Service-Oriented Architecture (SOA) governance is considered a key success factor when using a service-oriented approach for aligning IT to business. However, some organizations misinterpret the role of SOA inside the organization, and there is scarce empirical evidence about how SOA governance is applied in practice. This paper studies the position of SOA governance in relation to Information Technology (IT) governance and Enterprise Architecture (EA) governance inside the organization. Semi-structured interviews were conducted with experts in the field. The findings illustrate that although organizations initially considered SOA governance a separate entity, they have recently started seeing the relationships between SOA governance, IT governance, EA governance and corporate governance. Different views and opinions are presented; nevertheless, they all lead to the conclusion that SOA governance needs to be considered at a higher level inside the organization, and that organizations should not treat SOA governance as an entity separate from IT governance and EA governance.
George Joukhadar, Fethi Rabhi

Semantic Textual Similarity as a Service

Ensembling well-performing models has been shown to outperform individual models on the semantic textual similarity task; however, employing existing models remains a challenge. In this paper, we tackle this issue by providing a service-oriented system that indexes a text similarity model using RESTful services. We also propose a baseline approach, based on an effective penalty-award weighting schema and word-level edit distance, in which pairs of sentences are divided into two main categories based on the number of substitutions, insertions, and deletions required to convert the first sentence into the second. We argue that, when the word-level edit distance is very small, it is wiser to measure dissimilarity than similarity. Using knowledge bases along with common natural language processing tools, the proposed method tries to enhance the accuracy of measuring the similarity between two sentences. We compared the proposed method with existing approaches and found that it produces promising results. Our source code is freely available on GitLab.
Roghayeh Fakouri-Kapourchali, Mohammad-Ali Yaghoub-Zadeh-Fard, Mehdi Khalili
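The word-level edit distance this baseline builds on can be illustrated with a small sketch: standard Levenshtein distance computed over whitespace-separated tokens instead of characters, counting the substitutions, insertions, and deletions needed to turn one sentence into the other. The function name and implementation are ours, not the paper's.

```python
# Hypothetical sketch of word-level edit distance (token Levenshtein).
def word_edit_distance(s1: str, s2: str) -> int:
    """Minimum number of word substitutions, insertions, and deletions
    needed to convert s1 into s2."""
    a, b = s1.split(), s2.split()
    prev = list(range(len(b) + 1))       # row for the empty prefix of a
    for i, wa in enumerate(a, 1):
        curr = [i]                       # deleting i words from a
        for j, wb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # delete wa
                            curr[j - 1] + 1,             # insert wb
                            prev[j - 1] + (wa != wb)))   # substitute (free if equal)
        prev = curr
    return prev[len(b)]

# A distance of 1 here ("down" inserted) is the kind of very small
# word-level edit distance the abstract singles out.
d = word_edit_distance("the cat sat", "the cat sat down")
```

When this distance is very small relative to sentence length, almost all words already match, which is why the abstract argues it is then more informative to score the few differences (dissimilarity) than the bulk similarity.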

Logistics and Supply Chain Management Investigation: A Case Study

This paper investigates several aspects of logistics and supply chain management, such as the advantages of a full logistics and supply chain management model. It also details a series of challenges in logistics and supply chain management, both in general and in the computer and video game industry in particular, and covers some popular models and common trends in the field. In particular, it analyses the logistics and supply chain model of Ubisoft Australia, a computer and video game publisher. Through interviews and observations, together with company internal records, it identifies potential problems at Ubisoft Australia concerning the software system, communication and information flow in inbound logistics, and non-conforming returns. Finally, several recommendations are made for future improvements.
Ngoc Hong Tam Dao, Jay Daniel, Stephen Hutchinson, Mohsen Naderpour

