
2019 | Book

Services – SERVICES 2019

15th World Congress, Held as Part of the Services Conference Federation, SCF 2019, San Diego, CA, USA, June 25–30, 2019, Proceedings


About this book

This book constitutes the refereed proceedings of the 15th World Congress on Services, SERVICES 2019, held as part of the Services Conference Federation, SCF 2019, in San Diego, USA, in June 2019.

The 11 full papers and 2 short papers presented were carefully reviewed and selected from 14 submissions. The papers cover topics in software engineering foundations and applications, with a focus on novel approaches to engineering requirements, design and architectures, testing, maintenance and evolution, model-driven development, software processes, metrics, quality assurance, new software economics models, and search-based software engineering. These approaches benefit day-to-day services sectors, are derived from practical experience, and attend to scale, pragmatism, transparency, compliance, and dependability.

Table of Contents

Frontmatter
SMT-Based Modeling and Verification of Cloud Applications
Abstract
Cloud applications have evolved rapidly and gained more and more attention in the past decade. Formal modeling and verification of cloud services are needed to guarantee the correctness and reliability of complex cloud applications. In this paper, we present a formal framework for modeling and verifying cloud applications based on the SMT solver Z3. Simple cloud services are specified as the basis for modeling composed and more complex cloud services. Three classes, Service, Composition and Cloud, representing simple cloud services, composition patterns and composed cloud services, are defined, which facilitates the further development of attributes and methods. We also propose an approach to check refinement and equivalence relations between cloud services, in which counterexamples are automatically generated when the relation does not hold.
Xiyue Zhang, Meng Sun
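A refinement check of this kind can be phrased directly as a satisfiability query. Below is a minimal sketch in Z3's Python API; the spec and impl predicates over one integer input and output are illustrative assumptions, not the paper's Service/Composition/Cloud classes.

from z3 import Int, Solver, And, Not, sat

x, y = Int('x'), Int('y')

# Specification: the service may return any y with y >= x (assumed toy contract).
spec = y >= x
# Implementation: a concrete composition that returns y == x + 1.
impl = y == x + 1

# impl refines spec iff no input/output pair satisfies impl but violates spec.
s = Solver()
s.add(And(impl, Not(spec)))
if s.check() == sat:
    print("refinement fails, counterexample:", s.model())  # auto-generated witness
else:
    print("impl refines spec")

Equivalence can be checked the same way by querying both directions of the implication; any satisfying model is exactly the kind of counterexample the paper reports.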
On Efficiency of Scrambled Image Forensics Service Using Support Vector Machine
Abstract
Images can be very good evidence during the investigation of a crime scene. At the same time, they can contain very personal information that should not be exposed without the consent of the people involved. In this paper, we present a practical approach to protecting the privacy of images under investigation using Arnold's Transform (AT) scrambling and a Support Vector Machine. We also propose a new approach to the overall forensics service provided by the designated agencies, supported by an implementation of our method. We enhance the security of AT and provide a privacy-preserving mechanism to ensure the protection of privacy. The literature defines only policies for protecting privacy and lacks a solid technical approach, a gap we address with a proof-of-concept implementation. In short, we provide a full image forensics framework for illegal image detection while preserving privacy.
Sahibzada Muhammad Shuja, Raja Farhat Makhdoom Khan, Munam Ali Shah, Hasan Ali Khattak, Assad Abbass, Samee U. Khan
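For concreteness, here is a minimal sketch of one iteration of Arnold's Transform on a square grayscale image; the NumPy implementation and the 4x4 toy image are illustrative assumptions, and the paper's security-hardened AT and the SVM classification stage are not reproduced.

import numpy as np

def arnold_transform(img: np.ndarray) -> np.ndarray:
    """Scramble an N x N image with the bijective map (x, y) -> (x+y, x+2y) mod N."""
    n = img.shape[0]
    out = np.empty_like(img)
    for x in range(n):
        for y in range(n):
            out[(x + y) % n, (x + 2 * y) % n] = img[x, y]
    return out

img = np.arange(16, dtype=np.uint8).reshape(4, 4)  # toy 4x4 "image"
scrambled = arnold_transform(img)

Because the map's matrix [[1, 1], [1, 2]] has determinant 1, it is invertible modulo N, so the transform is reversible and, applied iteratively, periodic; a classifier such as an SVM can then be trained on the scrambled images without exposing their content.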
Investigating Personally Identifiable Information Posted on Twitter Before and After Disasters
Abstract
Through social media, users may consciously reveal information about their personality; however, they could also unintentionally disclose private information related to where they live or work, car license plates, signatures, or even their identification documents. Any disclosed information can endanger an individual's privacy, possibly resulting in burglaries or identity theft. To the best of our knowledge, this paper is the first to claim and demonstrate that people may reveal information when they are vulnerable and actively seeking help; evidently, in the event of a natural disaster, people change their behavior and become more inclined to share their personal information. To examine this phenomenon, we investigate two hurricane events (Harvey and Maria) and one earthquake (Mexico City) using datasets obtained from Twitter. Our findings show a significant change in people's behavioral patterns in disaster areas with respect to tweeting images that contain Personally Identifiable Information (PII) before and after a disaster event.
Pezhman Sheinidashtegol, Aibek Musaev, Travis Atkison
Current Trends in Collaborative Filtering Recommendation Systems
Abstract
Many different approaches for designing recommendation systems exist, including collaborative filtering, content-based, and hybrid approaches. Following an overview of collaborative filtering design methodologies, this paper reviews 71 journal articles and conference papers to provide a detailed literature review of model-based collaborative filtering. The articles selected for this review were published between 2008 and 2018. They are classified by database, application field, methodology, and publication year. Papers using clustering, Bayesian, association rule, neural network, regression, and ensemble methodologies are surveyed. Application areas include books, music, movies, social networks, and business. The survey also analyzes the types of data used in each application field. This literature review identifies trends in model-based collaborative filtering and, through empirical results, gives insight into future research trajectories in this field.
Sana Abida Amin, James Philips, Nasseh Tabrizi
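As a concrete instance of the model-based methods surveyed, the following sketch trains a small matrix factorization recommender by stochastic gradient descent; the toy rating matrix and hyperparameters are assumptions for illustration, not drawn from any reviewed paper.

import numpy as np

R = np.array([[5, 3, 0], [4, 0, 1], [0, 2, 4]], float)  # 0 = unrated
k, lr, reg = 2, 0.01, 0.02                              # assumed hyperparameters
rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(R.shape[0], k))  # latent user factors
Q = rng.normal(scale=0.1, size=(R.shape[1], k))  # latent item factors

for _ in range(2000):
    for u, i in zip(*R.nonzero()):               # iterate over observed ratings
        err = R[u, i] - P[u] @ Q[i]
        pu = P[u].copy()
        P[u] += lr * (err * Q[i] - reg * P[u])   # regularized SGD updates
        Q[i] += lr * (err * pu - reg * Q[i])

print("predicted rating for user 0, item 2:", P[0] @ Q[2])

Unobserved cells of R are filled in by the learned factors, which is what lets the model recommend items a user has never rated.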
Big Data Quality: A Data Quality Profiling Model
Abstract
Big Data is becoming a standard data model and is gaining wide adoption in the digital universe. Estimating the quality of Big Data is recognized as essential for data management and data governance. To ensure fast and efficient data quality assessment, represented by its dimensions, we need to extend the data profiling model to also incorporate quality profiling, which encompasses value-added quality processes that go beyond data and its corresponding metadata. In this paper, we propose a Big Data Quality Profiling Model (BDQPM) that comprises several modules: sampling, profiling, exploratory quality profiling, a quality profile repository (QPREPO), and the data quality profile (DQP). The QPREPO plays an important role in managing quality-related elements such as data quality dimensions and their related metrics, predefined quality action scenarios, pre-processing activities (PPA) and their related functions (PPAF), and the data quality profile. Our exploratory quality profiling method discovers a set of PPAF from systematic predefined quality action scenarios to reveal the quality trends of any data set and to show the cause and effect of such a process on the data. This quality overview serves as a preliminary quality profile of the data. We conducted a series of experiments to test different features of the BDQPM, including sampling and profiling, quality evaluation, and exploratory quality profiling for Big Data quality enhancement. The results show that quality profiling tracks quality at an early stage of the Big Data life cycle, and that the exploratory quality profiling methodology yields insights for quality improvement and enforcement.
Ikbal Taleb, Mohamed Adel Serhani, Rachida Dssouli
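To make the sampling-then-profiling idea concrete, here is a minimal sketch that computes two common quality dimensions (completeness and uniqueness) over a sample; the column names and metric choices are illustrative assumptions and stand in for the far richer BDQPM modules and the QPREPO.

import pandas as pd

df = pd.DataFrame({"id": [1, 2, 2, None],
                   "email": ["a@x.io", None, "b@x", "c@x.io"]})

# Stand-in for the sampling module: profile a sample rather than the full set.
sample = df.sample(frac=1.0, random_state=0)

profile = {
    col: {
        "completeness": sample[col].notna().mean(),        # share of non-null values
        "uniqueness": sample[col].nunique() / len(sample), # share of distinct values
    }
    for col in sample.columns
}
print(profile)

In the model, scores like these would be stored in the quality profile repository and matched against predefined quality action scenarios to select pre-processing activities.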
On Development of Data Science and Machine Learning Applications in Databricks
Abstract
Databricks is a unified analytics engine that allows rapid development of data science applications using machine learning techniques such as classification, linear and nonlinear regression, and clustering. The myriad sophisticated computational options, however, can overwhelm designers, as it is not always clear which choices will produce the best predictive model for a specific data set. Further, the sheer dimensionality of big data sets makes it challenging for data scientists to gain a deep understanding of the results obtained by a model.
This paper provides general guidelines for utilizing a variety of machine learning algorithms on the cloud computing platform Databricks. Visualization is an important means for users to understand the significance of the underlying data, so it is also demonstrated how graphical tools such as Tableau can be used to efficiently examine the results of classification or clustering. Dimensionality reduction techniques such as Principal Component Analysis (PCA), which reduce the number of features in a learning experiment, are also discussed.
To demonstrate the utility of Databricks tools, two big data sets are used for clustering and classification. A variety of machine learning algorithms are applied to both data sets, and it is shown how to obtain the most accurate learning models by employing appropriate evaluation methods.
Wenhao Ruan, Yifan Chen, Babak Forouraghi
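A minimal PySpark sketch of the kind of pipeline discussed, PCA for dimensionality reduction followed by k-means clustering, is shown below; the toy data and parameter values are assumptions, and the paper's actual data sets, Databricks notebooks and Tableau visualizations are not reproduced.

from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler, PCA
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks
df = spark.createDataFrame(
    [(1.0, 0.1, 3.0), (1.1, 0.0, 2.9), (9.0, 8.8, 0.2), (9.1, 9.0, 0.1)],
    ["f1", "f2", "f3"],
)

pipeline = Pipeline(stages=[
    VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features"),
    PCA(k=2, inputCol="features", outputCol="pca"),  # reduce 3 features to 2 components
    KMeans(k=2, featuresCol="pca", seed=1),          # cluster in the reduced space
])
model = pipeline.fit(df)
model.transform(df).select("pca", "prediction").show()

The resulting two-dimensional "pca" column is also what one would export to a tool such as Tableau to inspect the cluster structure visually.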
Concept Drift Adaptive Physical Event Detection for Social Media Streams
Abstract
Event detection has long been the domain of physical sensors operating under a static-dataset assumption. The prevalence of social media and web access has led to the emergence of social, or human, sensors who report on events globally. This warrants the development of event detectors that can take advantage of the truly dense, high-spatial- and temporal-resolution data provided by more than 3 billion social users. The phenomenon of concept drift, which causes the terms and signals associated with a topic to change over time, renders static machine learning ineffective. To this end, we present an application for physical event detection on social sensors that improves traditional physical event detection with concept drift adaptation. Our approach continuously and automatically updates its machine learning classifiers, without the need for human intervention. It integrates data from heterogeneous sources and is designed to handle weak-signal events (landslides, wildfires) with around ten posts per event, in addition to large-signal events (hurricanes, earthquakes) with hundreds of thousands of posts per event. We demonstrate a landslide detector on our application that detects almost 350% more landslides than static approaches. The application also performs well: using classifiers trained in 2014, it achieves an event detection accuracy of 0.988, compared to 0.762 for static approaches.
Abhijit Suprem, Aibek Musaev, Calton Pu
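A minimal sketch of the continuous-update idea using incremental learning follows; the hashing features, classifier choice and labeling source are illustrative assumptions, not the authors' system.

from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

# Hashing keeps the feature space fixed even as drifting vocabulary appears.
vec = HashingVectorizer(n_features=2**16)
clf = SGDClassifier()
classes = [0, 1]  # 0 = irrelevant chatter, 1 = physical event (assumed labels)

def on_new_labeled_batch(texts, labels):
    """Incrementally update the classifier as newly labeled posts arrive."""
    clf.partial_fit(vec.transform(texts), labels, classes=classes)

on_new_labeled_batch(["mudslide blocks highway", "great concert tonight"], [1, 0])
print(clf.predict(vec.transform(["landslide near the village"])))

Because partial_fit folds each new batch into the existing model, the detector tracks drifting terminology without ever being retrained from scratch or requiring manual intervention.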
ClientNet Cluster an Alternative of Transferring Big Data Files by Use of Mobile Code
Abstract
Big Data has become a nontrivial problem in business as well as in scientific applications, and it grows more complex with the growth of data and the scaling of data entry points. These points refer to the remote and local sources where huge volumes of data are generated within tiny slots of time, and may also refer to end-user devices, including computers, sensors and wireless gadgets. Scientific applications such as geophysics or real-time weather forecasting require heavy data and complex mathematical computations; they generate large chunks of data that must be transferred over conventional computer networks. The problem with Big Data applications emerges when heavy amounts of data (files or objects) are transferred or downloaded from remote locations. Results drawn in real time from large data files become obsolete because new data keeps being appended to the files while downloads by remote machines proceed more slowly than the files grow. This paper addresses this problem and offers a possible solution through the ClientNet Cluster, a specialized cluster of remote computers, as an alternative for real-time data analytics under hard network constraints. The idea is to move code for analytic processing to the remotely available big files and to return the results to distributed remote locations. The Big Data file thus never needs to move around the network for uploading or downloading whenever processing is required from distributed locations.
Waseem Akhtar Mufti
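The moving-code idea can be illustrated with a small local simulation: the client ships analysis source code to the node holding the big file, and only the small result crosses the network. The file path and the exec-based dispatch are assumptions for illustration; the actual ClientNet cluster protocol and node roles are not modeled here.

BIG_FILE = "/tmp/readings.txt"  # assumed path; in ClientNet this lives on the data node

def data_node_execute(code: str):
    """Run client-supplied analysis code next to the data; return only the result."""
    scope = {}
    exec(code, scope)               # stand-in for receiving and loading mobile code
    return scope["analyze"](BIG_FILE)

client_code = """
def analyze(path):
    total = n = 0
    with open(path) as f:
        for line in f:
            total += float(line); n += 1
    return total / n  # only this small value crosses the network, not the file
"""

with open(BIG_FILE, "w") as f:       # simulate the ever-growing remote file
    f.write("1.0\n2.0\n3.0\n")
print("mean =", data_node_execute(client_code))

Because the analysis always runs against the current state of the file, the staleness caused by slow downloads of a still-growing file simply does not arise.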
Utilizing Blockchain Technology to Enhance Halal Integrity: The Perspectives of Halal Certification Bodies
Abstract
Brand owners face multiple challenges in establishing end-to-end halal supply chains and in halal issue management. To this end, leading Halal Certification Bodies (HCBs) set up a discussion group to achieve consensus on a way forward and to examine the potential role of a halal blockchain in resolving these issues, as well as the blockchain's key parameters and its segregation and communication requirements. Halal issues can be divided broadly into three areas: contamination, non-compliance and perception. Only cases involving contamination and non-compliance need to be reported to the HCB. Consensus was achieved on segregating halal supply chains into designated halal transport, storage and halal-compliant terminals for Muslim-majority countries, whereas in non-Muslim-majority countries greater leniency is possible. Effective segregation is only possible with effective communication, whereby the term 'halal supply chain' is encoded in freight documents, on freight labels and within the ICT system.
Marco Tieman, Mohd Ridzuan Darun, Yudi Fernando, Abu Bakar Ngah
Maintaining Fog Trust Through Continuous Assessment
Abstract
Cloud computing continues to provide a flexible and efficient way of delivering services, meeting user requirements and the challenges of the time. Software, infrastructure, and platforms are provided as services in cloud and fog computing in a cost-effective manner. The migration towards fog computing instigates new research questions in security and privacy. Trust depends on the measures taken for the availability, security, and privacy of users' services and data in the fog, as well as on the sharing of these statistics with stakeholders. Any lapse in security and privacy measures shatters users' trust. In order to design a trustworthy security and privacy system, we conducted a thorough survey of existing techniques. A generic model for trustworthiness is proposed in this paper. This model yields a comprehensive component-based architecture of a trust management system to help fog service providers preserve users' trust in a fog computing environment.
Hasan Ali Khattak, Muhammad Imran, Assad Abbas, Samee U. Khan
Study on the Coordination Degree Between FDI and Modern Service Industry Development in Shenzhen
Abstract
Foreign direct investment (FDI) is an important driving force for economic development. During the rapid development and growth of the Modern Service Industry (MSI), it was unclear whether FDI had any impact on it. By selecting indicators of scale, structure and performance, this paper constructs an "FDI-MSI" coordination degree evaluation model. Based on FDI and MSI data collected from Shenzhen, the coordination degree between the two was estimated for 2010–2018. The results show that FDI and the modern service industry are generally coordinated in Shenzhen, but the overall level is not high; an unreasonable structure of foreign capital and the service industry's currently inefficient use of foreign capital are the main reasons. In utilizing foreign capital in the future, Shenzhen should strengthen the guidance of foreign investment towards modern service industry development, create a favorable environment for foreign capital to flow into the modern service industry, and promote the coordinated development of foreign capital utilization and the modern service industry.
Xiangbo Zhu, Guoliang Ou, Gang Wu
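The abstract does not reproduce the model's equations. A common form of the coupling coordination degree model in this literature, which the paper's "FDI-MSI" model may resemble (an assumption, not necessarily the authors' exact formulation), is:

% u_1: composite FDI score; u_2: composite MSI score; weights alpha + beta = 1
\[
C = \frac{2\sqrt{u_1 u_2}}{u_1 + u_2}, \qquad
T = \alpha u_1 + \beta u_2, \qquad
D = \sqrt{C \cdot T}
\]

Here C measures how closely the two subsystems interact, T is their weighted composite development level, and D in [0, 1] grades overall coordination; a finding of "generally coordinated but not high" corresponds to mid-range values of D.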
Correction to: Study on the Coordination Degree Between FDI and Modern Service Industry Development in Shenzhen
Xiangbo Zhu, Guoliang Ou, Gang Wu
Backmatter
Metadata
Title
Services – SERVICES 2019
Edited by
Yunni Xia
Liang-Jie Zhang
Copyright year
2019
Electronic ISBN
978-3-030-23381-5
Print ISBN
978-3-030-23380-8
DOI
https://doi.org/10.1007/978-3-030-23381-5
