
2017 | Book

Trust Management XI

11th IFIP WG 11.11 International Conference, IFIPTM 2017, Gothenburg, Sweden, June 12-16, 2017, Proceedings


About this book

This book constitutes the refereed proceedings of the 11th IFIP WG 11.11 International Conference on Trust Management, IFIPTM 2017, held in Gothenburg, Sweden, in June 2017.
The 8 revised full papers and 6 short papers presented were carefully reviewed and selected from 29 submissions. The papers are organized in the following topical sections: information sharing and personal data; novel sources of trust and trust information; applications of trust; trust metrics; and reputation systems. Also included is the 2017 William Winsborough commemorative address and three short IFIPTM 2017 graduate symposium presentations.

Table of Contents

Frontmatter

Information Sharing and Personal Data

Frontmatter
Partial Commitment – “Try Before You Buy” and “Buyer’s Remorse” for Personal Data in Big Data & Machine Learning
Abstract
The concept of partial commitment is discussed in the context of personal privacy management in data science. Data from uncommitted, promiscuous or partially committed users may either have a negative impact on model or data quality, or it may impose higher privacy compliance costs on data service providers. Many Big Data (BD) and Machine Learning (ML) scenarios involve the collection and processing of large volumes of person-related data. Data is gathered about many individuals as well as about many parameters in individuals. ML and BD both spend considerable resources on model building, learning, and data handling. It is therefore important to any BD/ML system that the input data trained and processed is of high quality, represents the use case, and is legally processed in the system. Additional cost is imposed by data protection regulation with transparency, revocation and correction rights for data subjects. Data subjects may, for several reasons, only partially accept a privacy policy, and choose to opt out, request data deletion or revoke their consent for data processing. This article discusses the concept of partial commitment and its possible applications from both the data subject and the data controller perspective in Big Data and Machine Learning.
Lothar Fritsch
VIGraph – A Framework for Verifiable Information
Abstract
In order to avail of some service, a user may need to share her personal chronological information with a service provider, e.g., identity, financial records, health information and so on. In the context of financial organisations, a process often referred to as know your customer (KYC) is carried out to collect information about customers. Sharing this information with multiple service providers duplicates the data, making it difficult to keep up-to-date as well as to verify. Furthermore, the user has limited to no control over the mostly sensitive data that is released to such organisations. In this preliminary work, we propose an efficient framework – Verifiable Information Graph or VIGraph – based on generalised hash trees, which can be used for verification of data with selective release of sensitive information. Throughout the paper, we use personal profile information as the running example to which our proposed framework is applied.
Anirban Basu, Mohammad Shahriar Rahman, Rui Xu, Kazuhide Fukushima, Shinsaku Kiyomoto
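
The generalised hash-tree construction behind VIGraph is not detailed in the abstract. As a rough illustration of the general idea of verification with selective disclosure via hash commitments, the following Python sketch uses a plain Merkle-style tree rather than the authors' construction; the profile fields and salts are hypothetical.

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    # Hypothetical profile fields; a per-field salt hides undisclosed values.
    fields = [b"name=Alice", b"dob=1990-01-01", b"salary=50000", b"country=NO"]
    salts = [b"s0", b"s1", b"s2", b"s3"]  # in practice, random salts

    leaves = [h(salt + field) for salt, field in zip(salts, fields)]

    def merkle_root(nodes):
        # Assumes a power-of-two number of leaves for brevity.
        while len(nodes) > 1:
            nodes = [h(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
        return nodes[0]

    root = merkle_root(leaves)  # published commitment to the whole profile

    # Selective disclosure of field 0: reveal the field, its salt and sibling hashes.
    proof = [leaves[1], h(leaves[2] + leaves[3])]

    # A verifier recomputes the root from the disclosed data alone.
    node = h(salts[0] + fields[0])
    for sibling in proof:
        node = h(node + sibling)
    assert node == root
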
A Flexible Privacy-Preserving Framework for Singular Value Decomposition Under Internet of Things Environment
Abstract
The singular value decomposition (SVD) is a widely used matrix factorization tool which underlies many useful applications, e.g. recommendation systems, anomaly detection and data compression. In the emerging Internet of Things (IoT) environment, there is an increasing demand for data analysis. Moreover, due to the large scope of IoT, most of the data analysis work should be handled by fog computing. However, fog computing devices may not be trustworthy, while data privacy is a significant concern for users. Thus, data privacy should be preserved when performing SVD for data analysis. In this paper, we propose a privacy-preserving fog computing framework for SVD computation. The security and performance analysis shows the practicability of the proposed framework. A recommendation system application is introduced to show the functionality of the proposed framework.
Shuo Chen, Rongxing Lu, Jie Zhang
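
The abstract does not reveal the framework's cryptographic details. Purely as a generic illustration of outsourcing SVD without exposing the raw matrix, the sketch below masks the data with a random orthogonal rotation before handing it to an untrusted (e.g., fog) node; the masking scheme, matrix sizes and variable names are assumptions, not the authors' protocol.

    import numpy as np

    rng = np.random.default_rng(0)

    # Data owner's private matrix (e.g., user-item ratings); values are illustrative.
    X = rng.random((6, 4))

    # Owner-side masking: a random orthogonal matrix Q hides the rows of X
    # while preserving singular values and right singular vectors.
    Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
    X_masked = Q @ X

    # An untrusted fog node sees and factorizes only the masked matrix.
    U_m, S, Vt = np.linalg.svd(X_masked, full_matrices=False)

    # The owner unmasks the left singular vectors with Q^T.
    U = Q.T @ U_m

    # The recovered factors reconstruct the original private matrix.
    assert np.allclose(U @ np.diag(S) @ Vt, X)
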

Novel Sources of Trust and Trust Information

Frontmatter
The Game of Trust: Using Behavioral Experiment as a Tool to Assess and Collect Trust-Related Data
Abstract
Trust is one of the most important dimensions in developing and maintaining business relationships. However, due to the difficulty of collecting trust-related data from industry, given its concerns surrounding privacy and trade secret protection, trust remains very problematic to investigate. Motivated by the growing interest in behavioral research in the field of operations and supply chain management, and by the lack of supply chain trust-related datasets, the authors of this paper proposed and designed a novel trust behavioral experiment. Utilizing concepts of gamification and serious games, the experiment is capable of gathering information regarding individuals’ behavior during procurement, information exchange, and ordering decisions considering trust relations in the context of supply chains.
Diego de Siqueira Braga, Marco Niemann, Bernd Hellingrath, Fernando Buarque de Lima Neto
Social Network Analysis for Trust Prediction
Abstract
From car rental to knowledge sharing, the connection between online and offline services is increasingly tightening. As a consequence, online trust management becomes crucial for the success of services run in the physical world. In this paper, we outline a framework for identifying social web users more inclined to trust others by looking at their profiles. We use user centrality measures as a proxy of trust, and we evaluate this framework on data from Konnektid, a knowledge-sharing social Web platform. We introduce five metrics for measuring trust. The framework achieved an accuracy between 43% and 99%.
Davide Ceolin, Simone Potenza
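
As a hedged sketch of the general approach described above (centrality measures computed from a social graph and used as a proxy of trust), the snippet below ranks users of a toy directed graph by standard centrality scores with networkx; the graph, threshold and choice of metrics are illustrative assumptions, not the paper's five metrics.

    import networkx as nx

    # Illustrative interaction graph; an edge u -> v means u interacts with v.
    G = nx.DiGraph([
        ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
        ("carol", "alice"), ("dave", "alice"), ("dave", "bob"),
    ])

    # Standard centrality measures sometimes used as proxies for trust propensity.
    scores = {
        "in_degree": nx.in_degree_centrality(G),
        "pagerank": nx.pagerank(G),
        "betweenness": nx.betweenness_centrality(G),
    }

    # Flag users whose PageRank exceeds an (assumed) threshold as more trusted.
    THRESHOLD = 0.25
    trusted = [u for u, s in scores["pagerank"].items() if s >= THRESHOLD]
    print(trusted)
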
Investigating Security Capabilities in Service Level Agreements as Trust-Enhancing Instruments
Abstract
Many government agencies (GAs) increasingly rely on external computing, communications and storage services supplied by service providers (SPs) to process, store or transmit sensitive data to increase scalability and decrease the costs of maintaining services. The relationships with external SPs are usually established through service level agreements (SLAs) as trust-enhancing instruments. However, there is a concern that existing SLAs are mainly focused on the system availability and performance aspects, but overlook security in SLAs. In this paper, we investigated ‘real world’ SLAs in terms of security guarantees between GAs and external SPs, using Indonesia as a case study. This paper develops a grounded adaptive Delphi method to clarify the current and potential attributes of security-related SLAs that are common among external service offerings. To this end, we conducted a longitudinal study of the Indonesian government auctions of 59 e-procurement services from 2010–2016 to find ‘auction winners’. Further, we contacted five selected major SPs (n = 15 participants) to participate in a three-round Delphi study. Using a grounded theory analysis, we examined the Delphi study data to categorise and generalise the extracted statements in the process of developing propositions. We observed that most of the GAs placed significant importance on service availability, but security capabilities of the SPs were not explicitly expressed in SLAs. Additionally, the GAs often use the provision of service availability to demand additional security capabilities supplied by the SPs. We also observed that most of the SPs found difficulties in addressing data confidentiality and integrity in SLAs. Overall, our findings call for a proposition-driven analysis of the Delphi study data to establish the foundation for incorporating security capabilities into security-related SLAs.
Yudhistira Nugraha, Andrew Martin

Applications of Trust

Frontmatter
Managing Software Uninstall with Negative Trust
Abstract
A problematic aspect of software management systems in view of integrity preservation is the handling, approval, tracking and eventual execution of change requests. In the context of the relation between clients and repositories, trust can help identify all packages required by the intended installation. Negative trust, in turn, can be used to approach the complementary problem induced by removing packages. In this paper we offer a logic for negative trust which allows one to identify admissible and no-longer-admissible software packages in the current installation profile in view of uninstall processes. We provide a simple working example, and the system is formally verified using the Coq theorem prover.
Giuseppe Primiero, Jaap Boender
Towards Trust-Aware Collaborative Intrusion Detection: Challenges and Solutions
Abstract
Collaborative Intrusion Detection Systems (CIDSs) are an emerging field in cyber-security. In such an approach, multiple sensors collaborate by exchanging alert data with the goal of generating a complete picture of the monitored network. This can provide significant improvements in intrusion detection and especially in the identification of sophisticated attacks. However, the challenge of deciding to which extent a sensor can trust others has not yet been holistically addressed in related work. In this paper, we first propose a set of requirements for reliable trust management in CIDSs. Afterwards, we carefully investigate the most dominant CIDS trust schemes. The main contribution of the paper is mapping the results of the analysis to the aforementioned requirements, along with a comparison of the state of the art. Furthermore, this paper identifies and discusses the research gaps and challenges with regard to trust and CIDSs.
Emmanouil Vasilomanolakis, Sheikh Mahbub Habib, Pavlos Milaszewicz, Rabee Sohail Malik, Max Mühlhäuser
Self-trust, Self-efficacy and Digital Learning
Abstract
Self-trust is overlooked in trust research. However, self-trust is crucial to a learner’s success in a digital learning space. In this paper, we review self-trust and the notion of self-efficacy used by education researchers. We claim self-efficacy is self-trust. We then explore what self-trust and its expression mean to one group of learners and use this data to provide design suggestions for digital learning spaces that improve students’ self-trust.
Natasha Dwyer, Stephen Marsh

Trust Metrics

Frontmatter
Advanced Flow Models for Computing the Reputation of Internet Domains
Abstract
The Domain Name System (DNS) is an essential component of the Internet infrastructure that translates domain names into IP addresses. Recent incidents demonstrate the enormous damage caused by malicious activities utilizing DNS, such as bots that use DNS to locate their command & control servers. We believe that a domain that is related to malicious domains is more likely to be malicious as well, and therefore detecting malicious domains using the DNS network topology is a key challenge.
In this work we improve the flow model presented by Mishsky et al. [12] for computing the reputation of domains. This flow model is applied to a graph of domains and IPs and propagates their reputation scores through the edges that connect them to express the impact of malicious domains on related domains. We propose the use of clustering to guide the flow of reputation in the graph and examine two different clustering methods to identify groups of domains and IPs that are strongly related. The flow algorithms use these groups to emphasize the influence of nodes within the same cluster on each other. We evaluate the algorithms using a large database received from a commercial company. The experimental evaluation of our work has shown the expected improvement over previous work [12] in detecting malicious domains.
Hussien Othman, Ehud Gudes, Nurit Gal-Oz
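
The clustering-guided flow algorithms themselves are not spelled out in the abstract. The sketch below only conveys the basic flavour of reputation propagation on a domain/IP graph, spreading scores from a known-malicious seed along edges and giving (assumed) extra weight to neighbours in the same cluster; all node names, weights and parameters are illustrative.

    from collections import defaultdict

    # Toy undirected domain/IP graph: edges connect domains to hosting IPs.
    edges = [("bad.com", "1.2.3.4"), ("evil.net", "1.2.3.4"),
             ("shop.org", "5.6.7.8"), ("evil.net", "5.6.7.8")]
    cluster = {"bad.com": 0, "evil.net": 0, "1.2.3.4": 0,
               "shop.org": 1, "5.6.7.8": 1}

    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)

    # Seed reputation: 1.0 for a known-malicious domain, 0.0 otherwise.
    score = {n: 0.0 for n in graph}
    score["bad.com"] = 1.0

    DAMPING, INTRA_BOOST, ITERATIONS = 0.5, 1.5, 10
    for _ in range(ITERATIONS):
        new_score = {}
        for node, neighbours in graph.items():
            flow = 0.0
            for nb in neighbours:
                # Neighbours in the same cluster contribute more strongly.
                weight = INTRA_BOOST if cluster[nb] == cluster[node] else 1.0
                flow += weight * score[nb] / len(graph[nb])
            new_score[node] = max(score[node], DAMPING * flow)
        score = new_score

    # Higher scores suggest closer association with the malicious seed.
    print(sorted(score.items(), key=lambda kv: -kv[1]))
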
Trust Trust Me (The Additivity)
Abstract
We present a mathematical formulation of a trust metric using a quality and quantity pair. Under a certain assumption, we regard trust as an additive value and define the soundness of a trust computation as not exceeding the total sum. Moreover, we point out the importance not only of the soundness of each computed trust but also of the stability of the trust computation procedure against changes in trust value assignment. In this setting, we define trust composition operators. We also propose a trust computation protocol and prove its soundness and stability using the operators.
Ken Mano, Hideki Sakurada, Yasuyuki Tsukada
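
The paper's quality/quantity formulation is not reproduced in the abstract; the LaTeX fragment below is only an assumed paraphrase of what an additivity and soundness condition of this kind might look like, with all symbols chosen for illustration.

    % Assumed illustration: trust expressed as a (quality, quantity) pair (q_i, n_i),
    % aggregated additively and bounded by the trustor's total trust T (soundness).
    \[
      t_i = q_i \, n_i, \qquad \sum_{i} t_i \;\le\; T .
    \]
    % Stability (informal): a small perturbation of the trust assignment a
    % should change each computed value by a proportionally small amount.
    \[
      \lvert t_i(a) - t_i(a') \rvert \;\le\; L \,\lVert a - a' \rVert .
    \]
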
Towards Statistical Trust Computation for Medical Smartphone Networks Based on Behavioral Profiling
Abstract
Due to the popularity of mobile devices, medical smartphone networks (MSNs) have evolved into an emerging network architecture in the healthcare domain to improve the quality of service. There is no debate among security experts that the security of Internet-enabled medical devices is woefully inadequate. Although MSNs are mostly used internally, they can still leak sensitive information under insider attacks. In this case, there is a need to evaluate a node’s trustworthiness in MSNs based on the network characteristics. In this paper, we focus on MSNs and propose a statistical trust-based intrusion detection mechanism to detect malicious nodes in terms of behavioral profiling (e.g., camera usage, visited websites, etc.). Experimental results indicate that our proposed mechanism is feasible and promising in detecting malicious nodes in medical environments.
Weizhi Meng, Man Ho Au
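
As a minimal sketch of the general idea of statistical trust from behavioural profiling (not the authors' mechanism), the snippet below turns a node's deviation from an assumed baseline profile into a trust score and flags nodes that fall below an assumed threshold; all features, values and parameters are hypothetical.

    import numpy as np

    # Hypothetical behavioural features per node (e.g., camera-usage rate,
    # fraction of visits to unusual websites) over an observation window.
    baseline = np.array([0.10, 0.05])           # assumed "normal" profile
    observed = {
        "node_1": np.array([0.12, 0.06]),
        "node_2": np.array([0.55, 0.40]),        # deviates strongly from baseline
    }

    TRUST_THRESHOLD = 0.5  # assumed cut-off below which a node is flagged

    def trust_score(profile: np.ndarray) -> float:
        # A simple statistical distance mapped into a [0, 1] trust value.
        deviation = np.linalg.norm(profile - baseline)
        return float(np.exp(-5.0 * deviation))

    for node, profile in observed.items():
        score = trust_score(profile)
        label = "trusted" if score >= TRUST_THRESHOLD else "suspicious"
        print(node, round(score, 3), label)
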

Reputation Systems

Frontmatter
Reputation-Enhanced Recommender Systems
Abstract
Recommender systems are pivotal components of modern Internet platforms and constitute a well-established research field. By now, research has resulted in highly sophisticated recommender algorithms whose further optimization often yields only marginal improvements. This paper goes beyond the commonly dominating focus on optimizing algorithms and instead follows the idea of enhancing recommender systems with reputation data. Since the concept of reputation-enhanced recommender systems has attracted considerable attention in recent years, the main aim of the paper is to provide a comprehensive survey of the approaches proposed so far. To this end, existing work is identified by means of a systematic literature review and classified according to carefully considered dimensions. In addition, the resulting structured analysis of the state of the art serves as a basis for the deduction of future research directions.
Christian Richthammer, Michael Weber, Günther Pernul
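
As a minimal sketch of the general idea the survey covers (blending recommender output with reputation data), the snippet below combines a predicted rating with a reputation score through an assumed weighting parameter; the ratings, reputations and weight are hypothetical and do not reflect any specific approach surveyed in the paper.

    # Hypothetical predicted ratings from a recommender and item-owner reputations.
    predicted_rating = {"item_a": 4.2, "item_b": 4.6, "item_c": 3.9}  # scale 1-5
    reputation = {"item_a": 0.95, "item_b": 0.40, "item_c": 0.88}     # scale 0-1

    ALPHA = 0.7  # assumed weight on the recommender's prediction

    def blended_score(item: str) -> float:
        # Normalise the rating to [0, 1] before combining it with reputation.
        norm_rating = (predicted_rating[item] - 1) / 4
        return ALPHA * norm_rating + (1 - ALPHA) * reputation[item]

    ranking = sorted(predicted_rating, key=blended_score, reverse=True)
    print(ranking)  # item_b's high rating is discounted by its low reputation
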
Self-reported Verifiable Reputation with Rater Privacy
Abstract
Reputation systems are a major feature of every modern e-commerce website, helping buyers carefully choose their service providers and products. However, most websites use centralized reputation systems, where the security of the system rests entirely upon a single Trusted Third Party. Moreover, they often disclose the identities of the raters, which may discourage honest users from posting frank reviews due to the fear of retaliation from the ratees. We present a reputation system that is decentralized yet secure and efficient, and could therefore be applied in a practical context. In fact, users are able to retrieve the reputation score of a service provider directly from it in constant time, with assurance regarding the correctness of the information obtained. Additionally, the reputation system is anonymity-preserving, which ensures that users can submit feedback without their identities being associated with it. Despite this anonymity, the system still offers robustness against attacks such as ballot-stuffing and Sybil attacks.
Rémi Bazin, Alexander Schaub, Omar Hasan, Lionel Brunie

William Winsborough Commemorative Address and Award 2017

Frontmatter
Strong Accountability and Its Contribution to Trustworthy Data Handling in the Information Society
Abstract
Accountability has long been the subject of discussion within public administration. Especially given the potential privacy and security risks arising from rapidly changing usage of information technology (IT), it can be useful to apply this notion also in the commercial world, relating to the actions of private organisations. However, accountability may be neither a necessary nor a sufficient condition for trust. In order to provide an improved basis for trustworthiness via enhancing accountability, certain conditions need to be met. In this paper we elucidate what these conditions are and explain the related notion and importance of strong accountability. Further, we ground this analysis within the wider context of organisational ethical decision making. As a topical case in point we focus on the data protection area and the protection of personal data.
Siani Pearson
Backmatter
Metadata
Title
Trust Management XI
Editors
Jan-Philipp Steghöfer
Babak Esfandiari
Copyright Year
2017
Electronic ISBN
978-3-319-59171-1
Print ISBN
978-3-319-59170-4
DOI
https://doi.org/10.1007/978-3-319-59171-1
