
About this Book

This book constitutes the refereed proceedings of the 8th IFIP WG 11.11 International Conference on Trust Management, IFIPTM 2014, held in Singapore, in July 2014. The 12 revised full papers and 5 short papers presented were carefully reviewed and selected from 36 submissions. In addition, the book contains one invited paper. The papers cover a wide range of topics focusing on the following main areas: trust and reputation models; privacy issues and social and behavioral models of trust; the relationship between trust and security; trust under attacks and trust in the cloud environment.

Table of Contents

Frontmatter

Invited Paper

The Importance of Trust in Computer Security

Abstract
The computer security community has traditionally regarded security as a “hard” property that can be modelled and formally proven under certain simplifying assumptions. Traditional security technologies assume that computer users are either malicious, e.g. hackers or spies, or benevolent, competent and well informed about the security policies. Over the past two decades, however, computing has proliferated into all aspects of modern society and the spread of malicious software (malware) like worms, viruses and botnets has become an increasing threat. This development indicates a failure in some of the fundamental assumptions that underpin existing computer security technologies and suggests that a new view of computer security is long overdue.
In this paper, we examine traditional models, policies and mechanisms of computer security in order to identify areas where the fundamental assumptions may fail. In particular, we identify areas where the “hard” security properties are based on trust in the different agents in the system and in certain external agents who enforce the legislative and contractual frameworks.
Trust is generally considered a “soft” security property, so building a “hard” security mechanism on trust will at most give a spongy result, unless the underlying trust assumptions are made first-class citizens of the security model. In most work in computer security, trust assumptions are implicit, and they will surely fail when the environment of the system changes, e.g. when systems are used on a global scale on the Internet. We argue that making such trust assumptions explicit is an essential requirement for the future of system security, and we argue why the formalisation of computational trust is necessary when we wish to reason about system security.
Christian Damsgaard Jensen

Full Papers

TrustMUSE: A Model-Driven Approach for Trust Management

Abstract
With the increasing acceptance of Trust Management as a building block of distributed applications, the issue of providing its benefits to real-world applications becomes more and more relevant. There are multiple Trust Management frameworks ready to be applied; however, they are either unknown to developers or cannot be sufficiently adapted to applications’ use cases. In our research, we have defined a meta model to modularize Trust Management, where each element in the model has clearly defined dependencies and responsibilities – also enforced by a complete API. Based on this model, we were able to develop a process, supported by a number of tools, that enables users who are not security experts to find an applicable Trust Management solution for their specific problem case. Our solution – collectively called the TrustMUSE system – has evolved over an iterative user-centered development process: starting with multiple focus group workshops to identify requirements, and continuing with multiple prototypes to conduct usage observations. Our user evaluation has shown that our system is understandable to system designers and is able to support them in their work.
Mark Vinkovits, René Reiners, Andreas Zimmermann

Reusability for Trust and Reputation Systems

Abstract
Reputation systems have been extensively explored in various disciplines and application areas. A problem in this context is that the computation engines applied by most available reputation systems are designed from scratch and rarely consider well-established concepts and achievements made by others. Thus, approved models and promising approaches may get lost in the shuffle. In this work, we aim to foster reuse with respect to trust and reputation systems by providing a hierarchical component taxonomy of computation engines which serves as a natural framework for the design of new reputation systems. In order to assist the design process, we furthermore provide a component repository that contains design knowledge on both a conceptual and an implementation level.
Johannes Sänger, Günther Pernul

On Robustness of Trust Systems

Abstract
Trust systems assist in dealing with users who may betray one another. Cunning users (attackers) may attempt to hide the fact that they betray others, deceiving the system. Trust systems that are difficult to deceive are considered more robust. To formally reason about robustness, we formally model the abilities of an attacker. We prove that the attacker model is maximal, i.e. 1) the attacker can perform any feasible attack and 2) if a single attacker cannot perform an attack, then a group of attackers cannot perform that attack. Therefore, we can formulate robustness analogous to security.
Tim Muller, Yang Liu, Sjouke Mauw, Jie Zhang

Design of Intrusion Sensitivity-Based Trust Management Model for Collaborative Intrusion Detection Networks

Abstract
Network intrusions are becoming increasingly sophisticated and harder to detect. To mitigate this issue, intrusion detection systems (IDSs) have been widely deployed to identify a variety of attacks, and collaborative intrusion detection networks (CIDNs) have been proposed which enable an IDS to collect information and learn from the experience of other IDSs with the purpose of improving detection accuracy. A CIDN is expected to have more power in detecting attacks such as denial-of-service (DoS) than a single IDS. In real deployments, we notice that each IDS has a different level of sensitivity in detecting different types of intrusions (i.e., based on its own signatures and settings). In this paper, we propose a machine learning-based approach to assign intrusion sensitivity based on expert knowledge and design a trust management model that allows each IDS to evaluate the trustworthiness of others by considering their detection sensitivities. In the evaluation, we explore the performance of our proposed approach under different attack scenarios. The experimental results indicate that by considering intrusion sensitivity, our trust model can enhance the detection accuracy of malicious nodes as compared to existing similar models.
Wenjuan Li, Weizhi Meng, Lam-For Kwok
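The core idea of the abstract above — weighting a peer IDS's feedback by its sensitivity to the intrusion type in question — can be illustrated with a minimal sketch. All names and parameters here are hypothetical, not taken from the paper:

```python
def sensitivity_weighted_trust(reports, sensitivities):
    """Aggregate peer reports on a suspicious node for one intrusion type.

    reports: {ids_id: score in [0, 1]} -- each peer's judgement.
    sensitivities: {ids_id: weight >= 0} -- how sensitive each peer is
    to this intrusion type (e.g. learned from expert knowledge).
    Returns the sensitivity-weighted mean trust score.
    """
    total_weight = sum(sensitivities.get(i, 0.0) for i in reports)
    if total_weight == 0:
        return 0.5  # neutral default when no sensitive peer reported
    return sum(score * sensitivities.get(i, 0.0)
               for i, score in reports.items()) / total_weight
```

A peer that is highly sensitive to, say, DoS traffic thus dominates the aggregate for DoS alerts, while its opinion on intrusion types outside its competence is discounted.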

Exploiting Trust and Distrust Information to Combat Sybil Attack in Online Social Networks

Abstract
Due to their open and anonymous nature, online social networks are particularly vulnerable to the Sybil attack, in which a malicious user can fabricate many dummy identities to attack the system. Recently, there has been a flurry of interest in leveraging social network structure for Sybil defense. However, most graph-based approaches pay little attention to distrust information, which is an important factor for uncovering more Sybils. In this paper, we propose a unified ranking mechanism that leverages trust and distrust in social networks against such attacks, based on a variant of the PageRank-like model. Specifically, we first use existing topological anti-Sybil algorithms as a subroutine to produce reliable Sybil seeds. To enhance the robustness of these approaches against targeted attacks, we then also introduce an effective similarity-based graph pruning technique utilizing local structure similarity. Experiments show that our approach outperforms existing competitive methods for Sybil detection in social networks.
Huanhuan Zhang, Chang Xu, Jie Zhang
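The PageRank-like ranking with trust seeds and distrust edges described above can be sketched in a few lines. This is an illustrative simplification under assumed semantics (rank teleports to trusted seeds; a distrust edge blocks propagation along that link), not the paper's actual algorithm:

```python
def trust_rank(graph, trust_seeds, distrust_edges=None, d=0.85, iters=50):
    """Personalized-PageRank-style trust ranking.

    graph: {node: [neighbours]} following trust edges.
    trust_seeds: nodes assumed honest; rank teleports only to them.
    distrust_edges: set of (src, dst) pairs along which no trust flows.
    """
    distrust_edges = distrust_edges or set()
    nodes = list(graph)
    seed_mass = {n: (1.0 / len(trust_seeds) if n in trust_seeds else 0.0)
                 for n in nodes}
    rank = dict(seed_mass)
    for _ in range(iters):
        new = {n: (1 - d) * seed_mass[n] for n in nodes}
        for src in nodes:
            out = graph[src]
            if not out:
                continue  # dangling node: its mass is simply dropped here
            share = d * rank[src] / len(out)
            for dst in out:
                if (src, dst) not in distrust_edges:
                    new[dst] += share
        rank = new
    return rank
```

Nodes reachable from the seeds through trusted links accumulate rank; nodes reached only through distrusted links (typical of Sybil regions) stay near zero.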

Anomaly Detection for Mobile Device Comfort

Abstract
As part of the Device Comfort paradigm, we envision a mobile device which, armed with the information made available by its sensors, is able to recognize whether it is being used by its owner or whether its owner is using the mobile device in an “unusual” manner. To this end, we conjecture that the use of a mobile device follows diurnal patterns and introduce a method for the detection of such anomalies in the use of a mobile device. We evaluate the accuracy of our method with two publicly available data sets and show its feasibility on two mobile devices.
Mehmet Vefa Bicakci, Babak Esfandiari, Stephen Marsh
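The conjecture above — that device use follows diurnal patterns, so use at an historically unusual hour is suspicious — can be illustrated with a minimal histogram-based sketch. The functions and the threshold are hypothetical, not the paper's method:

```python
from collections import Counter

def hourly_profile(event_hours):
    """Build a diurnal usage profile: relative frequency of events per hour.

    event_hours: iterable of integers in 0..23, one per observed usage event.
    """
    counts = Counter(event_hours)
    total = sum(counts.values())
    return {h: counts.get(h, 0) / total for h in range(24)}

def is_anomalous(profile, hour, threshold=0.01):
    """Flag use at an hour whose historical frequency is below threshold."""
    return profile.get(hour, 0.0) < threshold
```

A real system would combine many sensor signals rather than timestamps alone, but the shape of the decision is the same: compare current behaviour against the owner's learned profile.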

Improving the Exchange of Lessons Learned in Security Incident Reports: Case Studies in the Privacy of Electronic Patient Records

Abstract
The increasing use of Electronic Health Records has been mirrored by a similar rise in the number of security incidents where confidential information has inadvertently been disclosed to third parties. These problems have been compounded by an apparent inability to learn from previous violations; similar security incidents have been observed across Europe, North America and Asia. This paper presents the results of an empirical study that compares the utility and usability of conventional text-based security incident reports with a graphical formalism based on the Goal Structuring Notation. The two methods were compared in terms of the users’ ability to identify a number of lessons learned from investigations into previous incidents involving the disclosure of healthcare records. These lessons included not only the causes of the incident but also the participants’ ability to understand the reasons why particular recommendations were proposed as ways of avoiding future violations. Even using a relatively small sample, we were able to obtain statistically significant differences between the two approaches. The study showed that the graphical approach resulted in higher accuracy in terms of the number of correct answers generated by participants. However, subjective feedback raised further questions about the usability of both approaches as the readers of security incident reports try to interpret the lessons that can increase the security of patient data.
Ying He, Chris Johnson, Yu Lyu, Arniyati Ahmad

A Privacy Risk Model for Trajectory Data

Abstract
Time sequence data relating to users, such as medical histories and mobility data, are good candidates for data mining, but often contain highly sensitive information. Different methods in privacy-preserving data publishing are utilised to release such private data so that individual records in the released data cannot be re-linked to specific users with a high degree of certainty. These methods provide theoretical worst-case privacy risks as measures of the privacy protection that they offer. However, for many real-world datasets the worst-case scenario is too pessimistic and does not provide a realistic view of the privacy risks: the real probability of re-identification is often much lower than the theoretical worst-case risk. In this paper we propose a novel empirical risk model for privacy which, in relation to the cost of privacy attacks, better demonstrates the practical risks associated with a privacy-preserving data release. We show a detailed evaluation of the proposed risk model using k-anonymised real-world mobility data.
Anirban Basu, Anna Monreale, Juan Camilo Corena, Fosca Giannotti, Dino Pedreschi, Shinsaku Kiyomoto, Yutaka Miyake, Tadashi Yanagihara, Roberto Trasarti
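The gap between worst-case and empirical risk that the abstract above highlights is easy to see concretely. In k-anonymised data the worst-case re-identification probability is 1/k, but each record's empirical risk is one over the size of its own equivalence class, which is often larger than k. A minimal sketch (the function name and data layout are illustrative assumptions):

```python
from collections import Counter

def reidentification_risks(quasi_identifiers):
    """Per-record empirical re-identification risk in a released dataset.

    quasi_identifiers: list of hashable quasi-identifier tuples, one per
    record. Records sharing a tuple form an equivalence class; a record's
    risk is 1 / (size of its class). The worst case, 1/k, occurs only for
    records in the smallest classes.
    """
    class_sizes = Counter(quasi_identifiers)
    return [1.0 / class_sizes[q] for q in quasi_identifiers]
```

For instance, in a 2-anonymous release where most classes have four or more members, the average empirical risk is well below the worst-case figure of 0.5 — which is the kind of effect the paper's empirical model is designed to capture.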

Providing Trustworthy Advice Online

An Exploratory Study on the Potential of Discursive Psychology in Trust Research
Abstract
The Internet serves as an important source for people who are looking for information and advice from peers. Within this search behavior, a central role is reserved for trust: it guides the decision to participate online, to share experiences or to pick up information. This paper explores insights from discursive psychology as a potentially interesting approach for trust research in online peer environments. This allows for a certain shift of focus. Instead of looking at the information seeker, we focus on the information provider: how does he try to present himself – and the information sources he refers to in his arguments – as trustworthy and authoritative? Within this theoretical perspective, trust is studied as something that is highly negotiable depending on context and on the effect the information provider tries to achieve. Throughout the paper, conversation fragments – collected from an online forum on home improvement – are incorporated to clarify and illustrate some central concepts of discursive psychology.
Sarah Talboom, Jo Pierson

Extending Trust Management with Cooperation Incentives: Achieving Collaborative Wi-Fi Sharing Using Trust Transfer to Stimulate Cooperative Behaviours

Abstract
Several issues still hinder collaborative Wi-Fi sharing: the legal liability of the sharer; high data access costs in some situations (mobility when going over a monthly subscription quota, roaming…); and the lack of appropriate incentives to share. Current trust management can exclude malicious users, but by itself it cannot foster Wi-Fi sharing. We have extended an appropriate trust metric with cooperation incentives to mitigate all of the above issues. We have evaluated the effectiveness of our trust metric and incentives through simulations, determining the bootstrapping time for such a system and the average depletion time for its users as a function of the size of the system’s user base, demonstrating the feasibility of such a combination.
Carlos Ballester Lafuente, Jean-Marc Seigneur

A Calculus for Trust and Reputation Systems

Abstract
Trust and reputation models provide soft-security mechanisms that can be used to induce cooperative behaviors in user-centric communities in which user-generated services and resources are shared. The effectiveness of such models depends on several, orthogonal aspects that make their analysis a challenging issue. This paper aims to provide support to the design of trust and reputation infrastructures and to verify their adequacy in the setting of software architectures and computer networks underlying online communities. This is done by proposing a formal framework encompassing a calculus of concurrent systems, a temporal logic for trust, and model checking techniques.
Alessandro Aldini

Knots Maintenance for Optimal Management of Trust Relations

Abstract
The knot model is aimed at obtaining trust-based reputation in communities of strangers. It identifies groups of trustees, denoted knots, among whom overall trust is strong; a knot is thus considered the most capable source of reputation information for the members within it. The problem of identifying knots in a trust network is modeled as a graph clustering problem. When considering dynamic and large-scale communities, the task of keeping the clustering correct over time is a great challenge. This paper introduces a clustering maintenance algorithm based on the properties of knots of trust. A maintenance strategy is defined that addresses violations of knot properties due to changes in trust relations that occur over time in response to the dynamic nature of the community. Based on this strategy, a reputation management procedure is implemented in two phases: the first identifies the essence of a change and decides whether the knot clustering needs to be improved; the second locally modifies the clustering to preserve a stable network structure while keeping the network correctly clustered with respect to the knot utility function. We demonstrate by simulation the efficiency of the maintenance algorithm in preserving knot quality, for cases in which only local changes have occurred, to ensure the reliability of the reputation system.
Libi Gur, Nurit Gal-Oz, Ehud Gudes

Short Papers

On the Tradeoff among Trust, Privacy, and Cost in Incentive-Based Networks

Abstract
Incentive strategies are used in collaborative user-centric networks, the functioning of which depends on the willingness of users to cooperate. Classical mechanisms for stimulating cooperation are based on trust, which allows one to set up a reputation infrastructure quantifying the subjective reliance on the expected behavior of users, and on virtual currency, which allows one to monetize the effect of prosocial behaviors. In this paper, we emphasize that a successful combination of social and economic strategies should take into account the privacy of users. To this end, we discuss the theoretical and practical issues of two alternative tradeoff models that, depending on the way in which privacy is disclosed, reveal the relation existing among trust, privacy, and cost.
Alessandro Aldini, Alessandro Bogliolo, Carlos Ballester Lafuente, Jean-Marc Seigneur

Reputation-Based Cooperation in the Clouds

Abstract
The popularity of the cloud computing paradigm is opening new opportunities for collaborative computing. In this paper we tackle a fundamental problem in open-ended cloud-based distributed computing platforms, i.e., the quest for potential collaborators. We assume that cloud participants are willing to share their computational resources for shared distributed computing problems, but are not willing to disclose the details of their resources. Lacking such information, we advocate relying on reputation scores obtained by evaluating the interactions among participants. More specifically, we propose a methodology to assess, at design time, the impact of different (reputation-based) collaborator selection strategies on system performance. The evaluation is performed through statistical analysis on a volunteer cloud simulator.
Alessandro Celestini, Alberto Lluch Lafuente, Philip Mayer, Stefano Sebastio, Francesco Tiezzi
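A reputation-based collaborator selection strategy of the kind compared in the abstract above can be sketched minimally: update each participant's score from observed interaction outcomes, then select the highest-scoring candidate. The update rule (an exponential moving average) and all names are illustrative assumptions, not the paper's methodology:

```python
def update_reputation(rep, node, success, alpha=0.1):
    """Update a node's score from one interaction outcome (1 = success).

    rep: {node: score in [0, 1]}; unknown nodes start at a neutral 0.5.
    alpha controls how quickly recent outcomes dominate old ones.
    """
    rep[node] = (1 - alpha) * rep.get(node, 0.5) + alpha * (1.0 if success else 0.0)

def select_best(rep, candidates):
    """Reputation-based strategy: pick the candidate with the highest score."""
    return max(candidates, key=lambda n: rep.get(n, 0.5))
```

Alternative strategies (random choice, probabilistic choice weighted by score) plug into the same loop, which is what makes a simulator-based, design-time comparison of strategies straightforward.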

Introducing Patient and Dentist Profiling and Crowdsourcing to Improve Trust in Dental Care Recommendation Systems

Abstract
Healthcare blogs, podcasts, search engines and health social networks are now widely used, and referred to as crowdsources, to share information such as opinions, side effects, medication and types of therapies. Although the attitudes and perceptions of users play a vital role in how they create, share, retrieve and utilise information for themselves or recommend it to others, recommendation systems have not taken attitudes and perceptions into consideration for matching. Our research aims at defining a trust-dependent framework for designing recommendation systems that use profiling and social networks in dental care. This paper focuses on trust derived in direct interaction between a patient and a dentist from the point of view of subjective characteristics. It highlights that the attitudes, behaviours and perceptions of both patients and dentists are important social elements, which enhance trust and improve the matching process between them. This study forms a basis for our profile-based framework for dynamic dental care recommendation systems.
Sojen Pradhan, Valerie Gay

Abstract Accountability Language

Abstract
Accountability is becoming a necessary principle for future computer systems. This is especially critical for cloud and Web applications that collect personal and sensitive data from end users. Accountability concerns the responsibility and liability for the data handling performed by a computer system on behalf of an organization. In case of misconduct (e.g. security breaches, personal data leaks, etc.), accountability should imply remediation and redress actions. Contrary to data privacy and access control, which are already supported by several concrete languages, there is currently no language supporting the representation of accountability clauses. In this work, we provide an abstract language for the representation of accountability clauses with temporal logic semantics.
Walid Benghabrit, Hervé Grall, Jean-Claude Royer, Mohamed Sellami, Karin Bernsmed, Anderson Santana De Oliveira

Trust Assessment Using Cloud Broker

Abstract
Despite the advantages and rapid growth of cloud computing, cloud environments are still not sufficiently trustworthy from a customer’s perspective. Several challenges that concern the customer still persist, such as the specification of service level agreements, standards, security measures, the selection of service providers and the computation of trust. To deal with these challenges and provide a trustworthy environment, a mediation layer may be essential. In this paper we propose a cloud broker as a mediation layer to deal with the complex decision of selecting a trustworthy cloud provider that fulfils the service requirements, creates agreements and also provisions security. The cloud broker operates in different modes, and this enables a variety of trust assessments.
Pramod S. Pawar, Muttukrishnan Rajarajan, Theo Dimitrakos, Andrea Zisman

Backmatter
