
2006 | Book

Trust Management

4th International Conference, iTrust 2006, Pisa, Italy, May 16-19, 2006. Proceedings

Editors: Ketil Stølen, William H. Winsborough, Fabio Martinelli, Fabio Massacci

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


About this book

This volume constitutes the proceedings of the 4th International Conference on Trust Management, held in Pisa, Italy during 16–19 May 2006. The conference followed successful International Conferences in Crete in 2003, Oxford in 2004 and Paris in 2005. The first three conferences were organized by iTrust, which was a working group funded as a thematic network by the Future and Emerging Technologies (FET) unit of the Information Society Technologies (IST) program of the European Union. The purpose of the iTrust working group was to provide a forum for cross-disciplinary investigation of the applications of trust as a means of increasing security, building confidence and facilitating collaboration in dynamic open systems. The aim of the iTrust conference series is to provide a common forum, bringing together researchers from different academic branches, such as the technology-oriented disciplines, law, social sciences and philosophy, in order to develop a deeper and more fundamental understanding of the issues and challenges in the area of trust management in dynamic open systems. The response to this conference was excellent; from the 88 papers submitted to the conference, we selected 30 full papers for presentation. The program also included one keynote address, given by Cristiano Castelfranchi; an industrial panel; 7 technology demonstrations; and a full day of tutorials.

Table of Contents

Frontmatter

Invited Talks

Why We Need a Non-reductionist Approach to Trust

I will underline the real complexity of trust (not for mere theoretical purposes but for advanced applications), and I will criticize some of the reductionist views of trust. I will illustrate: how trust can be a disposition, but is also an ‘evaluation’, and also a ‘prediction’ or, better, an ‘expectation’; how it is a ‘decision’ and an ‘action’, and a matter of ‘counting on’ (relying on) and ‘depending on’ somebody; what its link is with uncertainty and risk taking (fear and hope); how it creates social relationships; how it is a dynamic phenomenon with loop effects; and how it derives from several sources.

Cristiano Castelfranchi

Full Papers

Dynamic Trust Federation in Grids

Grids are becoming economically viable and productive tools. They provide a way of utilizing a vast array of linked resources such as computing systems, databases and services online within Virtual Organizations (VOs). However, today’s Grid architectures are not capable of supporting dynamic, agile federation across multiple administrative domains, and the main barrier hindering dynamic federation over short time scales is security. Federating security and trust is one of the most significant architectural issues in Grids. Existing relevant standards and specifications can be used to federate security services, but they do not directly address the dynamic extension of business trust relationships into the digital domain. In this paper we describe an experiment which highlights these challenging architectural issues and forms the basis of an approach that combines a dynamic trust federation and a dynamic authorization mechanism to address dynamic security trust federation in Grids. The experiment conducted with the prototype described in this paper is used in the NextGRID project to define the requirements of next-generation Grid architectures adapted to business application needs.

Mehran Ahsant, Mike Surridge, Thomas Leonard, Ananth Krishna, Olle Mulmo
Being Trusted in a Social Network: Trust as Relational Capital

Trust can be viewed as an instrument both for an agent selecting the right partners in order to achieve its own goals (the point of view of the trustier), and for an agent being selected by other potential partners (the point of view of the trustee) in order to establish a cooperation/collaboration with them and to take advantage of the accumulated trust. In our previous work we focused our main attention on the first point of view. In this paper we analyze trust as the agents’ relational capital. Starting from the classical dependence network (in which needs, goals, abilities and resources are distributed among the agents) with potential partners, we introduce an analysis of what it means for an agent to be trusted and how this condition can be strategically used by it for achieving its own goals, that is, why it represents a form of power. Although there is great interest in the literature about ‘social capital’ and its powerful effects on the wellbeing of both societies and individuals, it is often not clear enough what the object under analysis is. Individual trust capital (relational capital) and collective trust capital not only should be disentangled, but their relations are quite complicated and even conflicting. To overcome this gap, we propose a study that first attempts to understand what trust is as capital of individuals: in which sense “trust” is a capital; how this capital is built, managed and saved; and, in particular, how this capital is the result of the others’ beliefs and goals. We then aim to analytically study the cognitive dynamics of this object.

Cristiano Castelfranchi, Rino Falcone, Francesca Marzo
A Requirements-Driven Trust Framework for Secure Interoperation in Open Environments

A key challenge in emerging multi-domain open environments is the need to establish trust-based, loosely coupled partnerships between previously unknown domains. An efficient trust framework is essential to facilitate trust negotiation based on the service requirements of the partner domains. While several trust mechanisms have been proposed, none addresses the issue of combining trust mechanisms with the integration of partner domains’ access control policies to facilitate secure interoperation. In this paper, we propose a requirements-driven trust framework for secure interoperation in open environments. Our framework tightly integrates game-theory-based trust negotiation with service negotiation and policy mapping to ensure secure interoperation.

Suroop Mohan Chandran, Korporn Panyim, James B. D. Joshi
Normative Structures in Trust Management

The modelling of trust for the purpose of trust management gives rise to a puzzle that opens up fundamental questions concerning the relationship between trust and calculative reason as the basis for cooperation. It is argued that, ironically, trust management seems not to maximise trust but, instead, to reduce the need for trust. This conclusion is used to argue that the normative aspects of trust must be given a central role in the modelling of trust and trust management. The following question is addressed: what can an agent R infer about the future actions of another agent E, if R assumes that E is trustworthy? It is suggested that a generalised version of Barwise and Seligman’s theory of information flow can be used to model the role of normative structures in reasoning in trust relationships. Implications for trust management are discussed.

Dag Elgesem
Gathering Experience in Trust-Based Interactions

Evidence based trust management, where automated decision making is supported through collection of evidence about the trustworthiness of entities from a variety of sources, has gained popularity in recent years. So far work in this area has primarily focussed on schemes for combining evidence from potentially unreliable sources (recommenders) with the aim of improving the quality of decision making. The large body of literature on reputation systems is testament to this. At the same time, little consideration has been given to the actual gathering of useful and detailed experiential evidence. Most proposed systems use quite simplistic representations for experiences, and mechanisms where high level feedback is provided by users. Consequently, these systems provide limited support for automated decision making. In this paper we build upon our previous work in trust-based interaction modelling and we present an interaction monitor that enables automated collection of detailed interaction evidence. The monitor is a prototype implementation of our generic interaction monitoring architecture that combines well understood rule engine and event management technology. This paper also describes a distributed file server scenario, in order to demonstrate our interaction model and monitor. Finally, the paper presents some preliminary results of a simulation-based evaluation of our monitor in the context of the distributed file server scenario.

Colin English, Sotirios Terzis
Multilateral Decisions for Collaborative Defense Against Unsolicited Bulk E-mail

Current anti-spam tools focus on filtering incoming e-mails. The scope of these tools is limited to local administrative domains. With such limited information, it is difficult to make accurate spam control decisions. We observe that sending servers process more information on their outgoing e-mail traffic than receiving servers do on their incoming traffic. Better spam control can be achieved if e-mail servers collaborate with one another by checking both outgoing and incoming traffic. However, the control of outgoing traffic provides little direct benefit to the sending server. Servers in different administrative domains presently have little incentive to improve spam control on other receiving servers, which hampers a move toward cross-domain collaboration. We propose a collaborative framework in which spam control decisions are drawn from the data aggregated within a group of e-mail servers across different administrative domains. The collaboration provides incentive for outgoing spam control. The servers that contribute to the control of outgoing spam are rewarded, while traffic restriction is imposed on the irresponsible servers. A Federated Security Context (FSC) is established to enable transparent negotiation of multilateral decisions among the group of collaborators without common trust. Information from trusted collaborators counts more for one’s final decision compared to information from untrustworthy servers. The FSC mitigates potential threats of fake information from malicious servers. The collaborative approach to spam control is more efficient than a decision in isolation, providing dynamic identification and adaptive restriction to spam generators.

Noria Foukia, Li Zhou, Clifford Neuman
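
The abstract does not give the aggregation rule, so the following Python sketch shows only the core idea as we read it: pooling collaborators’ spam reports weighted by the local trust placed in each sender. All function names, data layouts and the 0.5 threshold are illustrative assumptions, and the FSC negotiation itself is not modelled.

```python
def aggregate_spam_score(reports: dict, trust: dict) -> float:
    """reports maps server -> spam score in [0, 1] for a given sender;
    trust maps server -> how much we trust that collaborator's reports."""
    total_weight = sum(trust.get(s, 0.0) for s in reports)
    if total_weight == 0:
        return 0.0  # no trusted information available
    return sum(trust.get(s, 0.0) * score
               for s, score in reports.items()) / total_weight

# A sender is restricted when trusted collaborators collectively flag it.
is_spammer = aggregate_spam_score(
    {"mx.example.org": 0.9, "mail.example.net": 0.2},
    {"mx.example.org": 0.8, "mail.example.net": 0.3}) > 0.5
```
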
Generating Predictive Movie Recommendations from Trust in Social Networks

Social networks are growing in number and size, with hundreds of millions of user accounts among them. One added benefit of these networks is that they allow users to encode more information about their relationships than just stating who they know. In this work, we are particularly interested in trust relationships, and how they can be used in designing interfaces. In this paper, we present FilmTrust, a website that uses trust in web-based social networks to create predictive movie recommendations. Using the FilmTrust system as a foundation, we show that these recommendations are more accurate than other techniques when the user’s opinions about a film are divergent from the average. We discuss this technique both as an application of social network analysis, as well as how it suggests other analyses that can be performed to help improve collaborative filtering algorithms of all types.

Jennifer Golbeck
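
The central computation in FilmTrust can be read as a trust-weighted average over the ratings of trusted friends. The sketch below assumes exactly that reading; the deployed system infers trust across the whole network before weighting, so treat this as an illustration of the idea rather than the system’s algorithm.

```python
def predict_rating(friend_ratings: dict, friend_trust: dict):
    """Predict a user's rating of a film as the trust-weighted average of
    ratings given by people she trusts in the social network.
    friend_ratings: friend -> rating; friend_trust: friend -> trust > 0."""
    raters = [f for f in friend_ratings if friend_trust.get(f, 0) > 0]
    if not raters:
        return None  # no trusted opinion to draw on
    weight = sum(friend_trust[f] for f in raters)
    return sum(friend_trust[f] * friend_ratings[f] for f in raters) / weight
```
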
Temporal Logic-Based Specification and Verification of Trust Models

Mutual trust is essential in performing economic transactions. In modern internet-based businesses, however, traditional trust gaining mechanisms cannot be used and new ways to build trust between e-business partners have to be found. Consequently, many models describing trust and the mechanisms to build it have been developed. Unfortunately, most of these models neither provide the right formalism to model relevant aspects of the trust gaining process (e.g., context and time of a trust-related interaction), nor do they allow refinement proofs verifying that a trust management tool implements a certain trust model. Therefore, we propose the temporal logic-based specification and verification technique cTLA, which provides a formalism for modelling context- and time-related aspects of a trust building process. Moreover, cTLA facilitates formal refinement proofs. In this paper, we discuss the application of cTLA to describe trust purposes by means of simple example systems which are used to decide about the application of certain policies based on the reputation of a party. In particular, we introduce a basic and a refined reputation system and sketch the proof that the refined system is a correct realization of the simple one.

Peter Herrmann
Modelling Trade and Trust Across Cultures

Misunderstandings arise in international trade due to differences in the cultural backgrounds of trade partners. Trust and the role it plays in trade are influenced by culture. Considering that trade always involves working on the relationship with the trade partner, understanding the behaviour of the other is of the essence. This paper proposes to involve cultural dimensions in the modelling of trust in trade situations. A case study is presented to show a conceptualisation of trust with respect to the cultural dimension of performance orientation versus cooperation orientation.

Gert Jan Hofstede, Catholijn M. Jonker, Sebastiaan Meijer, Tim Verwaart
Estimating the Relative Trustworthiness of Information Sources in Security Solution Evaluation

When evaluating alternative security solutions, such as security mechanisms, security protocols, etc., “hard” data or information is rarely available, and one has to rely on the opinions of domain experts. Log files from IDSs, firewalls and honeypots might also be used. However, such sources are most often only used in a “penetrate and patch” strategy, meaning that system administrators, security experts or similar personnel monitor the network and initiate appropriate reactions to the actions observed. Such sources provide real-time information, but they might also be used in a more preventive manner by combining them with the opinions provided by the domain experts. To appropriately combine the information from such various sources, the notion of trust is used. Trust represents the degree to which a particular information source can be trusted to provide accurate and correct information, and is measured as the information source’s relative trustworthiness. In this paper we show how to assign this relative trustworthiness using two trust variables: (1) knowledge level and (2) level of expertise.

Siv Hilde Houmb, Indrakshi Ray, Indrajit Ray
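
The abstract names the two trust variables but not how they are combined. One plausible reading, offered purely as an assumption, scores each source by the product of the two variables and normalises across sources so the results can weight expert opinions directly:

```python
def relative_trustworthiness(sources: dict) -> dict:
    """sources: name -> (knowledge_level, expertise_level), both in [0, 1].
    Returns per-source weights summing to 1."""
    raw = {name: k * e for name, (k, e) in sources.items()}
    total = sum(raw.values())
    return {name: v / total for name, v in raw.items()} if total else raw

weights = relative_trustworthiness({
    "domain_expert": (0.9, 0.8),   # high knowledge, high expertise
    "ids_log":       (0.6, 0.4),   # partial view, no interpretation
})
```
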
Trust-Based Route Selection in Dynamic Source Routing

Unlike traditional mobile wireless networks, ad hoc networks do not rely on any fixed infrastructure. Nodes rely on each other to route packets to other mobile nodes or toward stationary nodes that may act as a gateway to a fixed network. Mobile nodes are generally assumed to participate as routers in the mobile wireless network. However, blindly trusting all other nodes to respect the routing protocol exposes the local node to a wide variety of vulnerabilities. Traditional security mechanisms rely on either the authenticated identity of the requesting principal or some form of credentials that authorise the client to perform certain actions. Generally, these mechanisms require some underlying infrastructure, e.g., a public key infrastructure (PKI). However, we cannot assume such infrastructures to be in place in an ad hoc network. In this paper we propose an extension to an existing ad hoc routing protocol, which selects the route based on a local evaluation of the trustworthiness of all known intermediary nodes (routers) on the route to the destination. We have implemented this mechanism in an existing ad hoc routing protocol, and we show how trust can be built from previous experience and how trust can be used to avoid routing packets through unreliable nodes.

Christian D. Jensen, Paul O’Connell
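
As a hedged sketch of the route-selection step described above: aggregate the locally held trust in each known intermediary and prefer the route with the highest aggregate. The multiplicative aggregation and the neutral default for unknown nodes are our assumptions, not necessarily the paper’s metric.

```python
def select_route(routes, node_trust, default_trust=0.5):
    """routes: list of candidate routes, each a list of intermediary node
    ids; node_trust: node id -> trust in [0, 1] built from experience.
    Unknown nodes get a neutral default value."""
    def route_trust(route):
        trust = 1.0
        for node in route:
            trust *= node_trust.get(node, default_trust)
        return trust
    return max(routes, key=route_trust)

best = select_route([["a", "b"], ["c"]],
                    {"a": 0.9, "b": 0.9, "c": 0.7})  # ["a", "b"]: 0.81 > 0.7
```
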
Implementing Credential Networks

Credential networks have recently been introduced as a general model for distributed authenticity and trust management in open networks. This paper focuses on issues related to the implementation of credential networks. It presents a system called Caution, which consists of a simple language to define credential networks and an underlying machinery to perform the evaluation. The paper also describes the necessary algorithms in further detail.

Jacek Jonczy, Rolf Haenni
Exploring Different Types of Trust Propagation

Trust propagation is the principle by which new trust relationships can be derived from pre-existing trust relationships. Trust transitivity is the most explicit form of trust propagation, meaning for example that if Alice trusts Bob, and Bob trusts Claire, then by transitivity, Alice will also trust Claire. This assumes that Bob recommends Claire to Alice. Trust fusion is also an important element in trust propagation, meaning that Alice can combine Bob’s recommendation with her own personal experience in dealing with Claire, or with other recommendations about Claire, in order to derive a more reliable measure of trust in Claire. These simple principles, which are essential for human interaction in business and everyday life, manifest themselves in many different forms. This paper investigates possible formal models that can be implemented using belief reasoning based on subjective logic. With good formal models, the principles of trust propagation can be ported to online communities of people, organisations and software agents, with the purpose of enhancing the quality of those communities.

Audun Jøsang, Stephen Marsh, Simon Pope
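
The two propagation principles above correspond to subjective logic’s discounting (transitivity) and cumulative fusion operators. The sketch below implements their standard forms on opinions (belief, disbelief, uncertainty), omitting the base rate for brevity; it illustrates the operators rather than the paper’s full set of candidate models.

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    b: float  # belief
    d: float  # disbelief
    u: float  # uncertainty; b + d + u = 1

def discount(trust_in_advisor: Opinion, advice: Opinion) -> Opinion:
    """Transitivity: Alice's derived opinion of Claire, from her opinion
    of Bob and Bob's recommendation of Claire."""
    t, a = trust_in_advisor, advice
    return Opinion(t.b * a.b, t.b * a.d, t.d + t.u + t.b * a.u)

def fuse(x: Opinion, y: Opinion) -> Opinion:
    """Cumulative fusion (consensus) of two independent opinions."""
    k = x.u + y.u - x.u * y.u
    if k == 0:  # both opinions dogmatic (u = 0); average as a fallback
        return Opinion((x.b + y.b) / 2, (x.d + y.d) / 2, 0.0)
    return Opinion((x.b * y.u + y.b * x.u) / k,
                   (x.d * y.u + y.d * x.u) / k,
                   (x.u * y.u) / k)

# Alice fuses Bob's discounted recommendation with her own experience.
alice_bob = Opinion(0.8, 0.1, 0.1)
bob_claire = Opinion(0.7, 0.1, 0.2)
alice_claire = fuse(discount(alice_bob, bob_claire), Opinion(0.4, 0.2, 0.4))
```
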
PathTrust: A Trust-Based Reputation Service for Virtual Organization Formation

Virtual Organizations enable new forms of collaboration for businesses in a networked society. During their formation business partners are selected on an as-needed basis. We consider the problem of using a reputation system to enhance the member selection in Virtual Organizations. The paper identifies the requirements for and the benefits of using a reputation system for this task. We identify attacks and analyze their impact and threat to using reputation systems. Based on these findings we propose the use of a specific model of reputation different from the prevalent models of reputation. The major contribution of this paper is an algorithm (called PathTrust) in this model that exploits the graph of relationships among the participants. It strongly emphasizes the transitive model of trust in a web of trust. We evaluate its performance, especially under attack, and show that it provides a clear advantage in the design of a Virtual Organization infrastructure.

Florian Kerschbaum, Jochen Haller, Yücel Karabulut, Philip Robinson
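
PathTrust’s defining idea is that trust follows paths in the participants’ relationship graph. As an illustration of that transitive reading (not the published algorithm, which scores and normalises paths in its own way), the following search values a path by the product of its edge weights and returns the best simple-path value within a depth bound:

```python
def path_trust(graph, source, target, max_hops=4):
    """graph: node -> {neighbour: trust weight in (0, 1]}.
    Depth-first search over simple paths from source to target, bounded
    to keep the search tractable; returns the best edge-weight product."""
    best = 0.0
    stack = [(source, 1.0, frozenset([source]))]
    while stack:
        node, acc, seen = stack.pop()
        if node == target:
            best = max(best, acc)
            continue
        if len(seen) > max_hops:
            continue  # path already too long; prune
        for nbr, w in graph.get(node, {}).items():
            if nbr not in seen:
                stack.append((nbr, acc * w, seen | {nbr}))
    return best

web = {"alice": {"bob": 0.9}, "bob": {"claire": 0.8}}
path_trust(web, "alice", "claire")  # 0.72 via alice -> bob -> claire
```
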
A Versatile Approach to Combining Trust Values for Making Binary Decisions

In open multi-agent systems, agents typically need to rely on others for the provision of information or the delivery of resources. However, since different agents’ capabilities, goals and intentions do not necessarily agree with each other, trust cannot be taken for granted, in the sense that an agent cannot always be expected to be willing and able to perform optimally from a focal agent’s point of view. Instead, the focal agent has to form and update beliefs about other agents’ capabilities and intentions. Many different approaches, models and techniques have been used for this purpose in the past, which generate trust and reputation values. In this paper, employing one particularly popular trust model, we focus on the way an agent may use such trust values in trust-based decision-making about the value of a binary variable.

We use computer simulation experiments to assess the relative efficacy of a variety of decision-making methods. In doing so, we argue for systematic analysis of such methods beforehand, so that, based on an investigation of characteristics of different methods, different classes of parameter settings can be distinguished. Whether, on average across many random problem instances, a certain method performs better or worse than alternatives is not the issue, given that the agent using the method always exists in a particular setting. We find that combining trust values using our likelihood method gives performance which is relatively robust to changes in the setting an agent may find herself in.

Tomas Klos, Han La Poutré
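
The abstract leaves the likelihood method unspecified. The sketch below shows one standard likelihood-ratio combination in which each source’s trust value is treated as the probability that its binary report is correct; this interpretation is our assumption, not necessarily the paper’s method.

```python
def decide(reports, prior=0.5, eps=1e-6):
    """reports: list of (trust, says_true) pairs, where trust in (0, 1) is
    read as the probability that the source reports the binary variable
    correctly. Returns (decision, posterior probability of True)."""
    odds = prior / (1.0 - prior)
    for trust, says_true in reports:
        t = min(max(trust, eps), 1.0 - eps)  # guard against division by zero
        ratio = t / (1.0 - t)
        odds *= ratio if says_true else 1.0 / ratio
    p_true = odds / (1.0 + odds)
    return p_true >= 0.5, p_true

# Two fairly trusted sources saying True outweigh one weak dissenter.
decision, p = decide([(0.8, True), (0.7, True), (0.55, False)])
```
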
Jiminy: A Scalable Incentive-Based Architecture for Improving Rating Quality

In this paper we present the design, implementation, and evaluation of Jiminy: a framework for explicitly rewarding users who participate in reputation management systems by submitting ratings. To defend against participants who submit random or malicious ratings in order to accumulate rewards, Jiminy facilitates a probabilistic mechanism to detect dishonesty and halt rewards accordingly.

Jiminy’s reward model and honesty detection algorithm are presented and its cluster-based implementation is described. The proposed framework is evaluated using a large sample of real-world user ratings in order to demonstrate its effectiveness. Jiminy’s performance and scalability are analysed through experimental evaluation. The system is shown to scale linearly with the on-demand addition of slave machines to the Jiminy cluster, allowing it to successfully process large problem spaces.

Evangelos Kotsovinos, Petros Zerfos, Nischal M. Piratla, Niall Cameron, Sachin Agarwal
Virtual Fingerprinting as a Foundation for Reputation in Open Systems

The lack of available identity information in attribute-based trust management systems complicates the design of the audit and incident response systems, anomaly detection algorithms, collusion detection/prevention mechanisms, and reputation systems taken for granted in traditional distributed systems. In this paper, we show that as two entities in an attribute-based trust management system interact, each learns one of a limited number of virtual fingerprints describing their communication partner. We show that these virtual fingerprints can be disclosed to other entities in the open system without divulging any attribute or absolute-identity information, thereby forming an opaque pseudo-identity that can be used as the basis for the above-mentioned types of services. We explore the use of virtual fingerprints as the basis of Xiphos, a system that allows reputation establishment without requiring explicit knowledge of entities’ civil identities. We discuss the trade-off between privacy and trust, examine the impacts of several attacks on the Xiphos system, and discuss the performance of Xiphos in a simulated grid computing system.

Adam J. Lee, Marianne Winslett
Towards Automated Evaluation of Trust Constraints

In this paper we explore a mechanism for, and the limitations of, automation of assessment of trustworthiness of systems. We have implemented a system for checking trust constraints expressed within privacy policies as part of an integrated prototype developed within the EU Framework VI Privacy and Identity Management for Europe (PRIME) project [1]. Trusted computing information [2,3] may be taken into account as part of this analysis. This is the first stage of ongoing research and development within PRIME in this area.

Siani Pearson
Provision of Trusted Identity Management Using Trust Credentials

The Trusted Computing Group (TCG) has developed specifications for computing platforms that create a foundation of trust for software processes, based on a small amount of extra hardware [1,2]. Several million commercial desktop and laptop products have been shipped based upon this technology, and there is increasing interest in deploying further products. This paper presents a mechanism for using trusted computing in the context of identity management to deal with the problem of providing migration of identity and confidential information across users’ personal systems and multiple enterprise IT back-end systems in a safe and trusted way.

Siani Pearson, Marco Casassa Mont
Acceptance of Voting Technology: Between Confidence and Trust

Social aspects of security of information systems are often discussed in terms of “actual security” and “perceived security”. This may lead to the hypothesis that e-voting is controversial because in paper voting, actual and perceived security coincide, whereas they do not in electronic systems. In this paper, we argue that the distinction between actual and perceived security is problematic from a philosophical perspective, and we develop an alternative approach, based on the notion of trust. We investigate the different meanings of this notion in computer science, and link these to the philosophical work of Luhmann, who distinguishes between familiarity, confidence and trust. This analysis yields several useful distinctions for discussing trust relations with respect to information technology. We apply our framework to electronic voting, and propose some hypotheses that can possibly explain the smooth introduction of electronic voting machines in the Netherlands in the early nineties.

Wolter Pieters
B-Trust: Bayesian Trust Framework for Pervasive Computing

Without trust, pervasive devices cannot collaborate effectively, and without collaboration, the pervasive computing vision cannot be made a reality. Distributed trust frameworks may support trust and thus foster collaboration in a hostile pervasive computing environment. Existing frameworks deal with foundational properties of computational trust. We here propose a distributed trust framework that satisfies a broader range of properties. Our framework: (i) evolves trust based on a Bayesian formalization, whose trust metric is expressive, yet tractable; (ii) is lightweight; (iii) protects user anonymity, whilst being resistant to “Sybil attacks” (and enhancing detection of two collusion attacks); (iv) integrates a risk-aware decision module. We evaluate the framework through four experiments.

Daniele Quercia, Stephen Hailes, Licia Capra
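
As a minimal illustration of “evolving trust based on a Bayesian formalization”, the classic Beta-distribution model updates a trust estimate from interaction outcomes. B-Trust’s actual metric is richer (expressive and risk-aware), so this is only a baseline sketch.

```python
class BetaTrust:
    """Beta(alpha, beta) trust model: alpha accumulates good outcomes,
    beta accumulates bad ones; expected trust is the Beta mean."""
    def __init__(self, alpha=1.0, beta=1.0):  # uniform prior
        self.alpha, self.beta = alpha, beta

    def update(self, positive: bool) -> None:
        if positive:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def value(self) -> float:
        return self.alpha / (self.alpha + self.beta)

peer = BetaTrust()
for outcome in [True, True, False, True]:
    peer.update(outcome)
# peer.value is now 4/6, i.e. about 0.67
```
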
TATA: Towards Anonymous Trusted Authentication

Mobile devices may share resources even in the presence of untrustworthy devices. To do so, each device may use a computational model that on input of reputation information produces trust assessments. Based on such assessments, the device then decides with whom to share: it will likely end up sharing only with the most trustworthy devices, thus isolating the untrustworthy ones. All of this is, however, theoretical in the absence of a general and distributed authentication mechanism. Currently, distributed trust frameworks do not offer an authentication mechanism that supports user privacy, whilst being resistant to “Sybil attacks”. To fill the gap, we first analyze the general attack space that relates to anonymous authentication as it applies to distributed trust models. We then put forward a scheme that is based on blinded threshold signature: collections of devices certify pseudonyms without seeing them and without relying on a central authority. We finally discuss how the scheme tackles the authentication attacks.

Daniele Quercia, Stephen Hailes, Licia Capra
The Design, Generation, and Utilisation of a Semantically Rich Personalised Model of Trust

“Trust is a fashionable but overloaded term with lots of intertwined meanings” [1], and it has therefore been argued that trust is bad for security. We have designed, developed and evaluated a rich, semantic, human-centric model of trust that can handle the myriad of terms and intertwined meanings that defining trust entails. This model of trust can be personalised on a per-user basis and specialised on a per-domain basis. In this paper we present this model with accompanying experimental evaluation to support it, and introduce a mechanism for the generation of personalised models of trust. Furthermore, we describe how this model has been utilised, through the combination of a policy and trust sharing mechanism, to empower trust-based access control.

Karl Quinn, Declan O’Sullivan, Dave Lewis, Vincent P. Wade
A Trust Assignment Model Based on Alternate Actions Payoff

The human component is a determining factor in the success of the security subsystem. While security policies dictate the set of permissible actions of a user, best practices dictate the efficient mode of execution for these actions. Unfortunately, this efficient mode of execution is not always the easiest to carry out. Users, unaware of the implications of their actions, seek to carry out the easier mode of execution rather than the efficient one, thereby introducing a level of uncertainty unacceptable in high-assurance information systems. In this paper, we present a dynamic trust assignment model that evaluates the system’s trust in user actions over time. We first discuss the interpretation of trust in the context of the statement “the system trusts the users’ actions” as opposed to “the system trusts the user.” We then derive the intuition of our trust assignment framework from a game-theoretic model, where trust updates are performed through “compensatory transfer.” For each efficient action by a user, we assign a trust value equal to the “best claim for compensation”, defined as the maximum difference between the benefits of an alternate action and the selected efficient action. The user’s initial trust and recent actions are both taken into account, and the user is appropriately rewarded or penalized through trust updates. The utility of such a model is two-fold: first, it helps the system identify and educate users who consistently avoid (or are unaware of) the organization’s best practices; second, in the face of an action whose conformance to the organizational policies is contentious, it provides the system or a monitoring agent with a basis, viz. the trust level, to allow or disallow the action. Finally, we demonstrate the application of this model in a Document Management System.

Vidyaraman Sankaranarayanan, Shambhu Upadhyaya
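
The “best claim for compensation” has a direct arithmetic reading, sketched below with hypothetical payoff numbers; the sign convention for rewarding versus penalising trust is our assumption, since the abstract defines only the claim itself.

```python
def best_claim(payoffs: dict, chosen: str) -> float:
    """Maximum difference between the benefit of an alternate action and
    the benefit of the action the user actually selected.
    payoffs: action -> benefit to the organization."""
    return max(payoffs[a] - payoffs[chosen] for a in payoffs if a != chosen)

# Example: the user picked the easy action, not the efficient one.
payoffs = {"efficient": 10.0, "easy": 6.0, "risky": 3.0}
claim = best_claim(payoffs, "easy")   # 4.0: "efficient" beats "easy" by 4
# claim <= 0 would mean the chosen action was efficient (trust rewarded);
# a positive claim quantifies how much trust could be debited.
```
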
Privacy, Reputation, and Trust: Some Implications for Data Protection

The present contribution analyses the connection between privacy and trust, with regard to data protection. In particular, it shows how the need to facilitate trust-based relationships may justify some limitations of privacy (in the sense of a right to self-determination over personal data), but may also provide some grounds for the protection of privacy.

Giovanni Sartor
A Reputation-Based System for Confidentiality Modeling in Peer-to-Peer Networks

The secure transmission of messages via computer networks is, in many scenarios, considered to be a solved problem. However, a related problem, almost as crucial, has been widely ignored: to whom should information be entrusted? We argue that confidentiality modeling is a question of trust. Therefore, the article at hand addresses this problem based on a reputation system. We consider a Peer-to-Peer network whose participants decide whether or not to make information available to other nodes based on the author’s trust relationships. Documents are only forwarded to another node if, according to the sender’s local view, the recipient is considered to be sufficiently trustworthy. In contrast to most existing reputation systems, trust relationships are considered only with respect to a specific domain. Privacy is preserved by limiting the revelation of trust relationships.

Christoph Sorge, Martina Zitterbart
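
The forwarding rule described above reduces to a small predicate: consult the sender’s local, domain-specific trust view before releasing a document. The data layout and threshold below are illustrative assumptions.

```python
def may_forward(local_trust: dict, recipient: str, domain: str,
                threshold: float = 0.75) -> bool:
    """local_trust: (node, domain) -> trust in [0, 1], the sender's local
    view. A document tagged with `domain` is forwarded only to nodes
    deemed sufficiently trustworthy in that specific domain."""
    return local_trust.get((recipient, domain), 0.0) >= threshold

view = {("node42", "medical"): 0.9, ("node42", "finance"): 0.2}
may_forward(view, "node42", "medical")   # True
may_forward(view, "node42", "finance")   # False: trust is domain-specific
```
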
Robust Reputations for Peer-to-Peer Marketplaces

We have developed a suite of algorithms to address two problems confronting reputation systems for large peer-to-peer markets: data sparseness and inaccurate feedback. To mitigate the effect of inaccurate feedback – particularly retaliatory negative feedback – we propose EM-trust, which uses a latent variable statistical model of the feedback process. To handle sparse data, we propose Bayesian versions of both EM-trust and the well-known Percent Positive Feedback system. Using a marketplace simulator, we demonstrate that these algorithms provide more accurate reputations than standard Percent Positive Feedback.

Jonathan Traupman, Robert Wilensky
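
A Bayesian version of Percent Positive Feedback can be stated compactly: smooth the raw positive ratio with a Beta prior so that sparse feedback does not produce extreme reputations. The prior parameters below are illustrative, not the paper’s calibrated values.

```python
def bayesian_ppf(positives: int, negatives: int,
                 prior_pos: float = 8.0, prior_n: float = 10.0) -> float:
    """Posterior-mean estimate of a seller's positive-feedback rate,
    equivalent to adding prior_n pseudo-ratings of which prior_pos
    are positive."""
    return (positives + prior_pos) / (positives + negatives + prior_n)

bayesian_ppf(0, 0)    # 0.80: no data, so the prior mean
bayesian_ppf(1, 0)    # ~0.82: one rating barely moves the estimate
bayesian_ppf(95, 5)   # ~0.94: plenty of data dominates the prior
```
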
From Theory to Practice: Forgiveness as a Mechanism to Repair Conflicts in CMC

In computer-mediated communication (CMC), online members often behave in undesirable ways, creating a need for an active regulating force. Trust and reputation mechanisms have been adopted to address this problem and, in doing so, have eliminated the high costs of employing a human moderator. However, these systems have emphasized the need to ‘punish’ a given offender, while neglecting to account for alternative ways to repair the offence, e.g. by forgiveness. In this paper, we define a theoretical model of forgiveness which is operationalized using a fuzzy logic inference system and then applied in a particular scenario. It is argued that forgiveness in CMC may work as a possible prosocial mechanism, which in the short term can help resolve a given conflict and in the long term can add to an increasingly prosocial and homeostatic environment.

Asimina Vasalou, Jeremy Pitt, Guillaume Piolle
A Novel Protocol for Communicating Reputation in P2P Networks

Many reputation systems concentrate mainly on avoiding untrustworthy agents by communicating reputation. The problem arises that when an agent does not know another agent well, there is no way to notice such ambiguity. This paper presents a new protocol in which an agent can notice that ambiguity using the notion of statistics, and illustrates how it eases the design of agents’ algorithms as well as of existing reputation systems.

Kouki Yonezawa
A Scalable Probabilistic Approach to Trust Evaluation

The Semantic Web will only achieve its full potential when users have trust in its operations and in the quality of the services and information provided, so trust is inevitably a high-level and crucial issue. Modeling trust properly and exploring techniques for establishing computational trust are at the heart of realizing the Semantic Web’s vision. We propose a scalable probabilistic approach to trust evaluation which combines a variety of sources of information and takes four types of costs (operational, opportunity, service charge and consultant fee) as well as utility into consideration during the process of trust evaluation. Our approach gives trust a strict probabilistic interpretation which can assist users in making better decisions when choosing appropriate service providers according to their preferences. A formal robustness analysis has been made to examine the performance of our method.

Xiaoqing Zheng, Zhaohui Wu, Huajun Chen, Yuxin Mao
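
One way to read “takes four types of costs … and utility into consideration” is as a net-expected-utility comparison across candidate providers. The sketch below assumes exactly that reading, with made-up figures; the paper’s probabilistic machinery is not reproduced here.

```python
def net_utility(p_trustworthy: float, utility: float, costs: dict) -> float:
    """Expected gain from engaging a provider: trust-weighted utility
    minus the four cost types named in the abstract."""
    total_cost = sum(costs[k] for k in
                     ("operational", "opportunity",
                      "service_charge", "consultant_fee"))
    return p_trustworthy * utility - total_cost

providers = {
    "svc_a": net_utility(0.9, 100.0, {"operational": 10, "opportunity": 5,
                                      "service_charge": 20,
                                      "consultant_fee": 2}),
    "svc_b": net_utility(0.6, 150.0, {"operational": 8, "opportunity": 5,
                                      "service_charge": 15,
                                      "consultant_fee": 0}),
}
best = max(providers, key=providers.get)  # pick per the user's preferences
```
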

Demonstration Overviews

The Agent Reputation and Trust (ART) Testbed

The Agent Reputation and Trust (ART) Testbed initiative has been launched with the goal of establishing a testbed for agent reputation- and trust-related technologies. The ART Testbed serves in two roles: (1) as a competition forum in which researchers can compare their technologies against objective metrics, and (2) as a suite of tools with flexible parameters, allowing researchers to perform customizable, easily-repeatable experiments. In the Testbed’s artwork appraisal domain, agents, who valuate paintings for clients, may purchase opinions and reputation information from other agents to produce accurate appraisals. The ART Testbed features useful data collection tools for storing, downloading, and replaying game data for experimental analysis.

Karen K. Fullam, Tomas Klos, Guillaume Muller, Jordi Sabater-Mir, K. Suzanne Barber, Laurent Vercouter
Trust Establishment in Emergency Case

Access to medical information, e.g. current medication, blood group, and allergies, is often vital, especially in case of emergency. An emergency physician has to know medication incompatibilities and to access the patient’s treatment history. This raises the issue of the patient’s privacy. A patient grants access to his medical information to his physician because he has a pre-established trust relationship with this physician, but he wants to prevent any other physician from gaining access to his medical information. In an emergency, due to the patient’s unconsciousness, it is difficult to establish a trust relationship between the patient and the emergency physician. In our demonstration, we show how to exploit context information to address the problem of granting access to medical information without a pre-established trust relationship between an emergency physician and a patient.

Laurent Gomez, Ulrich Jansen
Evaluating Trust and Authenticity with Caution

The purpose of this paper is to show how to use Caution, a tool for the specification and evaluation of credential networks. The resulting degrees of support and possibility make it possible to take decisions concerning the authenticity and/or trustworthiness of an unknown entity in an open network. The specification of a credential network and the subsequent computations will be illustrated by examples.

Jacek Jonczy
Using Jiminy for Run-Time User Classification Based on Rating Behaviour

This paper describes an application of our prototype implementation of Jiminy, a scalable distributed architecture for providing participation incentives in online rating schemes. Jiminy is based on an incentive model where participants are explicitly rewarded for submitting ratings, and are debited when they query a participating reputation management system (RMS). Providing explicit incentives increases the quantity of ratings submitted and reduces their bias by removing implicit or hidden rewards, such as those gained through revenge or reciprocal ratings. To prevent participants from submitting arbitrary or dishonest feedback for the purpose of accumulating rewards, Jiminy halts rewards for participants who are deemed dishonest by its probabilistic honesty estimator. Using this estimator, Jiminy can also perform classification of users based on their rating behaviour, which can be further used as criteria for filtering the rating information that users obtain from the RMS.

More background on the theoretical foundations of Jiminy can be found in [1], while [2] provides details on the system design, implementation and performance evaluation.

Evangelos Kotsovinos, Petros Zerfos, Nischal M. Piratla, Niall Cameron
Traust: A Trust Negotiation Based Authorization Service

In this demonstration, we present Traust, a flexible authorization service for open systems. Traust uses the technique of trust negotiation to map globally meaningful assertions regarding a previously unknown client into security tokens that are meaningful to resources deployed in the Traust service’s security domain. This system helps preserve the privacy of both users and the service, while at the same time automating interactions between security domains that would previously have required human intervention (e.g., the establishment of local accounts). We will demonstrate how the Traust service enables the use of trust negotiation to broker access to resources in open systems without requiring changes to protocol standards or applications software.

Adam J. Lee, Marianne Winslett, Jim Basney, Von Welch
The Interactive Cooperation Tournament
How to Identify Opportunities for Selfish Behavior of Computational Entities

Distributed reputation systems are a self-organizing means of supporting trusting decisions. In general, the robustness of distributed reputation systems to misbehavior is evaluated by means of computer-based simulation. However, the fundamental issue arises of how to anticipate the kinds of misbehavior that might succeed. Existing work in this field approaches this issue in an ad hoc manner. Therefore, in this paper, we propose a methodology that is based on interactive simulation with human subjects. The requirements for such interaction are discussed. We show how they are met by the Interactive Cooperation Tournament, a simulation environment for identifying promising counter-strategies to the distributed reputation system EviDirs, which is showcased in our demo.

Philipp Obreiter, Birgitta König-Ries
eTVRA, a Threat, Vulnerability and Risk Assessment Tool for eEurope

Securing the evolving telecommunications environment and establishing trust in its services and infrastructure is crucial for enabling the development of modern public services. The security of the underlying network and services environment for eBusiness is addressed as a crucial area in the eEurope action plan [2]. In response to this, Specialist Task Force (STF) 292, associated with the European Telecommunications Standards Institute (ETSI) TISPAN [3] and under contract from eEurope, has developed a threat, vulnerability and risk assessment (eTVRA) method and tool for use in standardisation. Using the eTVRA method and tool, the threats to a next generation network (NGN) can be analyzed and a set of recommended countermeasures identified that, when implemented, will reduce the overall risk to users of NGNs. In this paper we present the eTVRA method and tool along with the results of using the eTVRA for an analysis of a Voice over IP (VoIP) scenario of the NGN.

Judith E. Y. Rossebø, Scott Cadzow, Paul Sijben
Backmatter
Metadata
Title
Trust Management
Editors
Ketil Stølen
William H. Winsborough
Fabio Martinelli
Fabio Massacci
Copyright Year
2006
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-34297-7
Print ISBN
978-3-540-34295-3
DOI
https://doi.org/10.1007/11755593
