
2005 | Book

Trust Management

Third International Conference, iTrust 2005, Paris, France, May 23-26, 2005. Proceedings

Edited by: Peter Herrmann, Valérie Issarny, Simon Shiu

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This volume constitutes the proceedings of the 3rd International Conference on Trust Management, held in Paris, France, during 23–26 May 2005. The conference follows successful International Conferences in Crete in 2003 and Oxford in 2004. All conferences were organized by iTrust, which is a working group funded as a thematic network by the Future and Emerging Technologies (FET) unit of the Information Society Technologies (IST) program of the European Union. The purpose of the iTrust working group is to provide a forum for cross-disciplinary investigation of the applications of trust as a means of increasing security, building confidence and facilitating collaboration in dynamic open systems. The notion of trust has been studied independently by different academic disciplines, which has helped us to identify and understand different aspects of trust. The aim of this conference was to provide a common forum, bringing together researchers from different academic branches, such as the technology-oriented disciplines, law, social sciences and philosophy, in order to develop a deeper and more fundamental understanding of the issues and challenges in the area of trust management in dynamic open systems. The response to this conference was excellent; from the 71 papers submitted to the conference, we selected 21 full papers and 4 short papers for presentation. The program also included two keynote addresses, given by Steve Marsh from National Research Council Canada, Institute for Information Technology, and Steve Kimbrough from the University of Pennsylvania; an industrial panel; 7 technology demonstrations; and a full day of tutorials.

Table of Contents

Frontmatter

Third International Conference on Trust Management

Keynote Addresses

Foraging for Trust: Exploring Rationality and the Stag Hunt Game

Trust presents a number of problems and paradoxes, because existing theory is not fully adequate for understanding why there is so much of it, why it occurs, and so forth. These problems and paradoxes of trust are vitally important, for trust is thought to be the essential glue that holds societies together. This paper explores the generation of trust with two simple, but very different models, focusing on repeated play of the Stag Hunt game. A gridscape model examines creation of trust among cognitively basic simple agents. A Markov model examines play between two somewhat more sophisticated agents. In both models, trust emerges robustly. Lessons are extracted from these findings which point to a new way of conceiving rationality, a way that is broadly applicable and can inform future investigations of trust.

Steven O. Kimbrough
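The Stag Hunt dynamics described in the abstract can be sketched with a minimal repeated-play model. The payoff values and the myopic best-response rule below are illustrative assumptions, not taken from the paper:

```python
# Stag Hunt: two hunters each choose Stag (cooperate) or Hare (defect).
# Hunting stag pays off only if both cooperate; hare is a safe solo option.
# Payoff values are illustrative, not the paper's.
PAYOFFS = {
    ("stag", "stag"): (4, 4),   # mutual cooperation: best joint outcome
    ("stag", "hare"): (0, 3),   # lone stag hunter gets nothing
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),   # safe but inferior equilibrium
}

def best_response(opponent_move: str) -> str:
    """Myopic best response to the opponent's last move."""
    stag_payoff = PAYOFFS[("stag", opponent_move)][0]
    hare_payoff = PAYOFFS[("hare", opponent_move)][0]
    return "stag" if stag_payoff >= hare_payoff else "hare"

def repeated_play(a_start: str, b_start: str, rounds: int = 10):
    """Each round, both agents best-respond to the other's previous move."""
    a, b = a_start, b_start
    history = [(a, b)]
    for _ in range(rounds):
        a, b = best_response(b), best_response(a)
        history.append((a, b))
    return history

# From mutual cooperation, trust (stag/stag) is self-sustaining:
print(repeated_play("stag", "stag", 3)[-1])   # ('stag', 'stag')
# From mutual defection, agents stay locked in the hare equilibrium:
print(repeated_play("hare", "hare", 3)[-1])   # ('hare', 'hare')
```

The two stable outcomes illustrate why the Stag Hunt is a natural frame for trust: both equilibria are self-reinforcing, and the question is how agents reach the cooperative one.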
Trust, Untrust, Distrust and Mistrust – An Exploration of the Dark(er) Side

There has been a lot of research and development in the field of computational trust in the past decade. Much of it has acknowledged or claimed that trust is a good thing. We think it’s time to look at the other side of the coin and ask the questions why is it good, what alternatives are there, where do they fit, and is our assumption always correct?

We examine the need to address the concepts of Trust, Mistrust and Distrust, how they interlink and how they affect what goes on around us and within the systems we create. Finally, we introduce the phenomenon of ‘Untrust,’ which resides in the space between trusting and distrusting. We argue that the time is right, given the maturity and breadth of the field of research in trust, to consider how untrust, distrust and mistrust work, why they can be useful in and of themselves, and where they can shine.

Stephen Marsh, Mark R. Dibben

Full Papers

Security and Trust in the Italian Legal Digital Signature Framework

The early adoption of a national, legal digital signature framework in Italy has brought forth a series of problems and vulnerabilities. In this paper we describe each of them, showing how in each case the issue does not lie in the algorithms and technologies adopted, but in faulty implementations, bad design choices, or legal and methodological issues. We also show which countermeasures would be appropriate to reduce the risks, and the impact of these vulnerabilities on the trust-based framework which gives legal value to digital signatures. We think that this study can help to avoid similar mistakes, now that under EU directives a similar architecture is planned or under development in most EU countries.

Stefano Zanero
Specifying Legal Risk Scenarios Using the CORAS Threat Modelling Language
Experiences and the Way Forward

The paper makes two main contributions: (1) It presents experiences from using the CORAS language for security threat modelling to specify legal risk scenarios. These experiences are summarised in the form of requirements to a more expressive language providing specific support for the legal domain. (2) Its second main contribution is to present ideas towards the fulfilment of these requirements. More specifically, it extends the CORAS conceptual model for security risk analysis with legal concepts and associations. Moreover, based on this extended conceptual model, it introduces a number of promising language constructs addressing some of the identified deficiencies.

Fredrik Vraalsen, Mass Soldal Lund, Tobias Mahler, Xavier Parent, Ketil Stølen
On Deciding to Trust

Participating in electronic markets and conducting business online inevitably involves the decision to trust other participants. In this paper we consider trust as a concept that self-interested agents populating an electronic marketplace can use to take decisions on who they are going to transact with. We are interested in looking at the effects that trust and its attributes as well as trust dispositions have on individual agents and the electronic market as a whole. A market scenario is presented which was used to build a simulation program and then run a series of experiments.

Michael Michalakopoulos, Maria Fasli
Trust Management Survey

Trust is an important tool in human life, as it enables people to cope with the uncertainty caused by the free will of others. Uncertainty and uncontrollability are also issues in computer-assisted collaboration and electronic commerce in particular. A computational model of trust and its implementation can alleviate this problem.

This survey is directed to an audience wishing to familiarize themselves with the field, for example to locate a research target or implement a trust management system. It concentrates on providing a general overview of the state of the art, combined with examples of things to take into consideration both when modelling trust in general and building a solution for a certain phase in trust management, be it trust relationship initialization, updating trust based on experience or determining what trust should have an effect on.

Sini Ruohomaa, Lea Kutvonen
Can We Manage Trust?

The term trust management suggests that trust can be managed, for example by creating trust, by assessing trustworthiness, or by determining optimal decisions based on specific levels of trust. The problem to date is that trust management in online environments is a diverse and ill-defined discipline. In fact, the term trust management is being used with very different meanings in different contexts. This paper examines various approaches related to online activities where trust is relevant and where there is potential for trust management. In some cases, trust management has been defined with specific meanings. In other cases, there are well established disciplines with different names that could also be called trust management. Despite the confusion in terminology, trust management, as a general approach, represents a promising development for making online transactions more dependable, and in the long term for increasing the social capital of online communities.

Audun Jøsang, Claudia Keser, Theo Dimitrakos
Operational Models for Reputation Servers

This paper devises a classification system for reputation systems based on two axes, namely: who performs the evaluation of a subject’s reputation, and how the information is collected by the reputation system. This leads to 4 possible operational models for reputation systems, termed the Voting Model, the Opinion Poll Model, the MP Model and the Research Model, each of which is then analyzed. Finally, the paper postulates the inherent trustworthiness of each operational model, and concludes with a hypothesis of how these systems might evolve in the future.

D. W. Chadwick
A Representation Model of Trust Relationships with Delegation Extensions

Logic languages establish a formal framework to solve authorization and delegation conflicts. However, we consider that a visual representation is necessary, since graphs are more expressive and understandable than logic languages. In this paper, after overviewing previous works using logic languages, we present a proposal for graph representation of authorization and delegation statements. Our proposal is based on the solution of Varadharajan et al., though it improves several elements of that work. We also discuss the possible implementation of our proposal using attribute certificates.

Isaac Agudo, Javier Lopez, Jose A. Montenegro
Affect and Trust

A number of models of trust, particularly vis-à-vis ecommerce, have been proposed in the literature. While some of these models present intriguing insights, they all assume that trust is based on logical choice. Furthermore, while these models recognize the importance of the subject’s perception of reality in its evaluation, none of the proposed models has critically analyzed the affective nature of perception, particularly in light of recent work in neurology and social psychology. This paper examines this concept of affect and then proposes a new, affect-based model in light of modern science. How this new model addresses previous shortcomings is demonstrated. Directions for future research are then proposed.

Lewis Hassell
Reinventing Forgiveness: A Formal Investigation of Moral Facilitation

Reputation mechanisms have responded to the ever-increasing demand for online policing by “collecting, distributing and aggregating feedback about participants’ past behavior”. But unlike in human societies, where forbidden actions are coupled with legal repercussions, reputation systems fulfill a socially-oriented duty by alerting the community’s members to one’s good standing. The decision to engage in collaborative efforts with another member is chiefly placed in the hands of each individual. This form of people empowerment sans litigation brings forth a moral concern: in human–human interactions, a violation of norms and standards is unavoidable but not unforgivable. Driven by the prosocial benefits of forgiveness, this paper proposes ways of facilitating forgiveness between offender and victim through the use of personal ‘moral’ agents. We suggest that a richer mechanism for regulating online behaviour can be developed, one that integrates trust, reputation and forgiveness.

Asimina Vasalou, Jeremy Pitt
Modeling Social and Individual Trust in Requirements Engineering Methodologies

When we model and analyze trust in organizations or information systems we have to take into account two different levels of analysis: social and individual. Social levels define the structure of organizations, whereas individual levels focus on individual agents. This is particularly important when capturing security requirements where a “normally” trusted organizational role can be played by an untrusted individual.

Our goal is to model and analyze the two levels, finding the link between them and supporting the automatic detection of conflicts that can come up when agents play roles in the organization. We also propose a formal framework that allows for the automatic verification of security requirements between the two levels using Datalog and has been implemented in a CASE tool.

Paolo Giorgini, Fabio Massacci, John Mylopoulos, Nicola Zannone
Towards a Generic Trust Model – Comparison of Various Trust Update Algorithms

Research in the area of trust and reputation systems has put a lot of effort in developing various trust models and associated trust update algorithms that support users or their agents with different behavioral profiles. While each work on its own is particularly well suited for a certain user group, it is crucial for users employing different trust representations to have a common understanding about the meaning of a given trust statement.

The contributions of this paper are three-fold: Firstly we present the UniTEC generic trust model that provides a common trust representation for the class of trust update algorithms based on experiences. Secondly, we show how several well-known representative trust-update algorithms can easily be plugged into the UniTEC system, how the mappings between the generic trust model and the algorithm-specific trust models are performed, and most importantly, how our abstraction from algorithm-specific details in the generic trust model enables users using different algorithms to interact with each other and to exchange trust statements. Thirdly we present the results of our comparative evaluation of various trust update algorithms under a selection of test scenarios.

Michael Kinateder, Ernesto Baschny, Kurt Rothermel
A Probabilistic Trust Model for Handling Inaccurate Reputation Sources

This research aims to develop a model of trust and reputation that will ensure good interactions amongst software agents in large scale open systems in particular. The following are key drivers for our model: (1) agents may be self-interested and may provide false accounts of experiences with other agents if it is beneficial for them to do so; (2) agents will need to interact with other agents with which they have no past experience. Against this background, we have developed TRAVOS (Trust and Reputation model for Agent-based Virtual OrganisationS), which models an agent’s trust in an interaction partner. Specifically, trust is calculated using probability theory, taking account of past interactions between agents. When there is a lack of personal experience between agents, the model draws upon reputation information gathered from third parties. In this latter case, we pay particular attention to handling the possibility that reputation information may be inaccurate.

Jigar Patel, W. T. Luke Teacy, Nicholas R. Jennings, Michael Luck
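Trust computed "using probability theory taking account of past interactions," as the abstract describes, is commonly instantiated as the expected value of a Beta distribution over outcome counts. The sketch below follows that standard pattern; the function names are illustrative, not the paper's API, and TRAVOS's filtering of inaccurate reports is omitted:

```python
def beta_trust(successes: int, failures: int) -> float:
    """Expected probability of a good outcome under a Beta(s+1, f+1)
    posterior. With no evidence this is 0.5 (maximum uncertainty)."""
    return (successes + 1) / (successes + failures + 2)

def combined_trust(own, reports) -> float:
    """Pool direct experience with third-party reputation reports by
    summing their (success, failure) counts. A full TRAVOS-style model
    would additionally discount reports judged inaccurate; that step
    is omitted in this sketch."""
    s = own[0] + sum(r[0] for r in reports)
    f = own[1] + sum(r[1] for r in reports)
    return beta_trust(s, f)

print(beta_trust(0, 0))   # 0.5  (no evidence yet)
print(beta_trust(8, 2))   # 0.75
# One direct interaction plus two third-party reports:
print(round(combined_trust((1, 0), [(3, 0), (2, 1)]), 3))   # 0.778
```

The +1/+2 terms come from a uniform prior, so a stranger starts at 0.5 rather than 0 or 1, and trust moves toward the observed success rate as evidence accumulates.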
Trust as a Key to Improving Recommendation Systems

In this paper we propose a method that can be used to avoid the problem of sparsity in recommendation systems and thus to provide improved quality recommendations. The concept is based on the idea of using trust relationships to support the prediction of user preferences. We present the method as used in a centralized environment; we discuss its efficiency and compare its performance with other existing approaches. Finally we give a brief outline of the potential application of this approach to a decentralized environment.

Georgios Pitsilis, Lindsay Marshall
Alleviating the Sparsity Problem of Collaborative Filtering Using Trust Inferences

Collaborative Filtering (CF), the prevalent recommendation approach, has been successfully used to identify users that can be characterized as “similar” according to their logged history of prior transactions. However, the applicability of CF is limited due to the sparsity problem, which refers to a situation in which transactional data are lacking or insufficient. In an attempt to provide high-quality recommendations even when data are sparse, we propose a method for alleviating sparsity using trust inferences. Trust inferences are transitive associations between users in the context of an underlying social network and are valuable sources of additional information that help in dealing with the sparsity and cold-start problems. A trust computational model has been developed that makes it possible to define the subjective notion of trust by applying confidence and uncertainty properties to network associations. We compare our method with classic CF, which does not consider any transitive associations. Our experimental results indicate that our method of trust inferences significantly improves the performance of the classic CF method.

Manos Papagelis, Dimitris Plexousakis, Themistoklis Kutsuras
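The transitive-association idea can be sketched as follows. The propagation rule here is a generic multiplicative one chosen for illustration; the paper's actual model additionally carries confidence and uncertainty properties, which this sketch omits:

```python
# Direct trust ratings between users (illustrative values in [0, 1]).
direct = {
    ("alice", "bob"): 0.9,
    ("bob", "carol"): 0.8,
    ("alice", "dave"): 0.4,
}

def inferred_trust(source: str, target: str, max_hops: int = 3) -> float:
    """Infer trust along chains of direct associations: take the best
    available path, multiplying trust at each hop so that longer chains
    carry more uncertainty. Returns 0.0 if no path exists within
    max_hops."""
    if (source, target) in direct:
        return direct[(source, target)]
    if max_hops <= 1:
        return 0.0
    best = 0.0
    for (a, b), t in direct.items():
        if a == source:
            best = max(best, t * inferred_trust(b, target, max_hops - 1))
    return best

# Alice has no direct rating for Carol, but a chain via Bob exists,
# so a rating can be predicted even under sparse data:
print(round(inferred_trust("alice", "carol"), 2))   # 0.72
```

This is exactly the lever against sparsity: two users with no co-rated items can still be connected through intermediate trust links, expanding the pool of usable neighbours.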
Experience-Based Trust: Enabling Effective Resource Selection in a Grid Environment

The Grid vision is to allow heterogeneous computational resources to be shared and utilised globally. Grid users are able to submit tasks to remote resources for execution. However, these resources may be unreliable and there is a risk that submitted tasks may fail or cost more than expected. The notion of trust is often used in agent-based systems to manage such risk, and in this paper we apply trust to the problem of resource selection in Grid computing. We propose a number of resource selection algorithms based upon trust, and evaluate their effectiveness in a simulated Grid.

Nathan Griffiths, Kuo-Ming Chao
Interactive Credential Negotiation for Stateful Business Processes

Business Processes for Web Services are the new paradigm for lightweight enterprise integration. They cross organizational boundaries, are provided by entities that see each other just as business partners, and require access control mechanisms based on trust management. Stateful Business Processes, enforcing separation of duties or service limitations based on past or current usage, pose additional research challenges. Clients, which may not know the right set of credentials to supply to each partner, may end up in dead-ends, and servers should help them find out which credentials must be revoked and which missing ones would grant access to a particular resource.

We propose a logical framework and an interactive algorithm based on negotiation of credentials for access control that works for Stateful Business Processes. We show that our algorithm is sound (no access is granted to unauthorized clients), complete (authorized clients are granted access) and resistant to DoS attempts.

Hristo Koshutanski, Fabio Massacci
An Evidence Based Architecture for Efficient, Attack-Resistant Computational Trust Dissemination in Peer-to-Peer Networks

Emerging peer to peer (P2P) applications have a requirement for decentralised access control. Computational trust systems address this, achieving security through collaboration. This paper surveys current work on overlay networks, trust and identity certification. Our focus is on the particular problem of distributing evidence for use in trust-based security decisions. We present a system we have implemented that solves this in a highly scalable way, and resists attacks such as false recommendations and collusion.

David Ingram
Towards an Evaluation Methodology for Computational Trust Systems

Trust-based security frameworks are increasingly popular, yet few evaluations have been conducted. As a result, no guidelines or evaluation methodology have emerged that define the measure of security of such models. This paper discusses the issues involved in evaluating these models, using the SECURE trust-based framework as a case study.

Ciarán Bryce, Nathan Dimmock, Karl Krukow, Jean-Marc Seigneur, Vinny Cahill, Waleed Wagealla
Trusted Computing: Strengths, Weaknesses and Further Opportunities for Enhancing Privacy

This paper assesses how trusted computing technology can enhance privacy, both in the short and long term, and provides a variety of examples. In addition, potential negative privacy implications are assessed and outstanding issues are highlighted that need to be addressed before trusted computing could be provided in a privacy-friendly manner within the consumer space.

Siani Pearson
Trust Transfer: Encouraging Self-recommendations Without Sybil Attack

Trading privacy for trust thanks to the linkage of pseudonyms has been proposed to mitigate the inherent conflict between trust and privacy. This necessitates fusionym, that is, the calculation of a unique trust value supposed to reflect the overall trustworthiness brought by the set of linked pseudonyms. In fact, some pieces of evidence may overlap and be overcounted, leading to an incorrect trust value. In this approach, self-recommendations are possible during the privacy/trust trade. However, this means that Sybil attacks, where thousands of virtual identities belonging to the same real-world entity recommend each other, are potentially easier to carry out, as self-recommendations are an integral part of the attack. In this paper, trust transfer is used to achieve safe fusionym and protect against Sybil attacks when pieces of evidence are limited to direct observations and recommendations based on the count of event outcomes. Trust transfer implies that recommendations move some of the trustworthiness of the recommending entity to the trustworthiness of the trustee. It is demonstrated and tailored to email anti-spam settings.

Jean-Marc Seigneur, Alan Gray, Christian Damsgaard Jensen
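The core trust-transfer idea above — a recommendation moves some of the recommender's own trustworthiness to the trustee — can be sketched in a few lines. The transfer rule and values below are illustrative assumptions, not the paper's exact scheme:

```python
def transfer(trust: dict, recommender: str, trustee: str, amount: float) -> None:
    """Move `amount` of the recommender's trust balance to the trustee.
    Because recommending spends the recommender's own trustworthiness,
    a swarm of fresh Sybil identities has nothing to transfer and so
    cannot inflate each other's trust."""
    moved = min(amount, trust.get(recommender, 0.0))
    trust[recommender] = trust.get(recommender, 0.0) - moved
    trust[trustee] = trust.get(trustee, 0.0) + moved

balances = {"established_sender": 10.0, "newcomer": 0.0, "sybil": 0.0}
transfer(balances, "established_sender", "newcomer", 3.0)
print(balances["newcomer"])   # 3.0 (and the recommender drops to 7.0)
# A zero-trust Sybil identity cannot boost another one:
transfer(balances, "sybil", "sybil2", 5.0)
print(balances["sybil2"])     # 0.0
```

Conserving total trust is what defeats the Sybil attack: creating identities adds accounts, not trustworthiness, so self-recommendations among them are worthless.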
Privacy-Preserving Search and Updates for Outsourced Tree-Structured Data on Untrusted Servers

Although tree-based index structures have proven their advantages to both traditional and modern database applications, they introduce numerous research challenges as database services are outsourced to untrusted servers. In the outsourced database service model, crucial security research questions mainly relate to data confidentiality, data and user privacy, authentication and data integrity. To the best of our knowledge, however, none of the previous research has radically addressed the problem of preserving privacy for basic operations on such outsourced search trees. Basic operations of search trees/tree-based index structures include search (to answer different query types) and updates (modification, insert, delete). In this paper, we will discuss security issues in outsourced databases that come together with search trees, and present techniques to ensure privacy in the execution of these trees’ basic operations on the untrusted server. Our techniques allow clients to operate on their outsourced tree-structured data on untrusted servers without revealing information about the query, result, and outsourced data itself.

Tran Khanh Dang

Short Papers

Persistent and Dynamic Trust: Analysis and the Related Impact of Trusted Platforms

This paper reviews trust from both a social and technological perspective and proposes a distinction between persistent and dynamic trust. Furthermore, this analysis is applied within the context of trusted computing technology.

Siani Pearson, Marco Casassa Mont, Stephen Crane
Risk Models for Trust-Based Access Control (TBAC)

The importance of risk in trust-based systems is well established. This paper presents a novel model of risk and decision-making based on economic theory. Use of the model is illustrated by way of a collaborative spam detection application.

Nathan Dimmock, Jean Bacon, David Ingram, Ken Moody
Combining Trust and Risk to Reduce the Cost of Attacks

There have been a number of proposals for trust and reputation-based systems. Some have been implemented, some have been analysed only by simulation. In this paper we first present a general architecture for a trust-based system, placing special emphasis on the management of context information. We investigate the effectiveness of our architecture by simulating distributed attacks on a network that uses trust/reputation as a basis for access control decisions.

Daniel Cvrček, Ken Moody
IWTrust: Improving User Trust in Answers from the Web

Users of question answering systems may find answers lacking any supporting information insufficient for determining trust levels. Once those question answering systems begin to rely on source information that varies greatly in quality and depth, as is typical in web settings, users may trust answers even less. We address this problem by augmenting answers with optional information about the sources that were used in the answer generation process. In addition, we introduce a trust infrastructure, IWTrust, which enables computations of trust values for answers from the Web. Users of IWTrust have access to the sources used in answer computation, along with trust values for those sources, so they are better able to judge answer trustworthiness.

Ilya Zaihrayeu, Paulo Pinheiro da Silva, Deborah L. McGuinness

Demonstration Overviews

Trust Record: High-Level Assurance and Compliance

Events such as Enron’s collapse have changed regulatory and governance trends, increasing executive accountability for the way companies are run and therefore for the underlying critical IT systems. Such IT functions are increasingly outsourced, yet executives remain accountable. This paper presents a Trust Record demonstrator that provides a real-time audit report helping to assure executives that their (outsourced) IT infrastructures are being managed in line with corporate policies and legal regulations.

Adrian Baldwin, Yolanta Beres, David Plaquin, Simon Shiu
Implementation of the SECURE Trust Engine

We present the implementation of Secure and a SPAM filter that uses it to classify messages as SPAM based on trust in senders.

Ciarán Bryce, Paul Couderc, Jean-Marc Seigneur, Vinny Cahill
The CORAS Tool for Security Risk Analysis

The CORAS Tool for model-based security risk analysis supports documentation and reuse of risk analysis results through integration of different risk analysis and software development techniques and tools. Built-in consistency checking facilitates the maintenance of the results as the target of analysis and risk analysis results evolve.

Fredrik Vraalsen, Folker den Braber, Mass Soldal Lund, Ketil Stølen
Towards a Grid Platform Enabling Dynamic Virtual Organisations for Business Applications

In this paper we describe the demonstration of selected security & contract management capabilities of an early prototype infrastructure enabling dynamic Virtual Organisations for the purpose of providing virtualised services and resources that can be offered on-demand, following a utility computing model, and integrated into aggregated services whose components may be distributed across enterprise boundaries. The ideas underpinning this prototype have been investigated in the context of European projects TrustCoM [1], ELeGI [2] and GRASP [4] where initial prototype development has taken place.

T. Dimitrakos, G. Laria, I. Djordjevic, N. Romano, F. D’Andria, V. Trpkovski, P. Kearney, M. Gaeta, P. Ritrovato, L. Schubert, B. Serhan, L. Titkov, S. Wesner
Multimedia Copyright Protection Platform Demonstrator

The work presented in this paper consists of the development of a portable platform to protect the copyright and distribution rights of digital contents, and to empirically demonstrate the capacity of several marking and tracing algorithms. This platform is used to verify, at a practical level, the strength properties of digital watermarking and fingerprinting marks. Initially, two watermarking algorithms, one based on spread-spectrum techniques and the other based on QIM (Quantization Index Modulation), have been implemented. Moreover, we use these watermarking algorithms to embed a fingerprinting code, based on code concatenation, equipped with an efficient tracing algorithm. In this paper we focus on the implementation issues of the Java-based platform, which consists of three main packages that are fully described.

Miguel Soriano, Marcel Fernandez, Elisa Sayrol, Joan Tomas, Joan Casanellas, Josep Pegueroles, Juan Hernández-Serrano
ST-Tool: A CASE Tool for Modeling and Analyzing Trust Requirements

ST-Tool is a graphical tool integrating an agent-oriented requirements engineering methodology with tools for the formal analysis of models. Essentially, the tool allows designers to draw visual models representing functional, security and trust requirements of systems and, then, to verify formally and automatically their correctness and consistency through different model-checkers.

P. Giorgini, F. Massacci, J. Mylopoulos, A. Siena, N. Zannone
The VoteSecure™ Secure Internet Voting System

We have developed a system that supports remote secure internet voting and ensures the secrecy of the voting process by means of employing advanced cryptography techniques. In this paper, we present this system, called VoteSecure™. The cryptography-related part performs the encryption and “breaking” of the submitted ballots into parts that are distributed to tally members. These ballots can be later reconstructed by means of the cooperation of at least T out of N tally members. We argue that our system is both innovative and useful; it can be used as a stand-alone system or integrated with an existing e-government and/or e-community platform that could take advantage of its functionality.

Periklis Akritidis, Yiannis Chatzikian, Manos Dramitinos, Evangelos Michalopoulos, Dimitrios Tsigos, Nikolaos Ventouras
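The T-out-of-N ballot reconstruction described above is characteristic of threshold secret sharing. A minimal Shamir-style sketch is shown below; it is purely illustrative of the T-of-N principle, and the actual VoteSecure scheme may differ (and would use far larger parameters):

```python
import random

P = 2**31 - 1  # a prime modulus; real systems use much larger primes

def split(secret: int, n: int, t: int):
    """Split `secret` into n shares, any t of which reconstruct it, by
    evaluating a random degree-(t-1) polynomial at x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret; fewer than
    t shares reveal nothing about it."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse, since P is prime
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(secret=424242, n=5, t=3)
print(reconstruct(shares[:3]))    # 424242 — any 3 of the 5 shares suffice
print(reconstruct(shares[2:5]))   # 424242
```

The design rationale matches the voting setting: no single tally member can open a ballot, while any quorum of T members can, so secrecy survives the compromise of up to T-1 members.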
Backmatter
Metadata
Title
Trust Management
Edited by
Peter Herrmann
Valérie Issarny
Simon Shiu
Copyright year
2005
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-32040-1
Print ISBN
978-3-540-26042-4
DOI
https://doi.org/10.1007/b136639
