
About this Book

This volume contains the proceedings of IFIPTM 2010, the 4th IFIP WG 11.11 International Conference on Trust Management, held in Morioka, Iwate, Japan, during June 16-18, 2010. IFIPTM 2010 provided a truly global platform for the reporting of research, development, policy, and practice in the interdependent areas of privacy, security, and trust. Building on the traditions inherited from the highly successful iTrust conference series, the IFIPTM 2007 conference in Moncton, New Brunswick, Canada, the IFIPTM 2008 conference in Trondheim, Norway, and the IFIPTM 2009 conference at Purdue University in Indiana, USA, IFIPTM 2010 focused on trust, privacy, and security from multidisciplinary perspectives. The conference is an arena for discussion of relevant problems from both research and practice in academia, business, and government. IFIPTM 2010 was an open IFIP conference.

The program of the conference featured both theoretical research papers and reports of real-world case studies. IFIPTM 2010 received 61 submissions from 25 different countries: Japan (10), UK (6), USA (6), Canada (5), Germany (5), China (3), Denmark (2), India (2), Italy (2), Luxembourg (2), The Netherlands (2), Switzerland (2), Taiwan (2), Austria, Estonia, Finland, France, Ireland, Israel, Korea, Malaysia, Norway, Singapore, Spain, and Turkey. The Program Committee selected 18 full papers for presentation and inclusion in the proceedings. In addition, the program and the proceedings include two invited papers by academic experts in the fields of trust management, privacy, and security, namely, Toshio Yamagishi and Pamela Briggs.

Table of Contents

Frontmatter

Privacy and Trust

Schemes for Privately Computing Trust and Reputation

Abstract
Trust and reputation systems in distributed environments attract widespread interest as online communities become an inherent part of the daily routine of Internet users. Several models for trust and reputation have been suggested recently, among them the Knots model [8]. The Knots model provides a member of a community with a method to compute the reputation of other community members. Reputation in this model is subjective and tailored to the taste and choices of the computing member and those members that have similar views, i.e., the computing member’s Trust-Set. A discussion on privately computing trust in the Knots model appears in [16]. The present paper extends and improves [16] by presenting three efficient and private protocols to compute trust in trust-based reputation systems that use any Trust-Set-based model. The protocols in the paper are rigorously proved to be private against a semi-honest adversary under standard assumptions on the existence of a homomorphic, semantically secure, public-key encryption system. The protocols are analyzed and compared in terms of their privacy characteristics and communication complexity.
Nurit Gal-Oz, Niv Gilboa, Ehud Gudes
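
The abstract's key building block, an additively homomorphic and semantically secure public-key encryption scheme, can be illustrated with a toy Paillier cryptosystem. This is only a sketch with insecure toy parameters, not the paper's actual protocols; all names and numbers here are invented for illustration.

```python
import random
from math import gcd

def keygen(p=293, q=433):
    """Toy Paillier key generation (tiny, insecure primes for illustration)."""
    n = p * q
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)      # lcm(p-1, q-1)
    g = n + 1                                          # standard simplification
    n2 = n * n
    L = lambda x: (x - 1) // n
    mu = pow(L(pow(g, lam, n2)), -1, n)                # modular inverse (Python 3.8+)
    return (n, g), (lam, mu, n)

def encrypt(pk, m):
    n, g = pk
    n2 = n * n
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2          # Enc(m) = g^m * r^n mod n^2

def decrypt(sk, pk, c):
    lam, mu, n = sk
    n2 = n * n
    return (pow(c, lam, n2) - 1) // n * mu % n

# Homomorphic aggregation: members encrypt their trust ratings; anyone can
# multiply the ciphertexts to obtain an encryption of the *sum* without
# ever seeing an individual rating.
pk, sk = keygen()
ratings = [4, 7, 2, 5]
ciphertexts = [encrypt(pk, r) for r in ratings]
n2 = pk[0] ** 2
aggregate = 1
for c in ciphertexts:
    aggregate = aggregate * c % n2
print(decrypt(sk, pk, aggregate))   # 18, the sum of the ratings
```

The multiplication of ciphertexts corresponds to addition of plaintexts, which is exactly the property such private trust-aggregation protocols rely on.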

Self-service Privacy: User-Centric Privacy for Network-Centric Identity

Abstract
User privacy has become a hot topic within the identity management arena. However, the field still lacks comprehensive frameworks even though most identity management solutions include built-in privacy features. This study explores how best to set up a single control point for users to manage privacy policies for their personal information, which may be distributed (scattered) across a set of network-centric identity management systems. Our goal is a user-centric approach to privacy management. As the number of schemas and frameworks is very high, we chose to validate our findings with a prototype based on the Liberty Alliance architecture and protocols.
Jose M. del Alamo, Miguel A. Monjas, Juan C. Yelmo, Beatriz San Miguel, Ruben Trapero, Antonio M. Fernandez

Naïve Security in a Wi-Fi World

Abstract
Despite nearly ubiquitous access to wireless networks, many users still engage in risky behaviors, make bad choices, or are seemingly indifferent to the concerns that security and privacy researchers work diligently to address. At present, research on user attitudes toward security and privacy on public Wi-Fi networks is rare. This paper explores Wi-Fi security and privacy by analyzing users’ current actions and reluctance to change. Through interviews and concrete demonstrations of vulnerability, we show that users make security choices based on (often mistaken) analogy to the physical world. Moreover, despite increased awareness of vulnerability, users remain ingenuous, failing to develop a realistic view of risk. We argue that our data present a picture of users engaged in a form of naïve security. We believe our results will be beneficial to researchers in the area of security-tool design, in particular with respect to better informing user choices.
Colleen Swanson, Ruth Urner, Edward Lank

Security Through Trust

Securing Class Initialization

Abstract
Language-based information-flow security is concerned with specifying and enforcing security policies for information flow via language constructs. Although much progress has been made on understanding information flow in object-oriented programs, the impact of class initialization on information flow has so far been largely unexplored. This paper turns the spotlight on the security implications of class initialization. We discuss the subtleties of information propagation when classes are initialized and propose a formalization that illustrates how to track information flow in the presence of class initialization by a type-and-effect system for a simple language. We show how to extend the formalization to a language with exception handling.
Keiko Nakata, Andrei Sabelfeld
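
The channel the paper studies can be mimicked in plain Python (an analogue of static initializers, not the paper's formal language): lazy initialization runs at most once, on first use, and whether it has run is an observable effect that can depend on secret data. Everything below is an invented illustration.

```python
# Illustrative sketch: lazy class initialization as an implicit flow.
# `init_log` plays the role of an attacker-visible side effect of
# initialization (e.g. loading a resource, touching the file system).
init_log = []

class Lazy:
    _ready = False

    @classmethod
    def use(cls):
        if not cls._ready:              # "class initializer": runs on first use only
            init_log.append(cls.__name__)
            cls._ready = True

def branch_on_secret(secret: bool):
    if secret:
        Lazy.use()                      # initialization happens only if secret is True

branch_on_secret(False)
print(init_log)                         # [] -- nothing observed yet
branch_on_secret(True)
print(init_log)                         # ['Lazy'] -- the secret leaked into a public effect
```

An observer who never sees `secret` still learns its value from whether initialization occurred, which is why a type-and-effect system must track these initializer effects.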

xESB: An Enterprise Service Bus for Access and Usage Control Policy Enforcement

Abstract
Enforcing complex policies that span organizational domains is an open challenge. Current work on SOA policy enforcement splits security in logical components that can be distributed across domains, but does not offer any concrete solution to integrate this security functionality so that it works across security services for organization-wide policies. In this paper, we propose xESB, an enhanced version of an Enterprise Message Bus (ESB), where we monitor and enforce preventive and reactive policies, both for access control and usage control policies, and both inside one domain and between domains. In addition, we introduce indicators that help SOA administrators assess the effectiveness of their policies. Our performance measurements show that policy enforcement at the ESB level comes with only moderate penalties.
Gabriela Gheorghe, Stephan Neuhaus, Bruno Crispo

Metric Strand Spaces for Locale Authentication Protocols

Abstract
Location-dependent services are services that adapt their behavior based on the locations of mobile devices. For many applications, it is critical that location-dependent services use trustworthy device locations, namely locations that are both accurate and recent. These properties are captured by a security goal called locale authentication, whereby an entity can authenticate the physical location of a device, even in the presence of malicious adversaries. In this paper, we present a systematic technique for verifying that location discovery protocols satisfy this security goal. We base our work on strand space theory, which provides a framework for determining which security goals a cryptographic protocol achieves. We extend this theory with a metric that captures the geometric properties of time and space. We use the extended theory to prove that several prominent location discovery protocols, including GPS, do not satisfy the locale authentication goal. We also analyze a location discovery protocol that does satisfy the goal under some reasonable assumptions.
F. Javier Thayer, Vipin Swarup, Joshua D. Guttman
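
The physical intuition that a metric over time and space buys the verifier is the standard distance-bounding argument: a challenge-response round trip of duration Δt bounds the prover's distance by c·Δt/2, because signals travel at most at the speed of light. The sketch below shows only that timing bound, with invented names and numbers; it is not the paper's strand-space formalism.

```python
# Minimal sketch of the timing bound behind locale authentication.
C = 299_792_458.0  # speed of light in m/s

def max_distance(rtt_seconds: float, processing_seconds: float = 0.0) -> float:
    """Upper bound on the prover's distance implied by a measured round trip."""
    flight = max(rtt_seconds - processing_seconds, 0.0)
    return C * flight / 2          # signal travels out and back

def locale_ok(rtt_seconds: float, claimed_distance_m: float) -> bool:
    """Reject any location claim inconsistent with the timing bound."""
    return claimed_distance_m <= max_distance(rtt_seconds)

# A 1-microsecond round trip bounds the prover to within ~150 meters.
print(round(max_distance(1e-6)))     # 150
print(locale_ok(1e-6, 100.0))        # True
print(locale_ok(1e-6, 10_000.0))     # False: the claim is too far for the timing
```

A protocol can only authenticate locale if its messages carry enough timing structure for the verifier to apply such a bound, which is what the metric extension of strand spaces makes precise.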

Visitor Access Control Scheme Utilizing Social Relationship in the Real World

Abstract
Access control to resources is one of the most important technologies for supporting human activities in the digital space. To realize such control, two schemes have been proposed: RBAC (Role-Based Access Control) and TRBAC (Temporal Role-Based Access Control), which extends RBAC with time constraints and role dependencies. However, these methods are not effective for temporary activities such as visitor access, because of their maintenance costs and inadequate safety. In this paper, we focus on visitor access control in the real world by utilizing relationships among users and situations, and propose a novel access control scheme that is effective for temporary activities.
Gen Kitagata, Debasish Chakraborty, Satoshi Ogawa, Atushi Takeda, Kazuo Hashimoto, Norio Shiratori

Trust Models and Management

Impact of Trust Management and Information Sharing to Adversarial Cost in Ranking Systems

Abstract
Ranking systems such as those in product review sites and recommender systems usually use ratings to rank favorite items based on both their quality and popularity. Since higher-ranked items are more likely to be selected and yield more revenue for their owners, providers of unpopular and low-quality items have strong incentives to strategically manipulate their ranking. This paper analyzes the adversarial cost of manipulating these rankings in a variety of scenarios. In particular, we analyze and compare the adversarial cost to attack ranking systems that use various trust measures to detect and eliminate malicious ratings with the cost to attack systems that use no such trust management mechanism. We provide theoretical results showing the relation between the capability of the trust mechanism in detecting malicious ratings and the minimal adversarial cost for successfully changing the ranking. Furthermore, we study the impact of sharing trust information between ranking systems on the adversarial cost. It is proved that sharing information between two ranking systems on common user identities and detected malicious behaviors can considerably increase the minimal adversarial cost to successfully attack the two systems under certain assumptions. The numerical evaluation of our results shows that the estimated adversarial cost of manipulating the item ranking can be made significant when proper trust mechanisms are employed or combined.
Le-Hung Vu, Thanasis G. Papaioannou, Karl Aberer
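
The notion of adversarial cost can be made concrete with a much simpler model than the paper's (invented here purely for illustration): with average-based ranking and no trust mechanism, the cost is the minimal number of maximal fake ratings needed for item B to overtake item A.

```python
# Sketch only: minimal number of r_max-valued fake ratings an adversary
# must inject so that B's average overtakes A's (honest ratings fixed).
def min_fake_ratings(avg_a, n_a, avg_b, n_b, r_max=5):
    """Smallest k with (avg_b*n_b + k*r_max) / (n_b + k) > avg_a, or None."""
    if avg_b > avg_a:
        return 0                 # B already ranks higher
    if r_max <= avg_a:
        return None              # even maximal fakes can never push B above A
    k = 0
    while (avg_b * n_b + k * r_max) / ((n_b + k) or 1) <= avg_a:
        k += 1
    return k

# B averages 3.0 over 100 honest ratings; overtaking A's 4.0 with 5-star
# fakes requires injecting 101 ratings -- the cost grows with the rating
# gap and with the number of honest ratings already in place.
print(min_fake_ratings(4.0, 50, 3.0, 100))   # 101
```

Trust mechanisms that detect and discard malicious ratings raise this cost further, which is exactly the effect the paper quantifies.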

Shinren: Non-monotonic Trust Management for Distributed Systems

Abstract
The open and dynamic nature of modern distributed systems and pervasive environments presents significant challenges to security management. One solution may be trust management, which utilises the notion of trust in order to specify and interpret security policies and make decisions on security-related actions. Most logic-based trust management systems assume monotonicity, where additional information can only result in an increase in trust. The monotonic assumption oversimplifies the real world by not considering negative information, and thus cannot handle many real-world scenarios. In this paper we present Shinren, a novel non-monotonic trust management system based on bilattice theory and the any-world assumption. Shinren takes negative information into account and supports reasoning with incomplete information, uncertainty and inconsistency. Information from multiple sources such as credentials, recommendations, reputation and local knowledge can be used and combined in order to establish trust. Shinren also supports prioritisation, which is important in decision making and in resolving the modality conflicts caused by non-monotonicity.
Changyu Dong, Naranker Dulay
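
The smallest non-trivial bilattice, Belnap's FOUR, shows why this structure suits non-monotonic trust: each value records evidence for and against, so combining sources can produce contradiction instead of silently increasing trust. The encoding below is our own illustration, not Shinren's implementation.

```python
# Belnap's FOUR as (evidence-for, evidence-against) pairs.
UNKNOWN, TRUE, FALSE, BOTH = (0, 0), (1, 0), (0, 1), (1, 1)

def join_knowledge(x, y):
    """Knowledge-order join: pool the evidence from two sources."""
    return (max(x[0], y[0]), max(x[1], y[1]))

def and_truth(x, y):
    """Truth-order conjunction: weakest for-evidence, strongest against-evidence."""
    return (min(x[0], y[0]), max(x[1], y[1]))

# A credential asserts trust while a reputation source asserts distrust:
combined = join_knowledge(TRUE, FALSE)
print(combined == BOTH)                       # True: the sources contradict
print(and_truth(TRUE, UNKNOWN) == UNKNOWN)    # conjunction with unknown stays unknown
```

A monotonic system would have no `FALSE` or `BOTH` to land on; the bilattice keeps negative and conflicting evidence first-class, which is what enables reasoning under inconsistency.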

Modeling and Analysis of Trust Management Protocols: Altruism versus Selfishness in MANETs

Abstract
Mobile ad hoc and sensor networks often contain a mixture of nodes, some of which may be selfish and non-cooperative in providing network services such as forwarding packets in order to conserve energy. Existing trust management protocols for mobile ad hoc networks (MANETs) advocate isolating selfish nodes as soon as they are detected. Further, altruistic behaviors are encouraged with incentive mechanisms. In this paper, we propose and analyze a trust management protocol based on the demand and pricing theory for managing group communication systems where system survivability is highly critical to mission execution. Rather than always encouraging altruistic behaviors, we consider the tradeoff between a node’s individual welfare (e.g., saving energy for survivability) versus global welfare (e.g., providing service availability) and identify the best design condition so that the system lifetime is maximized while the mission requirements are satisfied.
Jin-Hee Cho, Ananthram Swami, Ing-Ray Chen

Trust Models

Trustworthiness in Networks: A Simulation Approach for Approximating Local Trust and Distrust Values

Abstract
Trust is essential for most social and business networks on the web, and determining local trust values between two unfamiliar users is an important issue. However, many existing approaches to calculating these values have limitations under various network constellations or characteristics. We therefore propose an approach that interprets trust as a probability and is able to estimate local trust values on large networks using a Monte Carlo simulation method. The estimation is based on existing indirect trust statements between two unfamiliar users. This approach is then extended to the SimTrust algorithm, which incorporates both trust and distrust values. It is implemented and discussed in detail with examples. Our main contribution is a new approach which incorporates all available trust and distrust information in such a way that basic trust properties are satisfied.
Khrystyna Nordheimer, Thimo Schulze, Daniel Veit
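
The core idea as we read the abstract can be sketched as network reliability: treat each direct trust statement as the probability that the edge "works", and estimate trust between two unfamiliar users as the probability that at least one working path connects them, via Monte Carlo sampling. The graph and values below are invented; this is not the SimTrust algorithm itself (which additionally handles distrust).

```python
import random

def estimate_trust(edges, source, target, trials=20_000, seed=1):
    """Monte Carlo estimate of P(source reaches target) when each directed
    trust edge (u, v) independently survives with probability edges[(u, v)]."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Sample a subgraph: each edge survives with its trust probability.
        alive = {}
        for (u, v), p in edges.items():
            if rng.random() < p:
                alive.setdefault(u, []).append(v)
        # Reachability in the sampled subgraph (depth-first search).
        frontier, seen = [source], {source}
        while frontier:
            u = frontier.pop()
            for v in alive.get(u, []):
                if v not in seen:
                    seen.add(v)
                    frontier.append(v)
        hits += target in seen
    return hits / trials

edges = {("a", "b"): 0.9, ("b", "d"): 0.8, ("a", "c"): 0.5, ("c", "d"): 0.5}
# The two paths are edge-disjoint, so the exact value is
# 1 - (1 - 0.9*0.8) * (1 - 0.5*0.5) = 0.79; the estimate should be close.
print(estimate_trust(edges, "a", "d"))
```

Sampling sidesteps the exact reliability computation, which is #P-hard in general, and this is what makes the probabilistic interpretation tractable on large networks.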

Design of Graded Trusts by Using Dynamic Path Validation

Abstract
In modern information service architectures, security is one of the most critical criteria. Almost every standard on information security is concerned with the internal control of an organization, and particularly with authentication. If an RP (relying party) has valuable information assets and requires a high level of authentication before granting access to them, then a strong mechanism is required. Here, we focus on a trust model of certificate authentication. Conventionally, a trust model of certificates is defined as a validation of chains of certificates. Today, however, this trust model no longer functions well, because of the complexity of paths and the diversity of required security levels. In this paper, we propose “dynamic path validation”, together with another trust model of PKI, for controlling this situation. First, we propose a Policy Authority. The Policy Authority assigns a level of compliance (LoC) to the CAs in its domain. The LoC is evaluated in terms of common criteria of the Policy Authority. Moreover, the Policy Authority controls path building with consideration of LoC. Therefore, we can flexibly evaluate levels of CP/CPSs on one server. In a typical bridge model, we need as many bridge CAs as the number of required levels of CP/CPSs. In our framework, instead, we can do the same task on a single server, by which we save the cost of maintaining lists of trust anchors at multiple levels.
Akira Kubo, Hiroyuki Sato

Implementation and Performance Analysis of the Role-Based Trust Management System, RT C

Abstract
We present representations and algorithms for the implementation of RT C, a role-based trust management language, and announce a publicly available open-source implementation. We also design and perform large-scale performance tests on policies closely modeled after possible applications of RT in the real world. These tests aim to determine the viability of RT as an authorization solution for large and potentially complex policies in a decentralized environment; the results are analyzed to identify which policy characteristics most strongly affect the performance of RT and to develop strategies for achieving the rapid response times required in real-world authorization systems.
Tyler L. Hobbs, William H. Winsborough
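
The authorization question such a system answers, who belongs to a role given a set of credentials, can be illustrated with a tiny fixpoint over an RT0-style fragment (simple membership, role inclusion, and linked roles). This is our own simplified encoding with an invented policy, not the announced implementation.

```python
# Roles are (entity, role_name) pairs.  Credential bodies:
#   ("member", E)            -- A.r <- E          (E is a member of A.r)
#   ("include", (B, s))      -- A.r <- B.s        (all of B.s are in A.r)
#   ("linked", (A1, r1), r2) -- A.r <- A1.r1.r2   (all of E.r2, for each E in A1.r1)
def members(creds):
    """Least fixpoint: smallest role-membership map satisfying all credentials."""
    m = {}
    changed = True
    while changed:
        changed = False
        for role, body in creds:
            cur = m.setdefault(role, set())
            if body[0] == "member":
                new = {body[1]}
            elif body[0] == "include":
                new = m.get(body[1], set())
            else:  # linked role
                _, link_role, r2 = body
                new = set()
                for e in m.get(link_role, set()):
                    new |= m.get((e, r2), set())
            if not new <= cur:
                cur |= new
                changed = True
    return m

creds = [
    (("EPub", "discount"), ("linked", ("EPub", "university"), "student")),
    (("EPub", "university"), ("member", "StateU")),
    (("StateU", "student"), ("member", "Alice")),
]
print(members(creds)[("EPub", "discount")])   # {'Alice'}: chained through StateU
```

Real policies chain credentials issued by many decentralized parties, so the cost of this closure on large policy sets is precisely what the paper's performance tests measure.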

A Formal Notion of Trust – Enabling Reasoning about Security Properties

Abstract
Historically, various different notions of trust can be found, each addressing particular aspects of ICT systems, e.g. trust in electronic commerce systems based on reputation and recommendation, or trust in public key infrastructures. While these notions support the understanding of trust establishment and degrees of trustworthiness in their respective application domains, they are insufficient for the more general notion of trust needed when reasoning about security in ICT systems. In this paper we present a formal definition of trust to be able to exactly express trust requirements from the view of different entities involved in the system and to support formal reasoning such that security requirements, security and trust mechanisms and underlying trust assumptions can be formally linked and made explicit. Integrated into our Security Modeling Framework, this formal definition of trust can support security engineering processes and formal validation and verification by enabling reasoning about security properties w.r.t. trust.
Andreas Fuchs, Sigrid Gürgens, Carsten Rudolph

Experimental and Experiential Trust

Leveraging a Social Network of Trust for Promoting Honesty in E-Marketplaces

Abstract
In this paper, we examine a trust-based framework for promoting honesty in e-marketplaces that relies on buyers forming social networks to share reputation ratings of sellers and on sellers rewarding the buyers that are most respected within their social networks. We explore how sellers reason about expected future profit when offering particular rewards to buyers. We theoretically prove that in a marketplace operating with our mechanism: i) buyers are better off honestly reporting seller ratings, and ii) sellers are better off being honest, to earn better profit. Experiments confirm the robustness of the approach in dynamically changing environments. With rational agents preferring to be honest, the buyer and seller strategies as specified constitute an effective approach for the design of e-marketplaces.
Jie Zhang, Robin Cohen, Kate Larson

Does Trust Matter for User Preferences? A Study on Epinions Ratings

Abstract
Recommender systems have evolved during the last few years into useful online tools for assisting daily e-commerce activities. The majority of recommender systems predict user preferences by relating users with similar tastes. Prior research has shown that trust networks improve the performance of recommender systems, predominantly using algorithms devised by individual researchers. In this work, omitting any specific trust inference algorithm, we investigate how useful it might be if explicit trust relationships (expressed by users for others) are used to select the best neighbours (or predictors) for the provision of accurate recommendations. We conducted our experiments using data from Epinions.com, a popular recommender system. Our analysis indicates that trust information can provide a slight performance gain in a few cases, especially for the less active users.
Georgios Pitsilis, Pern Hui Chia

Bringing the Virtual to the Farmers’ Market: Designing for Trust in Pervasive Computing Systems

Abstract
Since pervasive computing applications are mostly designed to enhance existing social situations, such applications should take account of the trust relationships within the situation in their design. In this paper we describe the ethnographic approach we used to explore how trust is formed and maintained within a farmers’ market, and how this understanding can be applied in the design of supporting applications. We then evaluate the applications using the same ethnographic approach, uncovering problems which would not have been visible with other evaluation techniques.
Ian Wakeman, Ann Light, Jon Robinson, Dan Chalmers, Anirban Basu

Incorporating Interdependency of Trust Values in Existing Trust Models for Trust Dynamics

Abstract
Many models of trust consider the trust an agent has in another agent (the trustee) as the result of experiences with that specific agent in combination with certain personality attributes. For the case of multiple trustees, there might however be dependencies between the trust levels in different trustees. In this paper, two alternatives are described to model such dependencies: (1) development of a new trust model which incorporates dependencies explicitly, and (2) an extension of existing trust models that is able to express these interdependencies using a translation mechanism from objective experiences to subjective ones. For the latter, placing the interdependencies in the experiences enables the reuse of existing trust models that typically are based upon certain experiences over time as input. Simulation runs are performed using the two approaches, showing that both are able to generate realistic patterns of interdependent trust values.
Mark Hoogendoorn, S. Waqar Jaffry, Jan Treur
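
The abstract's second alternative can be sketched in a few lines (our illustration; the update rule and all weights are invented): keep a standard experience-based trust update, but translate each objective experience with one trustee into a subjective one biased by the current trust in the other trustee, so the two trust values become interdependent.

```python
def update(trust, experience, gamma=0.2):
    """Classic experience-driven trust dynamics: exponential smoothing."""
    return (1 - gamma) * trust + gamma * experience

def subjective(objective_exp, other_trust, coupling=0.3):
    """Bias an objective experience toward the trust held in a related trustee."""
    return (1 - coupling) * objective_exp + coupling * other_trust

# Trustee 1 consistently performs well (1.0), trustee 2 badly (0.0);
# each agent's experience is filtered through trust in the other trustee.
t1, t2 = 0.5, 0.5
for obj1, obj2 in [(1.0, 0.0)] * 10:
    t1, t2 = (update(t1, subjective(obj1, t2)),
              update(t2, subjective(obj2, t1)))
print(round(t1, 2), round(t2, 2))   # t1 rises and t2 falls, each pulled by the other
```

Because the base `update` is untouched, any existing experience-based trust model can be reused unchanged, with the interdependency confined to the objective-to-subjective translation, which is the point of the paper's second alternative.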

Backmatter
