
2010 | Book

Privacy Enhancing Technologies

10th International Symposium, PETS 2010, Berlin, Germany, July 21-23, 2010. Proceedings

Editors: Mikhail J. Atallah, Nicholas J. Hopper

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 10th International Symposium on Privacy Enhancing Technologies, PETS 2010, held in Berlin, Germany, in July 2010. The 16 revised full papers were carefully reviewed and selected from 57 submissions for inclusion in the book. The papers address topics such as access control, privacy of web-based search, anonymous webs of trust, security attacks, active timing attacks in low-latency anonymous communication, network topology, and private web search with malicious adversaries.

Table of Contents

Frontmatter

Privacy Enhancing Technologies Symposium

How Unique Is Your Web Browser?
Abstract
We investigate the degree to which modern web browsers are subject to “device fingerprinting” via the version and configuration information that they will transmit to websites upon request. We implemented one possible fingerprinting algorithm, and collected these fingerprints from a large sample of browsers that visited our test site, panopticlick.eff.org. We observe that the distribution of our fingerprint contains at least 18.1 bits of entropy, meaning that if we pick a browser at random, at best we expect that only one in 286,777 other browsers will share its fingerprint. Among browsers that support Flash or Java, the situation is worse, with the average browser carrying at least 18.8 bits of identifying information. 94.2% of browsers with Flash or Java were unique in our sample.
By observing returning visitors, we estimate how rapidly browser fingerprints might change over time. In our sample, fingerprints changed quite rapidly, but even a simple heuristic was usually able to guess when a fingerprint was an “upgraded” version of a previously observed browser’s fingerprint, with 99.1% of guesses correct and a false positive rate of only 0.86%.
We discuss what privacy threat browser fingerprinting poses in practice, and what countermeasures may be appropriate to prevent it. There is a tradeoff between protection against fingerprintability and certain kinds of debuggability, which in current browsers is weighted heavily against privacy. Paradoxically, anti-fingerprinting privacy technologies can be self-defeating if they are not used by a sufficient number of people; we show that some privacy measures currently fall victim to this paradox, but others do not.
Peter Eckersley
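The uniqueness odds quoted in the abstract follow from the entropy figure by simple surprisal arithmetic, as the following illustrative Python snippet (not the authors' code) verifies:

```python
import math

# Surprisal (self-information) of a fingerprint observed `count` times among
# `total` browsers: I = -log2(count / total) bits.
def surprisal_bits(count: int, total: int) -> float:
    return -math.log2(count / total)

# 18.1 bits of entropy <=> a fingerprint shared by about one browser in
# 2**18.1; the abstract's "one in 286,777" corresponds to 2**18.13.
print(f"{2 ** 18.1:,.0f}")            # ~280,735
print(surprisal_bits(1, 286_777))     # ~18.13 bits
```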
On the Privacy of Web Search Based on Query Obfuscation: A Case Study of TrackMeNot
Abstract
Web search is one of the most rapidly growing applications on the Internet today. However, the current practice followed by most search engines – of logging and analyzing users’ queries – raises serious privacy concerns. One viable solution to search privacy is query obfuscation, whereby client-side software attempts to mask real user queries by injecting noisy queries. In contrast to other privacy-preserving search mechanisms, query obfuscation does not require server-side modifications or a third-party infrastructure, thus allowing for ready deployment at the discretion of privacy-conscious users. In this paper, our higher-level goal is to analyze whether query obfuscation can preserve users’ privacy in practice against an adversarial search engine. We focus on TrackMeNot (TMN) [10,20], a popular search privacy tool based on the principle of query obfuscation. We demonstrate that a search engine, equipped with only a short-term history of a user’s search queries, can break the privacy guarantees of TMN utilizing only off-the-shelf machine learning classifiers.
Sai Teja Peddinti, Nitesh Saxena
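The classification step the abstract alludes to can be approximated with any off-the-shelf text classifier. A minimal sketch, assuming a short labeled history of real and TMN-injected queries (the toy data and bag-of-words features below are illustrative, not the paper's exact setup):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Short-term labeled history: 1 = the user's real queries, 0 = TMN noise.
real = ["cheap flights berlin", "pets symposium 2010 program",
        "bike repair kreuzberg"]
noise = ["aardvark lamp", "velvet spanner", "ocelot tax treaty"]
queries, labels = real + noise, [1] * len(real) + [0] * len(noise)

# Off-the-shelf classifier over bag-of-words features.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(queries, labels)

# Classify fresh, unlabeled queries as real traffic or obfuscation noise.
print(clf.predict(["flights to berlin", "walrus stapler"]))
```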
Private Information Disclosure from Web Searches
Abstract
As the amount of personal information stored at remote service providers increases, so does the danger of data theft. When connections to remote services are made in the clear and authenticated sessions are kept using HTTP cookies, intercepting private traffic becomes easy to achieve. In this paper, we focus on the world’s largest service provider – Google. First, with the exception of a few services only accessible over HTTPS (e.g., Gmail), we find that many Google services are vulnerable to simple session hijacking attacks. Next, we present the Historiographer, a novel attack that reconstructs the web search history of Google users – Google’s Web History – even though this service is supposedly protected from session hijacking by a stricter access control policy. The Historiographer uses a reconstruction technique that infers search history from the personalized suggestions fed by the Google search engine. We validate our technique through experiments conducted over real network traffic and discuss possible countermeasures. Our attacks are general, not specific to Google, and highlight the privacy risks of architectures that mix secure and insecure connections.
Claude Castelluccia, Emiliano De Cristofaro, Daniele Perito
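The reconstruction step can be pictured as prefix-driven harvesting of personalized suggestions. A conceptual sketch follows; the endpoint URL and JSON response shape are placeholders, not Google's actual interface:

```python
import itertools
import string

import requests

SUGGEST_URL = "https://suggest.example/complete"  # hypothetical endpoint

def suggestions(prefix: str, cookies: dict | None = None) -> set[str]:
    # Assume the endpoint returns a JSON list of suggestion strings.
    r = requests.get(SUGGEST_URL, params={"q": prefix}, cookies=cookies or {})
    return set(r.json())

def reconstruct_history(hijacked_cookies: dict) -> set[str]:
    """Infer history entries: suggestions served only to the victim's
    session, and absent from an anonymous baseline, are likely drawn
    from the victim's personalized search history."""
    history = set()
    for a, b in itertools.product(string.ascii_lowercase, repeat=2):
        prefix = a + b
        personalized = suggestions(prefix, cookies=hijacked_cookies)
        baseline = suggestions(prefix)  # same prefix, no session cookie
        history |= personalized - baseline
    return history
```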
Collaborative, Privacy-Preserving Data Aggregation at Scale
Abstract
Combining and analyzing data collected at multiple administrative locations is critical for a wide variety of applications, such as detecting malicious attacks or computing an accurate estimate of the popularity of Web sites. However, legitimate concerns about privacy often inhibit participation in collaborative data aggregation. In this paper, we design, implement, and evaluate a practical solution for privacy-preserving data aggregation (PDA) among a large number of participants. Scalability and efficiency are achieved through a “semi-centralized” architecture that divides responsibility between a proxy that obliviously blinds the client inputs and a database that aggregates values by (blinded) keywords and identifies those keywords whose values satisfy some evaluation function. Our solution leverages a novel cryptographic protocol that provably protects the privacy of both the participants and the keywords, provided that the proxy and the database do not collude, even if both parties may be individually malicious. Our prototype implementation can handle over a million suspect IP addresses per hour when deployed across only two quad-core servers, and its throughput scales linearly with additional computational resources.
Benny Applebaum, Haakon Ringberg, Michael J. Freedman, Matthew Caesar, Jennifer Rexford
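The division of labor can be sketched with a keyed PRF standing in for the blinding step. Note the simplification: in the paper's protocol the proxy blinds client inputs obliviously, without seeing them; plain HMAC at the proxy below is only meant to show why the non-colluding database learns counts but not keywords:

```python
import hashlib
import hmac
from collections import Counter

PROXY_KEY = b"proxy-secret"  # held by the proxy, unknown to the database

def blind(keyword: str) -> str:
    # Keyed PRF: deterministic per keyword, unintelligible to the database.
    return hmac.new(PROXY_KEY, keyword.encode(), hashlib.sha256).hexdigest()

database = Counter()  # operated by a separate, non-colluding party

def proxy_submit(client_keywords: list[str]) -> None:
    for kw in client_keywords:
        database[blind(kw)] += 1

proxy_submit(["10.0.0.1", "10.0.0.2"])  # participant A's suspect IPs
proxy_submit(["10.0.0.1"])              # participant B's suspect IPs

# The database can evaluate a threshold function over blinded keys without
# ever learning the underlying keywords.
print([k[:16] for k, v in database.items() if v >= 2])
```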
Privacy-Preserving Queries over Relational Databases
Abstract
We explore how Private Information Retrieval (PIR) can help users keep their sensitive information from being leaked in an SQL query. We show how to retrieve data from a relational database with PIR by hiding sensitive constants contained in the predicates of a query. Experimental results and microbenchmarking tests show our approach incurs reasonable storage overhead for the added privacy benefit and performs between 7 and 480 times faster than previous work.
Femi Olumofin, Ian Goldberg
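The paper's scheme builds on computational single-server PIR; as a self-contained illustration of the PIR primitive itself, here is the classic information-theoretic two-server XOR construction (a toy, not the paper's protocol):

```python
import secrets

DB = [0, 1, 1, 0, 1, 0, 0, 1]  # each of two non-colluding servers holds a copy
n = len(DB)

def server_answer(subset: set[int]) -> int:
    # Each server XORs the database bits indexed by the query subset.
    acc = 0
    for i in subset:
        acc ^= DB[i]
    return acc

def pir_read(index: int) -> int:
    # Each server sees a uniformly random subset, revealing nothing about
    # `index`; the two answers XOR to exactly DB[index].
    s1 = {i for i in range(n) if secrets.randbits(1)}
    s2 = s1 ^ {index}  # symmetric difference: differs from s1 only at `index`
    return server_answer(s1) ^ server_answer(s2)

assert all(pir_read(i) == DB[i] for i in range(n))
```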
Achieving Efficient Query Privacy for Location Based Services
Abstract
Mobile smartphone users frequently need to search for nearby points of interest from a location-based service, but in a way that preserves the privacy of the users’ locations. We present a technique for private information retrieval that allows a user to retrieve information from a database server without revealing what is actually being retrieved from the server. We perform the retrieval operation in a computationally efficient manner to make it practical for resource-constrained hardware such as smartphones, which have limited processing power, memory, and wireless bandwidth. In particular, our algorithm makes use of a variable-sized cloaking region that increases the location privacy of the user at the cost of additional computation, but maintains the same traffic cost. Our proposal does not require the use of a trusted third-party component, and ensures that we find a good compromise between user privacy and computational efficiency. We evaluated our approach with a proof-of-concept implementation over a commercial-grade database of points of interest. We also measured the performance of our query technique on a smartphone and wireless network.
Femi Olumofin, Piotr K. Tysowski, Ian Goldberg, Urs Hengartner
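The variable-sized cloaking region can be sketched as snapping the true position to the enclosing cell of a grid whose cell size is the privacy parameter (the grid scheme below is an assumption for illustration, not the paper's exact construction):

```python
import math

def cloak(lat: float, lon: float,
          cell_deg: float) -> tuple[float, float, float, float]:
    """Return the bounding box of the grid cell containing (lat, lon);
    the query sent to the server covers the cell, never the exact point."""
    row, col = math.floor(lat / cell_deg), math.floor(lon / cell_deg)
    return (row * cell_deg, col * cell_deg,              # south-west corner
            (row + 1) * cell_deg, (col + 1) * cell_deg)  # north-east corner

# A larger cell buys more location privacy at the cost of more computation
# to retrieve and locally filter the cell's points of interest.
print(cloak(52.5200, 13.4050, cell_deg=0.05))  # central Berlin, ~5 km cells
```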
Making a Nymbler Nymble Using VERBS
Abstract
We propose a new system modeled after Nymble. Like Nymble, our scheme provides a privacy-preserving analog of IP address blocking for anonymizing networks. However, unlike Nymble, users in our scheme need not trust third parties to maintain their anonymity. We achieve this while avoiding the use of trusted hardware and without requiring an offline credential issuing authority to guarantee that users do not obtain multiple credentials.
We use zero-knowledge proofs to reduce the capabilities of colluding third parties, and introduce a new cryptographic technique that we call verifier-efficient restricted blind signatures, or VERBS, to maintain efficiency. Signature verification with our VERBS is 1–2 orders of magnitude faster than with existing restricted blind signatures.
Ryan Henry, Kevin Henry, Ian Goldberg
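VERBS themselves are new in this paper; for readers unfamiliar with the underlying primitive, the sketch below shows Chaum's classic RSA blind signature (deliberately insecure toy parameters, not VERBS), in which the signer signs without ever seeing the message:

```python
import math
import secrets

p, q, e = 1_000_003, 1_000_033, 65537   # toy RSA parameters (insecure sizes)
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))       # signer's private exponent

m = 42                                   # message the user wants signed
r = secrets.randbelow(n - 2) + 2         # blinding factor, coprime to n
while math.gcd(r, n) != 1:
    r = secrets.randbelow(n - 2) + 2

blinded = (m * pow(r, e, n)) % n         # user blinds the message
s_blind = pow(blinded, d, n)             # signer signs the blinded value
s = (s_blind * pow(r, -1, n)) % n        # user strips the blinding factor
assert pow(s, e, n) == m                 # a valid RSA signature on m itself
```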
Anonymous Webs of Trust
Abstract
Webs of trust constitute a decentralized infrastructure for establishing the authenticity of the binding between public keys and users and, more generally, trust relationships among users. This paper introduces the concept of anonymous webs of trust – an extension of webs of trust where users can authenticate messages and determine each other’s trust level without compromising their anonymity. Our framework comprises a novel cryptographic protocol based on zero-knowledge proofs, a symbolic abstraction and formal verification of our protocol, and a prototypical implementation based on the OpenPGP standard. The framework is capable of dealing with various core and optional features of common webs of trust, such as key attributes, key expiration dates, existence of multiple certificate chains, and trust measures between different users.
Michael Backes, Stefan Lorenz, Matteo Maffei, Kim Pecina
Taming Big Brother Ambitions: More Privacy for Secret Handshakes
Abstract
In Secret Handshakes (SH) and Affiliation-Hiding Authenticated Key Exchange (AH-AKE) schemes, users become group members by registering with Group Authorities (GAs) and obtaining membership credentials. Group members then use their membership credentials to privately authenticate each other and communicate securely. The distinguishing privacy property of SH and AH-AKE is that parties learn each other’s group affiliations and compute common session keys only if their groups match. Current SH and AH-AKE schemes consider GAs to be fully trusted, especially with regard to (i) security of the registration phase (no phantom members), (ii) secrecy of established session keys, and (iii) privacy. The impact of possible “big brother” ambitions of malicious GAs has not been investigated so far. In this paper, we discuss the implications of possible GA corruptions for group members’ privacy and the security of their communication. We demonstrate problems arising from relaxed GA trust assumptions and propose an efficient — yet provably secure — AH-AKE protocol with enhanced privacy properties.
Mark Manulis, Bertram Poettering, Gene Tsudik
Preventing Active Timing Attacks in Low-Latency Anonymous Communication
(Extended Abstract)
Abstract
Low-latency anonymous communication protocols in general, and the popular onion-routing protocol in particular, are broken against simple timing attacks. While there have been few proposed solutions to this problem when the adversary is active, several padding schemes have been proposed to defend against a passive adversary that just observes timing patterns. Unfortunately active adversaries can break padding schemes by inserting delays and dropping messages.
We present a protocol that provides anonymity against an active adversary by using a black-box padding scheme that is effective against a passive adversary. Our protocol reduces, in some sense, providing anonymous communication against active attacks to providing a padding scheme against passive attacks.
Our analytical results show that anonymity can be made arbitrarily good at the cost of some added latency and required bandwidth. We also perform measurements on the Tor network to estimate the real-world performance of our protocol, showing that the added delay is not excessive.
Joan Feigenbaum, Aaron Johnson, Paul Syverson
Impact of Network Topology on Anonymity and Overhead in Low-Latency Anonymity Networks
Abstract
Low-latency anonymous communication networks require padding to resist timing analysis attacks, and dependent link padding has been proven to prevent these attacks with minimal overhead. In this paper we consider low-latency anonymity networks that implement dependent link padding, and examine various network topologies. We find that the choice of topology has an important influence on the padding overhead and the level of anonymity provided, and that Stratified networks offer the best trade-off between them. We show that fully connected network topologies (Free Routes) are impractical when dependent link padding is used, as they suffer from feedback effects that induce disproportionate amounts of padding; and that Cascade topologies have the lowest padding overhead at the cost of poor scalability with respect to anonymity. Furthermore, we propose a variant of dependent link padding that considerably reduces the overhead at no loss in anonymity with respect to external adversaries. Finally, we discuss how Tor, a deployed large-scale anonymity network, would need to be adapted to support dependent link padding.
Claudia Diaz, Steven J. Murdoch, Carmela Troncoso
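The dependent link padding rule assumed here can be sketched at a single relay: every cell arrival triggers one outgoing cell on every link, so link-level timing carries no routing information. (The sketch below ignores the scheme's latency window and the feedback effects the paper analyzes.)

```python
class DLPRelay:
    """Simplified dependent link padding at one relay."""

    def __init__(self, out_links: list[str]):
        self.out_links = out_links

    def on_arrival(self, dest_link: str, cell: str) -> dict[str, str]:
        # One arrival -> one cell on *every* outgoing link: the real cell on
        # its destination link, indistinguishable dummies everywhere else.
        return {link: (cell if link == dest_link else "DUMMY")
                for link in self.out_links}

relay = DLPRelay(["A", "B", "C"])
print(relay.on_arrival("B", "real-cell"))
# {'A': 'DUMMY', 'B': 'real-cell', 'C': 'DUMMY'}
# In a fully connected (Free Routes) topology each relay's padding feeds
# other relays' inputs -- the feedback effect the paper identifies.
```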
Drac: An Architecture for Anonymous Low-Volume Communications
Abstract
We present Drac, a system designed to provide anonymity and unobservability for real-time instant messaging and voice-over-IP communications against a global passive adversary. The system uses a relay-based anonymization mechanism where circuits are routed over a social network in a peer-to-peer fashion, using full padding strategies and separate epochs to hide connection and disconnection events. Unlike established systems, Drac gives away the identity of a user’s friends to guarantee the unobservability of actual calls, while still providing anonymity when talking to untrusted third parties. We present the core design and components of Drac, discuss the key ways in which it challenges our current concepts of anonymity, and provide an initial simulation-based security analysis.
George Danezis, Claudia Diaz, Carmela Troncoso, Ben Laurie
Private Web Search with Malicious Adversaries
Abstract
Web search has become an integral part of our lives and we use it daily for business and pleasure. Unfortunately, however, we unwittingly reveal a huge amount of private information about ourselves when we search the web. A look at a user’s search terms over a period of a few months paints a frighteningly clear and detailed picture about the user’s life. In this paper, we build on previous work by Castellà-Roca et al. (Computer Communications 2009) and show how to achieve privacy in web searches efficiently and practically without resorting to full-blown anonymous routing. In contrast to previous work, our protocol is secure in the presence of malicious adversaries.
Yehuda Lindell, Erez Waisbard
unFriendly: Multi-party Privacy Risks in Social Networks
Abstract
As the popularity of social networks expands, the information users expose to the public has potentially dangerous implications for individual privacy. While social networks allow users to restrict access to their personal data, there is currently no mechanism to enforce privacy concerns over content uploaded by other users. As group photos and stories are shared by friends and family, personal privacy goes beyond the discretion of what a user uploads about himself and becomes an issue of what every network participant reveals. In this paper, we examine how the lack of joint privacy controls over content can inadvertently reveal sensitive information about a user including preferences, relationships, conversations, and photos. Specifically, we analyze Facebook to identify scenarios where conflicting privacy settings between friends will reveal information that at least one user intended to remain private. By aggregating the information exposed in this manner, we demonstrate how a user’s private attributes can be inferred from simply being listed as a friend or mentioned in a story. To mitigate this threat, we show how Facebook’s privacy model can be adapted to enforce multi-party privacy. We present a proof of concept application built into Facebook that automatically ensures mutually acceptable privacy restrictions are enforced on group content.
Kurt Thomas, Chris Grier, David M. Nicol
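The proposed multi-party control can be pictured as intersecting the audiences every stakeholder of a shared item would allow. A minimal sketch, with a hypothetical data model rather than Facebook's actual API:

```python
def effective_audience(stakeholder_policies: list[set[str]]) -> set[str]:
    # Mutually acceptable audience: only users whom *every* tagged
    # stakeholder permits may see the shared item.
    return set.intersection(*stakeholder_policies)

alice_allows = {"bob", "carol", "dave"}
bob_allows = {"alice", "carol"}          # Bob excludes Dave
group_photo_policies = [alice_allows, bob_allows]
print(effective_audience(group_photo_policies))  # {'carol'}
```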
The Impact of Unlinkability on Adversarial Community Detection: Effects and Countermeasures
Abstract
We consider the threat model of a mobile adversary drawn from contemporary computer security literature, and explore the dynamics of community detection and hiding in this setting. Using a real-world social network, we examine the extent of network topology information an adversary is required to gather in order to accurately ascertain community membership information. We show that selective surveillance strategies can improve the adversary’s efficiency over random wiretapping. We then consider possible privacy-preserving defenses; using anonymous communications helps, but not much; however, the use of counter-surveillance techniques can significantly reduce the adversary’s ability to learn community membership. Our analysis shows that even when using anonymous communications, an adversary placing a selectively chosen 8% of the nodes of this network under surveillance (using key-logger probes) can de-anonymize the community membership of as much as 50% of the network. Uncovering all community information with targeted selection requires probing as much as 75% of the network. Finally, we show that a privacy-conscious community can substantially disrupt community detection using only local knowledge, even while facing the asymmetry of a completely knowledgeable mobile adversary.
Shishir Nagaraja
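Selective target choice can be approximated with a simple degree heuristic (the paper evaluates more refined strategies on its real-world network); the sketch below uses networkx's built-in Zachary karate club graph as a stand-in:

```python
import networkx as nx

G = nx.karate_club_graph()  # stand-in for the paper's real-world network
budget = max(1, round(0.08 * G.number_of_nodes()))  # probe "8% of the nodes"

# Selective surveillance: probe the highest-degree nodes first.
targets = sorted(G.nodes, key=G.degree, reverse=True)[:budget]
observed = {tuple(sorted(e)) for t in targets for e in G.edges(t)}

print(f"probing {budget} nodes exposes "
      f"{len(observed) / G.number_of_edges():.0%} of the edge structure")
```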
How to Share Your Favourite Search Results while Preserving Privacy and Quality
Abstract
Personalised social search is a promising avenue to increase the relevance of search engine results by making use of recommendations made by friends in a social network. More generally, a whole class of systems takes user preferences, aggregates and processes them, before providing a view of the result to others in a social network. Yet, those systems present privacy risks, and could be used by spammers to propagate their malicious preferences. We present a general framework to preserve privacy while maximizing the benefit of sharing information in a social network, as well as a concrete proposal making use of cohesive social group concepts from social network analysis. We show that privacy can be guaranteed in a k-anonymity manner, and disruption through spam is kept to a minimum in a real-world social network.
George Danezis, Tuomas Aura, Shuo Chen, Emre Kıcıman
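The k-anonymity-style release rule can be sketched as a simple threshold over group endorsements (an illustration of the principle, not the paper's cohesive-group construction):

```python
from collections import Counter

def shareable(endorsements: list[str], k: int) -> list[str]:
    # Release a preference only if at least k group members endorse it, so no
    # single member's taste -- and no lone spammer's injection -- propagates.
    counts = Counter(endorsements)
    return [item for item, count in counts.items() if count >= k]

group_votes = ["petsymposium.org", "petsymposium.org", "petsymposium.org",
               "spam.example", "eff.org", "eff.org"]
print(shareable(group_votes, k=3))  # ['petsymposium.org']
```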
Erratum to: Privacy Enhancing Technologies
Abstract
Erratum to: M.J. Atallah and N. Hopper (Eds.) Privacy Enhancing Technologies DOI: 10.1007/978-3-642-14527-8
The book was inadvertently published with an incorrect name of the copyright holder. The name of the copyright holder for this book is: © Springer-Verlag Berlin Heidelberg. The book has been updated with the changes.
Mikhail J. Atallah, Nicholas J. Hopper
Backmatter
Metadata
Title
Privacy Enhancing Technologies
Editors
Mikhail J. Atallah
Nicholas J. Hopper
Copyright Year
2010
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-14527-8
Print ISBN
978-3-642-14526-1
DOI
https://doi.org/10.1007/978-3-642-14527-8
