
2008 | Book

Computer Security - ESORICS 2008

13th European Symposium on Research in Computer Security, Málaga, Spain, October 6-8, 2008. Proceedings

Edited by: Sushil Jajodia, Javier Lopez

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

These proceedings contain the papers selected for presentation at the 13th European Symposium on Research in Computer Security (ESORICS 2008), held October 6-8, 2008 in Torremolinos (Malaga), Spain, and hosted by the Computer Science Department of the University of Malaga. ESORICS has become the European research event in computer security. The symposium started in 1990 and was organized in alternate years in different European countries; since 2002 it has taken place yearly. It attracts an international audience from both the academic and industrial communities. In response to the call for papers, 168 papers were submitted to the symposium. These papers were evaluated on the basis of their significance, novelty, and technical quality. Each paper was reviewed by at least three members of the Program Committee. The Program Committee meeting was held electronically, with intensive discussion over a period of two weeks. Finally, 37 papers were selected for presentation at the symposium, giving an acceptance rate of 22%.

Table of Contents

Frontmatter

Session 1: Intrusion Detection and Network Vulnerability Analysis

Multiprimary Support for the Availability of Cluster-Based Stateful Firewalls Using FT-FW

Much research has been done on firewalls during the last decade. The main efforts have focused on improving the computational complexity of packet classification and on ensuring rule-set consistency. Nevertheless, other aspects, such as the fault tolerance of stateful firewalls, remain open. Continued availability of firewalls has become a critical factor for companies and public administrations. Classic fault-tolerant solutions based on redundancy and health-checking mechanisms fail to fulfil the requirements of stateful firewalls. In this work we detail FT-FW, a scalable software-based transparent flow-failover mechanism for stateful firewalls, from the multiprimary perspective. Our solution is a reactive fault-tolerance approach at the application level that has a negligible impact on network latency. On top of this, quick recovery from failures and fast responses to clients are guaranteed. The solution is suitable for low-cost off-the-shelf systems, supports multiprimary workload-sharing scenarios, and requires no extra hardware.

P. Neira, R. M. Gasca, L. Lefèvre
Identifying Critical Attack Assets in Dependency Attack Graphs

Attack graphs have been proposed as useful tools for analyzing security vulnerabilities in network systems. Even when they are produced efficiently, the size and complexity of attack graphs often prevent a human from fully comprehending the information conveyed. A distillation of this overwhelming amount of information is crucial to aid network administrators in efficiently allocating scarce human and financial resources. This paper introduces AssetRank, a generalization of Google’s PageRank algorithm which ranks web pages in web graphs. AssetRank addresses the unique semantics of dependency attack graphs and incorporates vulnerability data from public databases to compute metrics for the graph vertices (representing attacker privileges and vulnerabilities) which reveal their importance in attacks against the system. The results of applying the algorithm on a number of network scenarios show that the numeric ranks computed are consistent with the intuitive importance that the privileges and vulnerabilities have to an attacker. The vertex ranks can be used to prioritize countermeasures, help a human reader to better comprehend security problems, and provide input to further security analysis tools.

Reginald E. Sawilla, Xinming Ou
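The abstract describes AssetRank as a generalization of PageRank, without giving the algorithm itself. As a hedged illustration only, a plain PageRank-style power iteration over a small dependency graph (the graph, damping factor, and tolerance are assumptions of this sketch, not values from the paper) can be written as:

```python
# Illustrative PageRank-style power iteration over a small dependency graph.
# The graph, damping factor, and tolerance are assumptions for this sketch,
# not the AssetRank algorithm from the paper.

def rank(edges, n, damping=0.85, tol=1e-9):
    """Iterate the rank vector to a fixed point; edges[v] lists vertices v points to."""
    r = [1.0 / n] * n
    while True:
        nxt = [(1.0 - damping) / n] * n
        for v, outs in edges.items():
            if outs:
                share = damping * r[v] / len(outs)
                for w in outs:
                    nxt[w] += share
            else:  # dangling vertex: spread its rank uniformly
                for w in range(n):
                    nxt[w] += damping * r[v] / n
        if max(abs(a - b) for a, b in zip(nxt, r)) < tol:
            return nxt
        r = nxt

# A toy attack-dependency graph: vertex 0 is pointed to by every other vertex,
# so it ends up with the highest rank (the "most important" privilege).
edges = {0: [1, 2], 1: [0], 2: [0], 3: [0]}
ranks = rank(edges, 4)
```

The actual AssetRank algorithm additionally weights vertices with vulnerability data and handles the AND/OR semantics of dependency attack graphs, which this sketch does not attempt.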
Online Risk Assessment of Intrusion Scenarios Using D-S Evidence Theory

In this paper, an online risk assessment model based on D-S evidence theory is presented. The model can quantify the risk caused by an intrusion scenario in real time and provides an objective evaluation of the target's security state. The results of the online risk assessment give a clear and concise picture of both the intrusion progress and the target's security state. The model makes full use of available information from both IDS alerts and the protected targets, and as a result it handles uncertainty and subjectiveness well in its evaluation process. In IDAM&IRS, the model serves as the foundation for intrusion response decision-making.

C. P. Mu, X. J. Li, H. K. Huang, S. F. Tian
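The abstract does not spell out how evidence is combined; the core operation of D-S evidence theory is Dempster's rule of combination. As a hedged illustration (the frame of discernment and the example mass values are assumptions of this sketch, not the paper's model):

```python
# Illustrative Dempster's rule of combination for two basic mass assignments.
# Focal elements are frozensets over the frame of discernment; the example
# masses are assumptions for the sketch, not values from the paper.

def combine(m1, m2):
    """Combine two mass functions with Dempster's rule, normalizing out conflict."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:  # disjoint focal elements contribute to the conflict mass
                conflict += ma * mb
    norm = 1.0 - conflict
    return {s: v / norm for s, v in combined.items()}

# Two evidence sources reporting whether the target is compromised (C) or safe (S).
C, S, CS = frozenset("C"), frozenset("S"), frozenset("CS")
m1 = {C: 0.6, CS: 0.4}          # evidence from IDS alerts
m2 = {C: 0.5, S: 0.3, CS: 0.2}  # evidence about the target state
m = combine(m1, m2)
```

After combination the belief in "compromised" rises above either source alone, which is the intuition behind fusing alert and target-state evidence.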

Session 2: Network Security

Strongly-Resilient and Non-interactive Hierarchical Key-Agreement in MANETs

Key agreement is a fundamental security functionality by which pairs of nodes agree on shared keys to be used for protecting their pairwise communications. In this work we study key-agreement schemes that are well suited to the mobile network environment. Specifically, we describe schemes with the following characteristics:

Non-interactive: any two nodes can compute a unique shared secret key without interaction;

Identity-based: to compute the shared secret key, each node only needs its own secret key and the identity of its peer;

Hierarchical: the scheme is decentralized through a hierarchy where intermediate nodes can derive the secret keys for each of their children without any limitation or prior knowledge of the number of such children or their identities;

Resilient: the scheme is fully resilient against compromise of any number of leaves in the hierarchy, and of a threshold number of nodes in each of the upper levels of the hierarchy.

Several schemes in the literature have three of these four properties, but the schemes in this work are the first to possess all four. This makes them well suited to environments such as MANETs and tactical networks, which are very dynamic, have significant bandwidth and energy constraints, and in which many nodes are vulnerable to compromise. We provide rigorous analysis of the proposed schemes and discuss implementation aspects.

Rosario Gennaro, Shai Halevi, Hugo Krawczyk, Tal Rabin, Steffen Reidt, Stephen D. Wolthusen
Efficient Handling of Adversary Attacks in Aggregation Applications

Current approaches to handling adversary attacks against data aggregation in sensor networks either aim exclusively at the detection of aggregate data corruption or provide rather inefficient ways to identify the nodes captured by an adversary. In contrast, we propose a distributed algorithm for efficient identification of captured nodes over a constant number of rounds, for an arbitrary number of captured nodes. We formulate our problem as a combinatorial group testing problem and show that this formulation leads not only to efficient identification of captured nodes but also to a precise cost-based characterization of when in-network aggregation retains its assumed benefits in a sensor network operating under persistent attacks.

Gelareh Taban, Virgil D. Gligor
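The abstract frames captured-node identification as combinatorial group testing: a test on a subset of nodes reveals whether it contains at least one corrupted node. As a hedged illustration of that idea only (the oracle, node set, and recursive-halving strategy are assumptions of this sketch, not the paper's constant-round algorithm):

```python
# Illustrative group testing: a test reports whether a subset contains at
# least one corrupted node, and recursive halving isolates each one.
# The oracle and node set are assumptions for this sketch; the paper's
# algorithm is distributed and runs in a constant number of rounds.

def find_corrupted(nodes, test):
    """Return all nodes that group tests implicate as corrupted."""
    if not nodes or not test(nodes):
        return []            # a clean group is eliminated with one test
    if len(nodes) == 1:
        return list(nodes)   # a dirty singleton is the culprit
    mid = len(nodes) // 2
    return find_corrupted(nodes[:mid], test) + find_corrupted(nodes[mid:], test)

captured = {3, 7}  # ground truth known only to the simulated test oracle
oracle = lambda group: any(n in captured for n in group)
found = find_corrupted(list(range(10)), oracle)
```

The appeal of the group-testing formulation is visible even in this toy: whole clean subtrees are discharged with a single test instead of testing every node.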
Symmetric Key Approaches to Securing BGP – A Little Bit Trust Is Enough

The Border Gateway Protocol (BGP) is the de facto inter-domain routing protocol that connects autonomous systems (ASes). Despite its importance for the Internet infrastructure, BGP is vulnerable to a variety of attacks due to the lack of security mechanisms in place. Many BGP security mechanisms have been proposed; however, none has been deployed, because of either high cost or high complexity. Finding the right trade-off between efficiency and security has remained challenging.

In this paper, we attempt to trade off efficiency against security by giving a small dose of trust to BGP routers. We present a new flexible threat model that assumes that, for any path of length h, at least one BGP router is trustworthy, where h is a parameter that can be tuned according to security requirements. Based on this threat model, we present two new symmetric key approaches to securing BGP: a centralized key distribution approach and a distributed key distribution approach. Compared to the previous SBGP scheme, our centralized approach gives a 98% improvement in signature verification, and our distributed approach has signature generation cost equivalent to SBGP's with a 98% improvement in signature verification. Compared to the previous SPV scheme, our centralized approach gives a 42% improvement in signature generation and a 96% improvement in signature verification, while our distributed approach gives a 90% improvement in signature generation cost and a 95% improvement in signature verification cost. By combining our approaches with previous public key approaches, it is possible to simultaneously provide an increased level of security and reduced computation cost.

Bezawada Bruhadeshwar, Sandeep S. Kulkarni, Alex X. Liu

Session 3: Smart Cards and Identity Management

Dismantling MIFARE Classic

The MIFARE Classic is a contactless smart card that is used extensively in access control for office buildings, payment systems for public transport, and other applications. We reverse engineered the security mechanisms of this chip: the authentication protocol, the symmetric cipher, and the initialization mechanism. We describe several security vulnerabilities in these mechanisms and exploit them with two attacks; both are capable of retrieving the secret key from a genuine reader. The most serious one recovers the secret key from just one or two authentication attempts with a genuine reader in less than a second on ordinary hardware and without any pre-computation. Using the same methods, an attacker can also eavesdrop on the communication between a tag and a reader and decrypt the whole trace, even if it involves multiple authentications. This enables an attacker to clone a card or to restore a real card to a previous state.

Flavio D. Garcia, Gerhard de Koning Gans, Ruben Muijrers, Peter van Rossum, Roel Verdult, Ronny Wichers Schreur, Bart Jacobs
A Browser-Based Kerberos Authentication Scheme

When two players wish to share a security token (e.g., for the purpose of authentication and accounting), they call a trusted third party. This idea is the essence of the Kerberos protocols, which are widely deployed in large-scale computer networks. Browser-based Kerberos protocols are derivatives in which the Kerberos client application is a commodity Web browser. Whereas the native Kerberos protocol has been repeatedly peer-reviewed without flaws being found, the history of browser-based Kerberos protocols is tarnished with negative results, owing to the fact that subtleties of browsers have been disregarded. We propose a browser-based Kerberos protocol based on client certificates and prove its security in the extended formal model for browser-based mutual authentication introduced at ACM ASIACCS'08.

Sebastian Gajek, Tibor Jager, Mark Manulis, Jörg Schwenk
CROO: A Universal Infrastructure and Protocol to Detect Identity Fraud

Identity fraud (IDF) may be defined as the unauthorized exploitation of credential information through the use of a false identity. We propose CROO, a universal (i.e. generic) infrastructure and protocol to either prevent IDF (by detecting attempts thereof) or limit its consequences (by identifying cases of previously undetected IDF). CROO is a capture-resilient one-time password scheme, whereby each user must carry a personal trusted device used to generate one-time passwords (OTPs) verified by online trusted parties. Multiple trusted parties may be used for increased scalability. OTPs can be used regardless of a transaction's purpose (e.g. user authentication or financial payment), associated credentials, and online or on-site nature; this makes CROO a universal scheme. OTPs are not sent in cleartext; they are used as keys to compute MACs of hashed transaction information, in a manner allowing OTP-verifying parties to confirm that given user credentials (i.e. OTP-keyed MACs) correspond to claimed hashed transaction details. Hashing transaction details increases user privacy. Each OTP is generated from a PIN-encrypted non-verifiable key; this makes users' devices resilient to off-line PIN-guessing attacks. CROO's credentials can be formatted as existing user credentials (e.g. credit cards or driver's licenses).

D. Nali, P. C. van Oorschot
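The abstract's central mechanism, using an OTP as a key to MAC hashed transaction details, can be sketched minimally. The key derivation, hash and MAC choices, and field layout below are assumptions of this illustration, not CROO's actual construction:

```python
import hashlib
import hmac

# Minimal sketch of the OTP-as-MAC-key idea from the abstract: the device
# derives a one-time password from a device secret and a counter, then uses
# it to MAC a hash of the transaction details. Derivation, hash choices, and
# field layout are illustrative assumptions, not CROO's construction.

def derive_otp(device_key: bytes, counter: int) -> bytes:
    """One OTP per counter value; the real scheme derives it from a PIN-encrypted key."""
    return hmac.new(device_key, counter.to_bytes(8, "big"), hashlib.sha256).digest()

def credential(device_key: bytes, counter: int, transaction: bytes) -> bytes:
    """The OTP never travels in cleartext; it keys a MAC over hashed details."""
    otp = derive_otp(device_key, counter)
    hashed_txn = hashlib.sha256(transaction).digest()  # hashing aids privacy
    return hmac.new(otp, hashed_txn, hashlib.sha256).digest()

# A verifying party holding the same device secret recomputes and compares.
key = b"per-device secret"
tag = credential(key, 42, b"pay merchant X $25")
```

Because each counter value yields a fresh OTP, a replayed credential fails verification at the next counter, which is the one-time property the abstract relies on.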

Session 4: Data and Applications Security

Disclosure Analysis and Control in Statistical Databases

Disclosure analysis and control are critical to protect sensitive information in statistical databases when some statistical moments are released. A generic question in disclosure analysis is whether a data snooper can deduce any sensitive information from available statistical moments. To address this question, we consider various types of possible disclosure based on the exact bounds that a snooper can infer about any protected moments from available statistical moments. We focus on protecting static moments in two-dimensional tables and obtain the following results. For each type of disclosure, we reveal the distribution patterns of protected moments that are subject to disclosure. Based on the disclosure patterns, we design efficient algorithms to discover all protected moments that are subject to disclosure. Also based on the disclosure patterns, we propose efficient algorithms to eliminate all possible disclosures by combining a minimum number of available moments. We also discuss the difficulties of executing disclosure analysis and control in high-dimensional tables.

Yingjiu Li, Haibing Lu
TRACE: Zero-Down-Time Database Damage Tracking, Quarantine, and Cleansing with Negligible Run-Time Overhead

As Web services gain popularity in today's e-business world, enabling DBMSs to survive attacks is becoming crucial because of the increasingly critical role that database servers play. Although a number of research projects have tackled the emerging data corruption threats, existing mechanisms are still limited in meeting four highly desired requirements: near-zero run-time overhead, zero system down time, zero blocking time for read-only transactions, and minimal delay time for read-write transactions. In this paper, we propose TRACE, a zero-system-down-time database damage tracking, quarantine, and recovery solution with negligible run-time overhead. TRACE consists of a family of new database damage tracking, quarantine, and cleansing techniques. We built TRACE into the kernel of PostgreSQL. Our experimental results demonstrate that TRACE is the first solution that can simultaneously satisfy the first two requirements and the first that can satisfy all four.

Kun Bai, Meng Yu, Peng Liu
Access Control Friendly Query Verification for Outsourced Data Publishing

Outsourced data publishing is a promising approach to achieve higher distribution efficiency, greater data survivability, and lower management cost. In outsourced data publishing (sometimes referred to as third-party publishing), a data owner gives the content of databases to multiple publishers which answer queries sent by clients. In many cases, the trustworthiness of the publishers cannot be guaranteed; therefore, it is important for a client to be able to verify the correctness of the query results. Meanwhile, due to privacy concerns, it is also required that such verification does not expose information that is outside a client’s access control area. Current approaches for verifying the correctness of query results in third-party publishing either do not consider the privacy preserving requirement, or are limited to one dimensional queries. In this paper, we introduce a new scheme for verifying the correctness of query results while preserving data privacy. Our approach handles multi-dimensional range queries. We present both theoretical analysis and experimental results to demonstrate that our approach is time and space efficient.

Hong Chen, Xiaonan Ma, Windsor Hsu, Ninghui Li, Qihua Wang

Session 5: Privacy Enhancing Technologies

Sharemind: A Framework for Fast Privacy-Preserving Computations

Gathering and processing sensitive data is a difficult task. In fact, there is no common recipe for building the necessary information systems. In this paper, we present a provably secure and efficient general-purpose computation system to address this problem. Our solution, Sharemind, is a virtual machine for privacy-preserving data processing that relies on share computing techniques, a standard way of securely evaluating functions in a multi-party computation environment. The novelty of our solution is in the choice of the secret sharing scheme and the design of the protocol suite. We have made many practical decisions to make large-scale share computing feasible in practice. The protocols of Sharemind are information-theoretically secure in the honest-but-curious model with three computing participants. Although the honest-but-curious model does not tolerate malicious participants, it still provides significantly increased privacy preservation when compared to standard centralised databases.

Dan Bogdanov, Sven Laur, Jan Willemson
Modeling Privacy Insurance Contracts and Their Utilization in Risk Management for ICT Firms

The rapid expansion of Internet-based services has created opportunities for ICT firms to collect and use, in an unauthorized way, information about individuals (e.g. customers, partners, employees). Therefore, privacy issues are becoming increasingly important. In this paper we model the risk that an ICT firm is exposed to as a result of potential privacy violation incidents. The proposed model is based on random utility modeling and aims at capturing the subjective nature of the question: "how important is a privacy violation incident to someone?". Furthermore, we propose a collective risk model for the economic exposure of the firm due to privacy violation. These models are useful for the design and valuation of optimal privacy-related insurance contracts for the firm and support its risk management process.

Athanassios N. Yannacopoulos, Costas Lambrinoudakis, Stefanos Gritzalis, Stylianos Z. Xanthopoulos, Sokratis N. Katsikas
Remote Integrity Check with Dishonest Storage Server

We are interested in the following problem: a verifier, with small and reliable storage, wants to periodically check whether a remote server is keeping a large file x. A dishonest server, by adapting the challenges and responses, tries to discard partial information of x and yet evade detection. Besides the security requirements, there are considerations of communication, storage size, and computation time. Juels et al. [10] gave a security model for Proof of Retrievability ($\mathcal{POR}$) systems. The model imposes the requirement that the original x can be recovered from multiple challenge-responses. Such a requirement is not necessary in our problem. Hence, we propose an alternative security model for Remote Integrity Check ($\mathcal{RIC}$). We study a few schemes and analyze their efficiency and security. In particular, we prove the security of a proposed scheme, HENC. This scheme can be deployed as a $\mathcal{POR}$ system, and it also serves as an example of an effective $\mathcal{POR}$ system whose "extraction" is not verifiable. We also propose a combination of the RSA-based scheme by Filho et al. [7] and the ECC-based authenticator by Naor et al. [12], which achieves good asymptotic performance. This scheme is not a $\mathcal{POR}$ system and seems to be a secure $\mathcal{RIC}$. So far, all schemes that have been proven secure can also be adopted as $\mathcal{POR}$ systems. This raises the question of whether there are fundamental differences between the two models. To highlight the differences, we introduce a notion, trap-door compression, that captures a property of compressibility.

Ee-Chien Chang, Jia Xu
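A generic challenge-response integrity check, keyed tags over file blocks and spot checks on random blocks, conveys the setting the abstract describes. This is a hedged illustration only; it is not HENC or any of the paper's schemes, and the block size and tag layout are assumptions:

```python
import hashlib
import hmac
import os

# Generic sketch of a remote integrity check: the verifier keeps a short key,
# precomputes per-block MAC tags, and later challenges individual blocks.
# This is an illustration of the setting, not HENC or any scheme from the
# paper; block size and tag layout are assumptions.

BLOCK = 4  # tiny block size, for illustration only

def make_tags(key: bytes, data: bytes):
    """Tag each block together with its index, so blocks cannot be swapped."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    return [hmac.new(key, bytes([i]) + b, hashlib.sha256).digest()
            for i, b in enumerate(blocks)]

def server_respond(data: bytes, i: int) -> bytes:
    """An honest server returns the challenged block verbatim."""
    return data[i * BLOCK:(i + 1) * BLOCK]

key = os.urandom(16)
data = b"the large outsourced file x....."
tags = make_tags(key, data)

i = 3  # verifier challenges a random block index
resp = server_respond(data, i)
ok = hmac.new(key, bytes([i]) + resp, hashlib.sha256).digest() == tags[i]
```

A server that discards a block can only pass such a challenge by guessing the MAC, which is the detection property the $\mathcal{RIC}$ model formalizes (with far better storage trade-offs than keeping one tag per block).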

Session 6: Anonymity and RFID Privacy

A Low-Variance Random-Walk Procedure to Provide Anonymity in Overlay Networks

Anonymity in overlay networks may be achieved by building a multi-hop path between the origin and the destination. However, one hop in the overlay network can consist of multiple Internet Protocol (IP) hops. Therefore, the length of the overlay multi-hop path must be reduced in order to maintain a good balance between the cost and the benefit provided by the anonymity facility. Unfortunately, the simple Time-To-Live (TTL) algorithm cannot be directly applied here, since its use could reveal valuable information that breaks anonymity. In this paper, a new mechanism that reduces the length of overlay multi-hop paths is presented. The anonymity level is evaluated by means of simulation, and good results are reported.

J. P. Muñoz-Gea, J. Malgosa-Sanahuja, P. Manzanares-Lopez, J. C. Sanchez-Aarnoutse, J. Garcia-Haro
RFID Privacy Models Revisited

At Asiacrypt 2007, Vaudenay proposed a formal model addressing privacy in RFID, which separated privacy into eight classes. One important conclusion of the paper is the impossibility of achieving strong privacy in RFID. He also left open the question of whether forward privacy without PKC is possible. In this paper, we first revisit the eight RFID privacy classes and simplify them into three classes that address the same goal. Second, we show that strong privacy in RFID is achievable. Third, we answer the open question by pointing out the possibility of achieving forward privacy without PKC, both within Vaudenay's model and in practice.

Ching Yu Ng, Willy Susilo, Yi Mu, Rei Safavi-Naini
A New Formal Proof Model for RFID Location Privacy

The privacy and security problems of RFID systems have been extensively studied. However, less research has been done on the formal analysis of RFID security. The existing adversarial models proposed in the literature have limitations for analyzing RFID location privacy. In this paper, we propose a new formal proof model based on random oracles and indistinguishability. It not only considers passive/active attacks on the message flows between RFID reader and tag, but also takes into account physical attacks that disclose the tag's internal state, making it more suitable for real RFID systems. We further apply our model to analyze the location privacy of an existing RFID protocol.

JungHoon Ha, SangJae Moon, Jianying Zhou, JaeCheol Ha

Session 7: Access Control and Trust Negotiation

Distributed Authorization by Multiparty Trust Negotiation

Automated trust negotiation (ATN) is a promising approach to establishing trust between two entities without any prior knowledge of each other. However, real-world authorization processes often involve online input from third parties, which ATN does not support. In this paper, we introduce multiparty trust negotiation (MTN) as a new approach to distributed authorization. We define a Datalog-based policy language, Distributed Authorization and Release Control Logic (DARCL), to specify both authorization and release control policies. DARCL suits the needs of MTN and can also serve as a powerful general-purpose policy language for authorization. To orchestrate the negotiation process among multiple parties without a centralized moderator, we propose the diffusion negotiation protocol, a set of message-passing conventions that allows parties to carry out a negotiation in a distributed fashion. Building on top of the diffusion negotiation protocol, we propose two negotiation strategies, both safe and complete, to drive MTN with different trade-offs between privacy and negotiation speed.

Charles C. Zhang, Marianne Winslett
Compositional Refinement of Policies in UML – Exemplified for Access Control

The UML is the de facto standard for system specification, but offers little specialized support for the specification and analysis of policies. This paper presents Deontic STAIRS, an extension of the UML sequence diagram notation with customized constructs for policy specification. The notation is underpinned by a denotational trace semantics. We formally define what it means for a system to satisfy a policy specification, and introduce a notion of policy refinement. We prove that the refinement relation is transitive and compositional, thus supporting a stepwise and modular specification process. The approach is exemplified with access control policies.

Bjørnar Solhaug, Ketil Stølen
On the Security of Delegation in Access Control Systems

Delegation is a mechanism that allows a user A to act on another user B's behalf by making B's access rights available to A. It is well recognized as an important mechanism for providing resiliency and flexibility in access control systems, and has gained popularity in the research community. However, most existing literature focuses on modeling and managing delegations; little work has been done on understanding the impact of delegation on the security of existing access control systems. In particular, no formal notion of security with respect to delegation has been proposed. Many existing access control systems are designed without delegation in mind, and simply incorporating a delegation module into those systems may cause security breaches.

This paper focuses on the security aspect of delegation in access control systems. We first give examples on how colluding users may abuse the delegation support of access control systems to circumvent security policies, such as separation of duty. As a major contribution, we propose a formal notion of security with respect to delegation in access control systems. After that, we discuss potential mechanisms to enforce security. In particular, we design a novel source-based enforcement mechanism for workflow authorization systems so as to achieve both security and efficiency.

Qihua Wang, Ninghui Li, Hong Chen

Session 8: Information Flow and Non-transferability

Termination-Insensitive Noninterference Leaks More Than Just a Bit

Current tools for analysing information flow in programs build upon ideas going back to Denning's work from the 1970s. These systems enforce an imperfect notion of information flow which has become known as termination-insensitive noninterference. Under this version of noninterference, information leaks are permitted if they are transmitted purely by the program's termination behaviour (i.e., whether it terminates or not). This imperfection is the price to pay for having a security condition which is relatively liberal (e.g. allowing while-loops whose termination may depend on the value of a secret) and easy to check. But what is the price exactly? We argue that, in the presence of output, the price is higher than the "one bit" often claimed informally in the literature; effectively, such programs can leak all of their secrets. In this paper we develop a definition of termination-insensitive noninterference suitable for reasoning about programs with outputs. We show that the definition generalises "batch-job" style definitions from the literature and that it is indeed satisfied by a Denning-style program analysis with output. Although more than a bit of information can be leaked by programs satisfying this condition, we show that the best an attacker can do is a brute-force attack, which means that the attacker cannot reliably (in a technical sense) learn the secret in time polynomial in the size of the secret. If we further assume that secrets are uniformly distributed, we show that the advantage the attacker gains when guessing the secret after observing a polynomial amount of output is negligible in the size of the secret.

Aslan Askarov, Sebastian Hunt, Andrei Sabelfeld, David Sands
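The "more than one bit" point can be made concrete with a folklore-style example: a program whose only disallowed channel is termination can still reveal an entire bounded secret through its public outputs, because the point at which output stops identifies the secret. The specific program below is an illustrative assumption, not necessarily the example used in the paper:

```python
# Toy illustration of leaking a whole secret through output plus termination:
# the program prints counter values and diverges once the counter equals the
# secret, so the last observed output identifies the secret exactly.
# (The divergence is cut short with `break` so the sketch terminates;
# in the real attack it would be `while i == secret: pass`.)

def leaky(secret: int, bound: int):
    """Emit public outputs 0, 1, 2, ... and 'diverge' at the secret value."""
    outputs = []
    for i in range(bound):
        outputs.append(i)  # public output, permitted by the analysis
        if i == secret:    # stands in for a secret-dependent infinite loop
            break
    return outputs

outs = leaky(secret=5, bound=10)
guess = outs[-1]  # attacker inference: output stopped exactly at the secret
```

Each individual step only "risks" nontermination, which termination-insensitive analyses accept, yet the observer recovers the secret in full; the paper's contribution is to bound how efficiently such brute-force leaks can proceed.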
Security Provisioning in Pervasive Environments Using Multi-objective Optimization

Pervasive computing applications involve information flow across multiple organizations. Thus, any security breach in an application can have far-reaching consequences. However, effective security mechanisms can be quite different from those typically deployed in conventional applications since these mechanisms are constrained by various factors in a pervasive environment. In this paper, we propose a methodology to perform a cost-benefit analysis under such circumstances. Our approach is based on the formulation of a set of constrained multi-objective optimization problems to minimize the residual damage and the cost of security provisioning. We propose the use of workflow profiles to capture the contexts in which a communication channel is used in a pervasive environment. This is used to minimize the cost that the underlying business entity will have to incur in order to keep the workflow secure and running.

Rinku Dewri, Indrakshi Ray, Indrajit Ray, Darrell Whitley
Improved Security Notions and Protocols for Non-transferable Identification

Different security notions and settings for identification protocols have been proposed so far, considering different adversary models where the main objective is the non-transferability of the proof.

In this paper we consider one of the strongest non-transferability notions, namely resettable non-transferable identification, introduced by Bellare et al. This notion aims at capturing security with respect to powerful adversaries that have physical access to the device that proves its identity, and thus can potentially reset its internal state. We discuss some limitations of existing notions for secure identification protocols, as well as different impossibility results for strong notions of non-transferability. We introduce a new strong and achievable notion for resettable non-transferable identification that reflects real scenarios more adequately, and present a generic protocol that satisfies this notion. We then show how to efficiently instantiate our construction and discuss how our protocol can improve current proposals for the next generation of electronic passports (e-passports).

Carlo Blundo, Giuseppe Persiano, Ahmad-Reza Sadeghi, Ivan Visconti

Session 9: Secure Electronic Voting and Web Applications Security

Human Readable Paper Verification of Prêt à Voter

The Prêt à Voter election scheme provides high assurance of accuracy and secrecy, due to the high degree of transparency and auditability. However, the assurance arguments are subtle and involve some understanding of the role of cryptography. As a result, establishing public understanding and trust in such systems remains a challenge. It is essential that a voting system be not only trustworthy but also widely trusted.

In response to this concern, we propose to add a mechanism to Prêt à Voter to generate a conventional (i.e. human readable) paper audit trail that can be invoked should the outcome of the cryptographic count be called into question. It is hoped that having such a familiar mechanism as a safety net will encourage public confidence. Care has to be taken to ensure that the mechanism does not undermine the carefully crafted integrity and privacy assurances of the original scheme.

We show that, besides providing a confidence building measure, this mechanism brings with it a number of interesting technical features: it allows extra audits of mechanisms that capture and process the votes to be performed. In particular, the mechanism presented here allows direct auditing of ballots that are actually used to cast votes. This is in contrast to previous versions of Prêt à Voter, and indeed other verifiable schemes, that employ a cut-and-choose mechanism. The mechanism proposed also has the benefit of providing a robust counter to the danger of voters undermining the receipt-freeness property by trying to retain the candidate list.

David Lundin, Peter Y. A. Ryan
A Distributed Implementation of the Certified Information Access Service

In this paper we consider the problem of securely outsourcing computation on private data. We present a protocol for securely distributing the computation of the data structures used by current implementations of the Certified Information Access primitive. To this end, we introduce the concept of a Verifiable Deterministic Envelope, which may be of independent interest, and we provide practical implementations based on standard cryptographic assumptions.

We also discuss a prototype implementation of our proposal based on the PUB-WCL framework developed at the University of Paderborn for sharing computation across a network.

Carlo Blundo, Emiliano De Cristofaro, Aniello Del Sorbo, Clemente Galdi, Giuseppe Persiano
Exploring User Reactions to New Browser Cues for Extended Validation Certificates

With the introduction of Extended Validation SSL certificates in Internet Explorer 7.0, web browsers are introducing new indicators to convey status information about different types of certificates. We carried out a user study which compared a proposed new interface in the Mozilla Firefox browser with an alternative interface of our own design to investigate how users react to these new indicators. Our study included eye-tracking data, which provided empirical evidence about which parts of the browser interface users tended to look at during the study and which areas went unnoticed. Our results show that, while the new interface features in the unmodified Firefox browser went unnoticed by all users in our study, the modified design was noticed by over half of the participants, and most users showed a willingness to adopt these features once made aware of their functionality.

Jennifer Sobey, Robert Biddle, P. C. van Oorschot, Andrew S. Patrick
A Framework for the Analysis of Mix-Based Steganographic File Systems

The goal of Steganographic File Systems (SFSs) is to protect users from coercion attacks by providing plausible deniability on the existence of hidden files. We consider an adversary who can monitor changes in the file store and use this information to look for hidden files when coercing the user. We outline a high-level SFS architecture that uses a local mix to relocate files in the remote store, and thus prevent known attacks [TDDP07] that rely on low-entropy relocations. We define probabilistic metrics for unobservability and (plausible) deniability, present an analytical framework to extract evidence of hidden files from the adversary's observation (before and after coercion), and show in an experimental setup how this evidence can be used to reduce deniability. This work is a first step towards understanding and addressing the security requirements of SFSs operating under the considered threat model, of relevance in scenarios such as remote stores managed by semi-trusted parties, or distributed peer-to-peer SFSs.

Claudia Diaz, Carmela Troncoso, Bart Preneel

Session 10: VoIP Security, Malware, and DRM

An Adaptive Policy-Based Approach to SPIT Management

Voice over IP (VoIP) is a key enabling technology, which provides new ways of communication. VoIP technologies take advantage of existing data networks to provide inexpensive voice communications world-wide as a promising alternative to the traditional telephone service. At the same time, VoIP provides the means for transmitting bulk unsolicited calls, namely SPam over Internet Telephony (SPIT). SPIT is, to a certain extent, similar to email spam. However, it is expected to be more frustrating because of the real-time processing requirements of voice calls. In this paper we set the foundations of an adaptive approach that handles SPIT through a policy-based management approach (aSPM). aSPM incorporates a set of rules for SPIT attack detection, together with appropriate actions and controls that should be enforced so as to counter these attacks. Furthermore, the policy is formally described through an XML schema, which refers to both the attack detection rules and the corresponding defense actions.

Yannis Soupionis, Stelios Dritsas, Dimitris Gritzalis
Structured Peer-to-Peer Overlay Networks: Ideal Botnets Command and Control Infrastructures?

Botnets, in particular the Storm botnet, have been garnering much attention as vehicles for Internet crime. Storm uses a modified version of Overnet, a structured peer-to-peer (P2P) overlay network protocol, to build its command and control (C&C) infrastructure. In this study, we use simulation to determine whether there are any significant advantages or disadvantages to employing structured P2P overlay networks for botnet C&C, in comparison to using unstructured P2P networks or other complex network models. First, we identify some key measures to assess the C&C performance of such infrastructures, and employ these measures to evaluate Overnet, Gnutella (a popular, unstructured P2P overlay network), the Erdős-Rényi random graph model and the Barabási-Albert scale-free network model. Further, we consider the following three disinfection strategies: a) a random strategy that, with effort, can remove randomly selected bots and uses no knowledge of the C&C infrastructure, b) a tree-like strategy where local information obtained from a disinfected bot (e.g. its peer list) is used to more precisely disinfect new machines, and c) a global strategy, where global information, such as the degree of connectivity of bots within the C&C infrastructure, is used to target bots whose disinfection will have maximum impact. Our study reveals that while Overnet is less robust to random node failures or disinfections than the other infrastructures modelled, it outperforms them in terms of resilience against the targeted disinfection strategies introduced above. In that sense, Storm designers seem to have made a prudent choice! This work underlines the need to better understand how P2P networks are used, and can be used, within the botnet context, with this domain being quite distinct from their more commonplace usages.

Carlton R. Davis, Stephen Neville, José M. Fernandez, Jean-Marc Robert, John McHugh
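The three disinfection strategies described in the abstract can be sketched as removal policies over a graph. The sketch below is our own toy model on an adjacency-set dictionary, not the paper's simulator; all function names and the graph encoding are assumptions made for illustration.

```python
import random

def random_strategy(graph, budget, rng):
    """Strategy (a): remove `budget` randomly chosen bots,
    using no knowledge of the C&C infrastructure."""
    nodes = sorted(graph)
    return rng.sample(nodes, min(budget, len(nodes)))

def tree_like_strategy(graph, budget, rng):
    """Strategy (b): start from one disinfected bot and follow
    its peer list outwards (breadth-first) to pick new targets."""
    start = rng.choice(sorted(graph))
    seen, frontier, order = {start}, [start], [start]
    while frontier and len(order) < budget:
        node = frontier.pop(0)
        for peer in graph[node]:
            if peer not in seen:
                seen.add(peer)
                order.append(peer)
                frontier.append(peer)
                if len(order) >= budget:
                    break
    return order

def global_strategy(graph, budget):
    """Strategy (c): use global connectivity knowledge to target
    the highest-degree bots first, for maximum impact."""
    return sorted(graph, key=lambda n: len(graph[n]), reverse=True)[:budget]
```

On a star-shaped topology, for instance, the global strategy immediately targets the hub, while the random strategy is oblivious to it; this is the kind of asymmetry the paper's resilience comparison measures.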
Eureka: A Framework for Enabling Static Malware Analysis

We introduce Eureka, a framework for enabling static analysis on Internet malware binaries. Eureka incorporates a novel binary unpacking strategy based on statistical bigram analysis and coarse-grained execution tracing. The Eureka framework uniquely distinguishes itself from prior work by providing effective evaluation metrics and techniques to assess the quality of the produced unpacked code. Eureka provides several Windows API resolution techniques that identify system calls in the unpacked code by overcoming various existing control flow obfuscations. Eureka's unpacking and API resolution capabilities facilitate the structural analysis of the underlying malware logic by means of micro-ontology generation that labels groupings of identified API calls based on their functionality. They enable a visual means for understanding malware code through the automated construction of annotated control flow and call graphs. Our evaluation on multiple datasets reveals that Eureka can simplify analysis on a large fraction of contemporary Internet malware by successfully unpacking and deobfuscating API references.

Monirul Sharif, Vinod Yegneswaran, Hassen Saidi, Phillip Porras, Wenke Lee
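The idea behind statistical bigram analysis for unpacking can be illustrated with a small sketch. This is our own simplification: the bigram set and the scoring function are illustrative assumptions, not Eureka's actual feature set or statistics.

```python
from collections import Counter

# A few byte bigrams that frequently begin common x86 instruction
# sequences (call [mem], push ebp / mov ebp, esp, push [ebp+...]).
# Illustrative subset only, not Eureka's actual model.
CODE_BIGRAMS = {b"\x55\x8b", b"\x8b\xec", b"\xff\x15", b"\xff\x75"}

def bigram_code_score(buf: bytes) -> float:
    """Fraction of byte bigrams in `buf` that look code-like.
    A score that rises across successive memory snapshots during
    coarse-grained execution tracing suggests that unpacked code
    has appeared in memory."""
    if len(buf) < 2:
        return 0.0
    counts = Counter(buf[i:i + 2] for i in range(len(buf) - 1))
    hits = sum(c for bg, c in counts.items() if bg in CODE_BIGRAMS)
    return hits / (len(buf) - 1)
```

A packed region (high-entropy, near-random bytes) scores close to zero, while an unpacked code section scores noticeably higher, which is what makes a statistical criterion like this usable as an unpacking trigger.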
New Considerations about the Correct Design of Turbo Fingerprinting Codes

Since the introduction of turbo codes in 1993, many new applications for this family of codes have been proposed. One of the latest, in the context of digital fingerprinting, is the family of turbo fingerprinting codes proposed by Zhang et al. The main idea is a new fingerprinting code composed of an outer turbo code and an inner code based on the Boneh-Shaw model. The major contribution of this paper is a new analysis of this family of codes that reveals its drawbacks. These drawbacks must be taken into account in the design of a turbo fingerprinting scheme; otherwise the scheme cannot identify the traitor users, which is the main goal of digital fingerprinting. Moreover, the identification of these drawbacks allows us to discuss an entirely new construction of fingerprinting codes based on turbo codes.

Joan Tomàs-Buliart, Marcel Fernández, Miguel Soriano

Session 11: Formal Models and Cryptographic Protocols

Formally Bounding the Side-Channel Leakage in Unknown-Message Attacks

We propose a novel approach for quantifying a system’s resistance to unknown-message side-channel attacks. The approach is based on a measure of the secret information that an attacker can extract from a system from a given number of side-channel measurements. We provide an algorithm to compute this measure, and we use it to analyze the resistance of hardware implementations of cryptographic algorithms with respect to timing attacks. In particular, we show that message-blinding – the common countermeasure against timing attacks – reduces the rate at which information about the secret is leaked, but that the complete information is still eventually revealed. Finally, we compare information measures corresponding to unknown-message, known-message, and chosen-message attackers and show that they form a strict hierarchy.

Michael Backes, Boris Köpf
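As a rough illustration of the kind of measure involved (our own formalization sketch; the paper's exact definition may differ), the attacker's remaining uncertainty about a secret key K after n side-channel observations can be captured with conditional entropy, and the leaked information as its complement:

```latex
% Illustrative information measure, not necessarily the paper's exact one.
% Remaining uncertainty about the key K after observations O_1, ..., O_n:
H(K \mid O_1, \dots, O_n) = -\sum_{k,\,o} \Pr[k, o] \log_2 \Pr[k \mid o]
% Information leaked after n measurements:
I_n = H(K) - H(K \mid O_1, \dots, O_n)
```

Under this reading, the abstract's result about message blinding says that blinding lowers the rate at which I_n grows with n, but does not stop I_n from eventually approaching H(K).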
Cryptographic Protocol Explication and End-Point Projection

Cryptographic protocols are useful for engineering trust in transactions. There are several languages for describing these protocols, but these tend to capture the communications from the perspective of an individual role. In contrast, traditional protocol descriptions as found in a state of nature tend to employ a whole-protocol description, resulting in an impedance mismatch.

In this paper we present two results to address this gap between human descriptions and deployable specifications. The first is an end-point projection technique that consumes an explicit whole-protocol description and generates specifications that capture the behavior of each participant role. In practice, however, many whole-protocol descriptions contain idiomatic forms of implicit specification. We therefore present our second result, a transformation that identifies and eliminates these implicit patterns, thereby preparing protocols for end-point projection.

Concretely, our tools consume protocols written in our whole-protocol language, wppl, and generate role descriptions in the cryptographic protocol programming language, cppl. We have formalized and established properties of the transformations using the Coq proof assistant. We have validated our transformations by applying them successfully to most of the protocols in the spore repository.

Jay McCarthy, Shriram Krishnamurthi
State Space Reduction in the Maude-NRL Protocol Analyzer

The Maude-NRL Protocol Analyzer (Maude-NPA) is a tool and inference system for reasoning about the security of cryptographic protocols in which the cryptosystems satisfy different equational properties. It both extends and provides a formal framework for the original NRL Protocol Analyzer, which supported equational reasoning in a more limited way. Maude-NPA supports a wide variety of algebraic properties that includes many cryptosystems of interest, such as one-time pads and Diffie-Hellman. Maude-NPA, like the original NPA, looks for attacks by searching backwards from an insecure attack state, and assumes an unbounded number of sessions. Because of the unbounded number of sessions and the support for different equational theories, it is necessary to develop ways of reducing the search space and avoiding infinite search paths. As a result, we have developed a number of state space reduction techniques. To prove useful, these techniques must not only speed up the search but also preserve soundness, so that failure to find attacks still guarantees security. In this paper we describe the state space reduction techniques we use. We also provide soundness proofs, and experimental evaluations of their effect on the performance of Maude-NPA.

Santiago Escobar, Catherine Meadows, José Meseguer

Session 12: Language-Based and Hardware Security

Code-Carrying Authorization

In authorization, there is often a wish to shift the burden of proof to those making requests, since they may have more resources and more specific knowledge to construct the required proofs. We introduce an extreme instance of this approach, which we call Code-Carrying Authorization (CCA). With CCA, access-control decisions can partly be delegated to untrusted code obtained at run-time. The dynamic verification of this code ensures the safety of authorization decisions. We define and study this approach in the setting of a higher-order spi calculus. The type system of this calculus provides the needed support for static and dynamic verification.

Sergio Maffeis, Martín Abadi, Cédric Fournet, Andrew D. Gordon
CPU Bugs, CPU Backdoors and Consequences on Security

In this paper, we present the consequences on the security of operating systems and virtual machine monitors of the presence of a bug or a backdoor in x86 processors. We will not try to determine whether the backdoor threat is realistic or not, but we will assume that a bug or a backdoor exists and analyse the consequences on systems. We will show how it is possible for an attacker to implement a simple and generic CPU backdoor to be later able to bypass mandatory security mechanisms with very limited initial privileges. We will explain practical difficulties and show proof of concept schemes using a modified Qemu CPU emulator. Backdoors studied in this paper are all usable from the software level without any physical access to the hardware.

Loïc Duflot
Backmatter
Metadata
Title: Computer Security - ESORICS 2008
Edited by: Sushil Jajodia, Javier Lopez
Copyright year: 2008
Publisher: Springer Berlin Heidelberg
Electronic ISBN: 978-3-540-88313-5
Print ISBN: 978-3-540-88312-8
DOI: https://doi.org/10.1007/978-3-540-88313-5