2014 | Book

Security Protocols XVIII

18th International Workshop, Cambridge, UK, March 24-26, 2010, Revised Selected Papers

Editors: Bruce Christianson, James Malcolm

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science

About this book

This book constitutes the thoroughly refereed post-workshop proceedings of the 18th International Workshop on Security Protocols, held in Cambridge, UK, in March 2010. After an introduction, the volume presents 16 revised papers and one abstract, each followed by a revised transcript of the discussion that followed the presentation at the event. The theme of this year's workshop was "Virtually Perfect Security".

Table of Contents

Frontmatter
Introduction: Virtually Perfect Security (Transcript of Discussion)

Hello everyone, and welcome to the 18th Security Protocols Workshop. Our theme this year is “Virtually Perfect Security”, which is an attempt to tie together three slightly different interlocking strands. The first is the fact that although we talk about security as if it were some sort of metaphysical property (so that a system either is secure or isn’t), we all know that whether a system is really secure depends on the context in which you put it, and you can move a system to a different context and change whether it’s secure or not. In practice, we also usually prove security relative to a particular abstraction, and the danger is that we have a system that “really” is secure, and then we discover that the attacker is using a different abstraction. Our attempt to find abstractions which the attacker can’t fool with this trick has pushed us into talking about security using abstractions that are further and further away from anything that a user might think of as comprehensible or convenient.

Bruce Christianson
Caught in the Maze of Security Standards

We analyze the interactions between several national and international standards for smart-card-based applications, noting deficiencies in those standards, or at least in the documentation of their dependencies. We show that smart card protocols are currently specified in such a way that standard-compliant implementations may be vulnerable to attack. We further show that attempts to upgrade security by increasing the length of cryptographic keys may fail when the message formats in protocols are not re-examined at the same time.
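
The abstract's last point is easy to see in miniature. Below is a small, hypothetical sketch (the field width and message layout are invented for illustration, not taken from any of the standards the paper analyses) of a message format that hard-codes a 128-byte field sized for an RSA-1024 signature; merely doubling the key length breaks standard-compliant packing:

```python
import struct

SIG_FIELD_LEN = 128  # fixed field width, sized for RSA-1024 (hypothetical format)

def pack_message(payload: bytes, signature: bytes) -> bytes:
    # Layout: 2-byte big-endian payload length | payload | fixed-width signature.
    if len(signature) > SIG_FIELD_LEN:
        raise ValueError(f"signature of {len(signature)} bytes does not fit "
                         f"the {SIG_FIELD_LEN}-byte field")
    return struct.pack(">H", len(payload)) + payload + signature.rjust(SIG_FIELD_LEN, b"\x00")

pack_message(b"hello", b"\x01" * 128)       # fine with a 1024-bit key
try:
    pack_message(b"hello", b"\x01" * 256)   # RSA-2048: longer key, same old format
except ValueError as e:
    print("key upgraded, format not re-examined:", e)
```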

Jan Meier, Dieter Gollmann
Caught in the Maze of Security Standards (Transcript of Discussion)

In the spirit of the workshop, this will be a talk on work in progress, or maybe more precisely, work that has started, where we have yet to see where it is going to take us.

It will be a talk about security protocols, but not a talk about why the design of security protocols is difficult; that’s rubbish, it’s not difficult, it’s fairly easy if you know what to do. If you insist on doing it when you don’t know what you’re doing, then you might see the customary mistakes.

Dieter Gollmann
Blood in the Water
Are there Honeymoon Effects Outside Software?

In a previous paper at this workshop (and in a forthcoming full paper), we observed that software systems enjoy a security “honeymoon period” in the early stages of their life-cycles. Attackers take considerably longer to make their first discoveries of exploitable flaws in a software system than they do to discover flaws once the system matures. This is true even though the first flaws presumably represent the easiest bugs to find, and even though the more mature systems tend to be more intrinsically robust.

Sandy Clark, Matt Blaze, Jonathan Smith
Blood in the Water (Transcript of Discussion)

A couple of years ago, when we were analysing voting machines, we came across a question for which we didn’t have an answer: if these machines are so bad, why aren’t they being attacked left and right? These machines were full of vulnerabilities, they were trivial to exploit, and yet, it strikes me, there’s been no documented case of an attack on any voting system that exploits a software or hardware vulnerability.

Sandy Clark
Digital Immolation
New Directions for Online Protest

The current literature and experience of online activism assumes two basic uses of the Internet for social movements: straightforward extensions of offline organising and fund-raising using online media to improve efficiency and reach, or “hacktivism” using technical knowledge to illegally deface or disrupt access to online resources. We propose a third model which is non-violent yet proves commitment to a cause by enabling a group of activists to temporarily or permanently sacrifice valuable online identities such as email accounts, social networking profiles, or gaming avatars. We describe a basic cryptographic framework for enabling such a protest, which provides an additional property of binding solidarity which is not normally possible offline.
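
The paper's framework is not reproduced in this abstract. As a minimal sketch of the commitment ingredient such a protest could build on (the names and the reveal-on-quorum rule here are illustrative assumptions, not the paper's construction), each activist might publish a hash commitment to an account credential, to be opened only once enough participants have pledged:

```python
import hashlib, os

def commit(credential: bytes) -> tuple[bytes, bytes]:
    nonce = os.urandom(16)  # blinding value: the published digest reveals nothing
    return hashlib.sha256(nonce + credential).digest(), nonce

def verify(digest: bytes, nonce: bytes, credential: bytes) -> bool:
    return hashlib.sha256(nonce + credential).digest() == digest

# Publish `digest` now; reveal (nonce, credential) only when the agreed
# quorum of activists has committed, binding each pledge to the group's.
digest, nonce = commit(b"alice@example.org:hunter2")
assert verify(digest, nonce, b"alice@example.org:hunter2")
```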

Joseph Bonneau
Digital Immolation (Transcript of Discussion)

How did I get to this topic? I thought about the biggest problems in the world: inequality around the globe, climate change, baby seals being hit over the head by Canadians, and Conan O’Brien, my favourite talk-show host, fired by NBC. I only see two people nodding, so I guess he didn’t catch on internationally, but in America people were really upset about this. So seriously (and sadly), young people in America, especially college-aged students, were probably equally upset about Conan being fired and the Haiti earthquake, which happened around the same time and generated a similar online buzz.

Joseph Bonneau
Relay-Proof Channels Using UWB Lasers

Consider the following situation: Alice is a hand-held device, such as a PDA. Bob is a device providing a service, such as an ATM, an automatic door, or an anti-aircraft gun pointing at the gyro-copter in which Alice is travelling.

Bob and Alice have never directly met before, but share a key as a result of secure hand-offs. Alice has used this key to request a service from Bob (dispense cash, open door, don’t shoot). Before complying, Bob needs to be sure that it really is Alice that he can see in front of him, and not Mort.
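
A relay attack defeats this check precisely because messages can be forwarded, and the standard countermeasure family this paper contributes to is distance bounding: the round-trip time of a challenge-response exchange bounds how far away the prover can be. A back-of-the-envelope sketch (the delay figures below are illustrative, not from the paper):

```python
C = 299_792_458.0  # speed of light in m/s: no relay can beat this

def max_prover_distance(rtt_s: float, processing_s: float) -> float:
    """Upper bound on the prover's distance, given the measured round-trip
    time and the prover's (assumed) processing delay."""
    return C * (rtt_s - processing_s) / 2.0

# A 100 ns round trip with 20 ns of processing places Alice within ~12 m;
# a relaying Mort adds propagation delay he cannot hide.
print(f"{max_prover_distance(100e-9, 20e-9):.1f} m")  # -> 12.0 m
```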

Bruce Christianson, Alex Shafarenko, Frank Stajano, Ford Long Wong
Relay-Proof Channels Using UWB Lasers (Transcript of Discussion)

This talk is about the mechanics of security as well as protocols for security; what I am trying to do is to work out some general principles of a certain technology that is necessary for a certain type of security protocol.

This is an authentication problem with a twist. There’s a prover and a verifier, they talk to each other, they use standard protocols for the prover to prove its identity to the verifier. The twist is that both the prover and the verifier have spatial coordinates, and the goal is not just to verify that the prover is who he says he is, but also that he is there in person.

Alex Shafarenko
Using Dust Clouds to Enhance Anonymous Communication

Cloud computing platforms, such as Amazon EC2 [1], enable customers to lease several virtual machines (VMs) on a per-hour basis. The customer can now obtain a dynamic and diverse collection of machines spread across the world. In this paper we consider how this aspect of cloud computing can facilitate anonymous communications over untrusted networks such as the Internet, and discuss some of the challenges that arise as a result.

Richard Mortier, Anil Madhavapeddy, Theodore Hong, Derek Murray, Malte Schwarzkopf
Using Dust Clouds to Enhance Anonymous Communication (Transcript of Discussion)

This presentation is about dust clouds and third-party anonymity, and the fundamental technology that we’re building on is mix networks, which I assume most of you are familiar with. The idea is that Alice wants to communicate anonymously with Bob. She does not want to be anonymous to Bob, but wants to be anonymous to some evil Eve that’s observing the network. To do that she sends her data into a mix network; it stays hidden from the ingress point to the egress point, and then comes out to Bob, and Eve, observing parts of the mix network, can’t tell what’s being said or who’s talking to whom, because it all travels encrypted in the network. However, that does not help if Eve is able to look at both the ingress and the egress point of the network; then she can still see what’s going on, so if some evil overlord has a view of the entire Internet (a global passive adversary), a mix network alone does not protect us. That’s the basic setting we’re looking at, and the specific mix network that we are concerned with is Tor; we think that our strategy would work with pretty much any mix network, but Tor is the example that we used in the paper.
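
For readers who aren't familiar with them, the layering idea behind a mix network is easy to sketch. The toy below uses Fernet purely for brevity (real Tor uses its own telescoping circuit construction, not this): Alice wraps one encryption layer per relay, so each relay can peel exactly one layer and learns only its neighbours.

```python
from cryptography.fernet import Fernet

hop_keys = [Fernet.generate_key() for _ in range(3)]  # one key per relay

def wrap(message: bytes) -> bytes:
    # Encrypt for the last hop first, then wrap earlier hops around it.
    for key in reversed(hop_keys):
        message = Fernet(key).encrypt(message)
    return message

onion = wrap(b"hello Bob")
for hop in range(3):                     # each relay removes its own layer
    onion = Fernet(hop_keys[hop]).decrypt(onion)
assert onion == b"hello Bob"             # only the exit sees the plaintext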

Malte Schwarzkopf
Generating Channel Ids in Virtual World Operating Systems (Extended Abstract)

Two of the most popular software platforms for creating virtual worlds, Second Life and Metaplace, both use a message-passing architecture. Objects communicate by sending messages on channels. The channel’s name or identifier acts as a capability: a program that knows the channel’s identifier can send and receive from the channel.
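
For an identifier to serve as a capability it must be unguessable, not merely unique; a sequential or low-entropy channel number could be swept by brute force. A minimal sketch of the generation side (the surrounding API is invented for illustration and is not Second Life's or Metaplace's):

```python
import secrets

channels: dict[int, list[bytes]] = {}

def create_channel() -> int:
    # 128 bits from a CSPRNG: knowing the id *is* the authority to use
    # the channel, so it must be unpredictable, not just collision-free.
    cid = secrets.randbits(128)
    channels[cid] = []
    return cid

def send(cid: int, msg: bytes) -> None:
    channels[cid].append(msg)  # possession of cid suffices; no ACL lookup

cid = create_channel()
send(cid, b"ping")  # anyone who learns cid can do the same
```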

Michael Roe
Generating Channel Ids in Virtual World Operating Systems (Transcript of Discussion)

This is Michael Roe’s work, but Mike has kindly asked me to present it so that I can be here today, and so that he can also present some different work tomorrow. You can ask questions, and check how well I understood the discussions we’ve had, mostly by the coffee machine, over the last few years.

George Danezis
Censorship-Resilient Communications through Information Scattering

The aim of this paper is to present a new idea for censorship-resilient communication in Internet services such as blogs or web publishing. The motivation comes from the fact that in many situations guaranteeing this property is even a matter of personal freedom. Our idea is: i) to split the actual content of a message and scatter it across different retrieval points; ii) to hide the content of each fragment so that it is unidentifiable, using encryption and steganography; and iii) to allow the intended recipient to correctly retrieve the original message. A further extension of this idea allows the recipient to retrieve the message even if: i) some of the retrieval points are not available; or ii) some of the retrieved data have been tampered with, that is, their integrity has been violated.
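
Availability despite missing retrieval points is exactly what a threshold scheme provides. Here is a minimal Shamir (k, n) sketch of the splitting step (the paper also layers encryption and steganography on top and handles tampered shares; neither is shown here):

```python
import secrets

P = 2**127 - 1  # prime field large enough for a 16-byte message chunk

def split(secret: int, k: int, n: int) -> list[tuple[int, int]]:
    # Random degree-(k-1) polynomial with the secret as constant term.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares: list[tuple[int, int]]) -> int:
    # Lagrange interpolation at x = 0 over GF(P).
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = split(123456789, k=3, n=5)      # scatter 5 points; any 3 suffice
assert recover(shares[:3]) == 123456789  # two retrieval points may vanish
```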

Stefano Ortolani, Mauro Conti, Bruno Crispo
Censorship-Resilient Communications through Information Scattering (Transcript of Discussion)

The problem that we are trying to address is how to have anonymous and censorship-resilient communication, and we know that these kinds of communication are important. We had several talks here yesterday about anonymity, for example, how to improve the anonymity of Tor, but anonymity is not the only important factor, because in many scenarios there is also a censorship authority that can ban the message; there is no point being anonymous if the message is not conveyed to the intended set of recipients. We have a motivational example: of course there is nothing political about this example, it is just to illustrate that in many situations it is necessary to have a mechanism to communicate anonymously, and a mechanism to prevent a censorship authority from banning our message. Here the censorship authority is basically a government, but I saw that there is another paper this afternoon on censorship of eBooks, and in that example the censor is not a government. So in many scenarios we need to communicate anonymously, and we need to let the message reach its destination without being banned.

Mauro Conti
On Storing Private Keys in the Cloud

Many future applications, such as distributed social networks, will rely on public-key cryptography, and users will want to access them from many locations. Currently, there is no way to store private keys “in the cloud” without placing complete faith in a centralised operator. We propose a protocol that can be used to share secrets such as private keys among several key recovery agents, using a weak password, in a way that prevents insiders from recovering either the private key or the password without significant collusion. This protocol will enable the safe storage of private keys online, which will facilitate the advent of secure, decentralised, globally-accessible systems.
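
To see why the weak password is the crux, consider the naive single-operator design the paper improves on. In the sketch below (PBKDF2 plus Fernet, chosen for brevity; this is not the paper's protocol), one party stores the password-wrapped key and can therefore guess passwords offline at KDF speed; splitting the secret among several agents forces collusion instead.

```python
import base64, hashlib, os
from cryptography.fernet import Fernet

def wrap(private_key: bytes, password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    k = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    token = Fernet(base64.urlsafe_b64encode(k)).encrypt(private_key)
    return salt, token  # whoever holds both can mount offline guessing

def unwrap(salt: bytes, token: bytes, password: str) -> bytes:
    k = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return Fernet(base64.urlsafe_b64encode(k)).decrypt(token)

salt, token = wrap(b"-----BEGIN PRIVATE KEY----- ...", "correct horse")
assert unwrap(salt, token, "correct horse").startswith(b"-----BEGIN")
```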

Jonathan Anderson, Frank Stajano
On Storing Private Keys in the Cloud (Transcript of Discussion)

Hello, I’m Jonathan Anderson, a PhD student here in the security group in Cambridge, and these are some thoughts I’ve been having with my supervisor Frank, who couldn’t be here today. Since it seems to be very much à la mode to have disclaimers at the beginning of the Protocols Workshop talks this year, well, I disclaim: you have no right to quiet enjoyment of the work I’m presenting. This is work in progress, just ideas; there’s no implementation, and you may very well find some points we haven’t thought of, so let’s have some hearty discussion.

Jonathan Anderson
More Security or Less Insecurity

We depart from the conventional quest for ‘Completely Secure Systems’ and ask instead ‘How can we be more Secure?’. We draw heavily from the evolution of the Theory of Justice and the arguments against the institutional approach to Justice. Central to our argument is the identification of redressable insecurity, or weak links. Our contention is that secure systems engineering is not really about building perfectly secure systems but about redressing manifest insecurities.

Partha Das Chowdhury, Bruce Christianson
More Security or Less Insecurity (Transcript of Discussion)

This is actually work done by Partha; it’s his talk, but the UKBA decided we could do without him, which is why it’s me talking rather than him. The purpose of this talk is to explore the possibility of an exploitable analogy between approaches to secure system design and theories of jurisprudence. The prevailing theory of jurisprudence in the West at the moment goes back to Hobbes. It was developed by Rousseau and later by Immanuel Kant, and is sometimes called the contractarian model after Rousseau’s idea of the social contract. It’s not the sort of contract that you look at and think, oh gosh, that might be nice, I might think about opting in to that; it’s more like a pop-up licence agreement that says, do you want to comply with this contract, or would you rather be an outlaw. So you don’t get a lot of choice about it.

Sometimes the same theory, flying the flag of Immanuel Kant, is called transcendental institutionalism, because the basic approach says: you identify the legal institutions that in a perfect world would govern society, then you look at the processes and procedures, the protocols, that everyone should follow in order to enable those institutions to work, and then you say, right, that can’t be transcended, so there’s a moral imperative for everyone to do it. This model doesn’t pay any attention to the actual society that emerges, or to the incentives that these processes place on various people to act in particular ways. It doesn’t look at any interaction effects; it simply says, you have to behave in this particular way because that’s what the law says you have to do, and the law is the law, and anybody who doesn’t behave in that way is a criminal, or (in our terms) an attacker.

Bruce Christianson
It’s the Anthropology, Stupid!

Imagine a world five or ten years from now where virtualisation has become pervasive. Rather than doing your work on a personal computer, you have a laptop (or a tablet or a virtual reality headset) with a number of virtual machines — say one for work, one for play, one for serious personal things like banking, and one for the classified work on the defence contract your employer picked up.

Ross Anderson, Frank Stajano
It’s the Anthropology, Stupid! (Transcript of Discussion)

We have talked about the interactions between security and economics, security and psychology, and Bruce has been talking about the interaction with jurisprudence. So in this talk I thought we would try and colonise yet one more of the University’s disparate departments, and see if we can steal anything interesting from the anthropologists. I’m going to be agnostic about whether we’re talking about the biological anthropologists or the social anthropologists, as you may know, these are two warring tribes.

Ross Anderson
Security Design in Human Computation Games

We consider a binary labelling problem: for some machine learning applications, two distinct types of objects must each be labelled before a classifier can be trained. We show that the famous ESP game and its variants would not work well on this binary labelling problem, and we discuss how to design a new human computation game to solve it. It turns out that interesting but subtle security issues emerge in the new game. We introduce novel gaming mechanisms, such as ‘guess disagreement’, which improve the game’s security, usability and productivity simultaneously.
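
For contrast, the ESP game's core rule, the one the abstract argues fails for binary labelling, is simply agreement between two independent players: a label is accepted only when both produce it. A minimal sketch of that matching rule (the function names are illustrative; the paper's 'guess disagreement' mechanism is a refinement not shown here):

```python
def esp_round(labels_a: set[str], labels_b: set[str]) -> str | None:
    # A label counts only if both players, who cannot communicate,
    # independently typed it; agreement is the game's quality check.
    matches = labels_a & labels_b
    return sorted(matches)[0] if matches else None

print(esp_round({"cat", "pet", "tabby"}, {"animal", "cat"}))  # -> cat
# With only two possible labels, players agree by chance half the time,
# which is why plain agreement is a poor check for binary labelling.
print(esp_round({"yes"}, {"no"}))  # -> None
```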

Su-Yang Yu, Jeff Yan
Security Design in Human Computation Games (Transcript of Discussion)

This is a joint work with my student Su-Yang, he is actually in the audience just over there, and his research is sponsored by Microsoft Research here in Cambridge.

So what is human computation about? Basically it is about harnessing voluntary human effort to solve difficult computational problems for which no known algorithms exist that solve them efficiently. Human computation systems are typically designed as computer games, because games offer fun, and this can be used as an incentive to attract volunteers. In turn, the volunteers’ game play collectively solves a computationally hard problem. That is the basic idea.

Jeff Yan
Virtually Perfect Democracy

In the 2009 Security Protocols Workshop, the Pretty Good Democracy scheme was presented. This scheme has the appeal of allowing voters to cast votes remotely, e.g. via the Internet, and confirm correct receipt in a single session. The scheme provides a degree of end-to-end verifiability: receipt of the correct acknowledgement code provides assurance that the vote will be accurately included in the final tally. The scheme does not require any trust in a voter client device. It does, however, have a number of vulnerabilities: privacy and accuracy depend on vote codes being kept secret, and it suffers from the usual coercion-style threats common to most remote voting schemes.

In this paper we investigate how to counter the above threats by introducing modest cryptographic capabilities, and modest trust assumptions, to the voting client. Of course, we are simply shifting trust, but we are transforming it and, arguably, making the trusted devices more accountable. Which design is deemed more secure will depend on the threat environment.

Giampaolo Bella, Peter Y. A. Ryan, Vanessa Teague
Virtually Perfect Democracy (Transcript of Discussion)

This is joint work with Vanessa Teague from Melbourne and Giampaolo Bella from Catania, neither of whom, unfortunately, could make it. First of all, the title is a bit contrived, and I should probably apologise for it; it kind of fits the theme. I’m not sure the content actually fits so well; we’ll see about that. Sandy, yesterday you said something about having managed to get out of voting; I must ask you more about why you were so keen to.

Peter Y. A. Ryan
Security Protocols for Secret Santa

The motivation for the current report lies in a number of questions concerning the current state of the art in security protocol design and analysis. Why is it so hard to develop a complete set of security requirements for a particular problem? For instance, even in the seemingly simple case of defining the secrecy property of an encryption algorithm it has taken decades to reach a degree of consensus, and even now new definitions are required in specific contexts. Similarly, even after more than twenty years of research on e-voting, there is still a lack of consensus as to which properties an e-voting protocol should satisfy. What did we learn from this research experience on e-voting? Can we apply the knowledge accumulated in this domain to similar domains and problems? Is there a methodology underlying the process of understanding a security protocol domain?

Sjouke Mauw, Saša Radomirović, Peter Y. A. Ryan
Security Protocols for Secret Santa (Transcript of Discussion)

Designing security protocols is not complicated, unless you insist on making the same mistakes over and over. It depends on what you call designing; once you have the requirements, it might not be too hard to make security protocols. The complications are in finding the right requirements, and that is maybe the biggest question that I want to approach today: what to do about requirements in security protocols. My presentation is divided into two parts. First, some general questions; it’s not answers that I will give today. Second, I will look at a simple example, the example of Secret Santa. This is joint work with Saša Radomirović and Peter Ryan.

Sjouke Mauw
Censorship of eBooks (Extended Abstract)

“In a century when the most dangerous books come into the hands of children as easily as into those of their fathers and their guardians, when reckless systematizing can pass itself off as philosophy, unbelief as strength of mind, and libertinage as imagination . . . ”

Michael Roe
Censorship of eBooks (Transcript of Discussion)

The problem: paper books are being replaced by electronic versions, in quite a widespread way. Various companies have brought out eBook readers: there’s the Amazon Kindle, the Sony Reader, and the Apple iPad can be used as an eBook reader as well. There are lots of hardware devices out there that people use for reading books. As for library books, Google is scanning a very large number of them and putting the scans online, and libraries have obviously become very tempted to get rid of all these nasty, expensive-to-store paper books and just use Google. The question is: does this big move from paper to electronic form make censorship easier, or will it make it harder?

Michael Roe
On the Value of Hybrid Security Testing

We propose a framework for designing a security tool that can take advantage of current approaches while increasing precision, scalability and debuggability. This could enable software developers to conduct comprehensive security testing automatically. The approaches we utilise are static, dynamic and taint analysis, along with fuzzing. The rationale is that the complexity of today’s applications makes the discovery of their vulnerabilities difficult using any single approach; a combination of them is therefore needed to move towards efficient security checking.

Saad Aloteibi, Frank Stajano
On the Value of Hybrid Security Testing (Transcript of Discussion)

The first bug we found we sent to Open Office; they solved it, and they solved it in their own way, but if you have exactly 65K characters then you will undoubtedly have the same problem again, and that’s basically Sandy’s talk about the honeymoon effect. Finally it was solved. The other interesting thing is that we tried it with Writer (which is the Microsoft Word of Open Office), and Writer handled it correctly, so I asked the Open Office people, did you have a problem with Writer, and they said, well yes, we did have this vulnerability with Writer, but we have fixed it. So conceptually they should have done it for the other applications as well, because they are the same; this is again the honeymoon effect, you just need one “+4”, such as this, and check it with the pertinent applications.
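
The "exactly 65K characters" failure is the classic 16-bit length boundary, and it is the kind of bug that boundary-value fuzzing finds mechanically. A minimal sketch of the idea (the buggy parser below is a stand-in, not Open Office code):

```python
def boundary_inputs(width_bits: int = 16):
    edge = 2 ** width_bits
    for n in (edge - 1, edge, edge + 1):   # probe just around the boundary
        yield "A" * n

def buggy_parser(s: str) -> int:
    stored = len(s) & 0xFFFF               # length silently kept in 16 bits
    assert stored == len(s), f"length truncated at {len(s)} chars"
    return stored

for s in boundary_inputs():
    try:
        buggy_parser(s)
    except AssertionError as e:
        print("bug reproduced:", e)        # fires at 65536 and 65537
```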

Saad Aloteibi
Security Made, Not Perfect, But Automatic

Threats to computer systems have been increasing over the past few years. Given the dependence of society and businesses on computers, we have been spending more every day to make computer systems and networks secure enough. Yet current practice and technology are based on intrusion prevention, and incorporate a lot of ad hoc procedures and manpower, without being anywhere near perfect for reasonable-scale systems. Maybe the next quantum leap in computer systems security is to make it automatic, so that it can be cheap and effective. The first possibility that comes to mind is to make systems out of tamper-proof components, also called fully trustworthy: perfect components → perfect security, all else being correct. Though this lay at the basis of the trusted computing base work in the eighties, it is known today that it is impossible in practice to implement reasonably complex systems whose components are vulnerability-free. This implies that systems in general cannot be made perfectly secure under the prevention paradigm. One interesting approach relies on providing some isolation between virtual machines residing on the same hardware machine, which can then act as if they were separate computers (see Figure 1).

Paulo Verissimo
Security Made, Not Perfect, But Automatic (Transcript of Discussion)

I decided to take the challenge of the organisers, the perfection of security, turn it around, and propose: if not perfect, why not automatic? So we might ask: is preventative security, meaning the canonical way of doing security, the only way to get it? Let me share a few thoughts about that. If we think that there is no trustworthiness in the sense of a holistic perfection of what secure systems are, that behind our nice algorithms there’s machinery, so that there is no secure application without regard to the platform, no effective policy in isolation from its enforcing machinery, and no information security in the absence of infrastructure security, then we might think we need at least some auxiliary paradigms to build secure systems. I’m going to talk about techniques that can be used in anything, servers and so on, but I will use them on the client side, where maybe there is no future in expensive security. Maybe we need, at least on the client side, security that is a sort of commodity, something that adapts to the context you’re working in, and several people talked about that yesterday. And it should be automatic, getting out of our way.

Paulo Verissimo
Security Limitations of Virtualization and How to Overcome Them

To be useful, security primitives must be available on commodity computers with demonstrable assurance, and understandable by ordinary users with minimum effort. Trusted computing bases comprising a hypervisor, which implements the reference monitor, and virtual machines whose layered operating system services are formally verified, will continue to fail these criteria for client-side commodity computers. We argue that demonstrable high assurance will continue to elude commodity computers, and that complex policies which require management of multiple subjects, object types, and permissions will continue to be misunderstood and misused by most users. We also argue that high-assurance, usable commodity computers require only two security primitives: partitions for isolated code execution, and trustworthy communication between partitions and between users and partitions. Usability requirements for isolated partitions are modest: users need to know when to use a small trusted system partition and when to switch to a larger untrusted one; developers need to isolate and assure only a few security-sensitive code modules within an application; and security professionals need to maintain only the trusted partition and a few isolated modules in the untrusted one. Trustworthy communication, which requires partitions and users to decide whether to accept input from or provide output to others, is more challenging because it requires trust, not merely secure (i.e., confidential and authentic) communication channels.

Virgil Gligor
Security Limitations of Virtualization and How to Overcome Them (Transcript of Discussion)

Many of the ideas I will present were developed in collaboration with Jonathan McCune, Bryan Parno, Adrian Perrig, Amit Vasudevan and Zongwei Zhou over the past couple of years. I will begin the presentation with my “axioms” of insecurity and usable security. These axioms are in fact observations that I believe will be true in the future. Then I will review virtualization for security and the experiences that we have had with it practically since day one. I will also review the limitations of virtual-machine isolation for application-level code and usable security. And finally, the main proposition of this presentation is that we should switch our attention from virtualization and virtual-machine isolation to red-green machine partitions, which is somewhat of a new area, and to trustworthy communication. I will argue that trustworthy communication requires more than secure-channel protocols.

Virgil Gligor
Recapitulation

Those definitions again in full:

1. Virtual

(a) not real, merely apparent

(b) powerful, effective (vir-tus)

2. Perfect

(a) flawless, without blemish

(b) complete, lacking nothing

3. Secure

(a) safe, unthreatened

(b) careless (se-cure)

Bruce Christianson
Backmatter
Metadata
Title
Security Protocols XVIII
Editors
Bruce Christianson
James Malcolm
Copyright Year
2014
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-662-45921-8
Print ISBN
978-3-662-45920-1
DOI
https://doi.org/10.1007/978-3-662-45921-8
