
2009 | Book

Security Protocols

14th International Workshop, Cambridge, UK, March 27-29, 2006, Revised Selected Papers

Edited by: Bruce Christianson, Bruno Crispo, James A. Malcolm, Michael Roe

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

Welcome back to the International Security Protocols Workshop. Our theme for this, the 14th workshop in the series, is “Putting the Human Back in the Protocol”. We’ve got into the habit of saying “Of course, Alice and Bob aren’t really people. Alice and Bob are actually programs running in some computers.” But we build computer systems in order to enable people to interact in accordance with certain social protocols. So if we’re serious about system services being end-to-end then, at some level of abstraction, the end points Alice and Bob are human after all. This has certain consequences. We explore some of them in these proceedings, in the hope that this will encourage you to pursue them further. Is Alice talking to the correct stranger? Our thanks to Sidney Sussex College, Cambridge for the use of their facilities, and to the University of Hertfordshire for lending us several of their staff. Particular thanks once again to Lori Klimaszewska of the University of Cambridge Computing Service for transcribing the audio tapes, and to Virgil Gligor for acting as our advisor.

Table of Contents

Frontmatter
Putting the Human Back in the Protocol
(Transcript of Discussion)

Hello, everyone, and welcome to the 14th International Security Protocols Workshop. I’m going to start with a quotation from someone who, at least in principle, is in charge of a very different security community than ours:

Our enemies are innovative and resourceful, and so are we. They never stop thinking about new ways to harm our country and our people, and neither do we.

Bruce Christianson
Composing Security Metrics
(Transcript of Discussion)

I have to apologise that, having been asked to set the pace, I have done something inadvertently terrible: I have prepared a presentation and a paper that’s approximately in keeping with the theme of the workshop; that is entirely an accident, I have never looked at what the theme of the workshop was, so I apologise for any confusion, please assume that I’m speaking on a completely different topic if you’re interested in understanding the theme. I’m going to talk about composing security metrics, and that does have something to do with putting the human back in protocols, and thinking about protocols on a human scale. This is joint work with Sandy Clark, Eric Cronin, Gaurav Shah, and Micah Sherr. Sandy and Eric are here, and this is a result of long conversations, and meetings, and trying to shape something out of what looks like a very difficult subject. We’ve made very little progress, but we have some pretty pictures.

Matt Blaze
Putting the Human Back in Voting Protocols

Cryptographic voting schemes strive to provide high assurance of accuracy and secrecy with minimal trust assumptions, in particular, avoiding the need to trust software, hardware, suppliers, officials, etc. Ideally we would like to make a voting process as transparent as possible and so base our assurance purely on the vigilance of the electorate at large, via suitable cryptographic algorithms and protocols. However, it is important to recognize that election systems are above all socio-technical systems: they must be usable by the electorate at large. As a result, it may be necessary to trade off technical perfection against simplicity and usability. We illustrate this tension via design decisions in the Prêt à Voter scheme.

Peter Y. A. Ryan, Thea Peacock
Putting the Human Back in Voting Protocols
(Transcript of Discussion)

I’d like to talk about the role of the human in voting protocols. Basically I want to argue that voting protocols seem to be particularly interesting from the point of view of the theme of this workshop, in the sense that the users of the system actually play a particularly important role in trying to maintain the assurance of the system itself. I’m interested in a particular class of voting protocols, so-called voter verifiable schemes, which aim to allow the voter to play an active role in contributing to the dependability and assurance of the system. In designing these systems, clearly we want high assurance of accuracy, but on the other hand we have to balance that with maintaining ballot secrecy, so that nobody can work out which way a particular individual voter voted, and we want to do it in such a way that we place minimal, or ideally zero, trust in components such as hardware, software, the voting officials, suppliers, and so on.

Peter Y. A. Ryan
Towards a Secure Application-Semantic Aware Policy Enforcement Architecture

Even though policy enforcement has been studied from different angles including notation, negotiation and enforcement, the development of an application-semantic aware enforcement architecture remains an open problem. In this paper we present and discuss the design of such an architecture.

Srijith K. Nair, Bruno Crispo, Andrew S. Tanenbaum
Towards a Secure Application-Semantic Aware Policy Enforcement Architecture
(Transcript of Discussion)

Matt Blaze

: How do you stop me from photographing the screen that displays my mail message?

Reply

: The analogue hole is always a problem unless Congress does something about it. You could play the music, put a microphone in front of the speaker, and record it; that’s always going to be a problem which I don’t think is technically feasible to solve.

Srijith K. Nair
Phish and Chips
Traditional and New Recipes for Attacking EMV

This paper surveys existing and new security issues affecting the EMV electronic payments protocol. We first introduce a new price/effort point for the cost of deploying eavesdropping and relay attacks – a microcontroller-based interceptor costing less than $100. We look next at EMV protocol failures in the back-end security API, where we describe two new attacks based on chosen-plaintext CBC weaknesses, and on key separation failures. We then consider future modes of attack, specifically looking at combining the phenomenon of phishing (sending unsolicited messages by email, post or phone to trick users into divulging their account details) with chip card sabotage. Our proposed attacks exploit covert channels through the payments network to allow sabotaged cards to signal back their PINs. We hope these new recipes will enliven the debate about the pros and cons of Chip and PIN at both technical and commercial levels.

Ben Adida, Mike Bond, Jolyon Clulow, Amerson Lin, Steven Murdoch, Ross Anderson, Ron Rivest
Phish and Chips
(Transcript of Discussion)

Chris Mitchell

: From your paper I got the impression you were implying there was only one API, I think actually different banks use different HSMs, with different APIs.

Reply

: Yes, I’m going to talk about one API that we looked at, the API produced by IBM, and some of the issues we found with that. We have found issues also with an API from another manufacturer, Thales, and we’ve got a much larger paper, which is really rather long and tedious, describing all the different APIs we have looked at, the APIs we don’t know about, and what we suspect are the cases with those, and I think there’s a link to that towards the end.

Mike Bond
Where Next for Formal Methods?

In this paper we propose a novel approach to the analysis of security protocols, using the process algebra CSP to model such protocols and verifying security properties using a combination of the FDR model checker and the PVS theorem prover. Although FDR and PVS have enjoyed success individually in this domain, each suffers from its own deficiency: the model checker is subject to state space explosion, but superior in finding attacks in a system with finite states; the theorem prover can reason about systems with massive or infinite state spaces, but requires considerable human direction. Using FDR and PVS together makes for a practical and interesting way to attack problems that would remain out of reach for either tool on its own.

James Heather, Kun Wei
Where Next for Formal Methods?
(Transcript of Discussion)

Tuomas Aura

: Why do you need to model the medium separately, why not just merge Alice and the medium?

Reply

: You could merge them. If you were doing this in the model checker, that’s certainly what you would do, because you increase the state space by having the medium done as a separate process, you’re right. When you’re doing it in PVS you don’t have the problem of increasing the state space, so it doesn’t really make a lot of difference whether you merge Alice with the medium or not. But yes, you could easily do that.

James Heather
Cordial Security Protocol Programming
The Obol Protocol Language

Obol is a protocol programming language. The language is domain specific, and has been designed to facilitate error-free implementation of security protocols.

Selecting the primitives of the language is, basically, concerned with determining which issues need to be visible to the protocol programmer, and which can be left to the runtime without further ado.

The basic abstractions of Obol have been modelled after the ones offered by the BAN logic of authentication. By building on these abstractions Obol makes it easier to bridge the gap between logical analysis and implementation.

Obol has been designed with the implementation of security protocols in mind, but the language can also be used to implement other types of protocols.

At the core of the design and implementation is pattern-matching machinery enabling the runtime to parse packets as they arrive, in order to free the programmer from a wide range of low-level issues known to foster all sorts of implementation difficulties.

Per Harald Myrvang, Tage Stabell-Kulø
Cordial Security Protocol Programming
(Transcript of Discussion)

Chris Mitchell

: Arguably, different encryption primitives have different properties, for example they may or may not offer non-malleability. That’s an important distinction, because some protocols require non-malleable encryption, and some don’t.

Reply

: Yes, this is a good point. I have been told by my advisor earlier in my life, never say, this is a good question, because you must assume that, but anyway, this is a good question. If you want everything, you end up getting nothing, and you will see we have found the trade-off, and I will return to the metric by which we measured the trade-off, but there will always be cases which cannot be met, because this is a programming language.

Tage Stabell-Kulø
Privacy-Sensitive Congestion Charging

National-scale congestion charging schemes are increasingly viewed as the most viable long-term strategy for controlling congestion and maintaining the viability of the road network. In this paper we challenge the widely held belief that enforceable and economically viable congestion charging schemes require drivers to give up their location privacy to the government. Instead we explore an alternative scheme where privately-owned cars enforce congestion charge payments by using an on-board vehicle unit containing a camera and wireless communications. Our solution prevents centralised tracking of vehicle movements but raises an important issue: should we trust our neighbours with a little personal information in preference to entrusting it all to the government?

Alastair R. Beresford, Jonathan J. Davies, Robert K. Harle
Privacy-Sensitive Congestion Charging
(Transcript of Discussion)

Tuomas Aura

: How do you measure the cost of congestion?

Reply

: It’s estimated by economists. Traffic congestion is estimated to cost the UK economy about £15bn, and this impact is going to increase. I’m not entirely sure on the details of how they’ve made that estimate, but you could imagine saying, how much extra journey time is involved in, for example, truck journeys, than otherwise would be if there was nobody else on the road, etc.

Alastair R. Beresford
The Value of Location Information
A European-Wide Study

The value attached to privacy has become a common notion in the press, featuring frequent stories of people selling sensitive personal information for a couple of dollars. Syverson argues [1] that we should incorporate the risk of data misuse into our reasoning about privacy valuations. Yet there are doubts as to whether people can, and do, value their privacy correctly and appropriately.

Privacy is a complex notion and as such it is very difficult to value it while taking into account its full complexity. In this experiment we consider one aspect of privacy, namely location privacy, that can be compromised through mobile phone network data. We performed a European-wide study to assess the value that people attach to their location privacy using tools from experimental psychology and economics. We present the first results here.

Dan Cvrcek, Marek Kumpost, Vashek Matyas, George Danezis
The Value of Location Information
(Transcript of Discussion)

Bruno Crispo

: Was the usual behaviour of the participants affected by the experiment, did they have to do anything specific?

Reply

: No, they didn’t have to do anything, except that if they were used to switching off their mobile for the night, they were told they had to have their mobile on all the time. That might be the case for a few of them; other than that, nothing.

Vashek Matyas
Update on PIN or Signature
(Transcript of Discussion)

We promised a year back some data on the experiment that we ran with chip and PIN. If you recall, it was the first phase that we reported on here last year, where we used the University bookstore, and two PIN pads, one with very solid privacy shielding, the other one without any. We ran 17 people through the first one, 15 people through the second one, and we also had the students do, about half of them forging the signature, half of them signing their own signature, on the back of the card that is used for purchasing books, or whatever.

We had a second phase of the experiment, after long negotiations, and very complicated logistics, with a supermarket in Brno where we were able to do anything that we wanted through the experiment for five hours on the floor, with only the supermarket manager, the head of security, and the camera operators knowing about the experiment. So the shop assistants, the ground floor security, everybody basically on the floor, did not know about the experiment. That was one of the reasons why the supermarket management agreed to take part: they wanted to check their own internal security procedures.

Vashek Matyas
Innovations for Grid Security from Trusted Computing
Protocol Solutions to Sharing of Security Resource

A central problem for Grid (or web) services is how to gain confidence that a remote principal (user or system) will behave as expected. In Grid security practice at present, issues of confidentiality and data integrity rely on weak social trust mechanisms of “reputation maintenance”: a principal who is introduced by a reputable party should hopefully behave in “best effort” to maintain the reputation of the introducer. As will be discussed in this paper, this gentleman’s notion of trust is insufficient for a large class of problems in Grid services.

The emerging Trusted Computing (TC) technologies offer great potential to improve this situation. The TC initiative developed by the Trusted Computing Group (TCG) takes a distributed-system-wide approach to the provision of integrity protection for systems, resources and services. Trust established from TC is much stronger than that described above: it concerns the conformant behavior of a principal, such that the principal is prohibited from acting against the granted interests of the other principals it serves.

We consider that this stronger notion of trust from TC naturally suits the security requirements for Grid services or science collaborations. We identify and discuss in this paper a number of innovations that the TC technologies could offer for improving Grid security.

Wenbo Mao, Andrew Martin, Hai Jin, Huanguo Zhang
Innovations for Grid Security from Trusted Computing
(Transcript of Discussion)

Bruno Crispo

: But why do you need to chain the certificates, I don’t understand. Usually I look for, for example, storage, and then I go find somewhere that can provide the storage I need, but why do I need a chain?

Reply

: Do you mean, you do it yourself?

Wenbo Mao
The Man-in-the-Middle Defence

Eliminating middlemen from security protocols helps less than one would think. EMV electronic payments, for example, can be made fairer by adding an electronic attorney – a middleman which mediates access to a customer’s card. We compare middlemen in crypto protocols and APIs with those in the real world, and show that a man-in-the-middle defence is helpful in many circumstances. We suggest that the middleman has been unfairly demonised.

Ross Anderson, Mike Bond
The Man-in-the-Middle Defence
(Transcript of Discussion)

The man-in-the-middle defence is all about rehabilitating Charlie. For 20 years we’ve worried about this guy in the middle, Charlie, who’s forever intercalating himself into the communications between Alice and Bob, and people have been very judgemental about poor Charlie, saying that Charlie is a wicked person. Well, we’re not entirely convinced.

Ross Anderson
Using Human Interactive Proofs to Secure Human-Machine Interactions via Untrusted Intermediaries

This paper explores ways in which Human Interactive Proofs (HIPs), i.e. problems which are easy for humans to solve but are intractable for computers, can be used to improve the security of human-machine interactions. The particular focus of this paper is the case where these interactions take place via an untrusted intermediary device, and where HIPs can be used to establish a secure channel between the human and the target machine. A number of application scenarios of this general type are considered, and in each case the possible use of HIPs to improve interaction security is explored.

Chris J. Mitchell
Using Human Interactive Proofs to Secure Human-Machine Interactions via Untrusted Intermediaries
(Transcript of Discussion)

It’s ironic that these hard problems, such as character recognition, have been known to be hard for a long, long time, and yet almost as soon as people make crypto things out of them, they get solved. Actually it’s not quite the way you think because what’s happened is that for the examples that get automatically generated, there are special techniques which work just because they’ve been created deliberately. It’s not trivial to produce things that are really hard to solve, and some of the ideas for distorting characters have been quickly broken, but I believe there are some around which are quite robust.

Chris J. Mitchell
Secure Distributed Human Computation
(Extended Abstract)

In Peha’s Financial Cryptography 2004 invited talk, he described the Cyphermint PayCash system (see www.cyphermint.com), which allows people without bank accounts or credit cards (a sizeable segment of the U.S. population) to automatically and instantly cash checks, pay bills, or make Internet transactions through publicly-accessible kiosks. Since PayCash offers automated financial transactions and since the system uses (unprotected) kiosks, security is critical. The kiosk must decide whether a person cashing a check is really the person to whom the check was made out, so it takes a digital picture of the person cashing the check and transmits this picture electronically to a central office, where a human worker compares the kiosk’s picture to one that was taken when the person registered with Cyphermint. If both pictures are of the same person, then the human worker authorizes the transaction.

Craig Gentry, Zulfikar Ramzan, Stuart Stubblebine
Secure Distributed Human Computation
(Transcript of Discussion)

The premise is that humans can solve certain problems that computers can’t, at least for the time being. There are basically two main applications of this idea: the talks on either side of mine are talking about using this premise as an automated Turing test, what I’ll be talking about is a little bit different, using it as a motivation for actually harnessing human intelligence to solve problems that computers can’t solve.

Craig Gentry
Bot, Cyborg and Automated Turing Test
(Or “Putting the Humanoid in the Protocol”)

The Automated Turing test (ATT) is almost a standard security technique for addressing the threat of undesirable or malicious bot programs. In this paper, we motivate an interesting adversary model, cyborgs, which are either humans assisted by bots or bots assisted by humans. Since there is always a human behind these bots, or a human can always be available on demand, ATT fails to differentiate such cyborgs from humans. The notion of “telling humans and cyborgs apart” is novel, and it can be of practical relevance in network security. Although it is a challenging task, we have had some success in telling cyborgs and humans apart automatically.

Jeff Yan
Bot, Cyborg and Automated Turing Test
(Transcript of Discussion)

Ross Anderson

: Bot tending might be an attractive activity for children, because children could receive the challenges on their mobile phones, to which they are almost physiologically attached these days, and they’re perhaps used to relatively smaller amounts of pocket money.

Mike Bond

: You talked about routes for sending CAPTCHAs which go outside the game; given that the bot has control of the client, what about sending the CAPTCHA back into the game to a human player who is maybe indifferent about bots, and then paying him a virtual currency to solve it? The client would have both the infrastructure to reinsert the CAPTCHA, and to make a payment, there and then.

Jeff Yan
A 2-Round Anonymous Veto Protocol

The dining cryptographers network (or DC-net) is a seminal technique devised by Chaum to solve the dining cryptographers problem — namely, how to send a boolean-OR bit anonymously from a group of participants. In this paper, we investigate the weaknesses of DC-nets, study alternative methods and propose a new way to tackle this problem. Our protocol, Anonymous Veto Network (or AV-net), overcomes all the major limitations of DC-nets, including the complex key setup, message collisions and susceptibility to disruptions. While DC-nets are unconditionally secure, AV-nets are computationally secure under the Decision Diffie-Hellman (DDH) assumption. An AV-net is more efficient than other techniques based on the same public-key primitives. It requires only two rounds of broadcast and the least computational load and bandwidth usage per participant. Furthermore, it provides the strongest protection against collusion — only full collusion can breach the anonymity of message senders.
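The abstract does not reproduce the construction itself, but the two-round shape it describes can be illustrated with a toy sketch. This is an AV-net-style round only: the zero-knowledge proofs of well-formedness used in the paper are omitted, and the tiny group below is for illustration, not security.

```python
import random

# Toy parameters: p = 2q + 1 with q prime; g = 4 generates the order-q subgroup.
p, q, g = 1019, 509, 4

def anonymous_veto(vetoes):
    n = len(vetoes)
    x = [random.randrange(1, q) for _ in range(n)]   # each participant's secret
    gx = [pow(g, xi, p) for xi in x]                 # round 1: broadcast g^x_i

    product = 1
    for i in range(n):
        # Each participant computes g^y_i = (prod_{j<i} g^x_j) / (prod_{j>i} g^x_j),
        # a choice which guarantees sum_i x_i * y_i = 0 (mod q).
        num, den = 1, 1
        for j in range(i):
            num = num * gx[j] % p
        for j in range(i + 1, n):
            den = den * gx[j] % p
        gy = num * pow(den, p - 2, p) % p            # division via Fermat inverse

        # Round 2: broadcast g^(x_i * y_i) to say "no veto"; a vetoer uses a
        # fresh random exponent instead, which randomizes the final product.
        exp = x[i]
        if vetoes[i]:
            while exp == x[i]:
                exp = random.randrange(1, q)
        product = product * pow(gy, exp, p) % p

    return product == 1                              # True iff nobody vetoed
```

In the no-veto case the exponents cancel, so the product of the round-2 messages collapses to g^0 = 1; any veto randomizes the product, and only full collusion of the remaining participants can identify the vetoer.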

Feng Hao, Piotr Zieliński
A 2-Round Anonymous Veto Protocol
(Transcript of Discussion)

The Chancellor is making a speech in the Galactic Security Council, listen up everyone, he says, I propose we should send troops to that enemy planet and occupy it; now is the time for the security council to take a vote.

You are the members of the security council, each one of you has the veto power, and it’s time for you to cast your vote. The setting of this problem is that you don’t have any private channels, the only way for you to express your opinion is through a public announcement, however, if you publicly declare that you want to veto you would have offended some people, and you will be punished. So the question is, if everything is public, and everything you say, or you send, can be traced back to you as the data origin, how do you send an anonymous message in such an environment? It is mind-boggling that this is possible in the first place, but with public key cryptography it is possible. In my talk I am going to present a solution to this puzzle, in addition I will show that the solution is very efficient in almost every aspect.

Feng Hao
How to Speak an Authentication Secret Securely from an Eavesdropper

When authenticating over the telephone or mobile headphone, the user cannot always ensure that no eavesdropper hears the password or authentication secret. We describe an eavesdropper-resistant, challenge-response authentication scheme for spoken authentication where an attacker can hear the user’s voiced responses. This scheme requires the user to memorize a small number of plaintext-ciphertext pairs. At authentication, these are challenged in random order and interspersed with camouflage elements. It is shown that the response can be made to appear random so that no information on the memorized secret can be learned by eavesdroppers. We describe the method along with parameter value tradeoffs of security strength, authentication time, and memory effort. This scheme was designed for user authentication of wireless headsets used for hands-free communication by healthcare staff at a hospital.
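As a rough illustration of the idea in the abstract, and not the authors’ actual construction (the prompts, digit alphabet, and parameter choices below are invented), a challenge sequence can intersperse memorized pairs with camouflage prompts whose answers are free:

```python
import random
import secrets

ALPHABET = "0123456789"
# Memorized plaintext->ciphertext pairs (toy values for illustration).
pairs = {"red": "7", "oak": "2", "sun": "9"}

def make_challenges(n_camouflage=5):
    """Real challenges shuffled together with camouflage prompts."""
    seq = list(pairs) + [f"cam{i}" for i in range(n_camouflage)]
    random.shuffle(seq)
    return seq

def respond(challenge):
    # Real challenge: speak the memorized ciphertext digit.
    # Camouflage: speak a random digit; a listener cannot tell which is which.
    return pairs.get(challenge, secrets.choice(ALPHABET))

def verify(seq, responses):
    """Only the responses to real challenges are checked."""
    return all(responses[i] == pairs[c] for i, c in enumerate(seq) if c in pairs)

seq = make_challenges()
assert verify(seq, [respond(c) for c in seq])   # legitimate user passes
```

Since every voiced response is a digit, and the camouflage responses are uniformly random, an eavesdropper hearing one side of the call gains little about which prompts were real.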

Lawrence O’Gorman, Lynne Brotman, Michael Sammon
How to Speak an Authentication Secret Securely from an Eavesdropper
(Transcript of Discussion)

Matt Blaze

: The model is assuming only one side of the channel will be used like that?

Reply

: Yes, I should reiterate that, because that’s very important. The attacker model is, the eavesdropper hears one side, Brutus can attack from the other side, and these guys can collude, but they can’t hear both the challenge and the response.

Lawrence O’Gorman
Secret Public Key Protocols Revisited

Password-based protocols are important and popular means of providing human-to-machine authentication. The concept of secret public keys was proposed more than a decade ago as a means of securing password-based authentication protocols against off-line password guessing attacks, but was later found vulnerable to various attacks. In this paper, we revisit the concept and introduce the notion of identity-based secret public keys. Our new identity-based approach allows secret public keys to be constructed in a very natural way using arbitrary random strings, eliminating the structure found in, for example, RSA or ElGamal keys. We examine identity-based secret public key protocols and give informal security analyses, indicating that they are secure against off-line password guessing and other attacks.

Hoon Wei Lim, Kenneth G. Paterson
Secret Public Key Protocols Revisited
(Transcript of Discussion)

The concept of using secret public keys in designing security protocols is not new. It was first proposed more than ten years ago, and today we’re going to revisit the concept, discuss a problem with the concept, and propose some fixes with identity-based cryptography.

Hoon Wei Lim
Vintage Bit Cryptography

We propose to use a Random High-Rate Binary (RHRB) stream for the purpose of key distribution. The idea is as follows. Assume availability of a high-rate (terabits per second) broadcaster sending random content. Members of the key group (e.g. {Alice, Bob}) share a weak secret (at least 60 bits) and use it to make a selection of bits from the RHRB stream at an extremely low rate (1 bit out of 10^16 to 10^18). By the time that a strong key of reasonable size has been collected (1,000 bits), an enormous amount of data has been broadcast (10^19 to 10^21 bits). This is 10^6 to 10^8 times current hard drive capacity, which makes it infeasible for the interceptor (Eve) to store the stream for subsequent cryptanalysis, which is what the interceptor would have to do in the absence of the shared secret. Alternatively Eve could record the selection of bits that corresponds to every value of the weak shared secret, which under the above assumptions requires the same or a greater amount of storage, i.e. 2^60 × 10^3 bits. The members of the key group have no need to capture the whole stream, but store only the tiny part of it that is the key. Effectively this allows a pseudo-random sequence generated from a weak key to be leveraged up into a strong genuinely random key.

Bruce Christianson, Alex Shafarenko
Vintage Bit Cryptography
(Transcript of Discussion)

This may be a highly controversial talk, because this is an area where things are periodically rediscovered. But in the process of reinventing it, I think we’ve found a few interesting protocol issues, and a few interesting technological issues, which make it worth revisiting.

Alex Shafarenko
Usability of Security Management: Defining the Permissions of Guests

Within the scenario of a Smart Home, we discuss the issues involved in allowing limited interaction with the environment for unidentified principals, or guests. The challenges include identifying and authenticating guests on one hand and delegating authorization to them on the other. While the technical mechanisms for doing so in generic distributed systems have been around for decades, existing solutions are in general not applicable to the smart home because they are too complex to manage. We focus on providing both security and usability; we therefore seek simple and easy to understand approaches that can be used by a normal computer-illiterate home owner, not just by a trained system administrator. This position paper describes ongoing research and does not claim to have all the answers.

Matthew Johnson, Frank Stajano
Usability of Security Management: Defining the Permissions of Guests
(Transcript of Discussion)

George Danezis

: So would you tell the user, the particular guest, that for some things you need authorisation, some things you don’t?

Reply

: If they don’t get authorisation it will pop up and say, sorry, they didn’t let you do this.

Matthew Johnson
The Last Word
Eve
Backmatter
Metadata
Title
Security Protocols
Edited by
Bruce Christianson
Bruno Crispo
James A. Malcolm
Michael Roe
Copyright year
2009
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-04904-0
Print ISBN
978-3-642-04903-3
DOI
https://doi.org/10.1007/978-3-642-04904-0