
2011 | Book

Security Protocols XIX

19th International Workshop, Cambridge, UK, March 28-30, 2011, Revised Selected Papers

Edited by: Bruce Christianson, Bruno Crispo, James Malcolm, Frank Stajano

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the thoroughly refereed post-workshop proceedings of the 19th International Workshop on Security Protocols, held in Cambridge, UK, in March 2011. Following the tradition of this workshop series, each paper was revised by the authors to incorporate ideas from the workshop, and is followed in these proceedings by an edited transcription of the presentation and ensuing discussion. The volume contains 17 papers with their transcriptions as well as an introduction, i.e. 35 contributions in total. The theme of the workshop was "Alice doesn't live here anymore".

Table of Contents

Frontmatter
Introduction: Alice Doesn’t Live Here Anymore (Transcript)

Hello everyone and welcome to the 19th Security Protocols Workshop. The theme this year, which it is traditional to mention in the first session (and then never refer to again), is “Alice doesn’t live here anymore”.

One of the perennial problems in analysing security protocols is how we distinguish Alice from not-Alice.

The prevailing wisdom is that Alice possesses something which not-Alice does not. It might be knowledge of something that is used as a key. It might be that Alice possesses some physical characteristic, such as a biometric, or that there is something about the hardware that Alice is running on that is difficult to replicate. Or it might be that Alice possesses exclusive access to the interface to some distinguishing piece of hardware, like a dongle, although such hardware (when we think about it) usually belongs to some other security domain anyway.

Bruce Christianson
His Late Master’s Voice: Barking for Location Privacy

Bob died suddenly, leaving his treasure to his sister Alice. Moriarty will do anything to get it, so Alice hides the treasure together with Nipper, and promptly departs. Nipper is a low-cost RFID device that responds only to Alice’s calls, making it possible for Alice to locate the hidden treasure later (she is quite forgetful) when Moriarty is not around. We study the design of Nipper, the cryptographic mechanisms that support its functionality, and the security of the application.
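The “responds only to Alice’s calls” behaviour can be sketched as a keyed challenge that the tag verifies before speaking. This is a hypothetical illustration (invented names, HMAC-SHA-256 as a stand-in), not the cryptographic design studied in the paper:

```python
import hashlib
import hmac
import os

KEY = os.urandom(16)  # secret shared by Alice and Nipper (hypothetical setup)

def alice_call(key: bytes):
    """Alice broadcasts a fresh nonce authenticated under the shared key."""
    nonce = os.urandom(8)
    mac = hmac.new(key, nonce, hashlib.sha256).digest()
    return nonce, mac

def nipper_respond(key: bytes, nonce: bytes, mac: bytes):
    """Nipper answers only calls that verify under the shared key;
    to anyone else it stays silent (returns None)."""
    expected = hmac.new(key, nonce, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, mac):
        return None
    # The reply is bound to the nonce, so each response is fresh.
    return hmac.new(key, b"reply:" + nonce, hashlib.sha256).digest()
```

As written, Moriarty could record and replay one of Alice’s calls to make the tag speak; the paper’s actual mechanisms must also address replay and the linkability of responses.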

Mike Burmester
His Late Master’s Voice (Transcript of Discussion)

My name is Mike Burmester and I will be talking about localization privacy, which I will distinguish from location privacy. First I will try to motivate the topic; this is a novel application. Then I will explain the title of this talk, “His Late Master’s Voice”, and I will say something about RFID technologies, which I will be using.

I will present three protocols: the idea is to try to capture the essence of this distinctive steganographic attribute, which is localization. I will talk about the adversarial model towards the end; clearly this is not the way to design good (secure) protocols, but because my application is novel I will break the rules.

Mike Burmester
Can We Fix the Security Economics of Federated Authentication?

There has been much academic discussion of federated authentication, and quite some political manoeuvring about ‘e-ID’. The grand vision, which has been around for years in various forms but was recently articulated in the US National Strategy for Trustworthy Identities in Cyberspace (NSTIC), is that a single logon should work everywhere [1]. You should be able to use your identity provider of choice to log on anywhere; so you might use your driver’s license to log on to Gmail, or use your Facebook logon to file your tax return. More restricted versions include the vision of governments of places like Estonia and Germany (and until May 2010 the UK) that a government-issued identity card should serve as a universal logon. Yet few systems have been fielded at any scale.

Ross Anderson
Can We Fix the Security Economics of Federated Authentication? (Transcript of Discussion)

OK, so the talk that I’ve got today is entitled “Can We Fix the Security Economics of Federated Authentication?” and some of this is stuff that I did while I was at Google in January and February. I’m on sabbatical this year and so I’m visiting various places, and doing various things that I don’t normally do.

Let’s go back 25 years. When I was a youngster working in industry the sort of problem that you got if you were working with a bank was this. People joined and had to be indoctrinated into four or five systems and get four or five different passwords. You might find that your branch banking system ran under MVS, so you needed a RACF password; your general ledger ran under DB2, so you needed a DB2 password – and that was a different lady in a different building – and if you became the branch’s foreign exchange deputy clerk then you needed a SWIFT password, which was yet another administrative function in yet another building. And of course this played havoc with usability: post-it notes with passwords are a natural reaction to having to remember six of them. And if you tell your staff to change passwords every month, of course they’ll just rotate them.

Ross Anderson
Pico: No More Passwords!

From a usability viewpoint, passwords and PINs have reached the end of their useful life. Even though they are convenient for implementers, for users they are increasingly unmanageable. The demands placed on users (passwords that are unguessable, all different, regularly changed and never written down) are no longer reasonable now that each person has to manage dozens of passwords. Yet we can’t abandon passwords until we come up with an alternative method of user authentication that is both usable and secure.

We present an alternative design based on a hardware token called Pico that relieves the user from having to remember passwords and PINs. Unlike most alternatives, Pico doesn’t merely address the case of web passwords: it also applies to all the other contexts in which users must at present remember passwords, passphrases and PINs. Besides relieving the user from memorization efforts, the Pico solution scales to thousands of credentials, provides “continuous authentication” and is resistant to brute force guessing, dictionary attacks, phishing and keylogging.

Frank Stajano
Pico: No More Passwords! (Transcript of Discussion)

Virgil Gligor (session chair):

We have a session about passwords and you will hear at least two different points of view—possibly even two contradictory points of view, which is par for the course for this workshop. Our first speaker is Frank Stajano who argues that there should be no more passwords.

Frank Stajano:

My title should give you a hint about my position towards this problem. What’s a password? A password is a way to drive users crazy!

Passwords were not so bad when you had only one or two of them, and when a password of eight or nine characters was considered a safe password. Nowadays computers have grown so powerful that ten character passwords can be brute-forced with the kind of computer you buy in the supermarket next to your groceries. And you don’t just have one or two passwords: you have dozens of them, because there are so many more services that now require you to have a password.

Frank Stajano
Getting Web Authentication Right: A Best-Case Protocol for the Remaining Life of Passwords

We outline an end-to-end password authentication protocol for the web designed to be stateless and as secure as possible given legacy limitations of the web browser and performance constraints of commercial web servers. Our scheme is secure against very strong but passive attackers able to observe both network traffic and the server’s database state. At the same time, our scheme is simple for web servers to implement and requires no changes to modern, HTML5-compliant browsers. We assume TLS is available for initial login and no other public-key cryptographic operations, but successfully defend against cookie-stealing and cookie-forging attackers and provide strong resistance to password guessing attacks.
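A common building block for stateless web sessions is a server-side MAC over the session payload, so the server keeps no per-session storage yet can detect forged cookies. The sketch below is a generic illustration under that assumption, not the protocol of the paper (which additionally defends against attackers able to read the server’s database and observe network traffic):

```python
import base64
import hashlib
import hmac
import json
import os
import time

SERVER_KEY = os.urandom(32)  # kept server-side only, never sent to clients

def issue_cookie(username: str) -> str:
    """Pack the session state and sign it; no server-side session table."""
    payload = json.dumps({"user": username, "iat": int(time.time())}).encode()
    sig = hmac.new(SERVER_KEY, payload, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(payload).decode() + "." +
            base64.urlsafe_b64encode(sig).decode())

def verify_cookie(cookie: str):
    """Return the username if the signature checks out, else None."""
    try:
        payload_b64, sig_b64 = cookie.split(".")
        payload = base64.urlsafe_b64decode(payload_b64)
        sig = base64.urlsafe_b64decode(sig_b64)
    except Exception:  # malformed cookie
        return None
    expected = hmac.new(SERVER_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, sig):
        return None
    return json.loads(payload)["user"]
```

This defeats cookie forging but not cookie theft by a network observer; closing that gap without extra public-key operations is exactly the harder problem the abstract addresses.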

Joseph Bonneau
Getting Web Authentication Right (Transcript of Discussion)

What an honour to be the person not talking about how to replace passwords [laughter]. I was struggling to come up with a metaphor for why I’m giving this talk after Frank’s talk, this was the best I could do. Does anybody know why I chose this?

Jonathan Anderson:

Is that the fighter they had at the end of the war?

Reply: It is, yes. Towards the end of World War 2, the Americans started putting Merlin engines on to the P-51, which is still the most popular propeller plane for hobby pilots to acquire, and produced essentially the highest-performance propeller plane ever built. This was in 1943, and it was the only time that we really perfected the art of getting a piston engine to make a plane go very, very fast. The Germans hadn’t invested in making a fighter of this generation because they were so caught up in making jet fighters, which you see coming down in flames in the background. By this point in history designers knew that propeller planes would be gone within ten years and jets would replace them for all the reasons that we use jets now, but in that window propeller planes were still a lot better: there were huge reliability problems and other issues with jets, and they couldn’t quite get to the performance that the best propeller planes had.

Joseph Bonneau
When Context Is Better Than Identity: Authentication by Context Using Empirical Channels

In mobile computing applications the traditional name-based concept of identity is both difficult to support and frequently inappropriate. The natural alternative is to use the context in which parties sit to identify them. We discuss this issue and the ways in which Human Interactive Security Protocols (HISPs) can play a role in enabling this.

Bangdao Chen, Long Hoang Nguyen, Andrew William Roscoe
When Context Is Better Than Identity (Transcript of Discussion)

My talk today is on when context is better than identity. Imagine we are making a cash payment to a small shop. We do not care who is standing in that shop, and we may not know the name of the shop; it is the location and the environment, or the fact that we have already received the goods, that makes us believe the risks of making this payment are very low. When you have made a payment to this small shop, it gives you assurance that you have paid the correct instance of that shop. So we know that when authenticating the small shop it is frequently best to run a test of the context. And when authenticating the customer or the payer, for example when you are using your credit card, the current payment infrastructure allows this to be done very easily; in this case, the shop acts as a proxy for the banks.

Bangdao Chen
Selective Location Blinding Using Hash Chains

Location-based applications require a user’s movements and positions to provide customized services. However, location is a sensitive piece of information that should not be revealed unless strictly necessary. In this paper we propose a procedure that allows a user to control the precision with which his location information is exposed to a service provider, while allowing his location to be certified by a location verifier. Our procedure makes use of a hash chain to certify the location information in such a way that the hashes of the chain correspond to an increasing level of precision.
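The hash-chain idea can be sketched as follows, assuming SHA-256, a random salt at the finest end of the chain, and location levels ordered from coarse to fine; the paper’s actual construction and certificate format may differ:

```python
import hashlib
import os

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_chain(levels, salt: bytes):
    """levels ordered coarse -> fine; h[0] is the root the verifier certifies."""
    n = len(levels)
    h = [None] * (n + 1)
    h[n] = salt  # random blinding value behind the finest level
    for i in range(n - 1, -1, -1):
        h[i] = H(h[i + 1] + levels[i].encode())
    return h

def verify(root: bytes, revealed, blind: bytes) -> bool:
    """Check the k coarsest levels against the certified root, given the
    blinding hash that hides all finer levels."""
    acc = blind
    for level in reversed(revealed):
        acc = H(acc + level.encode())
    return acc == root
```

Here the location verifier certifies `h[0]`; the user later discloses `levels[:k]` together with `h[k]`, proving the coarse levels while keeping the finer ones blinded behind a hash.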

Gabriele Lenzini, Sjouke Mauw, Jun Pang
Selective Location Blinding Using Hash Chains (Transcript of Discussion)

The work that I am reporting on is work that we recently started with a Luxembourgish company, itrust. Together with this company we are working on the design of a security architecture and security protocols to enable location-based services.

The general architecture of the system consists of a satellite and a local user device, such as a mobile phone. This user device helps you to use location-based services offered by a service provider. Because the service provider needs to be sure about your location, we have also introduced a location verifier. You receive the data from the satellite, calculate your location and forward the data and the calculated location to the location verifier. In some way, the location verifier then validates your location and returns a certificate. Then you can use this certificate to convince the service provider that you are really at the location where you claim to be.

Sjouke Mauw
Risks of Blind Controllers and Deaf Views in Model View Controller Patterns for Multitag User Interfaces

Electronic tags such as 2D bar codes and NFC are used to connect physical and virtual worlds. Beyond pure information augmentation of physical objects, this gives rise to new user interfaces, so called multitag interfaces. A salient feature of these interfaces is that the user sees a physical object, a poster, but interacts with its electronic augmentation. We present two attacks that exploit this feature along with first thoughts of how these attacks may be countered. We analyze the possibility to introduce secure bindings into these novel user interfaces.

Alf Zugenmaier
Risk of Blind Controller Patterns for Multitag User Interfaces (Transcript of Discussion)

I think I made two mistakes with this presentation: one is that I actually think it might have something to do with the theme, and secondly, I don’t really have a protocol to solve my problem. When I wrote this up I thought, ah, I’ve got a problem here, and this is just an indicative submission, and by the time the workshop came around I’d have a nice protocol that could be destroyed in a discussion, and then creatively reconstructed. I didn’t even get that far, and I’m very sorry, so it’s more of a problem statement than an actual protocol solution.

Alf Zugenmaier
How to Sync with Alice

This paper explains the sync problem and compares solutions in Firefox 4 and Chrome 10. The sync problem studies how to securely synchronize data across different computers. Google has added a built-in sync function in Chrome 10, which uses a user-defined password to encrypt bookmarks, history, cached passwords, etc. However, due to the low entropy of passwords, the encryption is inherently weak: anyone with access to the ciphertext can easily uncover the key (and hence disclose the plaintext). Mozilla used to have a very similar sync solution in Firefox 3.5, but since Firefox 4 it has completely changed how sync works in the browser. The new solution is based on a security protocol called J-PAKE, which is a balanced Password Authenticated Key Exchange (PAKE) protocol. To the best of our knowledge, this is the first large-scale deployment of PAKE technology. Since PAKE does not require a PKI, it has compelling advantages over PKI-based schemes such as SSL/TLS in many applications. However, in the past decade, deploying PAKE has been greatly hampered by patent and other issues. With the rise of patent-free solutions such as J-PAKE, and with the EKE patent due to expire in October 2011, we believe PAKE technology will be more widely adopted in the near future.
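The weakness described, that a password-derived key can be brute-forced by anyone holding the ciphertext, can be illustrated with a toy cipher and a dictionary attack. This is purely illustrative: it is not Chrome’s actual encryption scheme, and the password, prefix, and data are invented.

```python
import hashlib

def derive_key(password: str) -> bytes:
    # Naive password-to-key derivation: the root of the weakness.
    return hashlib.sha256(password.encode()).digest()

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR with a SHA-256 counter keystream.
    Encryption and decryption are the same operation."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def dictionary_attack(ciphertext: bytes, known_prefix: bytes, dictionary):
    """An attacker with ciphertext access tries candidate passwords and
    recognizes success by a predictable plaintext prefix."""
    for guess in dictionary:
        if xor_stream(derive_key(guess), ciphertext).startswith(known_prefix):
            return guess
    return None
```

Because the only secret is the low-entropy password, the attack cost is the size of the dictionary, not the size of the key; this is exactly what a PAKE-based design such as J-PAKE avoids, since the password is never usable for offline guessing.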

Feng Hao, Peter Y. A. Ryan
How to Sync with Alice (Transcript of Discussion)

This talk is about How to Sync with Alice. It is joint work with Peter Ryan. Life used to be simple; you had only one desktop computer. Then you have a laptop, which is more convenient and is becoming inexpensive. In the past five years we’ve seen the rise of smartphones and tablets. So the computer has been evolving. It used to be bulky, and fixed at a permanent location, but now it is mobile and can be anywhere. A person commonly owns more than one computer.

Back to the theme of this workshop, Alice Doesn’t Live Here Anymore. First, who is Alice? Alice could be a PC, a smartphone, a tablet, or anything with a chip. Her location is not important, because she can be anywhere. Her identity is not important either, whether she is a PC, a laptop, or a mobile phone. The device is only a platform for you to access the Internet. With cloud computing you no longer store data on the laptop; you store data in the cloud.

Feng Hao
Attack Detection vs. Privacy – How to Find the Link or How to Hide It?

Wireless sensor networks often have to be protected not only against an active attacker who tries to disrupt network operation, but also against a passive attacker who tries to get sensitive information about the location of a certain node or about the movement of a tracked object. To address such issues, we can use an intrusion detection system and a privacy mechanism simultaneously. However, these often come with contradictory aims: a privacy mechanism typically tries to hide the relation between various events, while an intrusion detection system tries to link the events up. This paper explores some of the problems that might occur when these techniques are brought together, and we also provide some ideas on how these problems could be solved.

Jiří Kůr, Vashek Matyáš, Andriy Stetsko, Petr Švenda
Attack Detection vs Privacy – How to Find the Link or How to Hide It (Transcript of Discussion)

Alf Zugenmaier:

What exactly does the IDS try to detect, what kind of intrusions?

Jiří Kůr:

The IDS tries to detect the malicious nodes and the malicious activity of these nodes.

Mike Burmester:

So anomalous behaviour?

Jiří Kůr:

Yes, in principle. The particular examples may be packet dropping, packet injection, packet modification, jamming, and so on.

Alf Zugenmaier:

Do you have a list of those coming up? Because these are quite varied examples. How far do you want to push the IDS: what is it supposed to detect and what is it not supposed to detect?

Jiří Kůr, Andriy Stetsko
The Sense of Security and a Countermeasure for the False Sense

In this paper, we report two issues from our recent research on the human aspect of security. One is the sense of security and the other is a warning interface for security threats. We look into the emotional aspect of security technology and investigate the factors in users’ feelings based on user surveys and statistical analysis. We report the differences in those factors of the sense of security between the U.S.A. and Japan as well. We also introduce the multi-facet concept of trust, which includes security, safety, privacy, reliability, availability and usability. According to the results of our surveys, no matter how secure systems and services are, users may not get a sense of security at all. On the contrary, users may well feel secure with insecure systems and services. This suggests that we need another type of protocol and interface, beyond merely secure protocols, to provide users with secure feelings. We propose an interface causing discomfort: a warning interface for insecure situations. A user could be made aware of security threats and risks by a slight disturbance. Such interfaces have been researched to a great extent in the safety area for protection from human errors.

Yuko Murayama, Yasuhiro Fujihara, Dai Nishioka
The Sense of Security and a Countermeasure for the False Sense (Transcript of Discussion)

This talk has two parts, I think: what makes users of computer systems feel secure, and a countermeasure for when that sense of security is unjustified. I’m clearly not Yuko; Yuko Murayama was actually a colleague of mine many years ago, and of Mike shortly after that, but she’s unable to travel to Europe to present this paper because of the earthquake in Japan. So I’m going to do my best to explain the work that she’s done.

James Malcolm
Towards a Theory of Trust in Networks of Humans and Computers

We argue that a general theory of trust in networks of humans and computers must be built on both a theory of behavioral trust and a theory of computational trust. This argument is motivated by increased participation of people in social networking, crowdsourcing, human computation, and socio-economic protocols, e.g., protocols modeled by trust and gift-exchange games [3,10,11], norms-establishing contracts [1], and scams [6,35,33]. User participation in these protocols relies primarily on trust, since on-line verification of protocol compliance is often impractical; e.g., verification can lead to undecidable problems, co-NP complete test procedures, and user inconvenience. Trust is captured by participant preferences (i.e., risk and betrayal aversion) and beliefs in the trustworthiness of other protocol participants [11,10]. Both preferences and beliefs can be enhanced whenever protocol noncompliance leads to punishment of untrustworthy participants [11,23]; i.e., it seems natural that betrayal aversion can be decreased and belief in trustworthiness increased by properly defined punishment [1]. We argue that a general theory of trust should focus on the establishment of new trust relations where none were possible before. This focus would help create new economic opportunities by increasing the pool of usable services, removing cooperation barriers among users, and at the very least, taking advantage of “network effects.” Hence a new theory of trust would also help focus security research in areas that promote trust-enhancement infrastructures in human and computer networks. Finally, we argue that a general theory of trust should mirror, to the largest possible extent, human expectations and mental models of trust without relying on false metaphors and analogies with the physical world.

Virgil Gligor, Jeannette M. Wing
Towards a Theory of Trust in Networks of Humans and Computers (Transcript of Discussion)

When I first noticed this year’s SPW theme I realized that not only is Alice not living here anymore, but that we do not really know who Alice is, after all these years. Is she a fictitious character of Oscar Wilde’s The Importance of Being Earnest (1895) who tries to avoid the obligations of Victorian-era social protocols? Or maybe the fickle character of the Duke of Mantua’s aria in Verdi’s Rigoletto (1851)? Or, perhaps, the fictitious and equally fickle character of our past Security Protocols Workshops who changes her goals and behavior from year to year? Whichever the case may be, one thing is clear: Alice may not always be trustworthy, as she sometimes seems to be involved in shady activities, but she must always be accountable in our protocols. Hence, we must look into what makes Alice accountable in networks that do not provide accountability for protocol participants by default, e.g., in the Internet. In particular, I argue that we could locate Alice in a multi-dimensional accountability space similar in spirit to that used in online behavioral advertising (OBA). Since in security protocols we often deal with networks of computers and humans, it seems useful to look at OBA, which also captures the behaviour of humans and computers well enough to become a source of (possibly anonymous) identity.

Virgil Gligor
Gearing Up: How to Eat Your Cryptocake and Still Have It

Often Alice and Bob share a fixed quantity of master key and subsequently need to agree a larger amount of session key material. At present, they are inclined to be cautious about generating too much session key material from a single master key. We argue that this caution arises from their familiarity with keys consisting of a few dozen bytes, and may be misplaced when keys consist of many billions of bytes. In particular, if the proof that the master key was securely distributed depends on a bounded-memory assumption for Moriarty, then the same assumption also imposes constraints upon the cryptanalysis which Moriarty can apply to the generated session material. Block ciphers with (effectively) Terabit blocks allow a much higher ratio of session to master key than can be countenanced with current key lengths, and we construct one such cypher.

Alex Shafarenko, Bruce Christianson
Gearing Up: How to Eat Your Cryptocake and Still Have It (Transcript of Discussion)

This talk has to do with big, or rather huge, numbers of bits, and how they affect security. I’m going to start with the observation that shared keys are not always small. Very long keys can be shared using the so-called beacon method, which is well known in various shapes and forms. The principle is always the same: you have a high-rate source of random data, and by random I mean as random as you can get. This is the single point of vulnerability; if you compromise the source of data you compromise the whole system, but you can secure it physically, just don’t let Moriarty come anywhere near it, that’s all you need. The high-rate data source creates and broadcasts an enormous amount of data, exabytes. Then there are customers of the system, Alice and Bob, maybe George as well, and Charlie. The method is not sensitive to how many customers there are.

Alex Shafarenko
Make Noise and Whisper: A Solution to Relay Attacks

In this paper we propose a new method to detect relay attacks. Relay attacks are possible in many communication systems, and are easy to put into practice since the attackers don’t require any knowledge of the underlying protocols or the cryptographic keys.

So far the most practical solutions against relay attacks rely on distance-bounding protocols. These protocols can provide an estimated maximum distance between two communicating devices.

We provide a different solution that can detect a relay attack regardless of the distance between the devices. Our solution relies on introducing intentional errors in the communication, providing a kind of hop-count metric.

In order to illustrate our idea we describe two idealized example implementations and we assess their theoretical performance with simulation experiments. There are several limitations in these two examples but we hope that the ideas presented in this paper will contribute towards practical implementations against relay attacks.

Omar Choudary, Frank Stajano
Make Noise and Whisper: A Solution to Relay Attacks (Transcript of Discussion)

In the uninterrupted part of my presentation I explained the core of our solution and presented one example for method 1. The solution is described in detail in the paper. In the next paragraphs the discussion continues at the point where I present one example of a transaction between Alice and Bob (see the image below).

OK, what happens in a different case? In this case Bob will insert an error. Alice again sends the same bit, the white ball, which is a one, and Bob will insert an error this time. What would you expect to happen? Well, after the computation what you can see is that on the channel we get the black ball. What does that mean? From Alice’s perspective, it means that she can detect that Bob has inserted an error, because the output was different from the input. From Bob’s perspective, it’s not very clear what happened: he inserted a black ball, which means that he was inserting an error, but he has no idea whether Alice was inserting a white ball or a black ball; it would have looked the same if Alice had inserted a black ball as well.

Omar Choudary
Scrambling for Lightweight Censorship Resistance

In this paper we propose scrambling as a lightweight method of censorship resistance, in place of the traditional use of encryption. We consider a censor which can only block banned content by scanning it while in transit (for example using deep-packet inspection), instead of attacking the communication endpoints (for example using address filtering or taking servers offline). Our goal is to greatly increase the workload of the censor by scrambling all data during communication, while maintaining reasonable workloads for the endpoints of the communication network. In particular, our goal is to make it impossible for the censor to effectively accelerate the de-scrambling procedure over what may be achieved by commodity PCs or mobile phones at the endpoints, a goal which we term high-inertia scrambling. We also aim to achieve this using the standard JavaScript runtime environment of modern browsers, requiring no distribution or installation of censorship-resistance software.
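One hypothetical way to realize scrambling without shared keys is a small key-search puzzle: scramble under a short random key that is never transmitted, so every reader, censor included, must brute-force it per message. This sketch uses SHA-256 purely for shape; the paper’s high-inertia goal additionally requires an operation that dedicated hardware cannot accelerate much beyond a browser’s JavaScript engine, which SHA-256 is not.

```python
import hashlib
import os

PUZZLE_BYTES = 2  # 16-bit puzzle: cheap per message, costly at censor scale

def keystream(key: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def scramble(data: bytes):
    """Scramble under a short random key that is *not* transmitted;
    a checksum lets the receiver recognize the right key."""
    key = os.urandom(PUZZLE_BYTES)
    check = hashlib.sha256(key + data).digest()[:8]
    body = bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))
    return body, check

def descramble(body: bytes, check: bytes):
    """Brute-force the puzzle key; this work cannot be skipped by anyone."""
    for k in range(256 ** PUZZLE_BYTES):
        key = k.to_bytes(PUZZLE_BYTES, "big")
        data = bytes(a ^ b for a, b in zip(body, keystream(key, len(body))))
        if hashlib.sha256(key + data).digest()[:8] == check:
            return data
    return None
```

The asymmetry comes from scale: an endpoint solves one puzzle per page it actually wants, while a backbone censor must solve one per flow it inspects.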

Joseph Bonneau, Rubin Xu
Scrambling for Lightweight Censorship Resistance (Transcript of Discussion)

Hello everyone; today I will talk about new ideas for censorship resistance. First of all, what threat model are we assuming, and what kind of censor are we talking about? In this paper we are assuming a passive global censor. That basically means there is some well-funded organization which is able to sit at the backbone of some internet communication, possibly on the outgoing router of some internet infrastructure, and watch all communication going in and out of the domain. What they do is inspect the packet contents, possibly with deep-packet inspection on the TCP session, etc., and detect any content which is on a blacklist. If any of the blacklisted keywords is detected in that TCP session, then the adversary will try to block the connection by various means. What the censor will not do is actively modify the communication channel; it will only observe passively. One readily available example in the real world is the Great Firewall project of the Chinese government. Basically it observes all traffic going in and out of China, uses deep-packet inspection techniques to detect any blacklisted keywords, and if any of them is detected then it will inject a malformed packet to disrupt the TCP session and cause the connection to reset.

Rubin Xu
The Metaplace Security Model

As part of an ongoing project on the security of online games and virtual reality applications, we joined the open beta test of Metaplace, to carry out our own analysis of Metaplace’s security mechanisms, and to observe what went wrong in practice during the beta test.

The beta test version of Metaplace is particularly interesting because it went further than most online games in allowing “user generated content”. For example, users were able to customize the game (or, effectively, build their own game) by writing code that was run on the game server. This clearly has serious security implications, and Metaplace had its own unique security mechanisms to address the resulting issues. At the end of the beta test, Metaplace (then renamed Island Life) was changed to be more modest in the forms of user-generated content that were permitted. The beta test was therefore a one-off opportunity to see if these mechanisms worked in practice.

We found that some well-known operating-system security issues reappeared in new forms in Metaplace: anyone who in the future would like to build a game with this degree of user-generated content would do well to be aware of these issues.

The obvious competitor to Metaplace was Linden Lab’s Second Life, which also permits advanced forms of user-generated content. Second Life’s approach to security is significantly different from Metaplace’s, and there are both advantages and disadvantages: we give a more detailed comparison later in the paper.

Michael Roe
The Metaplace Security Model (Transcript of Discussion)

OK, so I’m Michael Roe, my new organisational affiliation is the University of Hertfordshire, and this talk is going to be about the beta test of the Metaplace, which was an online game that was in test a year or so ago. This is a case study that is part of a larger project on the security of online games. I wasn’t the one who developed this game, I just volunteered as a beta tester in the open beta so that I could see what their security problems were.

Michael Roe
One-Way Cryptography

In a forthcoming paper [2], we examine the security of the APCO Project 25 (“P25”) [3] two-way digital voice radio system. P25 is a suite of digital protocols and standards designed for use in narrowband short-range (VHF and UHF) land-mobile wireless two-way communications systems. The system is used by law enforcement, national security, public safety, and other government users in the United States and several other countries.

Because two-way radio traffic is easily intercepted, P25 includes a number of security features, including encryption of voice and data under a variety of cipher algorithms and keying schemes. It is regarded as being sufficiently secure to carry highly sensitive traffic, including confidential law-enforcement criminal surveillance operations, and to support classified national security investigations, and is extensively used for these purposes by the various U.S. federal agencies that conduct such activities.

Sandy Clark, Travis Goodspeed, Perry Metzger, Zachary Wasserman, Kevin Xu, Matt Blaze
One-Way Cryptography (Transcript of Discussion)

So what do I mean by one-way cryptography? We were actually given an example here on the projector: this message, “Protected for education use only, if you see this message call the Police.” All of the security functions of this are decided on in advance by the people who configured the projector and sent it out into the world, and by the time the receiver of this message sees it, it’s too late to change anything about the protocol, there’s no interaction involved, the damage is done; everything that you need to do to secure this has to have been done in advance, there’s no negotiation.

Matt Blaze
How to Keep Bad Papers Out of Conferences (with Minimum Reviewer Effort)

Reviewing conference submissions is both labour-intensive and diffuse. A lack of focus leads to reviewers spending much of their scarce time on papers which will not be accepted, which can prevent them from identifying several classes of problems with papers that will be. We identify opportunities for automation in the review process and propose protocols which allow human reviewers to better focus their limited time and attention, making it easier to select only the best “genetic” material to incorporate into their conference’s “DNA.” Some of the protocols that we propose are difficult to “game” without uneconomic investment on the part of the attacker, and successfully attacking others requires attackers to provide a positive social benefit to the wider research community.

Jonathan Anderson, Frank Stajano, Robert N. M. Watson
How to Keep Bad Papers Out of Conferences (Transcript of Discussion)

So the problem with social networks and the talks that I’ve given about those is they’re not controversial enough; people just don’t have enough opinions about social networks. So let’s talk about something that absolutely everybody has an opinion on, which is keeping bad papers out of conferences. This is based on some thinking that Frank and Robert and I have been doing, and I promise that there will be lots of pictures. I know it’s the last talk, but you have opinions, I have pictures, so let’s all stay awake.

Jonathan Anderson
Backmatter
Metadata
Title
Security Protocols XIX
Edited by
Bruce Christianson
Bruno Crispo
James Malcolm
Frank Stajano
Copyright year
2011
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-25867-1
Print ISBN
978-3-642-25866-4
DOI
https://doi.org/10.1007/978-3-642-25867-1
