
2020 | Book

Security Protocols XXVII

27th International Workshop, Cambridge, UK, April 10–12, 2019, Revised Selected Papers

Edited by: Dr. Jonathan Anderson, Prof. Frank Stajano, Prof. Dr. Bruce Christianson, Vashek Matyáš

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

The volume LNCS 12287 constitutes the proceedings of the 27th International Workshop on Security Protocols, held in Cambridge, UK, in April 2019.

The volume consists of 16 thoroughly revised invited papers presented together with the respective transcripts of discussions. The theme of this year's workshop was “Security Protocols for Humans”. The topics covered included designing for humans and understanding humans, human limitations in security, secure sharing and collaboration, and much more.

Table of Contents

Frontmatter

Designing for Humans

Frontmatter
Transparency Enhancing Technologies to Make Security Protocols Work for Humans
Abstract
As computer systems are increasingly relied on to make decisions that will have significant consequences, it has also become important to provide not only standard security guarantees for the computer system but also ways of explaining the output of the system in case of possible errors and disputes. This translates to new security requirements in terms of human needs rather than technical properties. For some context, we look at prior disputes regarding banking security and the ongoing litigation concerning the Post Office’s Horizon system, discussing the difficulty in achieving meaningful transparency and how to better evaluate available evidence.
Alexander Hicks, Steven J. Murdoch
Transparency Enhancing Technologies to Make Security Protocols Work for Humans (Transcript of Discussion)
Abstract
If you’ve heard some of my talks before then most of the time I’ve been banging on about Chip & PIN. I’m not, in this talk – this is about another dispute, another court case, and this is Bates and Others vs. Post Office Ltd. And in my first five minutes I’m hoping to tell you why this is interesting.
Steven J. Murdoch
Audio CAPTCHA with a Few Cocktails: It’s so Noisy I Can’t Hear You
Abstract
With crime migrating to the web, the detection of abusive robotic behaviour is becoming more important. In this paper, we propose a new audio CAPTCHA construction that builds upon the Cocktail Party problem (CPP) to detect robotic behaviour. We evaluate our proposed solution in terms of both performance and usability. Finally, we explain how to deploy such an acoustic CAPTCHA in the wild with strong security guarantees.
Benjamin Maximilian Reinheimer, Fairooz Islam, Ilia Shumailov
Audio CAPTCHA with a Few Cocktails: It’s So Noisy I Can’t Hear You (Transcript of Discussion)
Abstract
Hello everyone, my name is Benjamin. We're moving from being a Postmaster, which, probably, most of you aren't at the moment, to a topic that, I guess, most of you have dealt with in the past: CAPTCHAs. And our topic is audio CAPTCHAs with a few cocktails: it's so noisy I can't hear you.
Benjamin Maximilian Reinheimer, Fairooz Islam, Ilia Shumailov

Understanding Humans

Frontmatter
Shaping Our Mental Model of Security
Abstract
The IT industry’s need to distinguish new products with new looks, new experiences, and new user interface designs is bad for cybersecurity. It robs users of the chance to transfer previously acquired security-relevant knowledge to new products and leaves them with a poor mental model of security.
Starting from a comparison with physical safety, we explore and sketch a method to help users develop a useful mental model of security in cybersystems. A beneficial side-effect of our methodology is that it makes precise what security requirements the user expects the system to fulfill. This can be used to formally verify the system’s compliance with the user’s expectation.
Saša Radomirović
Shaping Our Mental Model of Security (Transcript of Discussion)
Abstract
This talk is not on a completed piece of research, it is a work in progress. One of our biggest challenges at the moment is how to improve cyber security and people are at the heart of it. A lot has been written about safety and human error. There’s also been, for a long time, this thought that humans are the weakest link in security. If we could just take the human out of the loop, we would have safer systems, or more secure systems. Whether we are better off with or without the human in the loop, we cannot avoid people being part of security. And one way to keep people secure is to teach them something, and this is where the mental model comes in.
Saša Radomirović
Social Constructionism in Security Protocols
A Position on Human Experience, Psychology and Security
Abstract
Understanding the human in computer security through Qualitative Research aims at a conceptual repositioning. The aim is to leverage individual human experience to understand and improve the impact of humans in computer security. Embracing what is particular, complex and subtle in the human social experience means understanding precisely what is happening when people transgress protocols. Repositioning transgression as normal, by researching what people working in Computer Network Defense do, how they construct an understanding of what they do, and why they do it, facilitates addressing the human aspects of this work on its own terms. Leveraging the insights developed through Qualitative Research means that it is possible to envisage and develop appropriate remedies using Applied Psychology, and thereby improve computer security.
Simon N. Foley, Vivien M. Rooney
Social Constructionism in Security Protocols (Transcript of Discussion)
Abstract
This is joint work between myself and my co-author, Vivien Rooney. I’m a computer scientist, and Vivien’s an applied psychologist. We’re interested in understanding how humans experience working with security protocols. And, when I use the word “security protocol”, I mean it in the most general sense: a set of rules that people and machines are supposed to follow.
Simon N. Foley, Vivien M. Rooney

Fresh Perspectives

Frontmatter
Bounded Temporal Fairness for FIFO Financial Markets
Abstract
Financial exchange operators cater to the needs of their users while simultaneously ensuring compliance with the financial regulations. In this work, we focus on the operators’ commitment to fair treatment of all competing participants. We first discuss unbounded temporal fairness and then investigate its implementation and infrastructure requirements for exchanges. We find that these requirements can be fully met only under ideal conditions and argue that unbounded fairness in FIFO markets is unrealistic. To further support this claim, we analyse several real-world incidents and show that subtle implementation inefficiencies and technical optimizations suffice to give unfair advantages to a minority of the participants. We finally introduce \(\epsilon \)-fairness, a bounded definition of temporal fairness, and discuss how it can be combined with non-continuous market designs to provide equal participant treatment with minimum divergence from the existing market operation.
Vasilios Mavroudis
Bounded Temporal Fairness for FIFO Financial Markets (Transcript of Discussion)
Abstract
I’ll take advantage of the five-minute grace period to make my main points early on. Then, I’ll move on and introduce some fundamental concepts as I understand that not everyone is necessarily familiar with all the details of modern exchanges.
Vasilios Mavroudis
Mismorphism: The Heart of the Weird Machine
Abstract
Mismorphisms—instances where predicates take on different truth values across different interpretations of reality (notably, different actors’ perceptions of reality and the actual reality)—are the source of weird instructions. These weird instructions are tiny code snippets or gadgets that present the exploit programmer with unintended computational capabilities. Collectively, they constitute the weird machine upon which the exploit program runs. That is, a protocol or parser vulnerability is evidence of a weird machine, which, in turn, is evidence of an underlying mismorphism. This paper seeks to address vulnerabilities at the mismorphism layer.
The work presented here connects to our prior work in language-theoretic security (LangSec). LangSec provides a methodology for eliminating weird machines: By limiting the expressiveness of the input language, separating and constraining the parser code from the execution code, and ensuring only valid input makes its way to the execution code, entire classes of vulnerabilities can be avoided. Here, we go a layer deeper with our investigation of the mismorphisms responsible for weird machines.
In this paper, we re-introduce LangSec and mismorphisms, and we develop a logical representation of mismorphisms that complements our previous semiotic-triad-based representation. Additionally, we develop a preliminary set of classes for expressing LangSec mismorphisms, and we use this mismorphism-based scheme to classify a corpus of LangSec vulnerabilities.
Prashant Anantharaman, Vijay Kothari, J. Peter Brady, Ira Ray Jenkins, Sameed Ali, Michael C. Millian, Ross Koppel, Jim Blythe, Sergey Bratus, Sean W. Smith
Mismorphism: The Heart of the Weird Machine (Transcript of Discussion)
Abstract
As Jon mentioned, this is one of our two talks at this workshop. My colleagues Vijay and Michael, over here, will be presenting later in the morning. In this talk, I’ll introduce what mismorphisms are, and some of the things that we work on, which are LangSec and weird machines. And I’ll talk a bit more about some of the work we did in this paper. We want some insight from all of you about what we can do to improve our work. And we have some holes that we’ve identified that we want your help fixing.
Prashant Anantharaman

Human Limitations in Security

Frontmatter
Affordable Security or Big Guy vs Small Guy
Does the Depth of Your Pockets Impact Your Protocols?
Abstract
When we design a security protocol we assume that the humans (or organizations) playing Alice and Bob do not make a difference. In particular, their financial capacity seems to be irrelevant.
In the latest trend to guarantee that secure multi-party computation protocols are fair and not vulnerable to malicious aborts, a slate of protocols has been proposed based on penalty mechanisms. We look at two well-known penalty mechanisms, and show that the so-called see-saw mechanism (Kumaresan et al., CCS 15), is only fit for people with deep pockets, well beyond the stake in the multi-party computation itself.
Depending on the scheme, fairness is not affordable by everyone, which has several policy implications for protocol design. To explicitly capture the above issues, we introduce a new property called financial fairness.
Daniele Friolo, Fabio Massacci, Chan Nam Ngo, Daniele Venturi
Affordable Security or Big Guy vs Small Guy (Transcript of Discussion)
Abstract
Our talk today is about “Affordable Security”, and we dub it “Big Guy versus Small Guy” in the sense that the Big Guys are the ones who have a lot of money and the Small Guys are the poor guys.
Chan Nam Ngo
Human-Computability Boundaries
Abstract
Human understanding of protocols is central to protocol security. The security of a protocol rests on its designers, its implementors, and, in some cases, its users correctly conceptualizing how it should work, understanding how it actually works, and predicting how others will think it works. Ensuring these conceptualizations are correct is difficult. A complementary field, however, provides some inspiration on how to proceed: the field of language-theoretic security (LangSec) promotes the adoption of a secure design-and-development methodology that emphasizes the existence of certain computability boundaries that must never be crossed during parser and protocol construction to ensure correctness of design and implementation. We propose supplementing this work on classical computability boundaries with exploration of human-computability boundaries. Classic computability research has focused on understanding what problems can be solved by machines or idealized human computers—that is, computational models that behave like humans carrying out rote computational tasks in principle but that are not subject to the natural limitations that humans face in practice. Humans are often subject to a variety of deficiencies, e.g., constrained working memories, short attention spans, misperceptions, and cognitive biases. We argue that such realities must be taken into consideration if we are to be serious about securing protocols. A corollary is that while the traditional computational models and hierarchies built using them (e.g., the Chomsky hierarchy) are useful for securing protocols and parsers, they alone are inadequate as they neglect the human-computability boundaries that define what humans can do in practice. In this position paper, we advocate for the discovery of human-computability boundaries, present challenges with precisely and accurately finding those boundaries, and outline future paths of inquiry.
Vijay Kothari, Prashant Anantharaman, Ira Ray Jenkins, Michael C. Millian, J. Peter Brady, Sameed Ali, Sergey Bratus, Jim Blythe, Ross Koppel, Sean W. Smith
Human-Computability Boundaries (Transcript of Discussion)
Abstract
The origin of many protocol vulnerabilities is the human. Humans fail predictably, and they fail often. This paper mostly deals with how we acknowledge these failures. Moreover, how do we start designing protocols in such ways that humans are less likely to fail and cause these vulnerabilities down the road?
Vijay Kothari, Michael C. Millian

Secure Sharing and Collaboration

Frontmatter
Challenges in Designing a Distributed Cryptographic File System
Abstract
Online social networks, censorship resistance systems, document redaction systems and health care information systems have disparate requirements for confidentiality, integrity and availability. It is possible to address all of these, however, by combining elements of research in both filesystems and security protocols. We propose a set of techniques and combinations that can be employed to move beyond the current centralized/decentralized dichotomy and build a privacy-preserving optionally-distributed cryptographic filesystem. Such a filesystem, prototyped as UPSS: the user-centred private sharing system, can be used to build applications that enable rich, collaborative sharing in environments that have traditionally either avoided such interaction or else suffered the costs of out-of-control sharing on untrustworthy systems. We believe that our combination of filesystems and security protocols research demonstrates that sharing and security can go hand in hand.
Arastoo Bozorgi, Mahya Soleimani Jadidi, Jonathan Anderson
Challenges in Designing a Distributed Cryptographic File System (Transcript of Discussion)
Abstract
We are going to design a cryptographic file system with some cool features that current file systems don't provide, like file sharing and partial file sharing. It also enables users to collaborate with each other based on these shared files, and it makes the promise to users that their possible conflicts will be resolved at the file system level.
Arastoo Bozorgi, Mahya Soleimani Jadidi, Jonathan Anderson

Is the Future Finally Arriving?

Frontmatter
Zero-Knowledge User Authentication: An Old Idea Whose Time Has Come
Abstract
User authentication can rely on various factors (e.g., a password, a cryptographic key, and/or biometric data) but should not reveal any secret information held by the user. This seemingly paradoxical feat can be achieved through zero-knowledge proofs. Unfortunately, naive password-based approaches still prevail on the web. Multi-factor authentication schemes address some of the weaknesses of the traditional login process, but generally have deployability issues or degrade usability even further as they assume users do not possess adequate hardware. This assumption no longer holds: smartphones with biometric sensors, cameras, short-range communication capabilities, and unlimited data plans have become ubiquitous. In this paper, we show that, assuming the user has such a device, both security and usability can be drastically improved using an augmented password-authenticated key agreement (PAKE) protocol and message authentication codes.
Laurent Chuat, Sarah Plocher, Adrian Perrig
Zero-Knowledge User Authentication: An Old Idea Whose Time Has Come (Transcript of Discussion)
Abstract
So, user authentication on the web: I don’t think it needs much introduction. Still, often, just a username and password, unfortunately. And I think it would be an understatement to say that this is suboptimal, both in terms of security and usability.
Laurent Chuat
A Rest Stop on the Unending Road to Provable Security
Abstract
During the past decade security research has offered persuasive arguments that the road to provable security is unending, and further that there’s no rest stop on this road; e.g., there is no security property one can prove without making assumptions about other, often unproven, system properties. In this paper I suggest what a useful first rest stop might look like, and illustrate one possible place for it on the road to provable security. Specifically, I argue that a small and simple verifier can establish software root of trust (RoT) on an untrusted system unconditionally; i.e., without secrets, trusted hardware modules, or bounds on the adversary power; and the verifier’s trustworthiness can be proven without dependencies of other unverified computations. The foundation for proving RoT establishment unconditionally already exists, and the proofs require only the availability of randomness in nature and correct specifications for the untrusted system. In this paper, I also illustrate why RoT establishment is useful for obtaining other basic properties unconditionally, such as secure initial state determination, verifiable boot, and on-demand firmware verification for I/O devices.
Virgil D. Gligor
A Rest Stop on the Unending Road to Provable Security (Transcript of Discussion)
Abstract
The title of this paper is a spoof on Butler Lampson’s assertion that there is no resting place on the road to perfection.
Virgil D. Gligor

Evidence of Humans Behaving Badly

Frontmatter
Ghost Trace on the Wire? Using Key Evidence for Informed Decisions
Abstract
Modern smartphone messaging apps now use end-to-end encryption to provide authenticity, integrity and confidentiality. Consequently, the preferred strategy for wiretapping such apps is to insert a ghost user by compromising the platform’s public key infrastructure. The use of warning messages alone is not a good defence against a ghost user attack since users change smartphones, and therefore keys, regularly, leading to a multitude of warning messages which are overwhelmingly false positives. Consequently, these false positives discourage users from viewing warning messages as evidence of a ghost user attack. To address this problem, we propose collecting evidence from a variety of sources, including direct communication between smartphones over local networks and CONIKS, to reduce the number of false positives and increase confidence in key validity. When there is enough confidence to suggest a ghost user attack has taken place, we can then supply the user with evidence to help them make a more informed decision.
Diana A. Vasile, Martin Kleppmann, Daniel R. Thomas, Alastair R. Beresford
Ghost Trace on the Wire? Using Key Evidence for Informed Decisions (Transcript of Discussion)
Abstract
If there’s an error shown that the secret has changed, when I’m alerted, because I’m an aware user, I message Alice on her phone and I find out that she hasn’t got anything reinstalled, hasn’t done anything new, hasn’t reset her phone so it’s probably a ghost being entered into the conversation. Can I then do something within the WhatsApp conversation to, let’s say, renew the keys and kick the ghost out again?
Diana A. Vasile

Warnings

Frontmatter
Evolution of SSL/TLS Indicators and Warnings in Web Browsers
Abstract
The creation of the World Wide Web (WWW) in the early 1990s finally made the Internet accessible to a wider part of the population. With this increase in users, security became more important. To address confidentiality and integrity requirements on the web, Netscape—by then a major web browser vendor—presented the Secure Socket Layer (SSL), later versions of which were renamed to Transport Layer Security (TLS). In turn, this necessitated the introduction of both security indicators in browsers to inform users about the TLS connection state and also of warnings to inform users about potential errors in the TLS connection to a website. Looking at the evolution of indicators and warnings, we find that the qualitative data on security indicators and warnings, i.e., screenshots of different browsers over time, is inconsistent. Hence, in this paper we outline our methodology for collecting a comprehensive data set of web browser security indicators and warnings, which will enable researchers to better understand how security indicators and TLS warnings in web browsers evolved over time.
Lydia Kraus, Martin Ukrop, Vashek Matyas, Tobias Fiebig
Evolution of SSL/TLS Indicators and Warnings in Web Browsers (Transcript of Discussion)
Abstract
I would like to start my talk with a short thought experiment.
Lydia Kraus
Snitches Get Stitches: On the Difficulty of Whistleblowing
Abstract
One of the most critical security protocol problems for humans is when you are betraying a trust, perhaps for some higher purpose, and the world can turn against you if you’re caught. In this short paper, we report on efforts to enable whistleblowers to leak sensitive documents to journalists more safely. Following a survey of cases where whistleblowers were discovered due to operational or technological issues, we propose a game-theoretic model capturing the power dynamics involved in whistleblowing. We find that the whistleblower is often at the mercy of the motivations and abilities of others. We identify specific areas where technology may be used to mitigate the whistleblower’s risk. However, we warn against technical solutionism: the main constraints are often institutional.
Mansoor Ahmed-Rengers, Ross Anderson, Darija Halatova, Ilia Shumailov
Snitches Get Stitches: On the Difficulty of Whistleblowing (Transcript of Discussion)
Abstract
I should first comment on the title. We wrote this a long time before the events of yesterday. We do not consider Julian Assange a snitch, and we sincerely hope he doesn’t get stitches. So let’s ignore the first part. Another thing we can ignore is the grace period. Please feel free to interrupt me, starting now.
Mansoor Ahmed-Rengers, Ross Anderson, Darija Halatova, Ilia Shumailov
Backmatter
Metadata
Title
Security Protocols XXVII
Edited by
Dr. Jonathan Anderson
Prof. Frank Stajano
Prof. Dr. Bruce Christianson
Vashek Matyáš
Copyright year
2020
Electronic ISBN
978-3-030-57043-9
Print ISBN
978-3-030-57042-2
DOI
https://doi.org/10.1007/978-3-030-57043-9