
2016 | Book

Security Standardisation Research

Third International Conference, SSR 2016, Gaithersburg, MD, USA, December 5–6, 2016, Proceedings


About this book

This book constitutes the refereed proceedings of the Third International Conference on Security Standardisation Research, SSR 2016, held in Gaithersburg, MD, USA, in December 2016. The accepted papers cover a range of topics in the field of security standardisation research, including hash-based signatures, algorithm agility, secure protocols, access control, secure APIs, payment security and key distribution.

Table of Contents

Frontmatter
Analyzing and Fixing the QACCE Security of QUIC
Abstract
QUIC is a secure transport protocol developed by Google. Lychev et al. proposed a security model (the QACCE model) to capture the security of QUIC. However, the QACCE model is very complicated, and it is not clear whether the security requirements for QUIC are appropriately defined. In this paper, we present the first formal analysis of QUIC using the automated security verification tool ProVerif. Our symbolic model formalizes the QACCE model and the specification of QUIC. As a result of the verification, we find three attacks against QUIC in the QACCE model, which means that Lychev et al.’s security proofs are not correct. We discuss why such attacks occur, and clarify that the QACCE model is unnecessarily strong in places. Finally, we propose a way to improve the QACCE model so that it exactly captures the appropriate security requirements.
Hideki Sakurada, Kazuki Yoneyama, Yoshikazu Hanatani, Maki Yoshida
Cross-Tool Semantics for Protocol Security Goals
Abstract
Formal protocol analysis tools provide objective evidence that a protocol under standardization meets security goals, as well as counterexamples to goals it does not meet (“attacks”). However, different tools are based on different execution semantics and adversary models. If different tools are applied to alternative protocols under standardization, can formal evidence offer a yardstick to compare the results?
We propose a family of languages within first order predicate logic to formalize protocol safety goals (rather than indistinguishability). Although they were originally designed for the strand space formalism that supports the tool cpsa, we show how to translate them to goals for the applied \(\pi \) calculus that supports the tool ProVerif. We give a criterion for protocols expressed in the two formalisms to correspond, and prove that if a protocol in the strand space formalism satisfies a goal, then a corresponding applied \(\pi \) process satisfies the translation of that goal. We show that the converse also holds for a class of goal formulas, and conjecture a broader equivalence. We also describe a compiler that, from any protocol in the strand space formalism, constructs a corresponding applied \(\pi \) process and the relevant goal translation.
Joshua D. Guttman, John D. Ramsdell, Paul D. Rowe
Cryptanalysis of GlobalPlatform Secure Channel Protocols
Abstract
GlobalPlatform (GP) card specifications are the de facto standards for the smart card industry. Being highly sensitive, GP specifications were designed to meet stringent security requirements. In this paper, we analyze the cryptographic core of these requirements: the family of Secure Channel Protocols (SCP). Our main results are twofold. First, we demonstrate a theoretical attack against SCP02, the most popular protocol in the SCP family. We discuss the scope of our attack by presenting an actual scenario in which a malicious entity can exploit it to recover encrypted messages. Second, we investigate the security of SCP03, which was introduced as an amendment in 2009. We find that it provably satisfies strong notions of security. Of particular interest, we prove that SCP03 withstands the algorithm substitution attacks (ASAs) defined by Bellare et al., which may lead to secret mass surveillance. Our findings highlight the great value of the paradigm of provable security for standards and certification, since unlike extensive evaluation, it formally guarantees the absence of security flaws.
Mohamed Sabt, Jacques Traoré
NFC Payment Spy: A Privacy Attack on Contactless Payments
Abstract
In a contactless transaction, when more than one card is presented to the payment terminal’s field, the terminal does not know which card to choose to proceed with the transaction. This situation is called card collision. EMV (which is the primary standard for smart card payments) specifies that the reader should not proceed when it detects a card collision and that instead it should notify the payer. In comparison, the ISO/IEC 14443 standard specifies that the reader should choose one card based on comparing the UIDs of the cards detected in the field. However, our observations show that the implementation of contactless readers in practice does not follow EMV’s card collision algorithm, nor does it match the card collision procedure specified in ISO.
Due to this inconsistency between implementations and the standards, we show an attack that may compromise the user’s privacy by collecting the user’s payment details. We design and implement a malicious app simulating an NFC card, which the user needs to install on her phone. When she aims to pay contactlessly while her card is close to her phone, this app engages with the terminal before the card does. The experiments show that even when the terminal detects a card collision (the app essentially acts like a card), it proceeds with the EMV protocol. We show that the app can retrieve from the terminal the transaction data, which include information about the payment such as the amount and date. The experimental results show that our app can effectively spy on contactless payment transactions, winning the race condition caused by card collisions around 66% of the time when tested with different cards. By presenting these attacks, we raise awareness of privacy and security issues in the specifications, standardisation and implementations of contactless cards and readers.
Maryam Mehrnezhad, Mohammed Aamir Ali, Feng Hao, Aad van Moorsel
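The UID-based selection that ISO/IEC 14443 specifies for card collisions can be illustrated as a bitwise anticollision walk. The toy model below is illustrative only (the function name and the equal-length bit-string encoding of UIDs are assumptions, not the standard’s frame format): the reader repeatedly broadcasts a UID prefix, finds the first bit where the responding cards disagree, and extends the prefix until exactly one card remains.

```python
def select_card(uids):
    """Toy bitwise anticollision: pick one card from several in the field.

    uids: equal-length bit strings, e.g. "1010". The reader grows a
    prefix; cards whose UID matches the prefix reply with their
    remaining bits. A bit position where replies disagree is a
    collision; the reader resolves it by fixing that bit to '1'
    (some responding card always has '1' there) and continuing.
    """
    prefix = ""
    while True:
        replies = [u[len(prefix):] for u in uids if u.startswith(prefix)]
        if len(replies) == 1:
            return prefix + replies[0]  # exactly one card left: selected
        # first bit position where the overlapping replies differ
        pos = next(i for i in range(len(replies[0]))
                   if len({r[i] for r in replies}) > 1)
        prefix += replies[0][:pos] + "1"
```

The point of the sketch is that the outcome is deterministic in the UIDs, which is exactly the behavior the paper found missing from deployed readers.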
Security Analysis of the W3C Web Cryptography API
Abstract
Due to the success of formal modeling of protocols such as TLS, there is a revival of interest in applying formal modeling to standardized APIs. We argue that formal modeling should happen as the standard is being developed (not afterwards), as it can detect complex or even simple attacks that the standardization group may not otherwise detect. As a case study, we discuss in detail the W3C Web Cryptography API. We demonstrate how a formal analysis of the API using the modeling language AVISPA with a SAT solver shows that, while the API has no errors in basic API operations and maintains its security properties for the most part, there are nonetheless attacks on secret key material due to how key wrapping and usages are implemented. Furthermore, there were a number of basic problems in algorithm selection and a weakness that led to a padding attack. The results of this study led to the removal of algorithms before standardization was completed and to the removal of the padding attack via normalization of error codes, although the key wrapping attack is still open. We expect this sort of formal methodology to be applied to new standardization efforts at the W3C, such as the W3C Web Authentication API.
Kelsey Cairns, Harry Halpin, Graham Steel
Algorithm Agility – Discussion on TPM 2.0 ECC Functionalities
Abstract
The TPM 2.0 specification has been designed to support a number of Elliptic Curve Cryptography (ECC) primitives, such as key exchange, digital signatures and Direct Anonymous Attestation (DAA). To meet the requirement that different TPM users may favor different cryptographic algorithms, each primitive can be implemented with multiple algorithms. This feature is called algorithm agility. For the purpose of performance efficiency, multiple algorithms share a small set of TPM commands. In this paper, we review all the TPM 2.0 ECC functionalities, and discuss whether the existing TPM commands can be used to implement new cryptographic algorithms which have not yet been addressed in the specification. We demonstrate that four asymmetric encryption schemes specified in ISO/IEC 18033-2 can be implemented using a TPM 2.0 chip, and, using some ECDSA variants, we also show that the coverage of algorithm agility in TPM 2.0 is limited. Security analysis of algorithm agility is a challenge that this paper does not address. However, we believe that this paper will help future researchers analyze TPM 2.0 more comprehensively than has been done so far.
Liqun Chen, Rainer Urian
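The hybrid (KEM/DEM) shape shared by the ISO/IEC 18033-2 schemes that the paper builds from TPM commands can be sketched in miniature. The sketch below is a hypothetical illustration, not the paper’s construction: it substitutes a toy modular group for the TPM’s ECC engine (a real implementation would call the TPM’s ECDH primitive over a standard curve, and these parameters are far too small for any real security), a KDF1-style key derivation, and an XOR stream cipher with an HMAC tag as the data encapsulation.

```python
import hashlib, hmac, os

# Toy group standing in for the TPM's ECC engine. Illustrative only.
P = 0xFFFFFFFFFFFFFFC5   # a 64-bit prime
G = 5                    # fixed base element

def kdf(shared, n):
    """KDF1-style derivation: concatenated hashes of counter || secret."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(ctr.to_bytes(4, "big")
                              + shared.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def keygen():
    x = 2 + int.from_bytes(os.urandom(8), "big") % (P - 3)
    return x, pow(G, x, P)            # (private, public)

def encrypt(pub, msg):
    r = 2 + int.from_bytes(os.urandom(8), "big") % (P - 3)
    c0 = pow(G, r, P)                 # KEM part: ephemeral public value
    keys = kdf(pow(pub, r, P), len(msg) + 32)
    body = bytes(m ^ k for m, k in zip(msg, keys[:len(msg)]))
    tag = hmac.new(keys[len(msg):], body, hashlib.sha256).digest()
    return c0, body, tag              # DEM part: stream cipher + MAC

def decrypt(priv, c0, body, tag):
    keys = kdf(pow(c0, priv, P), len(body) + 32)
    check = hmac.new(keys[len(body):], body, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, check):
        raise ValueError("bad tag")
    return bytes(c ^ k for c, k in zip(body, keys[:len(body)]))
```

The relevant observation is that only the two `pow` calls touch the asymmetric primitive, which is why a small, fixed set of TPM commands can serve several such schemes.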
Reactive and Proactive Standardisation of TLS
Abstract
In the development of TLS 1.3, the IETF TLS Working Group has adopted an “analysis-prior-to-deployment” design philosophy. This is in sharp contrast to all previous versions of the protocol. We present an account of the TLS standardisation narrative, examining the differences between the reactive standardisation process for TLS 1.2 and below, and the more proactive standardisation process for TLS 1.3. We explore the possible factors that have contributed to the shift in the TLS WG’s design mindset, considering the protocol analysis tools available, the levels of academic involvement and the incentives governing relevant stakeholders at the time of standardisation. In an attempt to place TLS within the broader realm of standardisation, we perform a comparative analysis of standardisation models and discuss the standardisation of TLS within this context.
Kenneth G. Paterson, Thyla van der Merwe
Extending the UML Standards to Model Tree-Structured Data and Their Access Control Requirements
Abstract
Secure data sharing between computational systems is a necessity for many workflows across domains such as healthcare informatics, law enforcement and national security. While there exist many approaches to securing data for the purpose of dissemination, the vast majority follows the traditional practice of security engineering occurring as the last step of the overall software engineering process. In this paper we extend the Unified Modeling Language (UML) standard to support: (1) modeling of tree-structured data and associated schemas; and (2) information security via role-based, lattice-based, and discretionary access control; both push security towards the forefront of the software development life-cycle. Tree-structured data and associated schemas are dominant in information modeling and exchange formats, including the eXtensible Markup Language (XML), JavaScript Object Notation (JSON), etc. New UML artifacts for tree-structured data and schemas would allow the modeling of generalized information solutions from which XML, JSON, RDF, etc., could be generated; this is akin to generating code in different object-oriented programming languages from UML class diagrams. This UML extension also allows security experts to model and define information security requirements at the schema level, before code is written. The end result is assurance of information security for the purpose of sharing data across computational systems.
Alberto De la Rosa Algarín, Steven A. Demurjian
Attribute-Based Access Control Architectures with the eIDAS Protocols
Abstract
The Extended Access Control protocol has been used for the German identity card since November 2010, primarily to establish a cryptographic key between a card and a service provider and to authenticate the partners. The protocol is also referenced by the International Civil Aviation Organization as an option for machine readable travel documents (Document 9303), and it is a candidate for the future European eIDAS identity system. Here we show that the system can be used to build a secure access system which operates in various settings (e.g., integrated, distributed, or authentication-service-based architectures), and where access can be granted based on the card’s attributes. In particular, we prove that the protocols provide strong cryptographic guarantees, including privacy of the attributes against outsiders.
Frank Morgner, Paul Bastian, Marc Fischlin
Secure Multicast Group Management and Key Distribution in IEEE 802.21
Abstract
Controlling a large number of devices, such as sensors and smart end points, is always a challenge where scalability and security are indispensable. This is even more important when it comes to periodic configuration updates to a large number of such devices belonging to one or more groups. One solution is to take a group of devices as a unit of control and then manage them through a group communication mechanism. An obvious challenge to this approach is how to create such groups dynamically and manage them securely. Moreover, there need to be mechanisms in place by which members of the group can be removed and added dynamically. In this paper, we propose a technique that has been recently standardized in IEEE 802.21 (IEEE Std 802.21d™-2015) with the objective of providing a standards-based solution to the above challenges. The approach relies on a Logical Key Hierarchy (LKH) based key distribution mechanism but optimizes the number of encryption and decryption operations by using the “Complete Subtree” method. It leverages the IEEE 802.21 framework, services, and protocol for communication and management, and provides a scalable and secure way to manage (e.g., add and remove) devices in one or more groups. We describe the group key distribution protocol in detail and provide a security analysis of the scheme along with some performance results from a prototype implementation.
Yoshikazu Hanatani, Naoki Ogura, Yoshihiro Ohba, Lidong Chen, Subir Das
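The “Complete Subtree” optimization mentioned above can be illustrated with a small computation: devices are leaves of a binary key tree, each device holds the keys on its root path, and re-keying after revocation encrypts the new group key under the roots of the maximal subtrees containing no revoked leaf. A minimal sketch (heap-style node numbering is an assumption for illustration, not the IEEE 802.21d encoding):

```python
def cover(depth, revoked):
    """Complete Subtree cover for a full binary tree of 2**depth devices.

    Nodes are numbered heap-style: root = 1, children of n are 2n and
    2n+1, leaves are 2**depth .. 2**(depth+1)-1. Returns the roots of
    the maximal subtrees containing no revoked leaf; re-keying encrypts
    the new group key once under each returned node's key.
    """
    tainted = set()                    # nodes on some revoked leaf's path
    for leaf in revoked:
        node = leaf
        while node:
            tainted.add(node)
            node //= 2
    if 1 not in tainted:
        return {1}                     # nobody revoked: one encryption
    # a cover root is a clean node whose parent lies on a revoked path
    return {n for n in range(2, 2 ** (depth + 1))
            if n not in tainted and n // 2 in tainted}
```

For r revoked devices out of N, the cover has O(r log(N/r)) nodes, which is the saving over encrypting to each remaining device individually.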
State Management for Hash-Based Signatures
Abstract
The unavoidable transition to post-quantum cryptography requires dependable quantum-safe digital signature schemes. Hash-based signatures are well-understood and promising candidates, and the object of current standardization efforts. In the scope of this standardization process, the most commonly raised concern is statefulness, due to the use of one-time signature schemes. While the theory of hash-based signatures is mature, a discussion of the system security issues arising from the concrete management of their state has been lacking. In this paper, we analyze state management in N-time hash-based signature schemes, considering both security and performance, and categorize the security issues that can occur due to state synchronization failures. We describe a state reservation approach using nonvolatile storage, and show that it can be naturally realized in a hierarchical signature scheme. To protect against unintentional copying of the private key state, we consider a hybrid stateless/stateful scheme, which provides graceful security degradation in the face of unintentional copying, at the cost of increased signature size. Compared to a completely stateless scheme, the hybrid approach realizes the essential benefits, with smaller signatures and faster signing.
David McGrew, Panos Kampanakis, Scott Fluhrer, Stefan-Lukas Gazdag, Denis Butin, Johannes Buchmann
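The state-reservation idea can be illustrated with a toy N-time scheme built from Lamport one-time keys: the signer commits the advanced state counter before producing a signature, so a crash can at worst waste reserved one-time keys, never reuse one. All names and the flat single-level structure are assumptions for illustration; the schemes discussed in the paper are hierarchical and far more efficient.

```python
import hashlib, os

def H(b):
    return hashlib.sha256(b).digest()

class NTimeSigner:
    """Toy N-time signer: N independent Lamport one-time keys plus a
    write-ahead state counter (`persisted` models the value committed
    to nonvolatile storage before any signing happens)."""

    def __init__(self, n):
        # one Lamport key per slot: 256 pairs of 32-byte preimages
        self.sk = [[(os.urandom(32), os.urandom(32)) for _ in range(256)]
                   for _ in range(n)]
        self.pk = [[(H(a), H(b)) for a, b in slot] for slot in self.sk]
        self.persisted = 0            # next unused index, "on disk"

    def reserve(self, k=1):
        """Commit the advanced counter first, then hand out indices.
        After a crash, everything below `persisted` counts as used."""
        idx = self.persisted
        self.persisted += k           # write-ahead update of the state
        return list(range(idx, idx + k))

    def sign(self, idx, msg):
        # reveal one preimage per message-digest bit
        bits = [(byte >> i) & 1 for byte in H(msg) for i in range(8)]
        return [self.sk[idx][j][bit] for j, bit in enumerate(bits)]

def verify(pk_slot, msg, sig):
    bits = [(byte >> i) & 1 for byte in H(msg) for i in range(8)]
    return all(H(s) == pk_slot[j][bit]
               for j, (s, bit) in enumerate(zip(sig, bits)))
```

Reserving k > 1 indices at once amortizes the nonvolatile write across several signatures, which is the performance trade-off the paper analyzes.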
Analysis of a Proposed Hash-Based Signature Standard
Abstract
We analyze the concrete security of a hash-based signature scheme described in a recent series of Internet Drafts by McGrew and Curcio. We show that an original version of their proposal achieves only a “loose” security bound, but that the latest version can be proven to have tighter security in the random-oracle model.
Jonathan Katz
Backmatter
Metadata
Title
Security Standardisation Research
Editors
Lidong Chen
David McGrew
Chris Mitchell
Copyright Year
2016
Electronic ISBN
978-3-319-49100-4
Print ISBN
978-3-319-49099-1
DOI
https://doi.org/10.1007/978-3-319-49100-4
