
2018 | Book

Security Standardisation Research

4th International Conference, SSR 2018, Darmstadt, Germany, November 26-27, 2018, Proceedings


About this book

This book constitutes the refereed proceedings of the 4th International Conference on Security Standardisation Research, SSR 2018, held in Darmstadt, Germany, in November 2018. The papers cover a range of topics in the field of security standardisation research, including cryptographic evaluation, standards development, analysis with formal methods, potential future areas of standardisation, and improving existing standards.

Table of Contents

hacspec: Towards Verifiable Crypto Standards
We present hacspec, a collaborative effort to design a formal specification language for cryptographic primitives. Specifications (specs) written in hacspec are succinct, easy to read and implement, and lend themselves to formal verification using a variety of existing tools. The syntax of hacspec is similar to the pseudocode used in cryptographic standards but is equipped with a static type system and syntax checking tools that can find errors. Specs written in hacspec are executable and can hence be tested against test vectors taken from standards and specified in a common format. Finally, hacspec is designed to be compilable to other formal specification languages like F\({}^{\star }\), EasyCrypt, Coq, and Cryptol, so that it can be used as the basis for formal proofs of functional correctness and cryptographic security using various verification frameworks. This paper presents the syntax, design, and tool architecture of hacspec. We demonstrate the use of the language to specify popular cryptographic algorithms, and describe preliminary compilers from hacspec to F\({}^{\star }\) and to EasyCrypt. Our goal is to invite authors of cryptographic standards to write their pseudocode in hacspec and to help the formal verification community develop the language and tools that are needed to promote high-assurance cryptographic software backed by mathematical proofs.
Karthikeyan Bhargavan, Franziskus Kiefer, Pierre-Yves Strub
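As a rough illustration of the executable-spec style hacspec aims for, here is the ChaCha20 quarter round written as a typed, testable function in plain Python. This is an illustrative sketch, not actual hacspec syntax; the point is that such a spec is both readable and directly checkable against the test vector in RFC 7539, Section 2.1.1.

```python
# Executable-spec style in plain Python (not hacspec syntax):
# typed 32-bit words, explicit wraparound, testable against standard vectors.

MASK32 = 0xFFFFFFFF

def rotl32(x: int, n: int) -> int:
    """Rotate a 32-bit word left by n bits."""
    return ((x << n) | (x >> (32 - n))) & MASK32

def quarter_round(a: int, b: int, c: int, d: int) -> tuple[int, int, int, int]:
    """ChaCha20 quarter round (RFC 7539, Section 2.1) on 32-bit words."""
    a = (a + b) & MASK32; d = rotl32(d ^ a, 16)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 12)
    a = (a + b) & MASK32; d = rotl32(d ^ a, 8)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 7)
    return a, b, c, d

# Test vector from RFC 7539, Section 2.1.1:
assert quarter_round(0x11111111, 0x01020304, 0x9b8d6f43, 0x01234567) == \
    (0xea2a92f4, 0xcb1cf8ce, 0x4581472e, 0x5881c4bb)
```

An actual hacspec spec would additionally carry machine-checked types (e.g. fixed-width word types) that the compilers to F\({}^{\star }\) and EasyCrypt can exploit.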
Formal Verification of Ephemeral Diffie-Hellman Over COSE (EDHOC)
Ephemeral Diffie-Hellman over COSE (EDHOC) [1] is an authentication protocol that aims to replace TLS for resource-constrained Internet of Things (IoT) devices using a selection of lightweight ciphers and formats. It is inspired by the newest Internet Draft of TLS 1.3 [2] and includes reduced round-trip modes. Unlike TLS 1.3, EDHOC is designed from scratch and does not have to support legacy versions of the protocol. Because the protocol is neither well known nor used in practice, it has not been scrutinized to the extent it should be.
The objective of this paper is to verify security properties of the protocol, including integrity, secrecy, and perfect forward secrecy properties. We use ProVerif [3] to analyze these properties formally. We describe violations of specific security properties for the reduced round-trip modes. The flaws were reported to the authors of the EDHOC protocol.
Alessandro Bruni, Thorvald Sahl Jørgensen, Theis Grønbech Petersen, Carsten Schürmann
Experimental Evaluation of Attacks on TESLA-Secured Time Synchronization Protocols
There is an increasingly relevant class of protocols that employ TESLA stream authentication to provide authenticity for one-way time synchronization. For such protocols, an interdependency between synchronization and security has been found to theoretically enable attackers to render the security measures useless. We evaluate to what extent this attack works in practice. To this end, we use a tailor-made configurable testbed implementation to simulate behaviors of TESLA-protected one-way synchronization protocols in hostile networks. In particular, this lets us confirm vulnerabilities to the attack for two published protocols, TinySeRSync and ASTS. Our analysis also yields a set of countermeasures, with which in-development and future specifications can potentially use TESLA to successfully secure one-way time synchronization.
Kristof Teichel, Gregor Hildermeier
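The TESLA mechanism that these protocols build on can be sketched as follows. This is a minimal illustration with made-up names and parameters, not code from the paper or its testbed: the sender commits to a one-way hash chain, MACs each packet with a not-yet-disclosed chain key, and discloses that key one interval later. The attack studied in the paper exploits the fact that deciding whether a key is "not yet disclosed" itself depends on synchronized time.

```python
import hashlib
import hmac

def make_key_chain(seed: bytes, n: int) -> list[bytes]:
    """Build a one-way key chain K_0..K_n with K_{i-1} = H(K_i).
    K_0 is the public commitment; keys are disclosed in order K_1, K_2, ..."""
    chain = [hashlib.sha256(seed).digest()]          # K_n
    for _ in range(n):
        chain.append(hashlib.sha256(chain[-1]).digest())
    chain.reverse()                                   # chain[i] == K_i
    return chain

def verify_disclosed_key(commitment: bytes, key: bytes, i: int) -> bool:
    """Check that hashing `key` i times reaches the commitment K_0."""
    for _ in range(i):
        key = hashlib.sha256(key).digest()
    return key == commitment

# Sender: MAC a sync packet with K_3, disclose K_3 one interval later.
chain = make_key_chain(b"sender-secret", 8)
packet = b"time-sync payload, interval 3"
tag = hmac.new(chain[3], packet, hashlib.sha256).digest()

# Receiver: buffer (packet, tag); once K_3 is disclosed, authenticate both.
assert verify_disclosed_key(chain[0], chain[3], 3)
assert hmac.compare_digest(tag, hmac.new(chain[3], packet, hashlib.sha256).digest())
```

The security condition omitted here is the critical one: the receiver must be certain the packet arrived *before* the key's disclosure time, which is exactly what an attacker who manipulates the time transfer can undermine.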
Co-ordinating Developers and High-Risk Users of Privacy-Enhanced Secure Messaging Protocols
Due to the increased deployment of secure messaging protocols, differences between what developers “believe” are the needs of their users and their actual needs can have real consequences. Based on 90 interviews with both high- and low-risk users, as well as the developers of popular secure messaging applications, we mapped the design choices of the protocols made by developers to the relevance of these features to threat models of both high-risk and low-risk users. Client device seizures are considered more dangerous than compromised servers by high-risk users. Key verification was important to high-risk users, but they often did not engage in cryptographic key verification, instead relying on other “out of band” means. High-risk users, unlike low-risk users, needed pseudonyms and were heavily concerned about metadata collection. Developers tended to value open standards, open-source, and decentralization, but high-risk users found these aspects less urgent given their more pressing concerns.
Harry Halpin, Ksenia Ermoshina, Francesca Musiani
Building Blocks in Standards: Improving Consistency in Standardization with Ontology and Reasoning
International standardization in ICT has grown in importance due to rapid technology development, the increasing need for interoperability, and the global nature of the digital infrastructure. However, technical resources available to international standards bodies have become more limited. With its focus on international collaboration and consensus, the standardization community has not invested significantly in the automation of the process of developing standards. In this paper, we describe potential gains in efficiency with an ontology-based approach and automated reasoning. As part of the exploratory phase of the project, we built a prototype ontology and evaluated the benefits of automated reasoning to improve the process of developing and harmonizing broadly understood ICT assessment standards. The exploratory phase confirmed feasibility and clarified the benefits of the ontology-based approach to standardization, but also highlighted difficulties and unsolved problems in this area.
Marcello Balduccini, Claire Vishik
Defeating the Downgrade Attack on Identity Privacy in 5G
3GPP Release 15, the first 5G standard, includes protection of user identity privacy against IMSI catchers. These protection mechanisms are based on public-key encryption. Despite this protection, IMSI catching is still possible in LTE networks, which opens the possibility of a downgrade attack on user identity privacy, where a fake LTE base station obtains the identity of a 5G user equipment. We propose (i) to use an existing pseudonym-based solution to protect user identity privacy of 5G user equipment against IMSI catchers in LTE and (ii) to include a mechanism for updating LTE pseudonyms in the public-key-encryption-based 5G identity privacy procedure. The latter helps to recover from a loss of synchronization of LTE pseudonyms. Using this mechanism, pseudonyms in the user equipment and home network are automatically synchronized when the user equipment connects to 5G. Our mechanisms utilize existing LTE and 3GPP Release 15 messages and require modifications only in the user equipment and home network in order to provide identity privacy. Additionally, lawful interception requires minor patching in the serving network.
Mohsin Khan, Philip Ginzboorg, Kimmo Järvinen, Valtteri Niemi
Identity Confidentiality in 5G Mobile Telephony Systems
The 3\(^\mathrm{rd}\) Generation Partnership Project (3GPP) recently proposed a standard for 5G telecommunications, containing an identity protection scheme meant to address the long-outstanding privacy problem of permanent subscriber-identity disclosure. The proposal is essentially two disjoint phases: an identification phase, followed by an establishment of security context between mobile subscribers and their service providers via symmetric-key based authenticated key agreement. Currently, 3GPP proposes to protect the identification phase with a public-key based solution, and while the current proposal is secure against a classical adversary, the same would not be true of a quantum adversary. 5G specifications target very long-term deployment scenarios (well beyond the year 2030), therefore it is imperative that quantum-secure alternatives be part of the current specification. In this paper, we present such an alternative scheme for the problem of private identification protection. Our solution is compatible with the current 5G specifications, depending mostly on cryptographic primitives already specified in 5G, adding minimal performance overhead and requiring minor changes in existing message structures. Finally, we provide a detailed formal security analysis of our solution in a novel security framework.
Haibat Khan, Benjamin Dowling, Keith M. Martin
Great Expectations: A Critique of Current Approaches to Random Number Generation Testing & Certification
Random number generators are a critical component of security systems. They also find use in a variety of other applications from lotteries to scientific simulations. Randomness tests, such as NIST’s STS battery (documented in SP 800-22), Marsaglia’s Diehard, and L’Ecuyer et al.’s TestU01 seek to detect whether a generator exhibits any signs of non-random behaviour. However, many statistical test batteries are unable to reliably detect certain issues present in poor generators. Severe mistakes when determining whether a given generator passes the tests are common. Irregularities in sample size selection and a lack of granularity in test result interpretation contribute to this. This work provides evidence of these and other issues in several statistical test batteries. We identify problems with current practices and recommend improvements. The novel concept of suitable randomness is presented, precisely defining two bias bounds for a TRNG, instead of a simple binary pass/fail outcome. Randomness naivety is also introduced, outlining how binary pass/fail analysis cannot express the complexities of RNG output in a manner that is useful to determine whether a generator is suitable for a given range of applications.
Darren Hurley-Smith, Julio Hernandez-Castro
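To make the pass/fail critique concrete, here is a minimal sketch of the SP 800-22 frequency (monobit) test. This is illustrative only, not the paper's methodology; it shows how a continuous p-value gets collapsed into a binary verdict at a fixed significance level, discarding how close a generator sits to the boundary.

```python
import math
import random

def monobit_pvalue(bits: list[int]) -> float:
    """SP 800-22 frequency (monobit) test: p-value for the hypothesis
    that ones and zeros are equally likely in the bit sequence."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)     # excess of ones over zeros
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

rng = random.Random(42)
good = [rng.getrandbits(1) for _ in range(100_000)]
biased = [1 if rng.random() < 0.52 else 0 for _ in range(100_000)]  # 2% bias

p_good = monobit_pvalue(good)
p_biased = monobit_pvalue(biased)

# A battery typically reduces each p-value to pass/fail at, say, alpha = 0.01,
# reporting nothing about the margin by which the generator passed or failed.
verdict_good = p_good >= 0.01
verdict_biased = p_biased >= 0.01
```

A 2% bias at this sample size produces a vanishingly small p-value, but a battery that only reports the binary verdict cannot distinguish it from a generator that failed marginally, which is one facet of the granularity problem the paper examines.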
The New Randomness Beacon Format Standard: An Exercise in Limiting the Power of a Trusted Third Party
We discuss the development of a new format for beacons: servers which provide a sequence of digitally signed and hash-chained public random numbers on a fixed schedule. Users of beacons rely on the trustworthiness of the beacon operators. We consider several possible attacks on the users by the beacon operators, and discuss defenses against those attacks that have been incorporated into the new beacon format. We then analyze and quantify the effectiveness of those defenses.
John Kelsey
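The hash-chaining and signing of pulses described above can be sketched as follows. This is a toy illustration, not the NIST beacon format: field names are made up, and an HMAC stands in for the real digital signature. The chain makes it detectable when an operator retroactively alters an already-published pulse.

```python
import hashlib
import hmac
import json
import secrets

class Beacon:
    """Toy beacon: each pulse commits to the previous pulse via a hash
    chain and is signed (HMAC stands in for a real digital signature)."""
    def __init__(self, signing_key: bytes):
        self.signing_key = signing_key
        self.prev_hash = b"\x00" * 32        # chain anchor for the first pulse

    def pulse(self, index: int) -> dict:
        body = {"index": index,
                "rand": secrets.token_bytes(32).hex(),
                "prev": self.prev_hash.hex()}
        payload = json.dumps(body, sort_keys=True).encode()
        body["sig"] = hmac.new(self.signing_key, payload, hashlib.sha256).hexdigest()
        self.prev_hash = hashlib.sha256(payload).digest()
        return body

def verify_chain(pulses: list[dict], key: bytes) -> bool:
    """Re-check every signature and the hash link between adjacent pulses."""
    prev = b"\x00" * 32
    for p in pulses:
        payload = json.dumps({"index": p["index"], "rand": p["rand"],
                              "prev": p["prev"]}, sort_keys=True).encode()
        ok_sig = hmac.compare_digest(
            p["sig"], hmac.new(key, payload, hashlib.sha256).hexdigest())
        if not (ok_sig and p["prev"] == prev.hex()):
            return False
        prev = hashlib.sha256(payload).digest()
    return True

key = secrets.token_bytes(32)
beacon = Beacon(key)
pulses = [beacon.pulse(i) for i in range(3)]
assert verify_chain(pulses, key)
pulses[1]["rand"] = "00" * 32                # tampering breaks verification
assert not verify_chain(pulses, key)
```

Note what this sketch does *not* defend against: an operator who chooses the "random" values adversarially before signing, which is among the operator attacks the paper analyzes.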
Editors
Cas Cremers
Anja Lehmann
