
About This Book

This book constitutes the refereed proceedings of the 16th International Conference on Applied Cryptography and Network Security, ACNS 2018, held in Leuven, Belgium, in July 2018.

The 36 revised full papers presented were carefully reviewed and selected from 173 submissions. The papers were organized in topical sections named: Cryptographic Protocols; Side Channel Attacks and Tamper Resistance; Digital Signatures; Privacy Preserving Computation; Multi-party Computation; Symmetric Key Primitives; Symmetric Key Cryptanalysis; Public Key Encryption; Authentication and Biometrics; Cloud and Peer-to-peer Security.

Table of Contents

Frontmatter

Cryptographic Protocols

Frontmatter

A Cryptographic Analysis of the WireGuard Protocol

WireGuard (Donenfeld, NDSS 2017) is a recently proposed secure network tunnel operating at layer 3. WireGuard aims to replace existing tunnelling solutions like IPsec and OpenVPN, while requiring less code and being more secure, more performant, and easier to use. The cryptographic design of WireGuard is based on the Noise framework. It makes use of a key exchange component which combines long-term and ephemeral Diffie-Hellman values (along with optional preshared keys). This is followed by the use of the established keys in an AEAD construction to encapsulate IP packets in UDP. To date, WireGuard has received no rigorous security analysis. In this paper, we rectify this. We first observe that, in order to prevent Key Compromise Impersonation (KCI) attacks, any analysis of WireGuard’s key exchange component must take into account the first AEAD ciphertext from initiator to responder. This message effectively acts as a key confirmation and makes the key exchange component of WireGuard a 1.5 RTT protocol. However, the fact that this ciphertext is computed using the established session key rules out a proof of session key indistinguishability for WireGuard’s key exchange component, limiting the degree of modularity that is achievable when analysing the protocol’s security. To overcome this proof barrier, and as an alternative to performing a monolithic analysis of the entire WireGuard protocol, we add an extra message to the protocol. This is done in a minimally invasive way that does not increase the number of round trips needed by the overall WireGuard protocol. This change enables us to prove strong authentication and key indistinguishability properties for the key exchange component of WireGuard under standard cryptographic assumptions.
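
The general shape of such a handshake — chaining static and ephemeral Diffie-Hellman results (plus an optional preshared key) into a session key, with the first data packet doubling as key confirmation — can be sketched as follows. This is a toy illustration only: the mod-p group, SHA-256 derivation, and HMAC "confirmation" stand in for Curve25519, the Noise key schedule, and the AEAD, and are not WireGuard's actual construction.

```python
import hashlib, hmac, secrets

P = 2**127 - 1   # toy prime modulus; WireGuard uses Curve25519
G = 5

def keypair():
    x = secrets.randbelow(P - 3) + 2
    return x, pow(G, x, P)

def dh(priv, pub):
    return pow(pub, priv, P)

def derive(static, eph, peer_static_pub, peer_eph_pub, initiator, psk):
    # Chain all four static/ephemeral DH combinations plus the PSK.
    ee = dh(eph, peer_eph_pub)
    es = dh(eph, peer_static_pub)
    se = dh(static, peer_eph_pub)
    ss = dh(static, peer_static_pub)
    # Both roles must hash the shared values in the same order.
    order = (ee, es, se, ss) if initiator else (ee, se, es, ss)
    material = b"".join(v.to_bytes(16, "big") for v in order)
    return hashlib.sha256(material + psk).digest()

psk = b"optional-preshared-key"
si, Si = keypair(); ei, Ei = keypair()   # initiator static + ephemeral
sr, Sr = keypair(); er, Er = keypair()   # responder static + ephemeral

k_init = derive(si, ei, Sr, Er, True, psk)
k_resp = derive(sr, er, Si, Ei, False, psk)
assert k_init == k_resp                  # both sides share the session key

# The first initiator->responder packet, keyed with the session key,
# acts as the key confirmation discussed in the abstract.
conf = hmac.new(k_init, b"first transport packet", hashlib.sha256).digest()
assert hmac.new(k_resp, b"first transport packet",
                hashlib.sha256).digest() == conf
```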

Benjamin Dowling, Kenneth G. Paterson

Distributed SSH Key Management with Proactive RSA Threshold Signatures

SSH is a secure network protocol that uses public key cryptography for client authentication. SSH connections are designed to be run between a client and a server, and therefore in enterprise networks there is no centralized monitoring of all SSH connections. An attractive method for enforcing such centralized control, audit or even revocation is to require all clients to access a centralized service in order to obtain their SSH keys. The benefits of centralized control come with new challenges in security and availability. In this paper we present ESKM - a distributed enterprise SSH key manager. ESKM is a secure and fault-tolerant logically-centralized SSH key manager. ESKM leverages k-out-of-n threshold security to provide a high level of security. SSH private keys are never stored at any single node, not even when they are used for signing. On a technical level, the system uses k-out-of-n threshold RSA signatures, which are enforced with new methods that refresh the shares in order to achieve proactive security and prevent many side-channel attacks. In addition, we support password-based user authentication with security against offline dictionary attacks, which is achieved using threshold oblivious pseudo-random evaluation. ESKM does not require modification on the server side or to the SSH protocol. We implemented the ESKM system, and a patch for OpenSSL libcrypto for client-side services. We show that the system is scalable and that the overhead in the client connection setup time is marginal.
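
The threshold-signing core — no node ever holds the full key, partial signatures combine into an ordinary RSA signature, and shares can be proactively refreshed — can be sketched as below. This is a toy, not ESKM's actual scheme: it uses additive n-out-of-n sharing instead of true k-out-of-n, tiny illustrative primes, and SHA-256 as the message representative.

```python
import hashlib, math, random

random.seed(0)
# Tiny RSA key -- real deployments use 2048+ bit moduli.
p, q = 61, 53
n = p * q
phi = (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)            # private exponent, never held by one node

def share(secret, parties):
    """Additive sharing: the shares sum to the secret mod phi."""
    shares = [random.randrange(phi) for _ in range(parties - 1)]
    shares.append((secret - sum(shares)) % phi)
    return shares

def refresh(shares):
    """Proactive refresh: add random deltas summing to zero, so shares
    captured before a refresh are useless afterwards."""
    deltas = [random.randrange(phi) for _ in range(len(shares) - 1)]
    deltas.append((-sum(deltas)) % phi)
    return [(s + t) % phi for s, t in zip(shares, deltas)]

def partial_sign(m, share_i):
    return pow(m, share_i, n)  # each node signs with its share only

def combine(partials):
    sig = 1
    for s in partials:
        sig = sig * s % n
    return sig

# Hash-then-sign an SSH-style challenge (kept coprime to n for the toy math).
m = int.from_bytes(hashlib.sha256(b"ssh-auth-challenge").digest(), "big") % n
while math.gcd(m, n) != 1:
    m += 1

shares = refresh(share(d, 3))          # deal shares, then refresh once
sig = combine([partial_sign(m, s) for s in shares])
assert sig == pow(m, d, n)             # identical to ordinary RSA signing
assert pow(sig, e, n) == m             # verifies with the public key
```

Because the product of the partial signatures equals m raised to the sum of the shares, refreshing the shares changes every node's state without ever reconstructing d or changing the resulting signature.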

Yotam Harchol, Ittai Abraham, Benny Pinkas

Non-interactive Zaps of Knowledge

While non-interactive zero-knowledge (NIZK) proofs require trusted parameters, Groth, Ostrovsky and Sahai constructed non-interactive witness-indistinguishable (NIWI) proofs without any setup; they called their scheme a non-interactive zap. More recently, Bellare, Fuchsbauer and Scafuro investigated the security of NIZK in the face of parameter subversion and observed that NI zaps provide subversion-resistant soundness and WI. Arguments of knowledge prove not only that the statement is true, but also that the prover knows a witness for it, which is essential for anonymous identification. We present the first NIWI argument of knowledge without parameters, i.e., a NI zap of knowledge. Consequently, our scheme is also the first subversion-resistant knowledge-sound proof system, a notion recently proposed by Fuchsbauer.

Georg Fuchsbauer, Michele Orrù

Side Channel Attacks and Tamper Resistance

Frontmatter

Formal Verification of Side-Channel Countermeasures via Elementary Circuit Transformations

We describe a technique to formally verify the security of masked implementations against side-channel attacks, based on elementary circuit transforms. We describe two complementary approaches: a generic approach for the formal verification of any circuit, but for small attack orders only, and a specialized approach for the verification of specific circuits, but at any order. We also show how to generate security proofs automatically, for simple circuits. We describe the implementation of CheckMasks, a formal verification tool for side-channel countermeasures. Using this tool, we formally verify the security of the Rivain-Prouff countermeasure for AES, and also the recent Boolean to arithmetic conversion algorithms from CHES 2017.
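A miniature version of the generic approach — enumerate every wire of a masked gadget and check, exhaustively over inputs and randomness, that any single probe's distribution is independent of the secrets — can be sketched as below. The gadget is the classic 2-share ISW AND; the tool described above handles arbitrary circuits and higher orders, so this sketch only illustrates the first-order idea.

```python
from itertools import product

def isw_and_wires(a0, a1, b0, b1, r):
    """All intermediate values (probe targets) of a 2-share ISW AND."""
    t0 = a0 & b0
    t1 = a1 & b1
    t2 = a0 & b1
    t3 = a1 & b0
    c0 = t0 ^ r                      # output share 0
    c1 = t1 ^ (r ^ t2 ^ t3)          # output share 1
    return [a0, a1, b0, b1, r, t0, t1, t2, t3, t2 ^ r, r ^ t2 ^ t3, c0, c1]

def first_order_secure():
    n_wires = len(isw_and_wires(0, 0, 0, 0, 0))
    for w in range(n_wires):
        # Distribution of wire w, for each choice of the secrets (a, b).
        dists = {}
        for a, b in product((0, 1), repeat=2):
            counts = [0, 0]
            for a0, b0, r in product((0, 1), repeat=3):
                wires = isw_and_wires(a0, a ^ a0, b0, b ^ b0, r)
                counts[wires[w]] += 1
            dists[(a, b)] = tuple(counts)
        if len(set(dists.values())) != 1:    # distribution depends on secrets
            return False
    return True

print(first_order_secure())   # True: no single wire leaks a or b
```

Dropping the fresh randomness r from c0 and c1 makes the check fail, which is exactly the kind of flaw such formal verification is meant to catch.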

Jean-Sébastien Coron

Drive-By Key-Extraction Cache Attacks from Portable Code

We show how malicious web content can extract cryptographic secret keys from the user’s computer. The attack uses portable scripting languages supported by modern browsers to induce contention for CPU cache resources, and thereby gleans information about the memory accesses of other programs running on the user’s computer. We show how this side-channel attack can be realized in WebAssembly and PNaCl; how to attain fine-grained measurements; and how to extract ElGamal, ECDH and RSA decryption keys from various cryptographic libraries. The attack does not rely on bugs in the browser’s nominal sandboxing mechanisms, or on fooling users. It applies even to locked-down platforms with strong confinement mechanisms and browser-only functionality, such as Chromebook devices. Moreover, on browser-based platforms the attacked software itself may be written in portable JavaScript; and we show that in this case even implementations of supposedly-secure constant-time algorithms, such as Curve25519’s, are vulnerable to our attack.

Daniel Genkin, Lev Pachmanov, Eran Tromer, Yuval Yarom

On the Ineffectiveness of Internal Encodings - Revisiting the DCA Attack on White-Box Cryptography

The goal of white-box cryptography is to implement cryptographic algorithms securely in software in the presence of an adversary that has complete access to the software’s program code and execution environment. In particular, white-box cryptography needs to protect the embedded secret key from being extracted. Bos et al. (CHES 2016) introduced differential computational analysis (DCA), the first automated attack on white-box cryptography. The DCA attack performs a statistical analysis on execution traces. These traces contain information, such as memory addresses or register values, that is collected via binary instrumentation tooling during the encryption process. The white-box implementations that were attacked by Bos et al., as well as white-box implementations that have been described in the literature, protect the embedded key by using internal encoding techniques introduced by Chow et al. (SAC 2002). Thereby, a combination of linear and non-linear nibble encodings is used to protect the secret key. In this paper we analyse the use of such internal encodings and prove rigorously that they are too weak to protect against DCA. We prove that the use of non-linear nibble encodings does not hide key-dependent correlations, so that a DCA attack succeeds with high probability.

Estuardo Alpirez Bock, Chris Brzuska, Wil Michiels, Alexander Treff

Continuously Non-malleable Codes with Split-State Refresh

Non-malleable codes for the split-state model allow one to encode a message into two parts, such that arbitrary independent tampering on each part, and subsequent decoding of the corresponding modified codeword, yields either the original message or a completely unrelated value. Continuously non-malleable codes further tolerate an unbounded (polynomial) number of tampering attempts, until a decoding error happens. The drawback is that, after an error happens, the system must self-destruct and stop working, otherwise generic attacks become possible. In this paper we propose a solution to this limitation, by leveraging a split-state refreshing procedure. Namely, whenever a decoding error happens, the two parts of an encoding can be locally refreshed (i.e., without any interaction), which avoids the self-destruct mechanism. An additional feature of our security model is that it directly captures security against continual leakage attacks. We give an abstract framework for building such codes in the common reference string model, and provide a concrete instantiation based on the external Diffie-Hellman assumption. Finally, we explore applications in which our notion turns out to be essential. The first application is a signature scheme tolerating an arbitrary polynomial number of split-state tampering attempts, without requiring a self-destruct capability, and in a model where refreshing of the memory happens only after an invalid output is produced. This circumvents an impossibility result from a recent work by Fujisaki and Xagawa (Asiacrypt 2016). The second application is a compiler for tamper-resilient RAM programs. In comparison to other tamper-resilient compilers, ours has several advantages, among which the fact that, for the first time, it does not rely on the self-destruct feature.

Antonio Faonio, Jesper Buus Nielsen, Mark Simkin, Daniele Venturi

Digital Signatures

Frontmatter

Efficient Unconditionally Secure Signatures Using Universal Hashing

Digital signatures are one of the most important cryptographic primitives. In this work we construct an information-theoretically secure signature scheme which, unlike prior schemes, enjoys a number of advantageous properties such as short signature length and high generation efficiency, to name two. In particular, we extend symmetric-key message authentication codes (MACs) based on universal hashing to make them transferable, a property absent from traditional MAC schemes. Our main results are summarised as follows. We construct an unconditionally secure signature scheme which, unlike prior schemes, does not rely on a trusted third party or anonymous channels. We prove information-theoretic security of our scheme against forging, repudiation, and non-transferability. We compare our scheme with both existing “classical” (not employing quantum mechanics) and quantum unconditionally secure signature schemes. The comparison shows that our new scheme, despite requiring fewer resources, is much more efficient than all previous schemes. Finally, although our scheme does not rely on trusted third parties, we show that introducing one makes our scheme even more attractive.
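
The symmetric building block being extended — a universal-hash MAC with information-theoretic security — can be sketched as a polynomial-evaluation MAC over a prime field. The parameters below are illustrative; the paper's construction adds transferability on top of this kind of primitive.

```python
import random

P = 2**61 - 1                     # Mersenne prime field (illustrative)

def keygen(rng):
    """A one-time key (a, b): hash point a and one-time mask b."""
    return rng.randrange(P), rng.randrange(P)

def mac(key, blocks):
    a, b = key
    h = 0
    for m in blocks:              # Horner evaluation of the message
        h = (h * a + m) % P       # polynomial at the secret point a
    return (h + b) % P            # mask the hash with the one-time key b

rng = random.Random(42)
key = keygen(rng)
msg = [314, 159, 265]
tag = mac(key, msg)
assert mac(key, msg) == tag                 # deterministic verification
assert mac(key, [314, 159, 266]) != tag     # a forgery must guess a:
                                            # success prob <= len(msg)/P
```

Security here is unconditional: two distinct messages collide only if a is a root of their difference polynomial, which happens with probability at most len(msg)/P over the key, regardless of the attacker's computing power.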

Ryan Amiri, Aysajan Abidin, Petros Wallden, Erika Andersson

Floppy-Sized Group Signatures from Lattices

We present the first lattice-based group signature scheme whose cryptographic artifacts are of size small enough to be usable in practice: for a group of $2^{25}$ users, signatures take 910 kB and public keys are 501 kB. Our scheme builds upon two recently proposed lattice-based primitives: the verifiable encryption scheme by Lyubashevsky and Neven (Eurocrypt 2017) and the signature scheme by Boschini, Camenisch, and Neven (IACR ePrint 2017). To achieve such short signatures and keys, we first re-define verifiable encryption to allow one to encrypt a function of the witness, rather than the full witness. This definition enables more efficient realizations of verifiable encryption and is of independent interest. Second, to minimize the size of the signatures and public keys of our group signature scheme, we revisit the proof of knowledge of a signature and the proofs in the verifiable encryption scheme provided in the respective papers.

Cecilia Boschini, Jan Camenisch, Gregory Neven

On the Security Notions for Homomorphic Signatures

Homomorphic signature schemes allow anyone to perform computation on signed data in such a way that the correctness of the computation’s results is publicly certified. In this work we analyze the security notions for this powerful primitive considered in previous work, with a special focus on adaptive security. Motivated by the complications of existing security models in the adaptive setting, we consider a simpler and (at the same time) stronger security definition inspired by that proposed by Gennaro and Wichs (ASIACRYPT’13) for homomorphic MACs. In addition to strength and simplicity, this definition has the advantage of enabling the adoption of homomorphic signatures in dynamic data outsourcing scenarios, such as delegation of computation on data streams. Then, since no existing homomorphic signature satisfies this stronger notion, our main technical contributions are general compilers which turn a homomorphic signature scheme secure under a weak definition into one secure under the new stronger notion. Our compilers are totally generic with respect to the underlying scheme. Moreover, they preserve three important properties of homomorphic signatures: composability, context-hiding (i.e. signatures on a computation’s output do not reveal information about the input) and efficient verification (i.e. verifying a signature against a program $\mathcal{P}$ can be made faster, in an amortized, asymptotic sense, than recomputing $\mathcal{P}$ from scratch).

Dario Catalano, Dario Fiore, Luca Nizzardo

Invisible Sanitizable Signatures and Public-Key Encryption are Equivalent

Sanitizable signature schemes are signature schemes which support the delegation of modification rights. The signer can allow a sanitizer to perform a set of admissible operations on the original message and then to update the signature, in such a way that basic security properties like unforgeability or accountability are preserved. Recently, Camenisch et al. (PKC 2017) devised new schemes with the previously unattained invisibility property. This property says that the set of admissible operations for the sanitizer remains hidden from outsiders. Subsequently, Beck et al. (ACISP 2017) gave an even stronger version of this notion and constructions achieving it. Here we characterize the invisibility property in both forms by showing that invisible sanitizable signatures are equivalent to IND-CPA-secure encryption schemes, and strongly invisible signatures are equivalent to IND-CCA2-secure encryption schemes. The equivalence is established by proving that invisible (resp. strongly invisible) sanitizable signature schemes yield IND-CPA-secure (resp. IND-CCA2-secure) public-key encryption schemes and that, vice versa, we can build (strongly) invisible sanitizable signatures given a corresponding public-key encryption scheme.

Marc Fischlin, Patrick Harasser

Delegatable Attribute-Based Anonymous Credentials from Dynamically Malleable Signatures

We introduce the notion of delegatable attribute-based anonymous credentials (DAAC). Such systems offer fine-grained anonymous access control and they give the credential holder the ability to issue more restricted credentials to other users. In our model, credentials are parameterized with attributes that (1) express what the credential holder himself has been certified and (2) define which attributes he may issue to others. Furthermore, we present a practical construction of DAAC. For this construction, we deviate from the usual approach of embedding a certificate chain in the credential. Instead, we introduce a novel approach for which we identify a new primitive we call dynamically malleable signatures (DMS) as the main ingredient. This primitive may be of independent interest. We also give a first instantiation of DMS with efficient protocols.

Johannes Blömer, Jan Bobolz

Privacy Preserving Computation

Frontmatter

Privacy-Preserving Ridge Regression with only Linearly-Homomorphic Encryption

Linear regression with 2-norm regularization (i.e., ridge regression) is an important statistical technique that models the relationship between some explanatory values and an outcome value using a linear function. In many applications (e.g., predictive modeling in personalized health-care), these values represent sensitive data owned by several different parties who are unwilling to share them. In this setting, training a linear regression model becomes challenging and needs specific cryptographic solutions. This problem was elegantly addressed by Nikolaenko et al. in S&P (Oakland) 2013. They suggested a two-server system that uses linearly-homomorphic encryption (LHE) and Yao’s two-party protocol (garbled circuits). In this work, we propose a novel system that can train a ridge linear regression model using only LHE (i.e., without using Yao’s protocol). This greatly improves the overall performance (both in computation and communication) as Yao’s protocol was the main bottleneck in the previous solution. The efficiency of the proposed system is validated both on synthetically-generated and real-world datasets.
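
The mechanism can be sketched end-to-end with a toy additively homomorphic (Paillier-style) cryptosystem: each party encrypts its local sufficient statistics, a server aggregates the ciphertexts blindly, and the closed-form ridge solution is computed on the decrypted sums. The tiny primes, single-feature model, and two-hospital scenario are illustrative assumptions, not the paper's system.

```python
import math, random

p, q = 47, 59                      # toy primes (real keys: 1024+ bits each)
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def dec(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

def add(c1, c2):
    return c1 * c2 % n2            # ciphertext product = plaintext sum

# Two data owners hold disjoint rows of (x, y); each sends only the
# encrypted sufficient statistics sum(x*x) and sum(x*y).
parties = [[(1, 2), (2, 4)], [(3, 6), (4, 8)]]
cxx, cxy = enc(0), enc(0)
for rows in parties:
    cxx = add(cxx, enc(sum(x * x for x, _ in rows)))
    cxy = add(cxy, enc(sum(x * y for x, y in rows)))

# Closed-form 1-D ridge solution w = sum(xy) / (sum(x^2) + lambda),
# computed after decrypting only the aggregates, never the raw rows.
ridge_lambda = 1
w = dec(cxy) / (dec(cxx) + ridge_lambda)
print(round(w, 3))                 # 60 / 31 ≈ 1.935
```

The key property used is that multiplying Paillier ciphertexts adds their plaintexts, so the server learns nothing beyond the final sums it is asked to decrypt.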

Irene Giacomelli, Somesh Jha, Marc Joye, C. David Page, Kyonghwan Yoon

Privacy-Preserving Plaintext-Equality of Low-Entropy Inputs

Confidentiality requires keeping information away from the eyes of non-legitimate users, while practicality necessitates making information usable for authorized users. The former issue is addressed with cryptography and encryption schemes. The combination of both has been shown to be possible with advanced techniques that permit computations on encrypted data. Searchable encryption concentrates on the problem of extracting specific information from a ciphertext. In this paper, we focus on a concrete use-case where sensitive tokens (medical records) allow third parties to find matching properties (compatible organ donor) without revealing more information than necessary (contact information). We reduce such a case to the plaintext-equality problem. But in our particular application, the message space is of limited size or, equivalently, the entropy of the plaintexts is small: existing public-key solutions are not fully satisfactory. We then propose a suitable security model, and give an instantiation with an appropriate security analysis.

Sébastien Canard, David Pointcheval, Quentin Santos, Jacques Traoré

Nothing Refreshes Like a RePSI: Reactive Private Set Intersection

Private Set Intersection (PSI) is a popular cryptographic primitive that allows two parties, a client and a server, to compute the intersection of their private sets, so that the client only receives the output of the computation, while the server learns nothing besides the size of the client’s set. A common limitation of PSI is that a dishonest client can progressively learn the server’s set by enumerating it over different executions. Although these “oracle attacks” do not formally violate security according to traditional secure computation definitions, in practice, they often hamper real-life deployment of PSI instantiations, especially if the server’s set does not change much over multiple interactions. In a first step to address this problem, this paper presents and studies the concept of Reactive PSI (RePSI). We model PSI as a reactive functionality, whereby the output depends on previous instances, and use it to limit the effectiveness of oracle attacks. We introduce a general security model for RePSI in the (augmented) semi-honest model and a construction which enables the server to control how many inputs have been used by the client across several executions. In the process, we also present the first practical construction of a Size-Hiding PSI (SHI-PSI) protocol in the standard model, which may be of independent interest.
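
The underlying (plain, non-reactive) PSI functionality can be sketched with the classic Diffie-Hellman-style construction: each side masks hashed items with a secret exponent, and equal items end up with equal double-masked values. The toy mod-p group and the dict keyed by raw items are brevity assumptions; RePSI itself adds the reactive rate-limiting described above on top of such a primitive.

```python
import hashlib, secrets

P = 2**127 - 1                    # toy prime; practical PSI uses EC groups

def H(item: str) -> int:
    h = int.from_bytes(hashlib.sha256(item.encode()).digest(), "big")
    return h % (P - 1) + 1        # hash items into [1, P-1]

client_set = {"alice@a", "bob@b", "carol@c"}
server_set = {"bob@b", "carol@c", "dave@d"}
a = secrets.randbelow(P - 3) + 2  # client's secret exponent
b = secrets.randbelow(P - 3) + 2  # server's secret exponent

# 1. Client masks its items and sends them.
client_masked = {x: pow(H(x), a, P) for x in client_set}
# 2. Server raises the client's values to b, and masks its own items.
#    (Keyed by x only for brevity -- the server never sees raw items.)
double_masked = {x: pow(v, b, P) for x, v in client_masked.items()}
server_masked = [pow(H(y), b, P) for y in server_set]
# 3. Client finishes the DH on the server's items and intersects:
#    H(x)^(ab) == H(y)^(ba) exactly when x == y.
finished = {pow(s, a, P) for s in server_masked}
intersection = {x for x, v in double_masked.items() if v in finished}
print(sorted(intersection))       # ['bob@b', 'carol@c']
```

Note that nothing here stops a client from enumerating fresh sets across executions — precisely the oracle attack that motivates the reactive treatment.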

Andrea Cerulli, Emiliano De Cristofaro, Claudio Soriente

Multi-party Computation

Frontmatter

New Protocols for Secure Equality Test and Comparison

Protocols for securely comparing private values are among the most fundamental building blocks of multiparty computation. First introduced by Yao under the name of the millionaires’ problem, they have found numerous applications in a variety of privacy-preserving protocols; however, due to their inherent non-arithmetic structure, existing constructions often remain an important bottleneck in large-scale secure protocols. In this work, we introduce new protocols for securely computing the greater-than and the equality predicate between two parties. Our protocols rely solely on the existence of oblivious transfer, and are UC-secure against passive adversaries. Furthermore, our protocols are well suited for use in large-scale secure computation protocols, where secure comparisons (SC) and equality tests (ET) are commonly used as basic routines: they perform particularly well in an amortized setting, and can be preprocessed efficiently (they enjoy an extremely efficient, information-theoretic online phase). We perform a detailed comparison of our protocols to the state of the art, showing that they improve over the most practical existing solutions regarding both communication and computation, while matching the asymptotic efficiency of the best theoretical constructions.

Geoffroy Couteau

Minimising Communication in Honest-Majority MPC by Batchwise Multiplication Verification

In this paper, we present two new and very communication-efficient protocols for maliciously secure multi-party computation over fields in the honest-majority setting with abort. Our first protocol improves a recent protocol by Lindell and Nof. Using the so far overlooked tool of batchwise multiplication verification, we speed up their technique for checking correctness of multiplications (with some other improvements), reducing communication by 2× to 7×. In particular, in the 3PC setting, each party sends only two field elements per multiplication. We also show how to achieve fairness, which Lindell and Nof left as an open problem. Our second protocol again applies batchwise multiplication verification, this time to perform 3PC by letting two parties perform the SPDZ protocol using triples generated by a third party and verified batchwise. In this protocol, each party sends only 4/3 field elements during the online phase and 5/3 field elements during the preprocessing phase.

Peter Sebastian Nordholt, Meilof Veeningen

Best of Both Worlds in Secure Computation, with Low Communication Overhead

When performing a secure multiparty computation with a few hundred parties, using the best protocols known today, bandwidth constraints are the primary bottleneck. A long line of work demonstrates that n parties can compute a circuit C of depth d while communicating $O(|C|\log|C| + \mathsf{poly}(d, n))$ field elements per party, as long as a majority of parties are honest. However, in the malicious-majority setting, a lot less is known. The work of Nielsen and Ranellucci is the first to provide constant overhead in the communication complexity when a majority of parties are malicious; their result demonstrates feasibility, but is quite complex and impractical. In this work, we construct a new MPC protocol in the pre-processing model. We introduce a new middle-ground: our protocol has low communication and provides robustness when a majority of parties are honest, and gives security with abort (possibly with higher communication cost) when a majority of players are malicious. Robustness is impossible when a majority of parties are malicious; viewing the increased communication complexity as a form of denial of service, similar to an abort, we view our result as providing the “best of both worlds”.

Daniel Genkin, S. Dov Gordon, Samuel Ranellucci

3PC ORAM with Low Latency, Low Bandwidth, and Fast Batch Retrieval

Multi-Party Computation of Oblivious RAM (MPC ORAM) implements secret-shared random access memory in a way that protects access pattern privacy against a threshold of corruptions. MPC ORAM enables secure computation of any RAM program on large data held by different entities, e.g. MPC processing of database queries on a secret-shared database. MPC ORAM can be constructed from any (client-server) ORAM, but there is an efficiency gap between known MPC ORAMs and ORAMs. The current asymptotically best MPC ORAM is implied by an “MPC friendly” variant of Path-ORAM [26] called Circuit-ORAM, due to Wang et al. [27]. However, using garbled circuits for Circuit-ORAM’s client implies an MPC ORAM which matches Path-ORAM in rounds but increases bandwidth by an $\Omega(\kappa)$ factor, while using GMW or BGW protocols implies an MPC ORAM which matches Path-ORAM in bandwidth, but increases round complexity by an $\Omega(\log n \log\log n)$ factor, where $\kappa$ is a security parameter and $n$ is the memory size. In this paper we bridge the gap between MPC ORAM and client-server ORAM by showing a specialized 3PC ORAM protocol, i.e. an MPC ORAM for 3 parties tolerating 1 fault, which uses only symmetric ciphers and asymptotically matches client-server Path-ORAM in round complexity and, for large records, also in bandwidth. Our 3PC ORAM also allows for fast pipelined processing: with postponed clean-up it processes $b = O(\log n)$ accesses in $O(b + \log n)$ rounds with $O(D + \mathsf{poly}(\log n))$ bandwidth per item, where $D$ is the record size.

Stanislaw Jarecki, Boyang Wei

Symmetric Key Primitives

Frontmatter

MergeMAC: A MAC for Authentication with Strict Time Constraints and Limited Bandwidth

This paper presents MergeMAC, a MAC that is particularly suitable for environments with strict time requirements and extremely limited bandwidth. MergeMAC computes the MAC by splitting the message into two parts. We use a pseudorandom function (PRF) to map messages to random bit strings and then merge them with a very efficient keyless function. The advantage of this approach is that the outputs of the PRF can be cached for frequently needed message parts. We demonstrate the merits of MergeMAC for authenticating messages on the CAN bus where bandwidth is extremely limited and caching can be used to recover parts of the message counter instead of transmitting it. We recommend an instantiation of the merging function Merge and analyze the security of our construction. Requirements for a merging function are formally defined and the resulting EUF-CMA security of MergeMAC is proven.
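
The split-PRF-then-merge shape, including the caching of PRF outputs for frequently repeated message parts, can be sketched as below. HMAC-SHA256 stands in for the PRF, and XOR stands in for Merge purely for illustration — XOR is symmetric in its inputs and therefore not a secure merging function; the paper constructs and analyses a dedicated one.

```python
import functools, hashlib, hmac

@functools.lru_cache(maxsize=1024)       # cache PRF outputs per message part
def prf(key: bytes, part: bytes) -> bytes:
    return hmac.new(key, part, hashlib.sha256).digest()

def merge_mac(key: bytes, part1: bytes, part2: bytes) -> bytes:
    a, b = prf(key, part1), prf(key, part2)
    return bytes(x ^ y for x, y in zip(a, b))   # keyless Merge stand-in

key = b"\x00" * 16
# CAN-style frames: a static header (cacheable) plus a fresh counter part.
tag1 = merge_mac(key, b"ecu=engine|cmd=throttle", b"ctr=0001")
tag2 = merge_mac(key, b"ecu=engine|cmd=throttle", b"ctr=0002")
assert tag1 != tag2                      # changing the counter changes the tag
assert prf.cache_info().hits == 1        # header PRF was computed only once
```

The cache hit shows the bandwidth/latency benefit: only the short, changing part of the message triggers a fresh PRF evaluation.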

Ralph Ankele, Florian Böhl, Simon Friedberger

KangarooTwelve: Fast Hashing Based on Keccak-p

We present KangarooTwelve, a fast and secure arbitrary output-length hash function aiming at a higher speed than the FIPS 202 SHA-3 and SHAKE functions. While sharing many features with SHAKE128, like the cryptographic primitive, the sponge construction, the eXtendable Output Function (XOF) and the 128-bit security strength, KangarooTwelve offers two major improvements over its standard counterpart. First, it has a built-in parallel mode that efficiently exploits multi-core or SIMD instruction parallelism for long messages, without impacting the performance for short messages. Second, relying on the cryptanalysis results on Keccak over the past ten years, we tuned its permutation to require half the computation effort while still offering a comfortable safety margin. By combining these two changes, KangarooTwelve consumes less than 0.55 cycles/byte for long messages on the latest Intel® Skylake-X architectures. The generic security of KangarooTwelve is guaranteed by the use of Sakura encoding for the tree hashing and of the sponge construction for the compression function.
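
The tree ("kangaroo hopping") mode can be illustrated with a toy re-implementation on top of SHAKE128 from the standard library, which stands in for the reduced-round sponge. The chunk size matches KangarooTwelve's 8192 bytes, but the domain-separation bytes and Sakura framing below are simplified assumptions, so this does not produce real KangarooTwelve digests.

```python
import hashlib

CHUNK = 8192                               # K12 processes 8192-byte chunks

def toy_k12(msg: bytes, out_len: int = 32) -> bytes:
    if len(msg) <= CHUNK:                  # short input: one sponge call,
        return hashlib.shake_128(msg + b"\x07").digest(out_len)  # no tree
    chunks = [msg[i:i + CHUNK] for i in range(0, len(msg), CHUNK)]
    trunk = hashlib.shake_128()
    trunk.update(chunks[0] + b"\x03")      # first chunk feeds the trunk
    for leaf in chunks[1:]:                # leaves are independent, so a real
        cv = hashlib.shake_128(leaf + b"\x0b").digest(32)   # implementation
        trunk.update(cv)                   # can hash them in parallel / SIMD
    trunk.update(len(chunks).to_bytes(8, "little") + b"\xff\xff\x06")
    return trunk.digest(out_len)

d1 = toy_k12(b"abc")                       # short path
d2 = toy_k12(b"x" * 100_000)               # 13 chunks -> tree mode
assert len(d1) == len(d2) == 32
assert toy_k12(b"x" * 100_000) == d2       # deterministic XOF
```

The structural point is that only the per-leaf chaining values enter the final node, which is what makes long-message hashing parallel without changing the short-message fast path.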

Guido Bertoni, Joan Daemen, Michaël Peeters, Gilles Van Assche, Ronny Van Keer, Benoît Viguier

Symmetric Key Cryptanalysis

Frontmatter

Related-Key Boomerang Attacks on Full ANU Lightweight Block Cipher

This paper presents related-key attacks against the lightweight block cipher ANU, which requires only 1015 gate equivalents for a 128-bit key, fewer than all existing lightweight ciphers. The design of ANU appears to be a mixture of other decent lightweight ciphers such as Simon, PRESENT, Piccolo, TWINE etc.; however, the security arguments, especially against related-key attacks, are not theoretically supported. In this paper, we observe that the mixture of a Simon-like round function and a PRESENT-like key schedule function causes a very sparse differential trail that avoids non-linear updates in the key schedule function. By exploiting it, a distinguishing attack against full-round ANU works with only $2^{19}$ queries in the related-key setting; the attack is verified by our machine experiment. This also leads to a key recovery attack for a 128-bit key with $2^{112}$ computations.

Yu Sasaki

Generic Round-Function-Recovery Attacks for Feistel Networks over Small Domains

Feistel Networks (FN) are now being used massively to encrypt credit card numbers through format-preserving encryption. In our work, we focus on FN with two branches, entirely unknown round functions, modular additions (or other group operations), and when the domain size of a branch (called ) is small. We investigate round-function-recovery attacks. The best known attack so far is an improvement of the Meet-In-The-Middle (MITM) attack by Isobe and Shibutani from ASIACRYPT 2013 with optimal data complexity and time complexity , where is the round number in FN. We construct an algorithm with a surprisingly better complexity when is too low, based on partial exhaustive search. When the data complexity varies from the optimal to that of a codebook attack , our time complexity can reach . It crosses the complexity of the improved MITM for . We also estimate the lowest secure number of rounds depending on and the security goal. We show that the format-preserving-encryption schemes FF1 and FF3 standardized by NIST and ANSI cannot offer 128-bit security (as they are supposed to) for and , respectively (the NIST standard only requires ), and we improve the results of Durak and Vaudenay from CRYPTO 2017.
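
The attack setting — a two-branch Feistel network over a small domain with modular additions and entirely unknown round functions, recovered from known data — can be sketched as below. The naive exhaustive recovery over a toy 2-round, domain-size-4 instance only illustrates why small domains are dangerous; the attacks above are far more sophisticated and reach many more rounds.

```python
import itertools, random

N, R = 4, 2                               # branch domain size and rounds
random.seed(7)
secret = [[random.randrange(N) for _ in range(N)] for _ in range(R)]

def encrypt(l, r, round_fns):
    for f in round_fns:
        l, r = r, (l + f[r]) % N          # one round: swap + modular addition
    return l, r

# The attacker collects the full codebook (only N*N = 16 pairs here).
codebook = {(l, r): encrypt(l, r, secret)
            for l in range(N) for r in range(N)}

# Exhaust all N^N candidate tables per round function (256 each here)
# and keep those consistent with every plaintext/ciphertext pair.
candidates = [
    (f0, f1)
    for f0 in itertools.product(range(N), repeat=N)
    for f1 in itertools.product(range(N), repeat=N)
    if all(encrypt(l, r, (f0, f1)) == c for (l, r), c in codebook.items())
]
# The true round functions are among the surviving candidates.
assert (tuple(secret[0]), tuple(secret[1])) in candidates
```

Even this brute force is only N^(N·r) work, which collapses for small N — the same qualitative phenomenon behind the bounds on FF1 and FF3 discussed above.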

F. Betül Durak, Serge Vaudenay

Differential Cryptanalysis of Round-Reduced Sparx-64/128

Sparx is a family of ARX-based block ciphers designed according to the long-trail strategy (LTS), both of which were introduced by Dinu et al. at ASIACRYPT’16. Similar to the wide-trail strategy, the LTS allows provable upper bounds on the length of differential characteristics and linear paths. Thus, the cipher is a highly interesting target for third-party cryptanalysis. However, the only third-party cryptanalysis on Sparx-64/128 to date was given by Abdelkhalek et al. at AFRICACRYPT’17, who proposed impossible-differential attacks on 15 and 16 (out of 24) rounds. In this paper, we present chosen-ciphertext differential attacks on 16 rounds of Sparx-64/128. First, we show a truncated-differential analysis that requires $2^{32}$ chosen ciphertexts and approximately $2^{93}$ encryptions. Second, we illustrate the effectiveness of boomerangs on Sparx by a rectangle attack that requires approximately $2^{59.6}$ chosen ciphertexts and about $2^{122.2}$ encryption equivalents. Finally, we also consider a yoyo attack on 16 rounds that, however, requires the full codebook and approximately $2^{126}$ encryption equivalents.

Ralph Ankele, Eik List

Can Caesar Beat Galois?

Robustness of CAESAR Candidates Against Nonce Reusing and High Data Complexity Attacks

The Competition for Authenticated Encryption: Security, Applicability and Robustness (CAESAR) has as its official goal to “identify a portfolio of authenticated ciphers that offer advantages over [the Galois-Counter Mode with AES] and are suitable for widespread adoption.” Each of the 15 candidate schemes competing in the currently ongoing $$ {3}^{\text {rd}} $$ round of CAESAR must clearly declare its security claims, i.e. whether it can tolerate nonce misuse and what is the maximal data complexity for which security is guaranteed. These claims appear to be valid for all 15 candidates. Interpreting “robustness” in CAESAR as the ability to mitigate damage when security guarantees are void, we describe attacks with 64-bit complexity or above, and/or with nonce reuse, for each of the 15 candidates. We then classify the candidates depending on how powerful an attacker needs to be to mount (semi-)universal forgeries, decryption attacks, or key recoveries. Rather than invalidating the security claims of any of the candidates, our results provide an additional criterion for evaluating the security that candidates deliver, which can be useful, e.g., for breaking ties in the final CAESAR discussions.

Serge Vaudenay, Damian Vizár

Public Key Encryption

Frontmatter

Improved Anonymous Broadcast Encryptions

Tight Security and Shorter Ciphertext

We investigate anonymous broadcast encryption (ANOBE), in which a ciphertext hides not only the message but also the target recipients associated with it. Following Libert et al.’s generic construction [PKC, 2012], we propose two concrete ANOBE schemes with tight reductions and better space efficiency. The IND-CCA security and anonymity of our two ANOBE schemes can be tightly reduced to the standard k-Linear assumption (and the existence of other primitives). For a broadcast system with n users, Libert et al.’s security analysis suffers from an $$O(n^3)$$ loss, while our security loss is constant. Our first ANOBE scheme supports fast decryption and has a shorter ciphertext than the fast-decryption version of Libert et al.’s concrete ANOBE. Our second scheme is adapted from the first: we sacrifice the fast-decryption feature and achieve ciphertexts shorter than Libert et al.’s concrete ANOBE with the help of bilinear groups. Technically, we start from an instantiation of Libert et al.’s generic ANOBE [PKC, 2012], but we work out all our proofs from scratch instead of relying on their generic security result. This intuitively allows our optimizations in the concrete setting.

Jiangtao Li, Junqing Gong

Time-Based Direct Revocable Ciphertext-Policy Attribute-Based Encryption with Short Revocation List

In this paper, we propose an efficient revocable Ciphertext-Policy Attribute-Based Encryption (CP-ABE) scheme. We adopt the direct revocation approach, embedding the revocation list into the ciphertext. However, since the revocation list grows longer as time goes by, we address this by proposing a secret-key time-validation technique: users have their keys expire on a fixed date, and the revocation list only needs to include those user keys revoked before their intended expiry date (e.g. keys that were stolen before expiry). These keys can be removed from the revocation list after their expiry date in order to keep the list short, as they can no longer be used to decrypt ciphertexts generated after their expiry time. The technique is derived from the Hierarchical Identity-Based Encryption (HIBE) mechanism, so time periods form a hierarchy: year, month, day. A user whose key is valid for a whole year can decrypt any ciphertext associated with any month or day within that year. Using this technique, the size of the public parameters and of the user secret key can be greatly reduced. A bonus advantage of this technique is the support of discontinuous user validity (e.g. taking unpaid leave).
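The hierarchical coverage logic can be sketched as follows (a simplified model of the time hierarchy only, with hypothetical period strings; the paper's actual construction realizes this cryptographically via HIBE key delegation):

```python
def covers(key_period, ct_period):
    """A key issued for 'YYYY' or 'YYYY-MM' decrypts any ciphertext whose
    time period lies beneath it in the year/month/day hierarchy."""
    return ct_period == key_period or ct_period.startswith(key_period + "-")

def usable(key_period, revoked, ct_period):
    """A key works iff it covers the ciphertext's period and is not on the
    (short) revocation list of keys revoked before their expiry."""
    return covers(key_period, ct_period) and key_period not in revoked
```

Keys whose period has fully elapsed never need to appear on the list, since `covers` fails for every later ciphertext period, which is exactly why the revocation list stays short.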

Joseph K. Liu, Tsz Hon Yuen, Peng Zhang, Kaitai Liang

Almost Tight Multi-Instance Multi-Ciphertext Identity-Based Encryption on Lattices

Boyen and Li [AsiaCrypt, 2016] proposed the first almost tightly secure lattice identity-based encryption scheme in the standard model. The security of their scheme is proved under the learning with errors assumption in the single-instance, single-challenge setting. In this work, we show how to extend the Boyen-Li scheme to obtain an almost tight security reduction in the multi-instance, multi-ciphertext setting, in which the security loss incurred is $$\textsf {poly}(\kappa )$$ in the security parameter $$\kappa $$ and independent of the number of adversarial queries.

Xavier Boyen, Qinyi Li

Authentication and Biometrics

Frontmatter

In-Region Authentication

Location information has wide applications in customization and personalization of services, as well as secure authentication and access control. We introduce in-Region Authentication (inRA), a novel type of authentication that allows a prover to prove to a set of cooperating verifiers that it is in possession of the correct secret key and is inside a specified (policy) region of arbitrary shape. These requirements naturally arise when a privileged service is offered to registered users within an area. Locating a prover without assuming a GPS (Global Positioning System) signal, however, incurs error. We discuss the challenge of designing secure protocols that have quantifiable error in this setting, define and formalize correctness and security properties of the protocols, and propose a systematic approach to designing a family of protocols with provable security where error can be flexibly defined and efficiently minimized. We give an instance of this family that requires only two verifiers, prove its security, and evaluate its performance in four typical policy regions. Our results show that in all cases false-acceptance and false-rejection rates below $$6\%$$ can be achieved. We compare our results with related works and propose directions for future research.
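Once the verifiers have estimated the prover's position, the final policy decision reduces to a point-in-region test. A standard ray-casting sketch for an arbitrary polygonal region (an illustrative geometric building block, not the paper's protocol) looks like:

```python
def in_region(point, polygon):
    """Ray-casting (even-odd) test: is the estimated prover position
    inside the policy region given as a list of (x, y) vertices?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count edge crossings of a horizontal ray cast to the right.
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside
```

Because the position estimate carries error, a real decision rule would accept or reject with some margin around the region boundary; that margin is precisely where the false-acceptance/false-rejection trade-off of the paper arises.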

Mamunur Rashid Akand, Reihaneh Safavi-Naini

Formal Analysis of Distance Bounding with Secure Hardware

A distance bounding (DB) protocol is a two-party authentication protocol between a prover and a verifier which is based on the distance between the prover and the verifier. It aims to defeat threats from malicious provers who try to convince the verifier that they are closer than they really are, and from adversaries who seek to impersonate a far-away prover. These threats are covered by several security definitions, and no single definition covers them all. In this paper, we describe a new DB model with three parties, where the new party is called the hardware. In this model, called the secure hardware model (SHM), the hardware is held by the prover, who cannot tamper with it. We define an all-in-one security model which covers all the threats of DB, together with an appropriate privacy notion for the SHM. Finally, we construct the most efficient (in terms of computation by the prover-hardware and number of rounds) secure DB protocols achieving the optimal security bounds as well as privacy.
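The physical principle every DB protocol rests on can be sketched in a few lines (a simplified model of the timing bound only; real protocols interleave this with rapid cryptographic challenge-response rounds):

```python
C = 299_792_458.0  # speed of light in m/s; signals travel no faster

def upper_bound_distance(rtt_seconds, processing_delay=0.0):
    """The prover cannot be farther than c * (rtt - t_proc) / 2, since the
    challenge and response together traverse the distance twice."""
    return C * max(rtt_seconds - processing_delay, 0.0) / 2

def accept(rtt_seconds, radius_m, processing_delay=0.0):
    """Verifier accepts only if the timing bound places the prover
    within the allowed radius."""
    return upper_bound_distance(rtt_seconds, processing_delay) <= radius_m
```

A malicious prover can delay its responses to appear farther away, but can never respond early enough to appear closer, which is why round-trip timing yields a sound upper bound on distance.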

Handan Kılınç, Serge Vaudenay

KRB-CCN: Lightweight Authentication and Access Control for Private Content-Centric Networks

Content-Centric Networking (CCN) is an internetworking paradigm that offers an alternative to today’s IP-based Internet architecture. Instead of focusing on hosts and their locations, CCN emphasizes addressable named content. By decoupling content from its location, CCN allows opportunistic in-network content caching, thus enabling better network utilization, at least for scalable content distribution. However, in order to be considered seriously, CCN must support basic security services, including content authenticity, integrity, confidentiality, authorization and access control. Current approaches rely on content producers to perform authorization and access control, which is typically attained via public key encryption. This general approach has several disadvantages. First, consumer privacy vis-a-vis producers is not preserved. Second, identity management and access control impose high computational overhead on producers. Also, unnecessary repeated authentication and access control decisions must be made for each content request. (This burden is particularly relevant for resource-limited producers, e.g., anemic IoT devices.) These issues motivate our design of KRB-CCN – a complete authorization and access control system for private CCN networks. Inspired by Kerberos in IP-based networks, KRB-CCN involves distinct authentication and authorization authorities. By doing so, KRB-CCN obviates the need for producers to make consumer authentication and access control decisions. KRB-CCN preserves consumer privacy since producers are unaware of consumer identities. Producers are also not required to keep any hard state and only need to perform two symmetric key operations to guarantee that sensitive content is confidentially delivered only to authenticated and authorized consumers. Furthermore, KRB-CCN works transparently on the consumer side. Most importantly, unlike prior designs, KRB-CCN leaves the network (i.e., CCN routers) out of any authorization, access control or confidentiality issues. We describe the KRB-CCN design and implementation, analyze its security, and report on its performance.

Ivan O. Nunes, Gene Tsudik

Assentication: User De-authentication and Lunchtime Attack Mitigation with Seated Posture Biometric

Biometric techniques are often used as an extra security factor in authenticating human users. Numerous biometrics have been proposed and evaluated, each with its own set of benefits and pitfalls. Static biometrics (such as fingerprints) are geared for discrete operation, to identify users, which typically involves some user burden. Meanwhile, behavioral biometrics (such as keystroke dynamics) are well-suited for continuous and more unobtrusive operation. One important application domain for biometrics is de-authentication: a means of quickly detecting the absence of a previously authenticated user and immediately terminating that user’s secure sessions. De-authentication is crucial for mitigating so-called Lunchtime Attacks, whereby an insider adversary takes over the authenticated state of a careless user who leaves her computer. Motivated primarily by the need for an unobtrusive and continuous biometric to support effective de-authentication, we introduce Assentication – a new hybrid biometric based on a human user’s seated posture pattern. Assentication captures a unique combination of physiological and behavioral traits. We describe a low-cost, fully functioning prototype that involves an office chair instrumented with 16 tiny pressure sensors. We also explore (via user experiments) how Assentication can be used in a typical workplace to provide continuous authentication (and de-authentication) of users. We experimentally assess the viability of Assentication in terms of uniqueness by collecting and evaluating posture patterns of a cohort of 30 users. Results show that Assentication yields very low false accept and false reject rates. In particular, users can be identified with $$94.2\%$$ and $$91.2\%$$ accuracy using 16 and 10 sensors, respectively.

Tyler Kaczmarek, Ercan Ozturk, Gene Tsudik

Cloud and Peer-to-Peer Security

Frontmatter

Stateful Multi-client Verifiable Computation

This paper develops an asynchronous cryptographic protocol for outsourcing arbitrary stateful computation among multiple clients to an untrusted server, while guaranteeing integrity of the data. The clients communicate only with the server and merely store a short authenticator to ensure that the server does not cheat. Our contribution is two-fold. First, we extend the recent hash&prove scheme of Fiore et al. (CCS 2016) to stateful computations that support arbitrary updates by the untrusted server, in a way that can be verified by the clients. We use this scheme to generically instantiate authenticated data types. Second, we describe a protocol for multi-client verifiable computation based on an authenticated data type, and prove that it achieves a computational version of fork linearizability. This is the strongest guarantee that can be achieved in the setting where clients do not communicate directly; it ensures correctness and consistency of outputs seen by the clients individually.

Christian Cachin, Esha Ghosh, Dimitrios Papadopoulos, Björn Tackmann

VeriCount: Verifiable Resource Accounting Using Hardware and Software Isolation

In cloud computing, where clients are billed based on the consumed resources for outsourced tasks, both the cloud providers and the clients have the incentive to manipulate claims about resource usage. Both desire an accurate and verifiable resource accounting system, which is neutral and can be trusted to refute any disputes. In this work, we present VeriCount—a verifiable resource accounting system coupled with refutable billing support for Linux container-based applications. To protect VeriCount logic, we propose a novel approach called self-accounting that combines hardware-based isolation guarantees from trusted computing mechanisms and software fault isolation techniques. The self-accounting engine in VeriCount leverages security features present in trusted computing solutions, such as Intel SGX, to measure user CPU time, memory, I/O bytes and network bandwidth while simultaneously detecting resource usage inflation attacks. We claim three main results. First, VeriCount incurs an average performance overhead of 3.62% and 16.03% over non-accounting but SGX-compatible applications in hardware and simulation mode respectively. Next, it contributes only an additional 542 lines of code to the trusted computing base. Lastly, it generates highly accurate, fine-grained resource accounting, with no discernible difference to the resource measuring tool available with the OS.

Shruti Tople, Soyeon Park, Min Suk Kang, Prateek Saxena

Message-Locked Encryption with File Update

Message-locked encryption (MLE) (formalized by Bellare et al. [5]) is an important cryptographic primitive that supports deduplication in the cloud. Updatable block-level message-locked encryption (UMLE) (formalized by Zhao and Chow [13]) adds update functionality to MLE. In this paper, we formalize and extensively study a new cryptographic primitive, file-updatable message-locked encryption (FMLE). FMLE can be viewed as a generalization of UMLE, in the sense that, unlike the latter, the former does not require the existence of BL-MLE (block-level message-locked encryption). FMLE allows more flexibility and more efficient methods for updating the ciphertext and tag. Our second contribution is the design of two efficient FMLE constructions, namely, RevD-1 and RevD-2, whose design principles are inspired by the very unique reverse decryption functionality of the FP hash function (designed by Paul et al. [11]) and the APE authenticated encryption (designed by Andreeva et al. [2]). Compared with UMLE – which provides the most efficient update function so far – RevD-1 and RevD-2 reduce the total update time by at least 50%, on average. Additionally, our constructions are storage efficient. We also give an extensive comparison between our constructions and existing ones.
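The core MLE idea can be illustrated with the classic convergent-encryption sketch: the key is derived from the message itself, so identical files produce identical ciphertexts and the cloud can deduplicate without reading them. (This is an illustrative baseline construction, not the paper's RevD-1/RevD-2; the SHA-256-based keystream is an assumption for a self-contained example.)

```python
import hashlib

def _keystream(key, n):
    """Derive n pseudorandom bytes from the key (counter-mode SHA-256)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def mle_encrypt(message):
    """Convergent encryption: key = H(m), ciphertext = m XOR keystream(key),
    tag = H(ciphertext) serves as the deduplication handle."""
    key = hashlib.sha256(message).digest()
    ct = bytes(m ^ k for m, k in zip(message, _keystream(key, len(message))))
    tag = hashlib.sha256(ct).hexdigest()
    return key, ct, tag

def mle_decrypt(key, ct):
    """XOR with the same keystream recovers the message."""
    return bytes(c ^ k for c, k in zip(ct, _keystream(key, len(ct))))
```

The difficulty FMLE addresses is visible here: changing even one byte of the message changes the key, and hence the entire ciphertext and tag, which is what makes efficient in-place file updates non-trivial.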

Suyash Kandele, Souradyuti Paul

DogFish: Decentralized Optimistic Game-theoretic FIle SHaring

Peer-to-peer (p2p) file sharing accounts for most of the uplink bandwidth use on the Internet. Therefore, over the past few decades, many solutions have tried to come up with better proposals to increase the social welfare of the participants. Social welfare in such systems is generally measured as average download time or uplink bandwidth utilization. One of the most influential proposals was BitTorrent. Yet, soon after, studies showed that BitTorrent has several problems that incentivize selfish users to game the system and hence decrease social welfare. Previous work, unfortunately, did not develop a system that maximizes social welfare in a decentralized manner (without a trusted party getting involved in every exchange) while making the proposed strategy, with honest piece revelation, the only equilibrium for rational players. This is what we achieve: we model a general class of p2p file-sharing systems theoretically, show that honest piece revelation helps achieve social welfare, and introduce a new cryptographic primitive, called randomized fair exchange, to instantiate our solution.

Seny Kamara, Alptekin Küpçü

Backmatter
