
About This Book

This book constitutes the refereed proceedings of the Cryptographer's Track at the RSA Conference 2020, CT-RSA 2020, held in San Francisco, CA, USA, in February 2020.

The 28 papers presented in this volume were carefully reviewed and selected from 95 submissions.

CT-RSA is the track devoted to scientific papers on cryptography, ranging from public-key to symmetric-key cryptography and from cryptographic protocols to primitives and their implementation security.



Generic Attack on Iterated Tweakable FX Constructions

Tweakable block ciphers are increasingly becoming a common primitive for building new resilient modes, as well as a concept for multiple dedicated designs. While regular block ciphers define a family of permutations indexed by a secret key, tweakable ones define a family of permutations indexed by both a secret key and a public tweak. In this work we formalize and study a generic framework for building such a tweakable block cipher from regular block ciphers, the iterated tweakable FX construction, which includes many previous constructions of tweakable block ciphers. Then we describe a cryptanalysis from which we can derive a provable security upper-bound for all constructions following this iterated tweakable FX strategy. Concretely, the cryptanalysis of r rounds of our generic construction based on n-bit block ciphers with \(\kappa \)-bit keys requires \(\mathcal {O}(2^{\frac{r}{r+1}(n + \kappa )})\) online and offline queries. For \(r=2\) rounds this interestingly matches the proof of the particular case of \({\texttt {XHX2}}\) by Lee and Lee (ASIACRYPT 2018), thus proving its tightness for the first time. In turn, the \({\texttt {XHX}}\) and \({\texttt {XHX2}}\) proofs show that our generic cryptanalysis is information-theoretically optimal for 1 and 2 rounds.
Ferdinand Sibleyras
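The query complexity stated in the abstract above can be illustrated numerically. A minimal sketch (the parameter choice n = κ = 128 is illustrative, not taken from the paper):

```python
def attack_queries_log2(r: int, n: int, kappa: int) -> float:
    """Base-2 logarithm of the online+offline query count
    O(2^{(r/(r+1)) * (n + kappa)}) for the generic attack on an
    r-round iterated tweakable FX construction built from n-bit
    block ciphers with kappa-bit keys."""
    return (r / (r + 1)) * (n + kappa)

# Illustrative parameters (an assumption for this sketch): n = kappa = 128.
for r in (1, 2, 3):
    print(f"r={r}: about 2^{attack_queries_log2(r, 128, 128):.1f} queries")
```

As the abstract notes, the exponent grows with the number of rounds r, approaching the full n + κ bits as r increases.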

Universal Forgery Attack Against GCM-RUP

Authenticated encryption (AE) schemes are widely used to secure communications because they can guarantee both confidentiality and authenticity of a message. In addition to the standard AE security notion, some recent schemes offer extra robustness, i.e. they maintain security in some misuse scenarios. In particular, Ashur, Dunkelman and Luykx proposed a generic AE construction at CRYPTO’17 that is secure even when releasing unverified plaintext (the RUP setting), and a concrete instantiation, GCM-RUP. The designers proved that GCM-RUP is secure up to the birthday bound in the nonce-respecting model.
In this paper, we perform a birthday-bound universal forgery attack against GCM-RUP, matching the bound of the proof. While there are simple distinguishing attacks with birthday complexity on GCM-RUP, our attack is much stronger: we have a partial key recovery leading to universal forgeries. For reference, the best known universal forgery attack against GCM requires \(2^{2n/3}\) operations, and many schemes do not have any known universal forgery attacks faster than \(2^n\). This suggests that GCM-RUP offers a different security trade-off than GCM: stronger protection in the RUP setting, but more fragile when the data complexity reaches the birthday bound. In order to avoid this attack, we suggest a minor modification of GCM-RUP that seems to offer better robustness at the birthday bound.
Yanbin Li, Gaëtan Leurent, Meiqin Wang, Wei Wang, Guoyan Zhang, Yu Liu

My Gadget Just Cares for Me - How NINA Can Prove Security Against Combined Attacks

Differential Power Analysis and Differential Fault Analysis threaten the security of even the most trustworthy cryptographic primitives. It is important to protect their implementations so that no sensitive information is leaked through side channels and so that they withstand injected faults or combined physical attacks.
In this work, we propose security notions tailored to advanced physical attacks consisting of both faults and probes on circuit wires. We then transform these security notions into composable security notions. The motivation for this research includes reduced verification time; the creation of secure components; and the isolation of primitives in larger protocols such as modes of operation. We dub our notion NINA, which forms the link between the established Non-Interference (NI) property and our composable active security property, Non-Accumulation (NA).
To illustrate the NINA property, we use it to prove the security of two multiplication gadgets: an error checking duplication gadget and an error correcting duplication gadget. The NINA proofs for error detecting gadgets capture the effect of Statistical Ineffective Fault Analysis (SIFA), an attack vector which threatens most current masked implementations. Additionally, we study error correcting techniques. We show that error correcting gadgets can attain the Independent NINA property, a stronger property which captures a clear separation between the effects of faults and probes. Thus, we show that clever error correcting gadgets improve on error detecting ones by achieving significantly higher levels of combined security along with guaranteed output delivery.
Siemen Dhooghe, Svetla Nikova

Modeling Memory Faults in Signature and Authenticated Encryption Schemes

Memory fault attacks, inducing errors in computations, have been an ever-evolving threat to cryptographic schemes since their discovery for cryptography by Boneh et al. (Eurocrypt 1997). Initially, such attacks required physical tampering with hardware, but the software-based rowhammer attack put forward by Kim et al. (ISCA 2014) enabled fault attacks also through malicious software running on the same host machine. This led to concerning novel attack vectors, for example on deterministic signature schemes, whose approach of avoiding dependency on (good) randomness renders them vulnerable to fault attacks, as has been demonstrated in realistic adversarial settings in a series of recent works. However, a unified formalism of different memory fault attacks, which would also enable arguing the security of countermeasures, has so far been missing.
In this work, we suggest a generic extension for existing security models that enables a game-based treatment of cryptographic fault resilience. Our modeling specifies exemplary memory fault attack types of different strength, ranging from random bit-flip faults to differential (rowhammer-style) faults to full adversarial control on indicated memory variables. We apply our model first to deterministic signatures to revisit known fault attacks as well as to establish provable guarantees of fault resilience for proposed fault-attack countermeasures. In a second application to nonce-misuse resistant authenticated encryption, we provide the first fault-attack treatment of the SIV mode of operation and give a provably secure fault-resilient variant.
Marc Fischlin, Felix Günther

Cryptanalysis of the Multivariate Encryption Scheme EFLASH

EFLASH is a multivariate public-key encryption scheme proposed by Cartor and Smith-Tone at SAC 2018. In this paper we investigate the hardness of solving the particular equation systems arising from EFLASH, and show that the solving degree for these types of systems is much lower than estimated by the authors. We show that a Gröbner basis algorithm will produce degree fall polynomials at a low degree for EFLASH systems. In particular we are able to accurately predict the number of these polynomials occurring at step degrees 3 and 4 in our attacks. We performed several experiments using the computer algebra system MAGMA, which indicate that the solving degree is at most one higher than the one where degree fall polynomials occur; moreover, our experiments show that whenever the predicted number of degree fall polynomials is positive, it is exact. Our conclusion is that EFLASH does not offer the level of security claimed by the designers. In particular, we estimate that the EFLASH version with 80-bit security parameters offers at most 69 bits of security.
Morten Øygarden, Patrick Felke, Håvard Raddum, Carlos Cid

\( \mathsf {FPL} \): White-Box Secure Block Cipher Using Parallel Table Look-Ups

In this work, we propose a new table-based block cipher structure, dubbed \( \mathsf {FPL} \), that can be used to build white-box secure block ciphers. Our construction is a balanced Feistel cipher, where the input to each round function determines multiple indices for the underlying table via a probe function, and the sum of the values from the table becomes the output of the round function. We identify the properties of the probe function that make the resulting block cipher white-box secure in terms of weak and strong space hardness against known-space and non-adaptive chosen-space attacks. Our construction, enjoying rigorous provable security without relying on any ideal primitive, provides flexibility to the block size and the table size, and permits parallel table look-ups.
We also propose a concrete instantiation of \( \mathsf {FPL} \), dubbed \( \mathsf {FPL}_{\mathsf {AES}} \), using (round-reduced) \(\mathsf {AES}\) for the underlying table and probe functions. Our implementation shows that \( \mathsf {FPL}_{\mathsf {AES}} \) provides stronger security without significant loss of efficiency, compared to existing schemes including \(\mathsf {SPACE}\), \(\mathsf {WhiteBlock}\) and \(\mathsf {WEM}\).
Jihoon Kwon, Byeonghak Lee, Jooyoung Lee, Dukjae Moon

Extending NIST’s CAVP Testing of Cryptographic Hash Function Implementations

This paper describes a vulnerability in Apple’s CoreCrypto library, which affects 11 out of the 12 implemented hash functions: every implemented hash function except MD2 (Message Digest 2), as well as several higher-level operations such as the Hash-based Message Authentication Code (HMAC) and the Ed25519 signature scheme. The vulnerability is present in each of Apple’s CoreCrypto libraries that are currently validated under FIPS 140-2 (Federal Information Processing Standard). For inputs of about \(2^{32}\) bytes (4 GiB) or more, the implementations do not produce the correct output, but instead enter into an infinite loop. The vulnerability shows a limitation in the Cryptographic Algorithm Validation Program (CAVP) of the National Institute of Standards and Technology (NIST), which currently does not perform tests on hash functions for inputs larger than 65 535 bits. To overcome this limitation of NIST’s CAVP, we introduce a new test type called the Large Data Test (LDT). The LDT detects vulnerabilities similar to that in CoreCrypto in implementations submitted for validation under FIPS 140-2.
Nicky Mouha, Christopher Celi
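The failure mode described above concerns inputs of roughly \(2^{32}\) bytes (4 GiB) or more. The testing pattern a Large Data Test exercises, incremental hashing of a long message, can be sketched as follows (a hypothetical helper using Python's hashlib as a stand-in for the library under test; this is not NIST's actual LDT specification, and the input size is kept small here for illustration):

```python
import hashlib

def hash_large_input(chunk: bytes, repetitions: int, algorithm: str = "sha256") -> str:
    """Feed `chunk` to the hash `repetitions` times via incremental
    updates, the way a Large Data Test would stream a multi-gigabyte
    message through an implementation under validation."""
    h = hashlib.new(algorithm)
    for _ in range(repetitions):
        h.update(chunk)
    return h.hexdigest()

# Sanity check: incremental hashing must match one-shot hashing.
chunk = b"a" * 1024
assert hash_large_input(chunk, 8) == hashlib.sha256(chunk * 8).hexdigest()
```

A real LDT would drive the total message length past the 32-bit byte-count boundary, the region where, per the paper, the affected CoreCrypto implementations enter an infinite loop instead of producing a digest.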

A Fast Characterization Method for Semi-invasive Fault Injection Attacks

Semi-invasive fault injection attacks are powerful techniques well known to attackers and secure embedded system designers. When performing such attacks, the selection of the fault injection parameters is of utmost importance and is usually based on the experience of the attacker. Surprisingly, there exists no formal and general approach to characterize the target's behavior under attack. In this work, we present a novel methodology to perform a fast characterization of the fault injection impact on a target, depending on the possible attack parameters. We experimentally show our methodology to be successful when targeting different algorithms such as DES and AES encryption, and then extend it to a full characterization with the help of deep learning. Finally, we show how the characterization results are transferable between different targets.
Lichao Wu, Gerard Ribera, Noemie Beringuier-Boher, Stjepan Picek

Tightly Secure Two-Pass Authenticated Key Exchange Protocol in the CK Model

Tightly secure authenticated key exchange (AKE), whose security is independent of the number of users and sessions (tight security), has been studied by Bader et al. [TCC 2015] and Gjøsteen-Jager [CRYPTO 2018] in the Bellare-Rogaway (BR) model. However, how to achieve tight security in stronger models (e.g., the Canetti-Krawczyk (CK) model and the extended Canetti-Krawczyk (eCK) model) has so far remained an open problem.
In this paper, we investigate this problem in the CK model. We start from a generic construction [ACISP 2008] based on key encapsulation mechanisms (KEMs). We analyze why it cannot achieve a tight reduction by merely assuming the underlying KEMs are secure in the multi-user and multi-challenge setting with corruption, as Bader et al. [TCC 2015] and Gjøsteen-Jager [CRYPTO 2018] did. Then we put forward a new generic construction that overcomes the potential obstacles.
In addition, we introduce a strong type of chosen ciphertext attack in the multi-user and multi-challenge setting with corruption for tag-based key encapsulation mechanisms (TB-KEMs), where adversaries are allowed not only to adaptively corrupt users' secret keys and generate multiple challenges with different coins, but also to open some challenges. We further prove that the Naor-Yung transform also works in this model, hence our generic construction can be instantiated.
Yuting Xiao, Rui Zhang, Hui Ma

Symmetric-Key Authenticated Key Exchange (SAKE) with Perfect Forward Secrecy

Key exchange protocols in the asymmetric-key setting are known to provide stronger security properties than protocols in symmetric-key cryptography. In particular, they can provide perfect forward secrecy, as illustrated by key exchange protocols based on the Diffie-Hellman scheme. However, public-key algorithms are too heavy for low-resource devices, which therefore cannot benefit from forward secrecy. In this paper, we describe a scheme that solves this issue. Using a shrewd resynchronisation technique, we propose an authenticated key exchange protocol in the symmetric-key setting that guarantees perfect forward secrecy. We prove that the protocol is sound, and provide a formal proof of its security.
Gildas Avoine, Sébastien Canard, Loïc Ferreira

TMPS: Ticket-Mediated Password Strengthening

We introduce the notion of TMPS: Ticket-Mediated Password Strengthening, a technique for allowing users to derive keys from passwords while imposing a strict limit on the number of guesses of their password any attacker can make, and strongly protecting the users’ privacy. We describe the security requirements of TMPS, and then a set of efficient and practical protocols to implement a TMPS scheme, requiring only hash functions, CCA2-secure encryption, and blind signatures. We provide several variant protocols, including an offline symmetric-only protocol that uses a local trusted computing environment, and online variants that use group signatures or stronger trust assumptions instead of blind signatures. We formalize the security of our scheme by defining an ideal functionality in the Universal Composability (UC) framework, and by providing game-based definitions of security. We prove that our protocol realizes the ideal functionality in the random oracle model (ROM) under adaptive corruptions with erasures, and prove that security with respect to the ideal/real definition implies security with respect to the game-based definitions.
John Kelsey, Dana Dachman-Soled, Sweta Mishra, Meltem Sönmez Turan

Overdrive2k: Efficient Secure MPC over \(\mathbb {Z}_{2^k}\) from Somewhat Homomorphic Encryption

Recently, Cramer et al. (CRYPTO 2018) presented a protocol, \(\mathsf {SPDZ2k}\), for actively secure multiparty computation for dishonest majority in the pre-processing model over the ring \(\mathbb {Z}_{2^k}\), instead of over a prime field \(\mathbb {F}_p\). Their technique used oblivious transfer for the pre-processing phase, more specifically the MASCOT protocol (Keller et al. CCS 2016). In this paper we describe a more efficient technique for secure multiparty computation over \(\mathbb {Z}_{2^k}\) based on somewhat homomorphic encryption. In particular we adapt the Overdrive approach (Keller et al. EUROCRYPT 2018) to obtain a protocol which is more like the original SPDZ protocol (Damgård et al. CRYPTO 2012). To accomplish this we introduce a special packing technique for the BGV encryption scheme operating on the plaintext space defined by the \(\mathsf {SPDZ2k}\) protocol, extending the ciphertext packing method used in SPDZ to the case of \(\mathbb {Z}_{2^k}\). We also present a more complete pre-processing phase for secure computation modulo \(2^k\) by adding a new technique to produce shared random bits.
Emmanuela Orsini, Nigel P. Smart, Frederik Vercauteren

SoK: A Consensus Taxonomy in the Blockchain Era

Consensus is arguably one of the most fundamental problems in distributed computing, playing also an important role in the area of cryptographic protocols as the enabler of a secure broadcast functionality. While the problem has a long and rich history and has been analyzed from many different perspectives, recently, with the advent of blockchain protocols like Bitcoin, it has experienced renewed interest from a much wider community of researchers and has seen its application expand to various novel settings.
One of the main issues in consensus research is the many different variants of the problem that exist as well as the various ways the problem behaves when different setup, computational assumptions and network models are considered. In this work we perform a systematization of knowledge in the landscape of consensus research in the Byzantine failure model starting with the original formulation in the early 1980s up to the present blockchain-based new class of consensus protocols. Our work is a roadmap for studying the consensus problem under its many guises, classifying the way it operates in the various settings and highlighting the exciting new applications that have emerged in the blockchain era.
Juan Garay, Aggelos Kiayias

Consensus from Signatures of Work

Assuming the existence of a public-key infrastructure (PKI), digital signatures are a fundamental building block in the design of secure consensus protocols with optimal resilience. More recently, with the advent of blockchain protocols like Bitcoin, consensus has been considered in the “permissionless” setting where no authentication or even point-to-point communication is available. Yet, despite some positive preliminary results, all attempts to formalize a building block that is sufficient for designing consensus protocols in this setting, rely on a very strong independence assumption about adversarial accesses to the underlying computational resource.
In this work, we relax this assumption by putting forth a primitive, which we call signatures of work (SoW). Distinctive features of our new notion are a lower bound on the number of steps required to produce a signature; fast verification; moderate unforgeability—producing a sequence of SoWs, for chosen messages, does not provide an advantage to an adversary in terms of running time; and honest signing time independence—most relevant in concurrent multi-party applications, as we show.
Armed with SoW, we then present a new permissionless consensus protocol which is secure assuming an honest majority of computational power, thus in a sense providing a blockchain counterpart to the classical Dolev-Strong consensus protocol. The protocol is built on top of a SoW-based blockchain and standard properties of the underlying hash function, thus improving on the known provably secure consensus protocols in this setting, which rely on the strong independence property mentioned above in a fundamental way.
Juan A. Garay, Aggelos Kiayias, Giorgos Panagiotakos

Faster Homomorphic Encryption is not Enough: Improved Heuristic for Multiplicative Depth Minimization of Boolean Circuits

In somewhat homomorphic encryption schemes (e.g. B/FV, BGV) the size of ciphertexts and the execution performance of homomorphic operations depend heavily on the multiplicative depth. The multiplicative depth is the maximal number of consecutive multiplications for which the homomorphic encryption scheme was parameterized.
In this work we improve on a heuristic for multiplicative depth minimization of Boolean circuits found in the literature. In particular, a new circuit rewriting operator is introduced, the so-called cone rewrite operator. The results we obtain using the new method are relevant in terms of both accuracy and performance: the multiplicative depths for a benchmark of Boolean circuits are significantly improved, and the execution time of the new heuristic is significantly lower. The proposed rewrite operator and heuristic are not limited to Boolean circuits, but can also be used for arithmetic circuits.
Pascal Aubry, Sergiu Carpov, Renaud Sirdey
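The multiplicative-depth metric the abstract above minimizes can be sketched as follows (a toy depth computation only, not the paper's cone rewrite heuristic; the circuit encoding is an assumption made for illustration):

```python
def multiplicative_depth(circuit, node, memo=None):
    """Multiplicative depth of a Boolean circuit node: the maximum
    number of AND gates on any input-to-node path. `circuit` maps a
    node name to (op, [input names]); inputs have op 'IN'. AND counts
    toward depth; XOR (and similarly NOT) is depth-free under
    homomorphic evaluation."""
    if memo is None:
        memo = {}
    if node in memo:
        return memo[node]
    op, inputs = circuit[node]
    if op == "IN":
        depth = 0
    else:
        depth = max(multiplicative_depth(circuit, i, memo) for i in inputs)
        if op == "AND":
            depth += 1
    memo[node] = depth
    return depth

# Example: out = (a AND b) AND (c XOR d) has multiplicative depth 2,
# since the XOR contributes nothing.
circuit = {
    "a": ("IN", []), "b": ("IN", []), "c": ("IN", []), "d": ("IN", []),
    "t1": ("AND", ["a", "b"]),
    "t2": ("XOR", ["c", "d"]),
    "out": ("AND", ["t1", "t2"]),
}
print(multiplicative_depth(circuit, "out"))  # prints 2
```

A depth-minimizing rewrite, in this picture, restructures the circuit so that the longest AND chain feeding any output becomes shorter, which directly shrinks the homomorphic parameters.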

Better Bootstrapping for Approximate Homomorphic Encryption

After Cheon et al. (Asiacrypt 2017) proposed an approximate homomorphic encryption scheme, HEAAN, for operations on encrypted real (or complex) numbers, the scheme has been widely used in a variety of fields with privacy-preserving needs in data analysis. Subsequently, a bootstrapping method for HEAAN was proposed by Cheon et al. (Eurocrypt 2018), with the modulus reduction replaced by a sine function. In this paper, we generalize the Full-RNS variant of HEAAN proposed by Cheon et al. (SAC 2019) to reduce the number of temporary moduli used in key-switching. As a result, our scheme can support deeper computations without bootstrapping while ensuring the same level of security.
We also propose a new polynomial approximation method to evaluate a sine function in an encrypted state, which is specialized for the bootstrapping for HEAAN. Our method considers a ratio between the size of a plaintext and the size of a ciphertext modulus. Consequently, it requires a smaller number of non-scalar multiplications, which is about half of the Chebyshev method.
With our variant of the Full-RNS scheme and the new sine evaluation method, we give the first implementation of bootstrapping for a Full-RNS variant of an approximate homomorphic encryption scheme. Our method enables bootstrapping for a plaintext in the space \({\mathbb {C}}^{16384}\) to be completed in 52 s while preserving 11-bit precision of each slot.
Kyoohyung Han, Dohyeong Ki

Improved Secure Integer Comparison via Homomorphic Encryption

Secure integer comparison has been one of the first problems introduced in cryptography, both for its simplicity to describe and for its applications. The first formulation of the problem was to enable two parties to compare their inputs without revealing the exact value of those inputs, also called the Millionaires’ problem [45]. The recent rise of fully homomorphic encryption has given a new formulation to this problem. In this new setting, one party blindly computes an encryption of the boolean \((a<b)\) given only ciphertexts encrypting a and b.
In this paper, we present new solutions for the problem of secure integer comparison in both of these settings. The underlying idea for both schemes is to avoid decomposing the integers into binary in order to improve performance. On the one hand, our solution based on fully homomorphic encryption is inspired by [9], and makes use of the fast bootstrapping techniques developed in [12, 14, 23] to obtain scalability to large integers while preserving high efficiency. On the other hand, our solution to the original Millionaires' problem is inspired by the protocol of [10], based on partially homomorphic encryption. We tweak their protocol to minimize the number of interactions required, while preserving the advantage of comparing non-binary integers.
Both our techniques provide efficient solutions to the problem of secure integer comparison for large (even a-priori unbounded in our first scenario) integers with minimum interactions.
Florian Bourse, Olivier Sanders, Jacques Traoré

Efficient FPGA Implementations of LowMC and Picnic

Post-quantum cryptography has received increased attention in recent years, in particular, due to the standardization effort by NIST. One of the second-round candidates in the NIST post-quantum standardization project is Picnic, a post-quantum secure signature scheme based on efficient zero-knowledge proofs of knowledge. In this work, we present the first FPGA implementation of Picnic. We show how to efficiently calculate LowMC, the block cipher used as a one-way function in Picnic, in hardware despite the large number of constants needed during computation. We then combine our LowMC implementation and efficient instantiations of Keccak to build the full Picnic algorithm. Additionally, we conform to recently proposed hardware interfaces for post-quantum schemes to enable easier comparisons with other designs. We provide evaluations of our Picnic implementation for both the standalone design and a version wrapped with a PCIe interface, and compare them to the state-of-the-art software implementations of Picnic and similar hardware designs. Concretely, signing messages on our FPGA takes 0.25 ms for the L1 security level and 1.24 ms for the L5 security level, beating existing optimized software implementations by a factor of 4.
Daniel Kales, Sebastian Ramacher, Christian Rechberger, Roman Walch, Mario Werner

Traceable Ring Signatures with Post-quantum Security

Traceable ring signature (TRS), a variant of ring signature, allows a signer to sign a message anonymously labeled with a tag on behalf of a group of users, but may reveal the signer's identity if he creates two signatures with the same tag. TRS provides accountable anonymity for users, and plays an important role in e-voting systems and e-coupon services. However, current TRS schemes are built on hard problems in number theory that cannot resist quantum attackers. To address this issue, first, we propose a general framework of TRS, by using a non-interactive zero-knowledge proof of knowledge, a collision-resistant hash function, and a pseudorandom function with some additional properties. Then, we construct an efficient TRS scheme in the quantum random oracle model, by instantiating the framework with appropriate lattice-based building blocks. Moreover, the signature size of the lattice-based TRS is logarithmic in the ring size.
Hanwen Feng, Jianwei Liu, Qianhong Wu, Ya-Nan Li

Post-quantum Provably-Secure Authentication and MAC from Mersenne Primes

This paper presents novel yet efficient secret-key authentication protocols and a MAC with a post-quantum security promise, whose security is reduced to the conjectured quantum-safe hardness of the Mersenne Low Hamming Combination (\(\mathsf {MERS}\)) assumption recently introduced by Aggarwal, Joux, Prakash, and Santha (CRYPTO 2018). Our protocols are very suitable for weak devices like smart cards and RFID tags.
Houda Ferradi, Keita Xagawa

Another Look at Some Isogeny Hardness Assumptions

The security proofs for isogeny-based undeniable signature schemes have been based primarily on the assumptions that the One-Sided Modified SSCDH problem and the One-More SSCDH problem are intractable. We challenge the validity of these assumptions, showing that both the decisional and computational variants of these problems can be solved in polynomial time. We further demonstrate an attack, applicable to two undeniable signature schemes, one of which was proposed at PQCrypto 2014. The attack allows forging signatures in \(2^{4\lambda /5}\) steps on a classical computer. This is an improvement over the expected classical security of \(2^{\lambda }\), where \(\lambda \) denotes the chosen security parameter.
Simon-Philipp Merz, Romy Minko, Christophe Petit

How to Construct CSIDH on Edwards Curves

CSIDH is an isogeny-based key exchange protocol proposed by Castryck, Lange, Martindale, Panny, and Renes in 2018. CSIDH is based on the ideal class group action on \(\mathbb {F}_p\)-isomorphism classes of Montgomery curves. In order to calculate the class group action, we need to take points defined over \(\mathbb {F}_{p^2}\). The original CSIDH algorithm requires only calculations over \(\mathbb {F}_p\), by representing points by their x-coordinates on Montgomery curves. Meyer and Reith proposed a faster CSIDH algorithm in 2018 which calculates isogenies on Edwards curves by using a birational map between a Montgomery curve and an Edwards curve. There is a special coordinate on Edwards curves (the w-coordinate) for calculating group operations and isogenies. If we try to calculate the class group action on Edwards curves by using the w-coordinate in a way similar to that on Montgomery curves, we have to consider points defined over \(\mathbb {F}_{p^4}\). Therefore, it is not a trivial task to calculate the class group action on Edwards curves with w-coordinates over only \(\mathbb {F}_p\).
In this paper, we prove a number of theorems on the properties of Edwards curves. By using these theorems, we extend the CSIDH algorithm to that on Edwards curves with w-coordinates over \(\mathbb {F}_p\). This algorithm is as fast as (or a little bit faster than) the algorithm proposed by Meyer and Reith.
Tomoki Moriya, Hiroshi Onuki, Tsuyoshi Takagi

Policy-Based Sanitizable Signatures

Sanitizable signatures are a variant of signatures which allow a single, signer-defined sanitizer to modify signed messages in a controlled way without invalidating the respective signature. They have turned out to be a versatile primitive, as shown by different variants and extensions, e.g., allowing multiple sanitizers or adding new sanitizers one-by-one. However, existing constructions are very restricted regarding their flexibility in specifying potential sanitizers. We propose a different and more powerful approach: instead of using sanitizers' public keys directly, we assign attributes to them. Sanitizing is then based on policies, i.e., access structures defined over attributes. A sanitizer can sanitize if, and only if, it holds a secret key to attributes satisfying the policy associated with a signature, while offering full-scale accountability.
Kai Samelin, Daniel Slamanig

Traceable Inner Product Functional Encryption

Functional Encryption (FE) has been widely studied in the last decade, as it provides a very useful tool for restricted access to sensitive data: from a ciphertext, it allows specific users to learn a function of the underlying plaintext. In practice, many users may be interested in the same function on the data, say the mean value of the inputs, for example. The conventional definition of FE associates each function to a secret decryption functional key and therefore all the users get the same secret key for the same function. This induces an important problem: if one of these users (called a traitor) leaks or sells the decryption functional key to be included in a pirate decryption tool, then there is no way to trace back its identity. Our objective is to solve this issue by introducing a new primitive, called Traceable Functional Encryption: the functional decryption key will not only be specific to a function, but to a user too, in such a way that if some users collude to produce a pirate decoder that successfully evaluates a function on the plaintext, from the ciphertext only, one can trace back at least one of them.
We propose a concrete solution for Inner Product Functional Encryption (IPFE). We first remark that the ElGamal-based IPFE from Abdalla et al. in PKC ’15 shares many similarities with the Boneh-Franklin traitor tracing from CRYPTO ’99. Then, we can combine these two schemes in a very efficient way, with the help of pairings, to obtain a Traceable IPFE with black-box confirmation.
Xuan Thanh Do, Duong Hieu Phan, David Pointcheval

One-More Assumptions Do Not Help Fiat-Shamir-type Signature Schemes in NPROM

For Fiat-Shamir-type signature schemes, there are several impossibility results concerning their provable security. Most of these impossibility results employ the non-programmable random oracle model (NPROM), and to the best of our knowledge, all of them deal with security reductions from non-interactive cryptographic assumptions, except for the result on the security of the Schnorr signature scheme from the One-More DL (OM-DL) assumption at ProvSec 2017.
In this paper, we extend the impossibility result above concerning the Schnorr signature scheme and the OM-DL assumption to a wider class of Fiat-Shamir-type signature schemes, and aim to identify the conditions under which such impossibility results hold. We show that a specific class of Fiat-Shamir-type signature schemes, including the Schnorr signature scheme, cannot be proven euf-cma secure in the NPROM from generalized One-More cryptographic assumptions. This generalizes the impossibility concerning the Schnorr signature scheme and the OM-DL assumption. Our result also suggests that for some Fiat-Shamir-type signature schemes not covered by our impossibility (e.g. the RSA-based schemes), there may exist a successful security proof in the NPROM from an interactive cryptographic assumption.
Masayuki Fukumitsu, Shingo Hasegawa
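The Schnorr scheme at the center of this impossibility result is obtained by applying the Fiat-Shamir transform to an identification protocol: the interactive challenge is replaced by a hash of the prover's commitment and the message. A minimal sketch, using a tiny hand-checkable safe-prime group of our own choosing (real deployments use 256-bit elliptic-curve groups):

```python
import hashlib
import random

# Toy safe-prime group: P = 2Q + 1 with P, Q prime; G = 2^2 has order Q.
P, Q, G = 2039, 1019, 4

def hash_to_zq(commit, msg):
    """Fiat-Shamir challenge: hash of commitment and message, in Z_Q."""
    h = hashlib.sha256(f"{commit}|{msg}".encode()).digest()
    return int.from_bytes(h, "big") % Q

def keygen():
    x = random.randrange(1, Q)          # secret key
    return x, pow(G, x, P)              # public key G^x

def sign(x, msg):
    k = random.randrange(1, Q)          # ephemeral nonce
    commit = pow(G, k, P)               # prover's first message
    e = hash_to_zq(commit, msg)         # non-interactive challenge
    s = (k + e * x) % Q                 # response
    return e, s

def verify(pk, msg, sig):
    e, s = sig
    # Recompute the commitment: G^s * pk^{-e} = G^k when the signature is valid.
    commit = (pow(G, s, P) * pow(pk, -e, P)) % P
    return e == hash_to_zq(commit, msg)

x, pk = keygen()
sig = sign(x, "hello")
print(verify(pk, "hello", sig))        # True
print(verify(pk, "tampered", sig))     # rejected (except with tiny probability in this toy group)
```

The NPROM restriction discussed in the abstract concerns exactly the reduction's (in)ability to program the hash function playing the role of `hash_to_zq`.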

Cut-and-Choose for Garbled RAM

Garbled RAM, introduced by Lu and Ostrovsky in 2013, provides a novel method for secure computation on RAM (Random Access Machine) programs directly. It can be seen as a RAM analogue of Yao’s garbled circuits such that the computational complexity and communication complexity only grow with the running time of the RAM program, avoiding the inefficient process of first converting it into a circuit. It allows for executing multiple RAM programs on a persistent database, but is secure only against semi-honest adversaries.
In this work we provide a cut-and-choose technique for garbled RAM. This gives the first constant-round two-party RAM computation protocol secure against malicious adversaries that allows multiple RAM programs to be executed on a persistent database. Our protocol makes black-box use of one-way functions, and the security of our construction is argued in the random oracle model.
Peihan Miao
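The cut-and-choose paradigm the paper adapts to garbled RAM can be sketched generically. This is not the paper's protocol; it only illustrates the pattern, with hash commitments and placeholder "garbled copies" as our own stand-ins: the garbler commits to several copies, the evaluator opens a random subset to audit, and the unopened copies are used for the actual execution.

```python
import hashlib
import os
import random

def commit(data: bytes):
    """Hash commitment with a random nonce (stand-in for the real commitments)."""
    nonce = os.urandom(16)
    return hashlib.sha256(nonce + data).hexdigest(), nonce

# Garbler: prepare 8 copies and commit to each.
copies = [f"garbled-copy-{i}".encode() for i in range(8)]
commitments = [commit(c) for c in copies]

# Evaluator: pick half of the copies to open and audit.
audit = set(random.sample(range(8), 4))
for i in audit:
    digest, nonce = commitments[i]
    # Garbler opens copy i; evaluator checks it against the commitment.
    assert hashlib.sha256(nonce + copies[i]).hexdigest() == digest

# The unopened copies remain for the real computation.
evaluate = [i for i in range(8) if i not in audit]
print(len(evaluate))  # 4
```

A cheating garbler who corrupts some copies is caught during the audit with high probability, which is what lifts semi-honest security toward malicious security.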

Universally Composable Accumulators

Accumulators, first introduced by Benaloh and de Mare (Eurocrypt 1993), are compact representations of arbitrarily large sets and can be used to prove claims of membership or non-membership about the underlying set. They are almost exclusively used as building blocks in real-world complex systems, including anonymous credentials, group signatures and, more recently, anonymous cryptocurrencies. Having rigorous security analysis for such systems is crucial for their adoption and safe use in the real world, but it can turn out to be extremely challenging given their complexity.
In this work, we provide the first universally composable (UC) treatment of cryptographic accumulators. There are many different types of accumulators: some support additions, some support deletions and some support both; and, orthogonally, some support proofs of membership, some support proofs of non-membership, and some support both. Additionally, some accumulators support public verifiability of set operations, and some do not. Our UC definition covers all of these types of accumulators concisely in a single functionality, and captures the two basic security properties of accumulators: correctness and soundness. We then prove the equivalence of our UC definition to standard accumulator definitions. This implies that existing popular accumulator schemes, such as the RSA accumulator, already meet our UC definition, and that the security proofs of existing systems that leverage such accumulators can be significantly simplified.
Finally, we use our UC definition to get simple proofs of security. We build an accumulator in a modular way out of two weaker accumulators (in the style of Baldimtsi et al. (Euro S&P 2017)), and we give a simple proof of its UC security. We also show how to simplify the proofs of security of complex systems such as anonymous credentials. Specifically, we show how to extend an anonymous credential system to support revocation by utilizing our results on UC accumulators.
Foteini Baldimtsi, Ran Canetti, Sophia Yakoubov
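The RSA accumulator mentioned in the abstract admits a very short sketch. The toy modulus below has a known factorization and the set elements are taken to be primes directly (a real scheme uses an RSA modulus of unknown factorization and hashes elements to primes), so this only illustrates the membership mechanics:

```python
# Toy RSA accumulator: the accumulator is G raised to the product of all
# accumulated (prime) elements; a membership witness for x is the same
# exponentiation with x left out.
N = 2003 * 2011            # toy modulus; real schemes use a 2048-bit RSA N
G = 65537                  # fixed base in Z_N*

def accumulate(elements):
    acc = G
    for e in elements:
        acc = pow(acc, e, N)
    return acc

def witness(elements, x):
    """Witness for x = accumulator of every element except x."""
    acc = G
    for e in elements:
        if e != x:
            acc = pow(acc, e, N)
    return acc

def verify(acc, x, wit):
    """Membership check: wit^x must close the gap back to the accumulator."""
    return pow(wit, x, N) == acc

S = [3, 5, 7, 11]          # toy set: elements chosen prime
acc = accumulate(S)
w = witness(S, 7)
print(verify(acc, 7, w))   # True
```

Correctness here is immediate; soundness (no valid witness for a non-member) is what rests on the RSA-type assumption, and it is exactly the property the UC functionality in the paper captures.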

A Non-interactive Shuffle Argument with Low Trust Assumptions

A shuffle argument is a cryptographic primitive for proving correct behaviour of mix-networks without leaking any private information. Several recent constructions of non-interactive shuffle arguments avoid the random oracle model but require the public key to be trusted.
We augment the most efficient argument, by Fauzi et al. [Asiacrypt 2017], with a distributed key generation protocol that assures soundness of the argument if at least one party in the protocol is honest, and we additionally provide a key verification algorithm that guarantees zero-knowledge even if all the parties are malicious. Furthermore, we simplify their construction and improve security by using weaker assumptions while retaining roughly the same level of efficiency. We also provide an implementation of the distributed key generation protocol and the shuffle argument.
Antonis Aggelakis, Prastudy Fauzi, Georgios Korfiatis, Panos Louridas, Foteinos Mergoupis-Anagnou, Janno Siim, Michał Zając
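What a shuffle argument certifies can be seen in a minimal mix-net sketch: the mixer permutes ElGamal ciphertexts and re-randomizes each one, so the outputs decrypt to the same multiset of plaintexts while the permutation stays hidden. The argument itself (the zero-knowledge proof that this was done honestly) is the paper's subject and is omitted here; the tiny group parameters are our own toy choices.

```python
import random

# Toy safe-prime group: P = 2Q + 1 with P, Q prime; G = 2^2 has order Q.
P, Q, G = 2039, 1019, 4

x = random.randrange(1, Q)             # mix-net's decryption key
pk = pow(G, x, P)

def enc(m):
    """ElGamal encryption of m under pk."""
    r = random.randrange(1, Q)
    return pow(G, r, P), (m * pow(pk, r, P)) % P

def dec(ct):
    c1, c2 = ct
    return (c2 * pow(c1, -x, P)) % P

def reencrypt(ct):
    """Re-randomize: homomorphically multiply by a fresh encryption of 1."""
    s = random.randrange(1, Q)
    c1, c2 = ct
    return (c1 * pow(G, s, P)) % P, (c2 * pow(pk, s, P)) % P

msgs = [5, 9, 12]
cts = [enc(m) for m in msgs]
mixed = [reencrypt(ct) for ct in cts]  # outputs look unrelated to inputs
random.shuffle(mixed)                  # secret permutation
print(sorted(dec(ct) for ct in mixed)) # [5, 9, 12]
```

A shuffle argument is a proof that `mixed` really is a permutation and re-encryption of `cts`, without revealing the permutation or the re-encryption randomness.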

