2011 | Book

Advances in Cryptology – ASIACRYPT 2011

17th International Conference on the Theory and Application of Cryptology and Information Security, Seoul, South Korea, December 4-8, 2011. Proceedings

Editors: Dong Hoon Lee, Xiaoyun Wang

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science

About this book

This book constitutes the proceedings of the 17th International Conference on the Theory and Application of Cryptology and Information Security, ASIACRYPT 2011, held in Seoul, Korea, in December 2011. The 40 revised papers included in this volume were carefully reviewed and selected from 266 submissions. The contributions are organized in topical sections on lattices and quantum cryptography; public key encryption; database privacy; hash function; symmetric key encryption; zero knowledge proof; universal composability; foundation; secure computation and secret sharing; public key signature; and leakage resilient cryptography.

Table of Contents

Frontmatter

Lattices and Quantum Cryptography

BKZ 2.0: Better Lattice Security Estimates

The best lattice reduction algorithm known in practice for high dimension is Schnorr-Euchner’s BKZ: all security estimates of lattice cryptosystems are based on NTL’s old implementation of BKZ. However, recent progress on lattice enumeration suggests that BKZ and its NTL implementation are no longer optimal, but the precise impact on security estimates was unclear. We assess this impact thanks to extensive experiments with BKZ 2.0, the first state-of-the-art implementation of BKZ incorporating recent improvements, such as Gama-Nguyen-Regev pruning. We propose an efficient simulation algorithm to model the behaviour of BKZ in high dimension with high blocksize ≥ 50, which can predict approximately both the output quality and the running time, thereby revising lattice security estimates. For instance, our simulation suggests that the smallest NTRUSign parameter set, which was claimed to provide at least 93-bit security against key-recovery lattice attacks, actually offers at most 65-bit security.
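
Reading the two figures side by side (simple arithmetic on the numbers quoted above, with no further assumptions): a drop from at least 93-bit to at most 65-bit security means the estimated cost of the best key-recovery lattice attack falls from at least $2^{93}$ to at most $2^{65}$ operations, i.e., by a factor of at least $2^{28}$.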

Yuanmi Chen, Phong Q. Nguyen
Functional Encryption for Inner Product Predicates from Learning with Errors

We propose a lattice-based functional encryption scheme for inner product predicates whose security follows from the difficulty of the learning with errors (LWE) problem. This construction allows us to achieve applications such as range and subset queries, polynomial evaluation, and CNF/DNF formulas on encrypted data. Our scheme supports inner products over small fields, in contrast to earlier works based on bilinear maps.

Our construction is the first functional encryption scheme based on lattice techniques that goes beyond basic identity-based encryption. The main technique in our scheme is a novel twist to the identity-based encryption scheme of Agrawal, Boneh and Boyen (Eurocrypt 2010). Our scheme is weakly attribute hiding in the standard model.

Shweta Agrawal, David Mandell Freeman, Vinod Vaikuntanathan
Random Oracles in a Quantum World

The interest in post-quantum cryptography — classical systems that remain secure in the presence of a quantum adversary — has generated elegant proposals for new cryptosystems. Some of these systems are set in the random oracle model and are proven secure relative to adversaries that have classical access to the random oracle. We argue that to prove post-quantum security one needs to prove security in the quantum-accessible random oracle model where the adversary can query the random oracle with quantum state.

We begin by separating the classical and quantum-accessible random oracle models by presenting a scheme that is secure when the adversary is given classical access to the random oracle, but is insecure when the adversary can make quantum oracle queries. We then set out to develop generic conditions under which a classical random oracle proof implies security in the quantum-accessible random oracle model. We introduce the concept of a history-free reduction which is a category of classical random oracle reductions that basically determine oracle answers independently of the history of previous queries, and we prove that such reductions imply security in the quantum model. We then show that certain post-quantum proposals, including ones based on lattices, can be proven secure using history-free reductions and are therefore post-quantum secure. We conclude with a rich set of open problems in this area.

Dan Boneh, Özgür Dagdelen, Marc Fischlin, Anja Lehmann, Christian Schaffner, Mark Zhandry

Public Key Encryption I

Lossy Encryption: Constructions from General Assumptions and Efficient Selective Opening Chosen Ciphertext Security

Lossy encryption was originally studied as a means of achieving efficient and composable oblivious transfer. Bellare, Hofheinz and Yilek showed that lossy encryption is also selective opening secure. We present new and general constructions of lossy encryption schemes and of cryptosystems secure against selective opening adversaries.

We show that every re-randomizable encryption scheme gives rise to efficient encryptions secure against a selective opening adversary. We show that statistically-hiding 2-round Oblivious Transfer implies Lossy Encryption and so do smooth hash proof systems. This shows that private information retrieval and homomorphic encryption both imply Lossy Encryption, and thus Selective Opening Secure Public Key Encryption. Applying our constructions to well-known cryptosystems, we obtain selective opening secure commitments and encryptions from the Decisional Diffie-Hellman, Decisional Composite Residuosity and Quadratic Residuosity assumptions.

In an indistinguishability-based model of chosen-ciphertext selective opening security, we obtain secure schemes featuring short ciphertexts under standard number theoretic assumptions. In a simulation-based definition of chosen-ciphertext selective opening security, we also handle non-adaptive adversaries by adapting the Naor-Yung paradigm and using the perfect zero-knowledge proofs of Groth, Ostrovsky and Sahai.

Brett Hemenway, Benoît Libert, Rafail Ostrovsky, Damien Vergnaud
Structure Preserving CCA Secure Encryption and Applications

In this paper we present the first CCA-secure public key encryption scheme that is structure preserving, i.e., our encryption scheme uses only algebraic operations. In particular, it does not use hash-functions or interpret group elements as bit-strings. This makes our scheme a perfect building block for cryptographic protocols where parties for instance want to prove properties about ciphertexts to each other or to jointly compute ciphertexts. Our scheme is very efficient and is secure against adaptive chosen ciphertext attacks.

We also provide a few example protocols for which our scheme is useful. For instance, we present an efficient protocol for two parties, Alice and Bob, that allows them to jointly encrypt a given function of their respective secret inputs such that only Bob learns the resulting ciphertext, yet they are both ensured of the computation’s correctness. This protocol serves as a building block for our second contribution which is a set of protocols that implement the concept of so-called oblivious trusted third parties. This concept has been proposed before, but no concrete realization was known.

Jan Camenisch, Kristiyan Haralambiev, Markulf Kohlweiss, Jorn Lapon, Vincent Naessens
Decoding Random Linear Codes in $\tilde{\mathcal{O}}(2^{0.054n})$

Decoding random linear codes is a fundamental problem in complexity theory and lies at the heart of almost all code-based cryptography. The best attacks on the most prominent code-based cryptosystems such as McEliece directly use decoding algorithms for linear codes. The asymptotically best decoding algorithm for random linear codes of length n was for a long time Stern’s variant of information-set decoding running in time $\tilde{\mathcal{O}}\left(2^{0.05563n}\right)$. Recently, Bernstein, Lange and Peters proposed a new technique called Ball-collision decoding which offers a speed-up over Stern’s algorithm by improving the running time to $\tilde{\mathcal{O}}\left(2^{0.05558n}\right)$.

In this paper, we present a new algorithm for decoding linear codes that is inspired by a representation technique due to Howgrave-Graham and Joux in the context of subset sum algorithms. Our decoding algorithm offers a rigorous complexity analysis for random linear codes and brings the time complexity down to $\tilde{\mathcal{O}}\left(2^{0.05363n}\right)$.
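
To put the exponents side by side (an illustrative evaluation that ignores the polynomial factors hidden in the $\tilde{\mathcal{O}}$ notation), for code length n = 1024 the three running times scale as $2^{0.05563 \cdot 1024} \approx 2^{57.0}$ (Stern), $2^{0.05558 \cdot 1024} \approx 2^{56.9}$ (ball-collision) and $2^{0.05363 \cdot 1024} \approx 2^{54.9}$ (this paper).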

Alexander May, Alexander Meurer, Enrico Thomae
Lower and Upper Bounds for Deniable Public-Key Encryption

A deniable cryptosystem allows a sender and a receiver to communicate over an insecure channel in such a way that the communication is still secure even if the adversary can threaten the parties into revealing their internal states after the execution of the protocol. This is done by allowing the parties to change their internal state to make it look like a given ciphertext decrypts to a message different from what it really decrypts to. Deniable encryption was introduced in this way to allow parties to deny a message exchange and hence to combat coercion.

Depending on which parties can be coerced, the security level, the flavor and the number of rounds of the cryptosystem, it is possible to define a number of notions of deniable encryption.

In this paper we prove that there does not exist any non-interactive receiver-deniable cryptosystem with better than polynomial security. This also shows that it is impossible to construct a non-interactive bi-deniable public-key encryption scheme with better than polynomial security. Specifically, we give an explicit bound relating the security of the scheme to how efficient the scheme is in terms of key size. Our impossibility result establishes a lower bound on the security.

As a final contribution we give constructions of deniable public-key encryption schemes which establishes upper bounds on the security in terms of key length. There is a gap between our lower and upper bounds, which leaves the interesting open problem of finding the tight bounds.

Rikke Bendlin, Jesper Buus Nielsen, Peter Sebastian Nordholt, Claudio Orlandi

Public Key Encryption II

Bridging Broadcast Encryption and Group Key Agreement

Broadcast encryption (BE) schemes allow a sender to securely broadcast to any subset of members but require a trusted party to distribute decryption keys. Group key agreement (GKA) protocols enable a group of members to negotiate a common encryption key via open networks so that only the members can decrypt the ciphertexts encrypted under the shared encryption key, but a sender cannot exclude any particular member from decrypting the ciphertexts. In this paper, we bridge these two notions with a hybrid primitive referred to as contributory broadcast encryption (CBE). In this new primitive, a group of members negotiate a common public encryption key while each member holds a decryption key. A sender seeing the public group encryption key can limit the decryption to a subset of members of his choice. Following this model, we propose a CBE scheme with short ciphertexts. The scheme is proven to be fully collusion-resistant under the decision n-Bilinear Diffie-Hellman Exponentiation (BDHE) assumption in the standard model. We also illustrate a variant in which the communication and computation complexity is sub-linear in the group size. Of independent interest, we present a new BE scheme that is aggregatable. The aggregatability property is shown to be useful to construct advanced protocols.

Qianhong Wu, Bo Qin, Lei Zhang, Josep Domingo-Ferrer, Oriol Farràs
On the Joint Security of Encryption and Signature, Revisited

We revisit the topic of joint security for combined public key schemes, wherein a single keypair is used for both encryption and signature primitives in a secure manner. While breaking the principle of key separation, such schemes have attractive properties and are sometimes used in practice. We give a general construction for a combined public key scheme having joint security that uses IBE as a component and that works in the standard model. We provide a more efficient direct construction, also in the standard model.

Kenneth G. Paterson, Jacob C. N. Schuldt, Martijn Stam, Susan Thomson
Polly Cracker, Revisited

We initiate the formal treatment of cryptographic constructions (“Polly Cracker”) based on the hardness of computing remainders modulo an ideal over multivariate polynomial rings. We start by formalising the relation between the ideal remainder problem and the problem of computing a Gröbner basis. We show both positive and negative results. On the negative side, we define a symmetric Polly Cracker encryption scheme and prove that this scheme only achieves bounded CPA security. Furthermore, we show that a large class of algebraic transformations cannot convert this scheme to a fully secure Polly-Cracker-style scheme. On the positive side, we formalise noisy variants of the ideal membership, ideal remainder, and Gröbner basis problems. These problems can be seen as natural generalisations of the LWE problem and the approximate GCD problem over polynomial rings. We then show that noisy encoding of messages results in a fully IND-CPA-secure somewhat homomorphic encryption scheme. Our results provide a new family of somewhat homomorphic encryption schemes based on new, but natural, hard problems. Our results also imply that Regev’s LWE-based public-key encryption scheme is (somewhat) multiplicatively homomorphic for appropriate choices of parameters.

Martin R. Albrecht, Pooya Farshim, Jean-Charles Faugère, Ludovic Perret

Database Privacy

Oblivious RAM with $O((\log N)^3)$ Worst-Case Cost

Oblivious RAM is a useful primitive that allows a client to hide its data access patterns from an untrusted server in storage outsourcing applications. Until recently, most prior works on Oblivious RAM aimed to optimize its amortized cost, while suffering from linear or even higher worst-case cost. Such poor worst-case behavior renders these schemes impractical in realistic settings, since a data access request can occasionally be blocked waiting for an unreasonably large number of operations to complete.

This paper proposes novel Oblivious RAM constructions that achieve poly-logarithmic worst-case cost, while consuming constant client-side storage. To achieve the desired worst-case asymptotic performance, we propose a novel technique in which we organize the O-RAM storage into a binary tree over data buckets, while moving data blocks obliviously along tree edges.
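
For a rough sense of scale (plugging numbers into the stated asymptotics, hidden constants aside): with $N = 2^{20}$ blocks, a worst-case cost of $(\log_2 N)^3 = 20^3 = 8000$ operations per access compares to roughly $N \approx 10^6$ operations for a scheme whose worst case is linear.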

Elaine Shi, T. -H. Hubert Chan, Emil Stefanov, Mingfei Li
Noiseless Database Privacy

Differential Privacy (DP) has emerged as a formal, flexible framework for privacy protection, with a guarantee that is agnostic to auxiliary information and that admits simple rules for composition. Benefits notwithstanding, a major drawback of DP is that it provides noisy responses to queries, making it unsuitable for many applications. We propose a new notion called Noiseless Privacy that provides exact answers to queries, without adding any noise whatsoever. While the form of our guarantee is similar to DP, the source of the privacy is very different: it rests on statistical assumptions about the data and on restrictions on the auxiliary information available to the adversary. We present a first set of results for Noiseless Privacy of arbitrary Boolean-function queries and of linear Real-function queries, when data are drawn independently from nearly-uniform and Gaussian distributions, respectively. We also derive simple rules for composition under models of dynamically changing data.

Raghav Bhaskar, Abhishek Bhowmick, Vipul Goyal, Srivatsan Laxman, Abhradeep Thakurta

Hash Function

The Preimage Security of Double-Block-Length Compression Functions

We present new techniques for deriving preimage resistance bounds for block cipher based double-block-length, double-call hash functions. We give improved bounds on the preimage security of the three “classical” double-block-length, double-call, block cipher-based compression functions, these being Abreast-DM, Tandem-DM and Hirose’s scheme. For Hirose’s scheme, we show that an adversary must make at least $2^{2n-5}$ block cipher queries to achieve chance 0.5 of inverting a randomly chosen point in the range. For Abreast-DM and Tandem-DM we show that at least $2^{2n-10}$ queries are necessary. These bounds improve upon the previous best bounds of $\Omega(2^n)$ queries, and are optimal up to a constant factor since the compression functions in question have range of size $2^{2n}$.
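
As a quick consistency check of the claimed optimality (using only the bounds quoted above): since $2^{2n-5} = 2^{2n}/32$ and $2^{2n-10} = 2^{2n}/1024$, both lower bounds sit within a small constant factor of the trivial upper bound of about $2^{2n}$ queries given by the range size.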

Frederik Armknecht, Ewan Fleischmann, Matthias Krause, Jooyoung Lee, Martijn Stam, John Steinberger
Rebound Attack on JH42

The hash function JH [20] is one of the five finalists of the NIST SHA-3 hash competition. It has been recently tweaked for the final by increasing its number of rounds from 35.5 to 42. The previously best known results on JH were semi-free-start near-collisions up to 22 rounds using multi-inbound rebound attacks. In this paper we provide a new differential path on 32 rounds. Using this path, we are able to build various semi-free-start internal-state near-collisions, reaching up to 37 rounds on 986 bits. Moreover, we build distinguishers for the full 42-round internal permutation. These are, to our knowledge, the first results faster than a generic attack on the full internal permutation of JH42, the finalist version. These distinguishers also apply to the compression function.

María Naya-Plasencia, Deniz Toz, Kerem Varici
Second-Order Differential Collisions for Reduced SHA-256

In this work, we introduce a new non-random property for hash/compression functions using the theory of higher order differentials. Based on this, we show a second-order differential collision for the compression function of SHA-256 reduced to 47 out of 64 steps with practical complexity. We have implemented the attack and provide an example. Our results suggest that the security margin of SHA-256 is much lower than the security margin of most of the SHA-3 finalists in this setting. The techniques employed in this attack are based on a rectangle/boomerang approach and cover advanced search algorithms for good characteristics and message modification techniques. Our analysis also exposes flaws in all of the previously published related-key rectangle attacks on the SHACAL-2 block cipher, which is based on SHA-256. We provide valid rectangles for 48 steps of SHACAL-2.

Alex Biryukov, Mario Lamberger, Florian Mendel, Ivica Nikolić
Finding SHA-2 Characteristics: Searching through a Minefield of Contradictions

In this paper, we analyze the collision resistance of SHA-2 and provide the first results since the beginning of the NIST SHA-3 competition. We extend the previously best known semi-free-start collisions on SHA-256 from 24 to 32 (out of 64) steps and show a collision attack for 27 steps. All our attacks are practical and verified by colliding message pairs. We present the first automated tool for finding complex differential characteristics in SHA-2 and show that the techniques used for SHA-1 cannot directly be applied to SHA-2. Due to the more complex structure of SHA-2, several new problems arise. Most importantly, a large number of contradicting conditions occur, which render most differential characteristics impossible. We show how to overcome these difficulties by including the search for conforming message pairs in the search for differential characteristics.

Florian Mendel, Tomislav Nad, Martin Schläffer

Symmetric Key Encryption

Cryptanalysis of ARMADILLO2

ARMADILLO2 is the recommended variant of a multipurpose cryptographic primitive dedicated to hardware which has been proposed by Badel et al. in [1]. In this paper, we describe a meet-in-the-middle technique relying on the parallel matching algorithm that allows us to invert the ARMADILLO2 function. This makes it possible to perform a key recovery attack when used as a FIL-MAC. A variant of this attack can also be applied to the stream cipher derived from the PRNG mode. Finally we propose a (second) preimage attack when used as a hash function. We have validated our attacks by implementing cryptanalysis on scaled variants. The experimental results match the theoretical complexities.

In addition to these attacks, we present a generalization of the parallel matching algorithm, which can be applied in a broader context than attacking ARMADILLO2.

Mohamed Ahmed Abdelraheem, Céline Blondeau, María Naya-Plasencia, Marion Videau, Erik Zenner
An Experimentally Verified Attack on Full Grain-128 Using Dedicated Reconfigurable Hardware

In this paper we describe the first single-key attack which can recover the full key of the full version of Grain-128 for arbitrary keys by an algorithm which is significantly faster than exhaustive search (by a factor of about $2^{38}$). It is based on a new version of a cube tester, which uses an improved choice of dynamic variables to eliminate the previously made assumption that ten particular key bits are zero. In addition, the new attack is much faster than the previous weak-key attack, and has a simpler key recovery process. Since it is extremely difficult to mathematically analyze the expected behavior of such attacks, we implemented it on RIVYERA, which is a new massively parallel reconfigurable hardware, and tested its main components for dozens of random keys. These tests experimentally verified the correctness and expected complexity of the attack, by finding a very significant bias in our new cube tester for about 7.5% of the keys we tested. This is the first time that the main components of a complex analytical attack are successfully realized against a full-size cipher with a special-purpose machine. Moreover, it is also the first attack that truly exploits the configurable nature of an FPGA-based cryptanalytical hardware.
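
As an order-of-magnitude reading of the stated speed-up factor (Grain-128 uses a 128-bit key, so exhaustive search costs about $2^{128}$ cipher evaluations): a factor of about $2^{38}$ places the attack at roughly $2^{128-38} = 2^{90}$ equivalent evaluations.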

Itai Dinur, Tim Güneysu, Christof Paar, Adi Shamir, Ralf Zimmermann
Biclique Cryptanalysis of the Full AES

Since Rijndael was chosen as the Advanced Encryption Standard (AES), improving upon 7-round attacks on the 128-bit key variant (out of 10 rounds) or upon 8-round attacks on the 192/256-bit key variants (out of 12/14 rounds) has been one of the most difficult challenges in the cryptanalysis of block ciphers for more than a decade. In this paper, we present the novel technique of block cipher cryptanalysis with bicliques, which leads to the following results:

The first key recovery method for the full AES-128 with computational complexity $2^{126.1}$.

The first key recovery method for the full AES-192 with computational complexity $2^{189.7}$.

The first key recovery method for the full AES-256 with computational complexity $2^{254.4}$.

Key recovery methods with lower complexity for the reduced-round versions of AES not considered before, including cryptanalysis of 8-round AES-128 with complexity $2^{124.9}$.

Preimage search for compression functions based on the full AES versions faster than brute force.

In contrast to most shortcut attacks on AES variants, we do not need to assume related keys. Most of our techniques only need a very small part of the codebook and have low memory requirements, and are practically verified to a large extent. As our cryptanalysis is of high computational complexity, it does not threaten the practical use of AES in any way.
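
For perspective (simple arithmetic on the figures above, relative to the $2^{128}$, $2^{192}$ and $2^{256}$ costs of exhaustive key search): the speed-up factors are about $2^{1.9} \approx 3.7$ for AES-128, $2^{2.3} \approx 4.9$ for AES-192 and $2^{1.6} \approx 3$ for AES-256, which matches the authors’ remark that the practical use of AES is not threatened.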

Andrey Bogdanov, Dmitry Khovratovich, Christian Rechberger
Tag Size Does Matter: Attacks and Proofs for the TLS Record Protocol

We analyze the security of the TLS Record Protocol, a MAC-then-Encode-then-Encrypt (MEE) scheme whose design targets confidentiality and integrity for application layer communications on the Internet. Our main results are twofold. First, we give a new distinguishing attack against TLS when variable length padding and short (truncated) MACs are used. This combination will arise when standardized TLS 1.2 extensions (RFC 6066) are implemented. Second, we show that when tags are longer, the TLS Record Protocol meets a new length-hiding authenticated encryption security notion that is stronger than IND-CCA.

Kenneth G. Paterson, Thomas Ristenpart, Thomas Shrimpton

Zero Knowledge Proof

Resettable Cryptography in Constant Rounds – The Case of Zero Knowledge

A fundamental question in cryptography deals with understanding the role that randomness plays in cryptographic protocols and to what extent it is necessary. One particular line of works was initiated by Canetti, Goldreich, Goldwasser, and Micali (STOC 2000) who introduced the notion of resettable zero-knowledge, where the protocol must be zero-knowledge even if a cheating verifier can reset the prover and have several interactions in which the prover uses the same random tape. Soon afterwards, Barak, Goldreich, Goldwasser, and Lindell (FOCS 2001) studied the setting where the verifier uses a fixed random tape in multiple interactions. Subsequent to these works, a number of papers studied the notion of resettable protocols in the setting where only one of the participating parties uses a fixed random tape multiple times. The notion of resettable security has been studied in two main models: the plain model and the bare public key model (also introduced in the above paper by Canetti et al.).

In a recent work, Deng, Goyal and Sahai (FOCS 2009) gave the first construction of a simultaneous resettable zero-knowledge protocol where both participants of the protocol can reuse a fixed random tape in any (polynomial) number of executions. Their construction however required $O(n^{\epsilon})$ rounds of interaction between the prover and the verifier. Both in the plain as well as the BPK model, this construction remains the only known simultaneous resettable zero-knowledge protocol.

In this work, we study the question of round complexity of simultaneous resettable zero-knowledge in the BPK model. We present a constant round protocol in such a setting based on standard cryptographic assumptions. Our techniques are significantly different from the ones used by Deng, Goyal and Sahai.

Yi Deng, Dengguo Feng, Vipul Goyal, Dongdai Lin, Amit Sahai, Moti Yung
Two Provers in Isolation

We revisit the Two-Prover Bit Commitment Scheme of BenOr, Goldwasser, Kilian and Wigderson [BGKW88]. First, we introduce Two-Prover Bit Commitment Schemes similar to theirs and show that, although they are classically secure using their proof technique, the provers can successfully break the binding condition if they are allowed to share quantum entanglement. Secondly, we translate this result into a purely classical setting and investigate the possibility of using this Bit Commitment scheme in applications. We observe that the security claim of [BGKW88] based on the assumption that the provers cannot communicate is not a sufficient criterion to obtain soundness. We develop a set of conditions, called isolation, that must be satisfied by any third party interacting with the provers to guarantee the binding property of the Bit Commitment.

Claude Crépeau, Louis Salvail, Jean-Raymond Simard, Alain Tapp
Efficient Zero-Knowledge Arguments from Two-Tiered Homomorphic Commitments

We construct practical and efficient zero-knowledge arguments with sublinear communication complexity. The arguments have perfect completeness, perfect special honest verifier zero-knowledge and computational soundness. Our zero-knowledge arguments rely on two-tiered homomorphic commitments for which pairing-based constructions already exist.

As a concrete application of our new zero-knowledge techniques, we look at the case of range proofs. To demonstrate a committed value belongs to a specific N-bit integer interval we only need to communicate $O(N^{\frac{1}{3}})$ group elements.
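
To illustrate the sublinearity (asymptotically, leaving aside the constants hidden by the $O$-notation): doubling the interval bit-length N increases the communication by a factor of only $2^{1/3} \approx 1.26$, and moving from N = 64 to N = 512 multiplies it by $(512/64)^{1/3} = 2$, whereas a bit-by-bit range proof grows linearly in N.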

Jens Groth

Universal Composability

A Framework for Practical Universally Composable Zero-Knowledge Protocols

Zero-knowledge proofs of knowledge (ZK-PoK) for discrete logarithms and related problems are indispensable for practical cryptographic protocols. Recently, Camenisch, Kiayias, and Yung provided a specification language (the CKY-language) for such protocols which allows for a modular design and protocol analysis: for every zero-knowledge proof specified in this language, protocol designers are ensured that there exists an efficient protocol which indeed proves the specified statement.

However, the protocols resulting from their compilation techniques only satisfy the classical notion of ZK-PoK, which is not retained when they are used as building blocks for higher-level applications or composed with other protocols. This problem can be tackled by moving to the Universal Composability (UC) framework, which guarantees retention of security when composing protocols in arbitrary ways. While there exist generic transformations from Σ-protocols to UC-secure protocols, these transformations are often too inefficient for practice.

In this paper we introduce a specification language akin to the CKY-language and a compiler such that the resulting protocols are UC-secure and efficient. To this end, we propose an extension of the UC-framework addressing the issue that UC-secure zero-knowledge proofs are by definition proofs of knowledge, and state a special composition theorem which allows one to use the weaker – but more efficient and often sufficient – notion of proofs of membership in the UC-framework. We believe that our contributions enable the design of practically efficient protocols that are UC-secure and thus themselves can be used as building blocks.

Jan Camenisch, Stephan Krenn, Victor Shoup
Non-interactive and Re-usable Universally Composable String Commitments with Adaptive Security

We present the first provably secure constructions of universally composable (UC) commitments (in pairing-friendly groups) that simultaneously combine the key properties of being non-interactive, supporting commitments to strings (instead of bits only), and offering re-usability of the common reference string for multiple commitments. Our schemes are also adaptively secure assuming reliable erasures.

Marc Fischlin, Benoît Libert, Mark Manulis

Foundation

Cryptography Secure against Related-Key Attacks and Tampering

We show how to leverage the RKA (Related-Key Attack) security of blockciphers to provide RKA security for a suite of high-level primitives. This motivates a more general theoretical question, namely, when is it possible to transfer RKA security from a primitive $P_1$ to a primitive $P_2$? We provide both positive and negative answers. What emerges is a broad and high level picture of the way achievability of RKA security varies across primitives, showing, in particular, that some primitives resist “more” RKAs than others. A technical challenge was to achieve RKA security even for the practical classes of related-key deriving (RKD) functions underlying fault injection attacks that fail to satisfy the “claw-freeness” assumption made in previous works. We surmount this barrier for the first time based on the construction of PRGs that are not only RKA secure but satisfy a new notion of identity-collision-resistance.

Mihir Bellare, David Cash, Rachel Miller
Counting Points on Genus 2 Curves with Real Multiplication

We present an accelerated Schoof-type point-counting algorithm for curves of genus 2 equipped with an efficiently computable real multiplication endomorphism. Our new algorithm reduces the complexity of genus 2 point counting over a finite field $\mathbb{F}_{q}$ of large characteristic from ${\widetilde{O}}(\log^8 q)$ to ${\widetilde{O}}(\log^5 q)$. Using our algorithm we compute a 256-bit prime-order Jacobian, suitable for cryptographic applications, and also the order of a 1024-bit Jacobian.
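
As a back-of-envelope comparison (ignoring the logarithmic factors hidden in the $\widetilde{O}$ notation): over a 256-bit field one has $\log q \approx 256$, so improving the exponent from $\log^8 q$ to $\log^5 q$ saves a factor of roughly $256^3 = 2^{24} \approx 1.7 \times 10^7$.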

Pierrick Gaudry, David Kohel, Benjamin Smith
On the Efficiency of Bit Commitment Reductions

Two fundamental building blocks of secure two-party computation are oblivious transfer and bit commitment. While there exist unconditionally secure implementations of oblivious transfer from noisy correlations or channels that achieve constant rates, similar constructions are not known for bit commitment.

In this paper, we show that any protocol that implements n instances of bit commitment with an error of at most $2^{-k}$ needs at least $\Omega(kn)$ instances of a given resource such as oblivious transfer or a noisy channel. This implies in particular that it is impossible to achieve a constant rate.
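
Spelling out the rate argument implicit in this statement: producing n commitments with error $2^{-k}$ consumes $\Omega(kn)$ resource instances, so the achievable rate is at most $n/\Omega(kn) = O(1/k)$, which vanishes as the error is driven down; hence no constant rate is possible.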

We then show that it is possible to circumvent the above lower bound by restricting the way in which the bit commitments can be opened. We present a protocol that achieves a constant rate in the special case where only a constant number of instances can be opened, which is optimal. Our protocol implements these restricted bit commitments from string commitments and is universally composable. The protocol provides significant speed-up over individual commitments in situations where restricted commitments are sufficient.

Samuel Ranellucci, Alain Tapp, Severin Winkler, Jürg Wullschleger
Secure Communication in Multicast Graphs

In this paper we solve the problem of secure communication in multicast graphs, which has been open for over a decade. At Eurocrypt ’98, Franklin and Wright initiated the study of secure communication against a Byzantine adversary on multicast channels in a neighbor network setting. Their model requires node-disjoint and neighbor-disjoint paths between a sender and a receiver. This requirement is too strong and hence not necessary in the general multicast graph setting. Research on lower and upper bounds on network connectivity for secure communication in multicast graphs has been carried out ever since. However, to this day, no tight bound has been found for any level of security.

We study this problem from a new direction, i.e., we find the necessary and sufficient conditions (tight lower and upper bounds) for secure communication in the general adversary model with adversary structures, and then apply the results to the threshold model. Our solution uses an extended characterization of the multicast graphs, which is based on our observation on the eavesdropping and separating activities of the Byzantine adversary.

Qiushi Yang, Yvo Desmedt

Secure Computation and Secret Sharing

Constant-Round Private Function Evaluation with Linear Complexity

We consider the problem of private function evaluation (PFE) in the two-party setting. Here, informally, one party holds an input x while the other holds a (circuit describing a) function f; the goal is for one (or both) of the parties to learn f(x) while revealing nothing more to either party. In contrast to the usual setting of secure computation, where the function being computed is known to both parties, PFE is useful in settings where the function (i.e., algorithm) itself must remain secret, e.g., because it is proprietary or classified.

It is known that PFE can be reduced to standard secure computation by having the parties evaluate a universal circuit, and this is the approach taken in most prior work. Using a universal circuit, however, introduces additional overhead and results in a more complex implementation. We show here a completely new technique for PFE that avoids universal circuits, and results in constant-round protocols with communication/computational complexity linear in the size of the circuit computing f. This gives the first constant-round protocol for PFE with linear complexity (without using fully homomorphic encryption), even restricted to semi-honest adversaries.

Jonathan Katz, Lior Malka
Constant-Rounds, Linear Multi-party Computation for Exponentiation and Modulo Reduction with Perfect Security

Bit-decomposition is an important primitive in multi-party computation (MPC). With the help of bit-decomposition, we are able to construct constant-rounds protocols for various MPC problems, such as equality test, comparison, public modulo reduction and private exponentiation, which are four main applications of bit-decomposition. However, when considering perfect security, bit-decomposition does not have a linear communication complexity; thus any protocols involving bit-decomposition inherit this inefficiency. Constructing protocols for MPC problems without relying on bit-decomposition is meaningful because it may provide us with perfectly secure protocols with linear communication complexity. It has already been proved that equality test, comparison and public modulo reduction can be solved without involving bit-decomposition and the communication complexity can be reduced to linear. However, it remains an open problem whether private exponentiation can be done without relying on bit-decomposition. In this paper, perhaps somewhat surprisingly, we show that it can. That is to say, we construct a constant-rounds, linear, perfectly secure protocol for private exponentiation without relying on bit-decomposition, even though bit-decomposition seems essential to this problem.

In a recent work, Ning and Xu proposed a generalization of bit-decomposition and, as a simplification of their generalization, they also proposed a linear protocol for public modulo reduction. In this paper, we show that their generalization can be further generalized; more importantly, as a simplification of our further generalization, we propose a public modulo reduction protocol which is more efficient than theirs.

Chao Ning, Qiuliang Xu
Computational Verifiable Secret Sharing Revisited

Verifiable secret sharing (VSS) is an important primitive in distributed cryptography that allows a dealer to share a secret among n parties in the presence of an adversary controlling at most t of them. In the computational setting, the feasibility of VSS schemes based on commitments was established over two decades ago. Interestingly, all known computational VSS schemes rely on the homomorphic nature of these commitments or achieve weaker guarantees. As homomorphism is not inherent to commitments or to the computational setting in general, a closer look at its utility to VSS is called for. In this work, we demonstrate that homomorphism of commitments is not a necessity for computational VSS in the synchronous or in the asynchronous communication model. We present new VSS schemes based only on the definitional properties of commitments that are almost as good as the existing VSS schemes based on homomorphic commitments. Importantly, they have significantly lower communication complexities than their (statistical or perfect) unconditional counterparts.

Further, in the synchronous communication model, we observe that a crucial interactive complexity measure of round complexity has never been formally studied for computational VSS. Interestingly, for the optimal resiliency conditions, the least possible round complexity in the known computational VSS schemes is identical to that in the (statistical or perfect) unconditional setting: three rounds. Considering the strength of the computational setting, this equivalence is certainly surprising. In this work, we show that three rounds are actually not mandatory for computational VSS. We present the first two-round VSS scheme for $n \geq 2t + 1$ and lower-bound the result tightly by proving the impossibility of one-round computational VSS for $t \geq 2$ or $n \leq 3t$. We also include a new two-round VSS scheme using homomorphic commitments that has the same communication complexity as the well-known three-round Feldman and Pedersen VSS schemes.

Michael Backes, Aniket Kate, Arpita Patra
Natural Generalizations of Threshold Secret Sharing

We present new families of access structures that, similarly to the multilevel and compartmented access structures introduced in previous works, are natural generalizations of threshold secret sharing. Namely, they admit ideal linear secret sharing schemes over every large enough finite field, they can be described by a small number of parameters, and they have useful properties for the applications of secret sharing. The use of integer polymatroids makes it possible to find many new such families and greatly simplifies the proofs of the existence of ideal secret sharing schemes for them.

Oriol Farràs, Carles Padró, Chaoping Xing, An Yang

Public Key Signature

Separating Short Structure-Preserving Signatures from Non-interactive Assumptions

Structure-preserving signatures are signatures whose public keys, messages, and signatures are all group elements in bilinear groups, and the verification is done by evaluating pairing product equations. It is known that any structure-preserving signature in the asymmetric bilinear group setting must include at least 3 group elements per signature and a matching construction exists.

In this paper, we prove that optimally short structure-preserving signatures cannot have a security proof by an algebraic reduction that reduces existential unforgeability against adaptive chosen message attacks to any non-interactive assumption. Towards this end, we present a handy characterization of signature schemes that implies the separation.

Masayuki Abe, Jens Groth, Miyako Ohkubo
Short Signatures from Weaker Assumptions

We provide constructions of (m,1)-programmable hash functions (PHFs) for $m \geq 2$. Mimicking certain programmability properties of random oracles, PHFs can, e.g., be plugged into the generic constructions by Hofheinz and Kiltz (J. Cryptol. 2011) to yield digital signature schemes from the strong RSA and strong q-Diffie-Hellman assumptions. As another application of PHFs, we propose new and efficient constructions of digital signature schemes from weaker assumptions, i.e., from the (standard, non-strong) RSA and the (standard, non-strong) q-Diffie-Hellman assumptions.

The resulting signature schemes offer interesting tradeoffs between efficiency/signature length and the size of the public keys. For example, our q-Diffie-Hellman signatures can be as short as 200 bits; the signing algorithm of our Strong RSA signature scheme can be as efficient as the one in RSA full domain hash; compared to previous constructions, our RSA signatures are shorter (by a factor of roughly 2) and we obtain a considerable efficiency improvement (by an even larger factor). All our constructions are in the standard model, i.e., without random oracles.

Dennis Hofheinz, Tibor Jager, Eike Kiltz
Practical Key-Recovery for All Possible Parameters of SFLASH

In this paper we present a new practical key-recovery attack on the SFLASH signature scheme. SFLASH is a derivative of the older $C^{*}$ encryption and signature scheme that was broken in 1995 by Patarin. In SFLASH, the public key is truncated, and this simple countermeasure prevents Patarin’s attack. The scheme is well-known for having been considered secure and selected in 2004 by the NESSIE project of the European Union to be standardized.

However, SFLASH was practically broken in 2007 by Dubois, Fouque, Stern and Shamir. Their attack breaks the original (and most relevant) parameters, but does not apply when more than half of the public key is truncated. It is therefore possible to choose parameters such that SFLASH is not broken by the existing attacks, although it is less efficient.

We show a key-recovery attack that breaks the full range of parameters in practice, as soon as the information-theoretically required amount of information is available from the public-key. The attack uses new cryptanalytic tools, most notably pencils of matrices and quadratic forms.

Charles Bouillaguet, Pierre-Alain Fouque, Gilles Macario-Rat

Leakage Resilient Cryptography

The Leakage-Resilience Limit of a Computational Problem Is Equal to Its Unpredictability Entropy

A cryptographic assumption is the (unproven) mathematical statement that a certain computational problem (e.g. factoring integers) is computationally hard. The leakage-resilience limit of a cryptographic assumption, and hence of a computational search problem, is the maximal number of bits of information that can be leaked (adaptively) about an instance, without making the problem easy to solve. This implies security of the underlying scheme against arbitrary side channel attacks by a computationally unbounded adversary as long as the number of leaked bits of information is less than the leakage resilience limit.

The hardness of a computational problem is typically characterized by the running time of the fastest (known) algorithm for solving it. We propose to consider, as another natural complexity-theoretic quantity, the success probability of the best polynomial-time algorithm (which can be exponentially small). We refer to its negative logarithm as the unpredictability entropy of the problem (which is defined up to an additive logarithmic term).

A main result of the paper is that the leakage-resilience limit and the unpredictability entropy are equal. This demonstrates, for the first time, the practical relevance of studying polynomial-time algorithms even for problems believed to be hard, and even if the success probability is too small to be of practical interest. With this view, we look at the best probabilistic polynomial time algorithms for the learning with errors and lattice problems that have in recent years gained relevance in cryptography.

We also introduce the concept of witness compression for computational problems, namely the reduction of a problem to another problem for which the witnesses are shorter. The length of the smallest achievable witness for a problem also corresponds to the non-adaptive leakage-resilience limit, and it is also shown to be equal to the unpredictability entropy of the problem. The witness compression concept is also of independent theoretical interest. An example of an implication of our result is that 3-SAT for n variables can be witness compressed from n bits (the variable assignments) to 0.41n bits.
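
Concretely (simple scaling of the figure above): for a 3-SAT instance with n = 1000 variables, the natural 1000-bit witness, one bit per variable assignment, can be compressed to about 410 bits.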

Divesh Aggarwal, Ueli Maurer
Leakage-Resilient Cryptography from the Inner-Product Extractor

We present a generic method to secure various widely-used cryptosystems against arbitrary side-channel leakage, as long as the leakage adheres to three restrictions: first, it is bounded per observation but in total can be arbitrarily large. Second, memory parts leak independently, and, third, the randomness that is used for certain operations comes from a simple (non-uniform) distribution.

As a fundamental building block, we construct a scheme to store a cryptographic secret such that it remains information theoretically hidden, even given arbitrary continuous leakage from the storage. To this end, we use a randomized encoding and develop a method to securely refresh these encodings even in the presence of leakage. We then show that our encoding scheme exhibits an efficient additive homomorphism which can be used to protect important cryptographic tasks such as identification, signing and encryption. More precisely, we propose efficient implementations of the Okamoto identification scheme, and of an ElGamal-based cryptosystem with security against continuous leakage, as long as the leakage adheres to the above-mentioned restrictions. We prove security of the Okamoto scheme under the DL assumption and CCA2 security of our encryption scheme under the DDH assumption.

Stefan Dziembowski, Sebastian Faust
Program Obfuscation with Leaky Hardware

We consider general program obfuscation mechanisms using “somewhat trusted” hardware devices, with the goal of minimizing the usage of the hardware, its complexity, and the required trust. Specifically, our solution has the following properties:

(i) The obfuscation remains secure even if all the hardware devices in use are leaky. That is, the adversary can obtain the result of evaluating any function on the local state of the device, as long as this function has short output. In addition the adversary also controls the communication between the devices.

(ii) The number of hardware devices used in an obfuscation and the amount of work they perform are polynomial in the security parameter, independently of the obfuscated function’s complexity.

(iii) A (universal) set of hardware components, owned by the user, is initialized only once and from that point on can be used with multiple “software-based” obfuscations sent by different vendors.

Nir Bitansky, Ran Canetti, Shafi Goldwasser, Shai Halevi, Yael Tauman Kalai, Guy N. Rothblum
BiTR: Built-in Tamper Resilience

The assumption of the availability of tamper-proof hardware tokens has been used extensively in the design of cryptographic primitives. For example, Katz (Eurocrypt 2007) suggests them as an alternative to other setup assumptions, towards achieving general UC-secure multi-party computation. On the other hand, a lot of recent research has focused on protecting security of various cryptographic primitives against physical attacks such as leakage and tampering.

In this paper we put forward the notion of Built-in Tamper Resilience (BiTR) for cryptographic protocols, capturing the idea that the protocol that is encapsulated in a hardware token is designed in such a way that tampering gives no advantage to an adversary. Our definition is within the UC model, and can be viewed as unifying and extending several prior related works. We provide a composition theorem for BiTR security of protocols, impossibility results, as well as several BiTR constructions for specific cryptographic protocols or tampering function classes. In particular, we achieve general UC-secure computation based on a hardware token that may be susceptible to affine tampering attacks. We also prove that two existing identification and signature schemes (by Schnorr and Okamoto, respectively) are already BiTR against affine attacks (without requiring any modification or encoding). We next observe that non-malleable codes can be used as state encodings to achieve the BiTR property, and show new positive results for deterministic non-malleable encodings for various classes of tampering functions.

Seung Geol Choi, Aggelos Kiayias, Tal Malkin
Backmatter
Metadata
Title
Advances in Cryptology – ASIACRYPT 2011
Editors
Dong Hoon Lee
Xiaoyun Wang
Copyright Year
2011
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-25385-0
Print ISBN
978-3-642-25384-3
DOI
https://doi.org/10.1007/978-3-642-25385-0
