
2009 | Book

Advances in Cryptology – ASIACRYPT 2009

15th International Conference on the Theory and Application of Cryptology and Information Security, Tokyo, Japan, December 6-10, 2009. Proceedings


About this book

This book constitutes the refereed proceedings of the 15th International Conference on the Theory and Application of Cryptology and Information Security, ASIACRYPT 2009, held in Tokyo, Japan, in December 2009. The 41 revised full papers presented were carefully reviewed and selected from 298 submissions. The papers are organized in topical sections on block ciphers, quantum and post-quantum, hash functions I, encryption schemes, multi-party computation, cryptographic protocols, hash functions II, models and frameworks I, cryptanalysis: square and quadratic, models and frameworks II, hash functions III, lattice-based, and side channels.

Table of Contents

Frontmatter

Block Ciphers

Related-Key Cryptanalysis of the Full AES-192 and AES-256

In this paper we present two related-key attacks on the full AES. For AES-256 we show the first key recovery attack that works for all the keys and has 2^99.5 time and data complexity, while the recent attack by Biryukov-Khovratovich-Nikolić works for a weak key class and has much higher complexity. The second attack is the first cryptanalysis of the full AES-192. Both our attacks are boomerang attacks, which are based on the recent idea of finding local collisions in block ciphers and enhanced with the boomerang switching techniques to gain free rounds in the middle.

The extended version of this paper is available at http://eprint.iacr.org/2009/317.pdf.

Alex Biryukov, Dmitry Khovratovich
The Key-Dependent Attack on Block Ciphers

In this paper, we formalize an attack scheme using the key-dependent property, called the key-dependent attack. In this attack, the intermediate value, whose distribution is key-dependent, is considered. The attack determines whether a key is right by conducting a statistical hypothesis test on the intermediate value. The time and data complexity of the key-dependent attack are also discussed.

We also apply the key-dependent attack to reduced-round IDEA. This attack is based on the key-dependent distribution of certain items in the Biryukov-Demirci Equation. The attack on the 5.5-round variant of IDEA requires 2^21 chosen plaintexts and 2^112.1 encryptions. The attack on the 6-round variant requires 2^49 chosen plaintexts and 2^112.1 encryptions. Compared with the previous attacks, the key-dependent attacks on 5.5-round and 6-round IDEA have the lowest time and data complexity, respectively.

Xiaorui Sun, Xuejia Lai
Cascade Encryption Revisited

The security of cascade blockcipher encryption is an important and well-studied problem in theoretical cryptography with practical implications. It is well-known that double encryption improves the security only marginally, leaving triple encryption as the shortest reasonable cascade. In a recent paper, Bellare and Rogaway showed that in the ideal cipher model, triple encryption is significantly more secure than single and double encryption, stating the security of longer cascades as an open question.

In this paper, we propose a new lemma on the indistinguishability of systems extending Maurer’s theory of random systems. In addition to being of independent interest, it allows us to compactly rephrase Bellare and Rogaway’s proof strategy in this framework, thus making the argument more abstract and hence easy to follow. As a result, this allows us to address the security of longer cascades. Our result implies that for blockciphers with smaller key space than message space (e.g. DES), longer cascades improve the security of the encryption up to a certain limit. This partially answers the open question mentioned above.

Peter Gaži, Ueli Maurer
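Concretely, an l-fold cascade encrypts by applying the blockcipher once per independent key, c = E_{k_l}(⋯ E_{k_2}(E_{k_1}(m)) ⋯), and decrypts by applying the inverses in reverse order. A minimal runnable sketch, using a made-up toy 16-bit Feistel cipher purely for illustration (not a real blockcipher, and not the paper's construction):

```python
def _f(half: int, k: int) -> int:
    """Toy round function mixing an 8-bit half with a round key byte."""
    return ((half * 167 + k) ^ (half >> 3)) & 0xFF

def encrypt(block: int, key: int, rounds: int = 8) -> int:
    """Encrypt a 16-bit block under a 16-bit key with a toy Feistel network."""
    l, r = block >> 8, block & 0xFF
    for i in range(rounds):
        k = (key >> ((i % 2) * 8)) & 0xFF  # alternate the two key bytes
        l, r = r, l ^ _f(r, k ^ i)
    return (l << 8) | r

def decrypt(block: int, key: int, rounds: int = 8) -> int:
    """Invert `encrypt` by running the rounds backwards."""
    l, r = block >> 8, block & 0xFF
    for i in reversed(range(rounds)):
        k = (key >> ((i % 2) * 8)) & 0xFF
        l, r = r ^ _f(l, k ^ i), l
    return (l << 8) | r

def cascade_encrypt(block: int, keys) -> int:
    """l-fold cascade: one pass of the cipher per independent key."""
    for k in keys:
        block = encrypt(block, k)
    return block

def cascade_decrypt(block: int, keys) -> int:
    for k in reversed(keys):
        block = decrypt(block, k)
    return block

keys = [0x1234, 0xBEEF, 0x0F0F]  # three independent keys: triple encryption
c = cascade_encrypt(0xABCD, keys)
assert cascade_decrypt(c, keys) == 0xABCD
```

With three keys this is exactly the triple-encryption shape analysed by Bellare and Rogaway; the paper's question is how security continues to grow as the key list gets longer.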

Quantum and Post-Quantum

Quantum-Secure Coin-Flipping and Applications

In this paper, we prove classical coin-flipping secure in the presence of quantum adversaries. The proof uses a recent result of Watrous [20] that allows quantum rewinding for protocols of a certain form. We then discuss two applications. First, the combination of coin-flipping with any non-interactive zero-knowledge protocol leads to an easy transformation from non-interactive zero-knowledge to interactive quantum zero-knowledge. Second, we discuss how our protocol can be applied to a recently proposed method for improving the security of quantum protocols [4], resulting in an implementation without set-up assumptions. Finally, we sketch how to achieve efficient simulation for an extended construction in the common-reference-string model.

Ivan Damgård, Carolin Lunemann
On the Power of Two-Party Quantum Cryptography

We study quantum protocols among two distrustful parties. Under the sole assumption of correctness—guaranteeing that honest players obtain their correct outcomes—we show that every protocol implementing a non-trivial primitive necessarily leaks information to a dishonest player. This extends known impossibility results to all non-trivial primitives. We provide a framework for quantifying this leakage and argue that leakage is a good measure for the privacy provided to the players by a given protocol. Our framework also covers the case where the two players are helped by a trusted third party. We show that despite the help of a trusted third party, the players cannot amplify the cryptographic power of any primitive. All our results hold even against quantum honest-but-curious adversaries who honestly follow the protocol but purify their actions and apply a different measurement at the end of the protocol. As concrete examples, we establish lower bounds on the leakage of standard universal two-party primitives such as oblivious transfer.

Louis Salvail, Christian Schaffner, Miroslava Sotáková
Security Bounds for the Design of Code-Based Cryptosystems

Code-based cryptography is often viewed as an interesting “Post-Quantum” alternative to the classical number theory cryptography. Unlike many other such alternatives, it has the convenient advantage of having only a few, well identified, attack algorithms. However, improvements to these algorithms have made their effective complexity difficult to compute. We give here some lower bounds on the work factor of idealized versions of these algorithms, taking into account all possible tweaks which could improve their practical complexity. The aim of this article is to help designers select durably secure parameters.

Matthieu Finiasz, Nicolas Sendrier

Hash Functions I

Rebound Attack on the Full Lane Compression Function

In this work, we apply the rebound attack to the AES-based SHA-3 candidate Lane. The hash function Lane uses a permutation-based compression function, consisting of a linear message expansion and 6 parallel lanes. In the rebound attack on Lane, we apply several new techniques to construct a collision for the full compression function of Lane-256 and Lane-512. Using a relatively sparse truncated differential path, we are able to solve for a valid message expansion and colliding lanes independently. Additionally, we are able to apply the inbound phase more than once by exploiting the degrees of freedom in the parallel AES states. This allows us to construct semi-free-start collisions for full Lane-256 with 2^96 compression function evaluations and 2^88 memory, and for full Lane-512 with 2^224 compression function evaluations and 2^128 memory.

Krystian Matusiewicz, María Naya-Plasencia, Ivica Nikolić, Yu Sasaki, Martin Schläffer
Rebound Distinguishers: Results on the Full Whirlpool Compression Function

Whirlpool is a hash function based on a block cipher that can be seen as a scaled-up variant of the AES. The main difference is the (compared to AES) extremely conservative key schedule. In this work, we present a distinguishing attack on the full compression function of Whirlpool. We obtain this result by improving the rebound attack on reduced Whirlpool with two new techniques. First, the inbound phase of the rebound attack is extended by up to two rounds using the available degrees of freedom of the key schedule. This results in a near-collision attack on 9.5 rounds of the compression function of Whirlpool with a complexity of 2^176 and negligible memory requirements. Second, we show how to turn this near-collision attack into a distinguishing attack for the full 10-round compression function of Whirlpool. This is the first result on the full Whirlpool compression function.

Mario Lamberger, Florian Mendel, Christian Rechberger, Vincent Rijmen, Martin Schläffer
MD5 Is Weaker Than Weak: Attacks on Concatenated Combiners

We consider a long-standing problem in cryptanalysis: attacks on hash function combiners. In this paper, we propose the first attack that allows collision attacks on combiners with a runtime below the birthday bound of the smaller compression function. This answers an open question posed by Joux in 2004.

As a concrete example we give such an attack on combiners with the widely used hash function MD5. The cryptanalytic technique we use combines a partial birthday phase with a differential inside-out technique, and may be of independent interest. This potentially reduces the effort for a collision attack on a combiner like MD5||SHA-1 for the first time.

Florian Mendel, Christian Rechberger, Martin Schläffer
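The combiner under attack is plain concatenation: hash the same message with both functions and concatenate the digests, so a combiner collision requires colliding both components simultaneously. A minimal sketch with Python's standard hashlib:

```python
import hashlib

def md5_sha1_combiner(msg: bytes) -> bytes:
    """Concatenated combiner MD5(m) || SHA-1(m): a collision for the
    combiner must collide MD5 and SHA-1 on the same message pair."""
    return hashlib.md5(msg).digest() + hashlib.sha1(msg).digest()

digest = md5_sha1_combiner(b"hello")
assert len(digest) == 16 + 20  # 128-bit MD5 part plus 160-bit SHA-1 part
```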
The Intel AES Instructions Set and the SHA-3 Candidates

The search for SHA-3 is now well-underway and the 51 submissions accepted for the first round reflected a wide variety of design approaches. A significant number were built around Rijndael/AES-based operations and, in some cases, the AES round function itself. Many of the design teams pointed to the forthcoming Intel AES instructions set, to appear on Westmere chips during 2010, when making a variety of performance claims. In this paper we study, for the first time, the likely impact of the new AES instructions set on all the SHA-3 candidates that might benefit. As well as distinguishing between those algorithms that are AES-based and those that might be described as AES-inspired, we have developed optimised code for all the former. Since Westmere processors are not yet available, we have developed a novel software technique based on publicly available information that allows us to accurately emulate the performance of these algorithms on the currently available Nehalem processor. This gives us the most accurate insight to-date of the potential performance of SHA-3 candidates using the Intel AES instructions set.

Ryad Benadjila, Olivier Billet, Shay Gueron, Matt J. B. Robshaw

Encryption Schemes

Group Encryption: Non-interactive Realization in the Standard Model

Group encryption (GE) schemes, introduced at Asiacrypt ’07, are an encryption analogue of group signatures with a number of interesting applications. They allow a sender to encrypt a message (in the CCA2 security sense) for some member of a PKI group, concealing that member’s identity (in a CCA2 security sense, as well); the sender is able to convince a verifier that, among other things, the ciphertext is valid and some anonymous certified group member will be able to decrypt the message. As in group signatures, an opening authority has the power of pinning down the receiver’s identity. The initial GE construction uses interactive proofs as part of the design (which can be made non-interactive using the random oracle model), and the design of a fully non-interactive group encryption system is still an open problem. In this paper, we give the first GE scheme, which is a pure encryption scheme in the standard model, i.e., a scheme where the ciphertext is a single message and proofs are non-interactive (and do not employ the random oracle heuristic). As a building block, we use a new public key certification scheme which incurs the smallest amount of interaction, as well.

Julien Cathalo, Benoît Libert, Moti Yung
On Black-Box Constructions of Predicate Encryption from Trapdoor Permutations

Predicate encryption is a recent generalization of identity-based encryption (IBE), broadcast encryption, attribute-based encryption, and more. A natural question is whether there exist black-box constructions of predicate encryption based on generic building blocks, e.g., trapdoor permutations. Boneh et al. (FOCS 2008) recently gave a negative answer for the specific case of IBE.

We show both negative and positive results. First, we identify a combinatorial property on the sets of predicates/attributes and show that, for any sets having this property, no black-box construction of predicate encryption from trapdoor permutations (or even CCA-secure encryption) is possible. Our framework implies the result of Boneh et al. as a special case, and also rules out, e.g., black-box constructions of forward-secure encryption and broadcast encryption (with many excluded users). On the positive side, we identify conditions under which predicate encryption schemes can be constructed based on any CPA-secure (standard) encryption scheme.

Jonathan Katz, Arkady Yerukhimovich
Hierarchical Predicate Encryption for Inner-Products

This paper presents a hierarchical predicate encryption (HPE) scheme for inner-product predicates that is secure (selectively attribute-hiding) in the standard model under new assumptions. These assumptions are non-interactive and of fixed size in the number of adversary’s queries (i.e., not “q-type”), and are proven to hold in the generic model. To the best of our knowledge, this is the first HPE (or delegatable PE) scheme for inner-product predicates that is secure in the standard model. The underlying techniques of our result are based on a new approach on bilinear pairings, which is extended from bilinear pairing groups over linear spaces. They are quite different from the existing techniques and may be of independent interest.

Tatsuaki Okamoto, Katsuyuki Takashima
Hedged Public-Key Encryption: How to Protect against Bad Randomness

Public-key encryption schemes rely for their IND-CPA security on per-message fresh randomness. In practice, randomness may be of poor quality for a variety of reasons, leading to failure of the schemes. Expecting the systems to improve is unrealistic. What we show in this paper is that we can, instead, improve the cryptography to offset the lack of possible randomness. We provide public-key encryption schemes that achieve IND-CPA security when the randomness they use is of high quality, but, when the latter is not the case, rather than breaking completely, they achieve a weaker but still useful notion of security that we call IND-CDA. This hedged public-key encryption provides the best possible security guarantees in the face of bad randomness. We provide simple RO-based ways to make in-practice IND-CPA schemes hedge secure with minimal software changes. We also provide non-RO model schemes relying on lossy trapdoor functions (LTDFs) and techniques from deterministic encryption. They achieve adaptive security by establishing and exploiting the anonymity of LTDFs which we believe is of independent interest.

Mihir Bellare, Zvika Brakerski, Moni Naor, Thomas Ristenpart, Gil Segev, Hovav Shacham, Scott Yilek

Multi Party Computation

Secure Two-Party Computation Is Practical

Secure multi-party computation has been considered by the cryptographic community for a number of years. Until recently it has been a purely theoretical area, with few implementations with which to test various ideas. This has led to a number of optimisations being proposed which are quite restricted in their application. In this paper we describe an implementation of the two-party case, using Yao’s garbled circuits, and present various algorithmic protocol improvements. These optimisations are analysed both theoretically and empirically, using experiments of various adversarial situations. Our experimental data is provided for reasonably large circuits, including one which performs an AES encryption, a problem which we discuss in the context of various possible applications.

Benny Pinkas, Thomas Schneider, Nigel P. Smart, Stephen C. Williams
Secure Multi-party Computation Minimizing Online Rounds

Multi-party secure computation is a general, important procedure for computing any function while keeping private inputs secure. In this work we ask whether preprocessing can allow low-latency (that is, small-round) secure multi-party protocols that are universally composable (UC). In particular, we allow any polynomial-time preprocessing as long as it is independent of the exact circuit and actual inputs of the specific instance problem to solve, with only a bound k on the number of gates in the circuits known.

To address the question, we first define the model of “Multi-Party Computation on Encrypted Data” (MP-CED), implicitly described in [FH96], [JJ00], [CDN01], [DN03]. In this model, computing parties establish a threshold public key in a preprocessing stage, and only then is private data, encrypted under the shared public key, revealed. The computing parties then get the computational circuit they agree upon and evaluate the circuit on the encrypted data. The MP-CED model is interesting since it is well suited for modern computing environments, where many repeated computations on overlapping data are performed.

We present two round-efficient protocols in this model:

The first protocol generates k garbled gates in the preprocessing stage and requires only two (online) rounds.

The second protocol generates a garbled universal circuit of size O(k log k) in the preprocessing stage, and requires only one (online) round (i.e., an obvious lower bound), and therefore it can run asynchronously.

Both protocols are secure against an active, static adversary controlling any number of parties. When the fraction of parties the adversary can corrupt is less than half, the adversary cannot force the protocols to abort.

The MP-CED model is closely related to the general Multi-Party Computation (MPC) model and, in fact, each can be reduced to the other. The first (resp. second) protocol above naturally gives protocols for three-round (resp. two-round) universally composable MPC secure against an active, static adversary controlling any number of parties (with preprocessing).

Seung Geol Choi, Ariel Elbaz, Tal Malkin, Moti Yung
Improved Non-committing Encryption with Applications to Adaptively Secure Protocols

We present a new construction of non-committing encryption schemes. Unlike the previous constructions of Canetti et al. (STOC ’96) and of Damgård and Nielsen (Crypto ’00), our construction achieves all of the following properties:

Optimal round complexity. Our encryption scheme is a 2-round protocol, matching the round complexity of Canetti et al. and improving upon that in Damgård and Nielsen.

Weaker assumptions. Our construction is based on trapdoor simulatable cryptosystems, a new primitive that we introduce as a relaxation of those used in previous works. We also show how to realize this primitive based on hardness of factoring.

Improved efficiency. The amortized complexity of encrypting a single bit is O(1) public key operations on a constant-sized plaintext in the underlying cryptosystem.

As a result, we obtain the first non-committing public-key encryption schemes under hardness of factoring and worst-case lattice assumptions; previously, such schemes were only known under the CDH and RSA assumptions. Combined with existing work on secure multi-party computation, we obtain protocols for multi-party computation secure against a malicious adversary that may adaptively corrupt an arbitrary number of parties under weaker assumptions than were previously known. Specifically, we obtain the first adaptively secure multi-party protocols based on hardness of factoring in both the stand-alone setting and the UC setting with a common reference string.

Seung Geol Choi, Dana Dachman-Soled, Tal Malkin, Hoeteck Wee

Cryptographic Protocols

Non-malleable Statistically Hiding Commitment from Any One-Way Function

We give a construction of non-malleable statistically hiding commitments based on the existence of one-way functions. Our construction employs statistically hiding commitment schemes recently proposed by Haitner and Reingold [1], and special-sound WI proofs. Our proof of security relies on the message scheduling technique introduced by Dolev, Dwork and Naor [2], and requires only the use of black-box techniques.

Zongyang Zhang, Zhenfu Cao, Ning Ding, Rong Ma
Proofs of Storage from Homomorphic Identification Protocols

Proofs of storage (PoS) are interactive protocols allowing a client to verify that a server faithfully stores a file. Previous work has shown that proofs of storage can be constructed from any homomorphic linear authenticator (HLA). The latter, roughly speaking, are signature/message authentication schemes where ‘tags’ on multiple messages can be homomorphically combined to yield a ‘tag’ on any linear combination of these messages.

We provide a framework for building public-key HLAs from any identification protocol satisfying certain homomorphic properties. We then show how to turn any public-key HLA into a publicly-verifiable PoS with communication complexity independent of the file length and supporting an unbounded number of verifications. We illustrate the use of our transformations by applying them to a variant of an identification protocol by Shoup, thus obtaining the first unbounded-use PoS based on factoring (in the random oracle model).

Giuseppe Ateniese, Seny Kamara, Jonathan Katz
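To make the homomorphic-tag idea concrete, here is a minimal sketch of a privately verifiable linear authenticator with tags of the form t_i = α·m_i + PRF(key, i) mod p. This symmetric-key toy (all names and parameters are illustrative) only mirrors the homomorphic combining step; the paper's actual contribution is public-key HLAs built from identification protocols, which this sketch does not capture:

```python
import hashlib
import secrets

P = (1 << 61) - 1  # toy prime field modulus (a Mersenne prime)

def prf(key: bytes, i: int) -> int:
    """Toy PRF: SHA-256 of key || index, reduced mod P."""
    return int.from_bytes(hashlib.sha256(key + i.to_bytes(8, "big")).digest(), "big") % P

def tag_blocks(key: bytes, alpha: int, blocks):
    """Tag each file block m_i as t_i = alpha * m_i + PRF(key, i) mod P."""
    return [(alpha * m + prf(key, i)) % P for i, m in enumerate(blocks)]

def combine(coeffs, values):
    """Homomorphic step: the same linear combination works on blocks or tags."""
    return sum(c * v for c, v in zip(coeffs, values)) % P

def verify(key: bytes, alpha: int, challenge, mu: int, t: int) -> bool:
    """Accept iff t = alpha * mu + sum_i c_i * PRF(key, i) mod P."""
    return t == (alpha * mu + sum(c * prf(key, i) for i, c in challenge)) % P

key = secrets.token_bytes(16)
alpha = 1 + secrets.randbelow(P - 1)               # nonzero secret scalar
blocks = [secrets.randbelow(P) for _ in range(8)]  # the stored "file"
tags = tag_blocks(key, alpha, blocks)

challenge = [(i, secrets.randbelow(P)) for i in (1, 4, 6)]  # indices + coefficients
mu = combine([c for _, c in challenge], [blocks[i] for i, _ in challenge])
t = combine([c for _, c in challenge], [tags[i] for i, _ in challenge])
assert verify(key, alpha, challenge, mu, t)
```

The server only ships the aggregates (mu, t), so the proof size is independent of the file length, which is the communication property the abstract highlights.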
Simple Adaptive Oblivious Transfer without Random Oracle

Adaptive oblivious transfer (OT) is a two-party protocol which simulates an ideal world such that the sender sends M_1, ⋯, M_n to the trusted third party (TTP), and the receiver receives $M_{\sigma_i}$ from the TTP adaptively for i = 1, 2, ⋯, k. This paper shows the first pairing-free fully simulatable adaptive OT. It is also the first fully simulatable scheme which does not rely on dynamic assumptions. Indeed our scheme holds under the DDH assumption.

Kaoru Kurosawa, Ryo Nojima

Hash Functions II

Improved Generic Algorithms for 3-Collisions

An r-collision for a function is a set of r distinct inputs with identical outputs. Actually finding r-collisions for a random map over a finite set of cardinality N requires at least about N^((r-1)/r) units of time on a sequential machine. For r = 2, memoryless and well-parallelizable algorithms are known. The current paper describes memory-efficient and parallelizable algorithms for r ≥ 3. The main results are: (1) A sequential algorithm for 3-collisions, roughly using memory N^α and time N^(1-α) for α ≤ 1/3. In particular, given N^(1/3) units of storage, one can find 3-collisions in time N^(2/3). (2) A parallelization of this algorithm using N^(1/3) processors running in time N^(1/3), where each single processor only needs a constant amount of memory. (3) A generalisation of this second approach to r-collisions for r ≥ 3: given N^s parallel processors, with s ≤ (r-2)/r, one can generate r-collisions roughly in time N^(((r-1)/r)-s), using memory N^(((r-2)/r)-s) on every processor.

Antoine Joux, Stefan Lucks
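For contrast with the paper's N^(1/3)-memory algorithm, the naive way to find a 3-collision is to tabulate outputs until some value occurs three times, costing about N^(2/3) in both time and memory. A toy sketch over a truncated-MD5 "random map" (the 20-bit truncation is an arbitrary choice for illustration, not from the paper):

```python
import hashlib
from collections import defaultdict

def f(x: int, bits: int = 20) -> int:
    """Toy 'random map': MD5 truncated to `bits` bits, so N = 2**bits."""
    digest = hashlib.md5(x.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") % (1 << bits)

def naive_3_collision(bits: int = 20):
    """Tabulate outputs until one value is hit three times.
    Expected cost is on the order of N^(2/3) in time *and* memory;
    the paper keeps the time but cuts the memory to N^(1/3)."""
    seen = defaultdict(list)
    x = 0
    while True:
        y = f(x, bits)
        seen[y].append(x)
        if len(seen[y]) == 3:
            return y, seen[y]
        x += 1

y, (a, b, c) = naive_3_collision()
assert f(a) == f(b) == f(c) == y and len({a, b, c}) == 3
```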
A Modular Design for Hash Functions: Towards Making the Mix-Compress-Mix Approach Practical

The design of cryptographic hash functions is a very complex and failure-prone process. For this reason, this paper puts forward a completely modular and fault-tolerant approach to the construction of a full-fledged hash function from an underlying simpler hash function H and a further primitive F (such as a block cipher), with the property that collision resistance of the construction only relies on H, whereas indifferentiability from a random oracle follows from F being ideal. In particular, the failure of one of the two components must not affect the security property implied by the other component.

The Mix-Compress-Mix (MCM) approach by Ristenpart and Shrimpton (ASIACRYPT 2007) envelops the hash function H between two injective mixing steps, and can be interpreted as a first attempt at such a design. However, the proposed instantiation of the mixing steps, based on block ciphers, makes the resulting hash function impractical: First, it cannot be evaluated online, and second, it produces larger hash values than H, while only inheriting the collision-resistance guarantees for the shorter output. Additionally, it relies on a trapdoor one-way permutation, which seriously compromises the use of the resulting hash function for random oracle instantiation in certain scenarios.

This paper presents the first efficient modular hash function with online evaluation and short output length. The core of our approach is a pair of novel block-cipher based designs for the mixing steps of the MCM approach which rely on significantly weaker assumptions: The first mixing step is realized without any computational assumptions (besides the underlying cipher being ideal), whereas the second mixing step only requires a one-way permutation without a trapdoor, which we prove to be the minimal assumption for the construction of injective random oracles.

Anja Lehmann, Stefano Tessaro
How to Confirm Cryptosystems Security: The Original Merkle-Damgård Is Still Alive!

At Crypto 2005, Coron et al. showed that the Merkle-Damgård hash function (MDHF) with a fixed-input-length random oracle is not indifferentiable from a random oracle RO due to the extension attack. Namely, MDHF does not behave like RO. This result implies that there exists some cryptosystem secure in the RO model but insecure under MDHF. However, this does not imply that no cryptosystem is secure under MDHF. This fact motivates us to establish a criteria methodology for confirming cryptosystems security under MDHF.

In this paper, we confirm cryptosystems security by using the following approach:

1. Find a variant, $\widetilde{\mathsf{RO}}$, of RO which leaks the information needed to realize the extension attack.

2. Prove that MDHF is indifferentiable from $\widetilde{\mathsf{RO}}$.

3. Prove cryptosystems security in the $\widetilde{\mathsf{RO}}$ model.

From the indifferentiability framework, a cryptosystem secure in the $\widetilde{\mathsf{RO}}$ model is also secure under MDHF. Thus we concentrate on finding $\widetilde{\mathsf{RO}}$, which is weaker than RO.

We propose the Traceable Random Oracle (TRO) which leaks enough information to permit the extension attack. By using TRO, we can easily confirm the security of OAEP and variants of OAEP. However, there are several practical cryptosystems whose security cannot be confirmed by TRO (e.g. RSA-KEM). This is because TRO leaks information that is irrelevant to the extension attack. Therefore, we propose another $\widetilde{\mathsf{RO}}$, the Extension Attack Simulatable Random Oracle (ERO), that leaks just the information needed for the extension attack. Fortunately, ERO is necessary and sufficient to confirm the security of cryptosystems under MDHF. This means that the security of any cryptosystem under MDHF is equivalent to that under the ERO model. We prove that RSA-KEM is secure in the ERO model.

Yusuke Naito, Kazuki Yoneyama, Lei Wang, Kazuo Ohta

Models and Frameworks I

On the Analysis of Cryptographic Assumptions in the Generic Ring Model

At Eurocrypt 2009 Aggarwal and Maurer proved that breaking RSA is equivalent to factoring in the generic ring model. This model captures algorithms that may exploit the full algebraic structure of the ring of integers modulo n, but no properties of the given representation of ring elements. This interesting result raises the question of how to interpret proofs in the generic ring model. For instance, one may be tempted to deduce that a proof in the generic model gives some evidence that solving the considered problem is also hard in a general model of computation. But is this reasonable?

We prove that computing the Jacobi symbol is equivalent to factoring in the generic ring model. Since there are simple and efficient non-generic algorithms computing the Jacobi symbol, we show that the generic model cannot give any evidence towards the hardness of a computational problem. Despite this negative result, we also argue why proofs in the generic ring model are still interesting, and show that solving the quadratic residuosity and subgroup decision problems is generically equivalent to factoring.

Tibor Jager, Jörg Schwenk
Zero Knowledge in the Random Oracle Model, Revisited

We revisit previous formulations of zero knowledge in the random oracle model due to Bellare and Rogaway (CCS ’93) and Pass (Crypto ’03), and present a hierarchy for zero knowledge that includes both of these formulations. The hierarchy relates to the programmability of the random oracle, previously studied by Nielsen (Crypto ’02).

We establish a subtle separation between the Bellare-Rogaway formulation and a weaker formulation, which yields a finer distinction than the separation in Nielsen’s work.

We show that zero-knowledge according to each of these formulations is not preserved under sequential composition. We introduce stronger definitions wherein the adversary may receive auxiliary input that depends on the random oracle (as in Unruh (Crypto ’07)) and establish closure under sequential composition for these definitions. We also present round-optimal protocols for NP satisfying the stronger requirements.

Motivated by our study of zero knowledge, we introduce a new definition of proof of knowledge in the random oracle model that accounts for oracle-dependent auxiliary input. We show that two rounds of interaction are necessary and sufficient to achieve zero-knowledge proofs of knowledge according to this new definition, whereas one round of interaction is sufficient in previous definitions.

Extending our work on zero knowledge, we present a hierarchy for circuit obfuscation in the random oracle model, the weakest being that achieved in the work of Lynn, Prabhakaran and Sahai (Eurocrypt ’04). We show that the stronger notions capture precisely the class of circuits that is efficiently and exactly learnable under membership queries.

Hoeteck Wee
A Framework for Universally Composable Non-committing Blind Signatures

A universally composable (UC) blind signature functionality demands users to commit to the message to be blindly signed. It is thereby impossible to realize in the plain model. We show that even non-committing variants of the UC blind signature functionality remain unrealizable in the plain model. We then characterize adaptively secure UC non-committing blind signatures in the common reference string model by presenting equivalent stand-alone security notions. We also present a generic construction based on the conceptually simple blind signature scheme of Fischlin.

Masayuki Abe, Miyako Ohkubo

Cryptanalysis: Square and Quadratic

Cryptanalysis of the Square Cryptosystems

Following the cryptanalyses of the encryption scheme HFE and of the signature scheme SFLASH, no serious alternative multivariate cryptosystems remained, except maybe the signature schemes UOV and HFE--. Recently, two proposals have been made to build highly efficient multivariate cryptosystems around a quadratic internal transformation: the first one is a signature scheme called square-vinegar and the second one is an encryption scheme called square, introduced at CT-RSA 2009.

In this paper, we present a total break of both the square-vinegar signature scheme and the square encryption scheme. For the practical parameters proposed by the authors of these cryptosystems, the complexity of our attacks is about 2^35 operations. All the steps of the attack have been implemented in the Magma computer algebra system and allowed us to experimentally assess the results presented in this paper.

Olivier Billet, Gilles Macario-Rat
Factoring pq^2 with Quadratic Forms: Nice Cryptanalyses

We present a new algorithm based on binary quadratic forms to factor integers of the form N = pq^2. Its heuristic running time is exponential in the general case, but becomes polynomial when special (arithmetic) hints are available, which is exactly the case for the so-called NICE family of public-key cryptosystems based on quadratic fields introduced in the late 90s. Such cryptosystems come in two flavours, depending on whether the quadratic field is imaginary or real. Our factoring algorithm yields a general key-recovery polynomial-time attack on NICE, which works for both versions: Castagnos and Laguillaumie recently obtained a total break of imaginary-NICE, but their attack could not apply to real-NICE. Our algorithm is rather different from classical factoring algorithms: it combines Lagrange’s reduction of quadratic forms with a provable variant of Coppersmith’s lattice-based root finding algorithm for homogeneous polynomials. It is very efficient given either of the following arithmetic hints: the public key of imaginary-NICE, which provides an alternative to the CL attack; or the knowledge that the regulator of the quadratic field $\mathbb{Q}(\sqrt{p})$ is unusually small, just like in real-NICE.

Guilhem Castagnos, Antoine Joux, Fabien Laguillaumie, Phong Q. Nguyen
Attacking Power Generators Using Unravelled Linearization: When Do We Output Too Much?

We look at iterated power generators $s_i = s_{i-1}^e \bmod N$ for a random seed $s_0 \in \mathbb{Z}_N$ that in each iteration output a certain amount of bits. We show that heuristically an output of $(1-\frac{1}{e})\log N$ most significant bits per iteration allows for efficient recovery of the whole sequence. This means in particular that the Blum-Blum-Shub generator should be used with an output of less than half of the bits per iteration, and the RSA generator with e = 3 with less than a 1/3-fraction of the bits.

Our method is lattice-based and introduces a new technique, which combines the benefits of two techniques, namely the method of linearization and the method of Coppersmith for finding small roots of polynomial equations. We call this new technique unravelled linearization.
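As a toy illustration of the generator under analysis, the following Python sketch (with hypothetical small parameters; real deployments use a large RSA modulus) iterates s_i = s_{i-1}^e mod N and outputs the most significant bits of each state:

```python
# Toy sketch of the iterated power generator analyzed above.
# N, e, seed are hypothetical small parameters chosen for illustration only.
N = 104729 * 104723          # product of two primes (toy modulus, 34 bits)
e = 3                        # RSA-generator exponent (e = 2 is BBS-like squaring)
seed = 123456789

def power_generator(s0, exp, n, rounds, out_bits):
    """Iterate s_i = s_{i-1}^exp mod n, outputting the out_bits most
    significant bits of each new state."""
    bitlen = n.bit_length()
    s = s0
    outputs = []
    for _ in range(rounds):
        s = pow(s, exp, n)
        outputs.append(s >> (bitlen - out_bits))   # keep MSBs only
    return outputs

# For e = 3 the result above says to output less than a 1/3-fraction of the
# bits per iteration, so we keep only 11 of the 34 modulus bits.
out = power_generator(seed, e, N, rounds=5, out_bits=11)
print(out)
```

Outputting more than roughly a (1 - 1/e)-fraction of the bits per iteration (two thirds here, for e = 3) would, by the paper's heuristic bound, allow recovery of the entire sequence.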

Mathias Herrmann, Alexander May

Models and Frameworks II

Security Notions and Generic Constructions for Client Puzzles

By a computational puzzle we mean a mildly difficult computational problem that requires resources (processor cycles, memory, or both) to solve. Puzzles have found a variety of uses in security. In this paper we are concerned with client puzzles: a type of puzzle used as a defense against Denial of Service (DoS) attacks. The main contribution of this paper is a formal model for the security of client puzzles. We clarify the interface that client puzzles should offer and give two security notions for puzzles. Both functionality and security are inspired by, and tailored to, the use of puzzles as a defense against DoS attacks. Our definitions fill an important gap: breaking either of the two properties immediately leads to successful DoS attacks. We illustrate this point with an attack against a previously proposed puzzle construction. We also provide a generic construction of a client puzzle which meets our security definitions.

Liqun Chen, Paul Morrissey, Nigel P. Smart, Bogdan Warinschi
Foundations of Non-malleable Hash and One-Way Functions

Non-malleability is an interesting and useful property which ensures that a cryptographic protocol preserves the independence of the underlying values: given for example an encryption ${\cal E}(m)$ of some unknown message m, it should be hard to transform this ciphertext into some encryption ${\cal E}(m^*)$ of a related message m*. This notion has been studied extensively for primitives like encryption, commitments and zero-knowledge. Non-malleability of one-way functions and hash functions has surfaced as a crucial property in several recent results, but it has not undergone a comprehensive treatment so far. In this paper we initiate the study of such non-malleable functions. We start with the design of an appropriate security definition. We then show that non-malleability for hash and one-way functions can be achieved, via a theoretical construction that uses perfectly one-way hash functions and simulation-sound non-interactive zero-knowledge proofs of knowledge (NIZKPoK). We also discuss the complexity of non-malleable hash and one-way functions. Specifically, we show that such functions imply perfect one-wayness and we give a black-box based separation of non-malleable functions from one-way permutations (which our construction bypasses due to the “non-black-box” NIZKPoK based on trapdoor permutations). We exemplify the usefulness of our definition in cryptographic applications by showing that (some variant of) non-malleability is necessary and sufficient to securely replace one of the two random oracles in the IND-CCA encryption scheme by Bellare and Rogaway, and to improve the security of client-server puzzles.
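The kind of mauling that non-malleability rules out can be seen in a deliberately malleable toy scheme: under XOR encryption, an attacker can turn a ciphertext of an unknown m into a valid ciphertext of a related m*. This hypothetical Python demo is an illustration of the concept, not a scheme from the paper:

```python
import os

# A deliberately malleable toy scheme: E(m) = m XOR k (one-time-pad style).
key = os.urandom(8)

def encrypt(m: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(m, key))

def decrypt(c: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(c, key))

m = b"PAY \x00\x00\x00\x01"          # unknown to the attacker
c = encrypt(m)                       # the attacker sees only this

# Flipping ciphertext bits flips the same plaintext bits, so E(m) becomes
# E(m XOR delta): a valid encryption of a related message, without the key.
delta = bytes([0] * 7 + [0x02])
c_star = bytes(a ^ b for a, b in zip(c, delta))

print(decrypt(c_star))               # b'PAY \x00\x00\x00\x03'
```

A non-malleable primitive makes producing such a related c_star infeasible.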

Alexandra Boldyreva, David Cash, Marc Fischlin, Bogdan Warinschi

Hash Functions III

Improved Cryptanalysis of Skein

The hash function Skein is the submission of Ferguson et al. to the NIST Hash Competition, and is arguably a serious candidate for selection as SHA-3. This paper presents the first third-party analysis of Skein, with an extensive study of its main component: the block cipher Threefish. We notably investigate near collisions, distinguishers, impossible differentials, and key recovery using related-key differential and boomerang attacks. In particular, we present near collisions on up to 17 rounds, an impossible differential on 21 rounds, a related-key boomerang distinguisher on 34 rounds, a known-related-key boomerang distinguisher on 35 rounds, and key recovery attacks on up to 32 rounds, out of 72 in total for Threefish-512. None of our attacks directly extends to the full Skein hash. However, the pseudorandomness of Threefish is required to validate the security proofs on Skein, and our results indicate that at least 36 rounds of Threefish seem to be required for optimal security guarantees.

Jean-Philippe Aumasson, Çağdaş Çalık, Willi Meier, Onur Özen, Raphael C. -W. Phan, Kerem Varıcı
Linearization Framework for Collision Attacks: Application to CubeHash and MD6

In this paper, an improved differential cryptanalysis framework for finding collisions in hash functions is provided. Its principle is based on the linearization of compression functions in order to find low-weight differential characteristics, as initiated by Chabaud and Joux. We formalize and refine this approach in several ways: for the problem of finding a conforming message pair whose differential trail follows a linear trail, a condition function is introduced so that finding a collision is equivalent to finding a preimage of the zero vector under the condition function. The dependency table concept then shows how much influence every input bit of the condition function has on each output bit. Careful analysis of the dependency table reveals degrees of freedom that can be exploited to accelerate preimage reconstruction under the condition function. These concepts are applied to an in-depth collision analysis of reduced-round versions of the two SHA-3 candidates CubeHash and MD6, and are demonstrated to give by far the best currently known collision attacks on these SHA-3 candidates.

Eric Brier, Shahram Khazaei, Willi Meier, Thomas Peyrin
Preimages for Step-Reduced SHA-2

In this paper, we present preimage attacks on up to 43-step SHA-256 (around 67% of the total 64 steps) and 46-step SHA-512 (around 57.5% of the total 80 steps), which significantly increases the number of attacked steps compared to the best previously published preimage attack working for 24 steps. The time complexities are 2^251.9 and 2^509 for finding pseudo-preimages, and 2^254.9 and 2^511.5 compression function operations for full preimages. The memory requirements are modest, around 2^6 words for 43-step SHA-256 and 46-step SHA-512. The pseudo-preimage attack also applies to 43-step SHA-224 and SHA-384. Our attack is a meet-in-the-middle attack that uses a range of novel techniques to split the function into two independent parts that can be computed separately and then matched in a birthday-style phase.

Kazumaro Aoki, Jian Guo, Krystian Matusiewicz, Yu Sasaki, Lei Wang

Lattice-Based

Fiat-Shamir with Aborts: Applications to Lattice and Factoring-Based Signatures

We demonstrate how the framework that is used for creating efficient number-theoretic ID and signature schemes can be transferred into the setting of lattices. This results in constructions of the most efficient to-date identification and signature schemes with security based on the worst-case hardness of problems in ideal lattices. In particular, our ID scheme has communication complexity of around 65,000 bits and the length of the signatures produced by our signature scheme is about 50,000 bits. All prior lattice-based identification schemes required on the order of millions of bits to be transferred, while all previous lattice-based signature schemes were either stateful, too inefficient, or produced signatures whose lengths were also on the order of millions of bits. The security of our identification scheme is based on the hardness of finding the approximate shortest vector to within a factor of $\tilde{O}(n^2)$ in the standard model, while the security of the signature scheme is based on the same assumption in the random oracle model. Our protocols are very efficient, with all operations requiring $\tilde{O}(n)$ time.

We also show that the technique for constructing our lattice-based schemes can be used to improve certain number-theoretic schemes. In particular, we are able to shorten the length of the signatures that are produced by Girault’s factoring-based digital signature scheme ([10][11][31]).

Vadim Lyubashevsky
Efficient Public Key Encryption Based on Ideal Lattices
(Extended Abstract)

We describe public key encryption schemes with security provably based on the worst case hardness of the approximate Shortest Vector Problem in some structured lattices, called ideal lattices. Under the assumption that the latter is exponentially hard to solve even with a quantum computer, we achieve CPA-security against subexponential attacks, with (quasi-)optimal asymptotic performance: if n is the security parameter, both keys are of bit-length ${\widetilde{O}}(n)$ and the amortized costs of both encryption and decryption are ${\widetilde{O}}(1)$ per message bit. Our construction adapts the trapdoor one-way function of Gentry et al. (STOC’08), based on the Learning With Errors problem, to structured lattices. Our main technical tools are an adaptation of Ajtai’s trapdoor key generation algorithm (ICALP’99) and a re-interpretation of Regev’s quantum reduction between the Bounded Distance Decoding problem and sampling short lattice vectors.

Damien Stehlé, Ron Steinfeld, Keisuke Tanaka, Keita Xagawa
Smooth Projective Hashing and Password-Based Authenticated Key Exchange from Lattices

We describe a public-key encryption scheme based on lattices, specifically on the hardness of the learning with errors (LWE) problem, that is secure against chosen-ciphertext attacks while admitting (a variant of) smooth projective hashing. This encryption scheme suffices to construct a protocol for password-based authenticated key exchange (PAKE) that can be proven secure based on the LWE assumption in the standard model. We thus obtain the first PAKE protocol whose security relies on a lattice-based assumption.

Jonathan Katz, Vinod Vaikuntanathan

Side Channels

PSS Is Secure against Random Fault Attacks

A fault attack consists in inducing hardware malfunctions in order to recover secrets from electronic devices. One of the most famous fault attacks is Bellcore’s attack against RSA with CRT; it consists in inducing a fault modulo p but not modulo q at the signature generation step; then, by taking a gcd, the attacker can recover the factorization of N = pq. The Bellcore attack applies to any encoding function that is deterministic, for example FDH. Recently, the attack was extended to randomized encodings based on the ISO/IEC 9796-2 signature standard. Extending the attack to other randomized encodings remains an open problem.
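The gcd step of the Bellcore attack can be reproduced in a few lines of Python, here with toy primes, a fixed message representative standing in for a deterministic encoding, and an artificially injected fault (real attacks induce the fault in hardware):

```python
from math import gcd

# Toy RSA-CRT key (hypothetical small primes; real keys are 1024+ bits).
p, q = 104729, 104723
N = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))
m = 42  # message representative after a deterministic encoding such as FDH

def crt_combine(sp, sq):
    """Garner recombination of the two CRT halves into s mod N."""
    h = (pow(q, -1, p) * (sp - sq)) % p
    return sq + q * h

# Correct CRT signing: compute m^d separately mod p and mod q, then combine.
s_p = pow(m, d % (p - 1), p)
s_q = pow(m, d % (q - 1), q)

# Fault: the half modulo p is corrupted while the half modulo q is correct.
s_faulty = crt_combine((s_p + 1) % p, s_q)

# s_faulty^e matches m modulo q but not modulo p, so the gcd reveals q.
recovered = gcd((pow(s_faulty, e, N) - m) % N, N)
print(recovered == q)  # True: the factorization of N is recovered
```

One faulty signature and one gcd suffice, which is why deterministic encodings are so exposed to this attack.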

In this paper, we show that the Bellcore attack cannot be applied to the PSS encoding; namely we show that PSS is provably secure against random fault attacks in the random oracle model, assuming that inverting RSA is hard.

Jean-Sébastien Coron, Avradip Mandal
Cache-Timing Template Attacks

Cache-timing attacks are a serious threat to security-critical software. We show that the combination of vector quantization and hidden Markov model cryptanalysis is a powerful tool for automated analysis of cache-timing data; it can be used to recover critical algorithm state such as key material. We demonstrate its effectiveness by running an attack on the elliptic curve portion of OpenSSL (0.9.8k and under). This involves automated lattice attacks leading to key recovery within hours. We carry out the attack on live cache-timing data without simulating the side channel, showing these attacks are practical and realistic.

Billy Bob Brumley, Risto M. Hakala
Memory Leakage-Resilient Encryption Based on Physically Unclonable Functions

Physical attacks on cryptographic implementations and devices have become a crucial concern. In this context, a recent line of research on a new class of side-channel attacks, called memory attacks, has received increasing attention. These attacks allow an adversary to measure a significant fraction of secret key bits directly from memory, independent of any computational side-channels.

Physically Unclonable Functions (PUFs) represent a promising new technology that allows secrets to be stored in a tamper-evident and unclonable manner. PUFs derive their security from physical structures at the submicron level and are very useful primitives for protecting against memory attacks.

In this paper we aim to take a first step toward combining and binding the algorithmic properties of cryptographic schemes with the physical structure of the underlying hardware by means of PUFs. We introduce a new cryptographic primitive based on PUFs, which we call PUF-PRFs. These primitives can be used as a source of randomness like pseudorandom functions (PRFs). We construct a block cipher based on PUF-PRFs that allows simultaneous protection against algorithmic and physical attackers, in particular against memory attacks. While PUF-PRFs in general differ in some aspects from traditional PRFs, we show a concrete instantiation based on established SRAM technology that closes these gaps.

Frederik Armknecht, Roel Maes, Ahmad-Reza Sadeghi, Berk Sunar, Pim Tuyls
Signature Schemes with Bounded Leakage Resilience

A leakage-resilient cryptosystem remains secure even if arbitrary, but bounded, information about the secret key (and possibly other internal state information) is leaked to an adversary. Denote the length of the secret key by n. We show:

A full-fledged signature scheme tolerating leakage of n − n^ε bits of information about the secret key (for any constant ε > 0), based on general assumptions.

A one-time signature scheme, based on the minimal assumption of one-way functions, tolerating leakage of $(\frac{1}{4}-\epsilon) \cdot n$ bits of information about the signer’s entire state.

A more efficient one-time signature scheme, that can be based on several specific assumptions, tolerating leakage of $(\frac{1}{2}-\epsilon) \cdot n$ bits of information about the signer’s entire state.

The latter two constructions extend to give leakage-resilient t-time signature schemes. All the above constructions are in the standard model.

Jonathan Katz, Vinod Vaikuntanathan
Backmatter
Metadata
Title
Advances in Cryptology – ASIACRYPT 2009
Edited by
Mitsuru Matsui
Copyright Year
2009
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-10366-7
Print ISBN
978-3-642-10365-0
DOI
https://doi.org/10.1007/978-3-642-10366-7
