
2008 | Book

Advances in Cryptology - ASIACRYPT 2008

14th International Conference on the Theory and Application of Cryptology and Information Security, Melbourne, Australia, December 7-11, 2008. Proceedings


About this Book

This book constitutes the refereed proceedings of the 14th International Conference on the Theory and Application of Cryptology and Information Security, ASIACRYPT 2008, held in Melbourne, Australia, in December 2008. The 33 revised full papers presented together with the abstract of 1 invited lecture were carefully reviewed and selected from 208 submissions. The papers are organized in topical sections on multi-party computation, cryptographic protocols, cryptographic hash functions, public-key cryptography, lattice-based cryptography, private-key cryptography, and analysis of stream ciphers.

Table of Contents

Frontmatter

Multi-Party Computation

MPC vs. SFE: Unconditional and Computational Security

In secure computation among a set $\mathcal{P}$ of players one considers an adversary who can corrupt certain players. The three usually considered types of corruption are active, passive, and fail corruption. The adversary’s corruption power is characterized by a so-called adversary structure which enumerates the adversary’s corruption options, each option being a triple (A, E, F) of subsets of $\mathcal{P}$, where the adversary can actively corrupt the players in A, passively corrupt the players in E, and fail-corrupt the players in F.

This paper is concerned with characterizing for which adversary structures general secure function evaluation (SFE) and secure (reactive) multi-party computation (MPC) are possible, in various models. This has been achieved so far only for the very special model of perfect security, where, interestingly, the conditions for SFE and MPC are distinct. Such a separation was first observed by Ishai et al. in the context of computational security. We give the exact conditions for general SFE and MPC to be possible for information-theoretic security (with negligible error probability) and for computational security, assuming a broadcast channel, with and without setup. In all these settings we confirm the strict separation between SFE and MPC. As a simple consequence of our results we solve an open problem for computationally secure MPC in a threshold model with all three corruption types.

Martin Hirt, Ueli Maurer, Vassilis Zikas
Strongly Multiplicative and 3-Multiplicative Linear Secret Sharing Schemes

Strongly multiplicative linear secret sharing schemes (LSSS) have been a powerful tool for constructing secure multi-party computation protocols. However, it remains open whether or not there exist efficient constructions of strongly multiplicative LSSS from general LSSS. In this paper, we propose the new concept of 3-multiplicative LSSS, and establish its relationship with strongly multiplicative LSSS. More precisely, we show that any 3-multiplicative LSSS is a strongly multiplicative LSSS, but the converse is not true; and that any strongly multiplicative LSSS can be efficiently converted into a 3-multiplicative LSSS. Furthermore, we apply 3-multiplicative LSSS to the computation of unbounded fan-in multiplication, which reduces its round complexity to four (from five of the previous protocol based on multiplicative LSSS). We also give two constructions of 3-multiplicative LSSS from Reed-Muller codes and algebraic geometric codes. We believe that the construction and verification of 3-multiplicative LSSS are easier than those of strongly multiplicative LSSS. This presents a step forward in settling the open problem of efficient constructions of strongly multiplicative LSSS from general LSSS.

Zhifang Zhang, Mulan Liu, Yeow Meng Chee, San Ling, Huaxiong Wang
Graph Design for Secure Multiparty Computation over Non-Abelian Groups

Recently, Desmedt et al. studied the problem of achieving secure n-party computation over non-Abelian groups. They considered the passive adversary model and they assumed that the parties were only allowed to perform black-box operations over the finite group G. They showed three results for the n-product function $f_G(x_1, \ldots, x_n) := x_1 \cdot x_2 \cdot \ldots \cdot x_n$, where the input of party $P_i$ is $x_i \in G$ for $i \in \{1, \ldots, n\}$. First, if $t \geq \lceil \tfrac{n}{2} \rceil$ then it is impossible to have a t-private protocol computing $f_G$. Second, they demonstrated that one could t-privately compute $f_G$ for any $t \leq \lceil \tfrac{n}{2} \rceil - 1$ in exponential communication cost. Third, they constructed a randomized algorithm with $O(nt^2)$ communication complexity for any $t < \tfrac{n}{2.948}$.

In this paper, we extend these results in two directions. First, we use percolation theory to show that for any fixed ε > 0, one can design a randomized algorithm for any $t\leq \frac{n}{2+\epsilon}$ using $O(n^3)$ communication complexity, thus nearly matching the known upper bound $\lceil \tfrac{n}{2} \rceil - 1$. This is the first time that percolation theory is used for multiparty computation. Second, we exhibit a deterministic construction having polynomial communication cost for any $t = O(n^{1-\epsilon})$ (again for any fixed ε > 0). Our results extend to the more general function $\widetilde{f}_{G}(x_{1},\ldots,x_{m}) := x_{1} \cdot x_{2} \cdot \ldots \cdot x_{m}$ where m ≥ n and each of the n parties holds one or more input values.

Xiaoming Sun, Andrew Chi-Chih Yao, Christophe Tartary
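To make the object of study concrete, here is a minimal sketch of the n-product function over the non-Abelian group S_3, with permutations written as tuples. This toy models only the function being computed, not the private protocols of the paper:

```python
# n-product function f_G(x_1,...,x_n) := x_1 * x_2 * ... * x_n over S_3.
# Permutations are tuples: perm[i] is the image of i.

def compose(s, t):
    """Permutation product s*t, acting as (s*t)(i) = s(t(i))."""
    return tuple(s[t[i]] for i in range(len(t)))

def f_G(xs):
    """Black-box n-product: multiply the inputs left to right."""
    out = tuple(range(3))          # identity of S_3
    for x in xs:
        out = compose(out, x)
    return out

a = (1, 0, 2)   # transposition swapping 0 and 1
b = (0, 2, 1)   # transposition swapping 1 and 2

# Non-commutativity is what makes this harder than the Abelian case:
assert f_G([a, b]) != f_G([b, a])
```

The assertion at the end illustrates why the Abelian techniques do not directly apply: the order of the parties' inputs matters.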

Invited Talk

Some Perspectives on Complexity-Based Cryptography

In the 1940s, Shannon applied his information theory to build a mathematical foundation for classical cryptography, which studies how information can be securely encrypted and communicated. In the internet age, Turing’s theory of computation has been summoned to augment Shannon’s model and create new frameworks, under which numerous cryptographic applications have blossomed. Fundamental concepts, such as “information” and “knowledge transfer”, often need to be re-examined and reformulated. The amalgamation process is still on-going in view of the many unsolved security issues. In this talk we give a brief overview of the background, and discuss some of the recent developments in complexity-based cryptography. We also raise some open questions and explore directions for future work.

Andrew Chi-Chih Yao

Cryptographic Protocols I

A Modular Security Analysis of the TLS Handshake Protocol

We study the security of the widely deployed Secure Sockets Layer/Transport Layer Security (TLS) key agreement protocol. Our analysis identifies, justifies, and exploits the modularity present in the design of the protocol: the application keys offered to higher level applications are obtained from a master key, which in turn is derived, through interaction, from a pre-master key.

Our first contribution consists of formal models that clarify the security level enjoyed by each of these types of keys. The models that we provide fall under well established paradigms in defining execution, and security notions. We capture the realistic setting where only one of the two parties involved in the execution of the protocol (namely the server) has a certified public key, and where the same master key is used to generate multiple application keys.

The main contribution of the paper is a modular and generic proof of security for the application keys established through the TLS protocol. We show that the transformation used by TLS to derive master keys essentially transforms an arbitrary secure pre-master key agreement protocol into a secure master-key agreement protocol. Similarly, the transformation used to derive application keys works when applied to an arbitrary secure master-key agreement protocol. These results are in the random oracle model. The security of the overall protocol then follows from proofs of security for the basic pre-master key generation protocols employed by TLS.

P. Morrissey, N. P. Smart, B. Warinschi
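The layered derivation that the analysis exploits can be caricatured in a few lines. The sketch below is hypothetical: it uses SHA-256 as a stand-in and made-up labels to show only the pre-master → master → application-key structure, and is not the actual TLS PRF:

```python
import hashlib

# Hypothetical sketch of layered key derivation: application keys come from
# a master key, which comes from a pre-master key. NOT the real TLS PRF.

def derive(secret, label, nonce):
    """Toy one-shot KDF standing in for TLS's interactive derivations."""
    return hashlib.sha256(secret + label + nonce).digest()

pre_master = b"\x01" * 32                      # placeholder pre-master key
randoms = b"client+server randoms"             # placeholder nonces
master = derive(pre_master, b"master secret", randoms)
app_key = derive(master, b"key expansion", randoms)
```

The point of the modular proof is that each `derive` layer can be analyzed independently: a secure pre-master agreement yields a secure master key, which yields secure application keys.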
Ambiguous Optimistic Fair Exchange

Optimistic fair exchange (OFE) is a protocol for solving the problem of exchanging items or services in a fair manner between two parties, a signer and a verifier, with the help of an arbitrator which is called in only when a dispute happens between the two parties. In almost all the previous work on OFE, after obtaining a partial signature from the signer, the verifier can present it to others and show that the signer has indeed committed itself to something corresponding to the partial signature even prior to the completion of the transaction. In some scenarios, this capability given to the verifier may be harmful to the signer. In this paper, we propose the notion of ambiguous optimistic fair exchange (A-OFE), which is an OFE but also requires that the verifier cannot convince anybody about the authorship of a partial signature generated by the signer. We present a formal security model for A-OFE in the multi-user setting and chosen-key model. We also propose an efficient construction with security proven without relying on the random oracle assumption.

Qiong Huang, Guomin Yang, Duncan S. Wong, Willy Susilo
Compact Proofs of Retrievability

In a proof-of-retrievability system, a data storage center convinces a verifier that he is actually storing all of a client’s data. The central challenge is to build systems that are both efficient and provably secure—that is, it should be possible to extract the client’s data from any prover that passes a verification check. In this paper, we give the first proof-of-retrievability schemes with full proofs of security against arbitrary adversaries in the strongest model, that of Juels and Kaliski. Our first scheme, built from BLS signatures and secure in the random oracle model, has the shortest query and response of any proof-of-retrievability with public verifiability. Our second scheme, which builds elegantly on pseudorandom functions (PRFs) and is secure in the standard model, has the shortest response of any proof-of-retrievability scheme with private verifiability (but a longer query). Both schemes rely on homomorphic properties to aggregate a proof into one small authenticator value.

Hovav Shacham, Brent Waters
On the Security of HB# against a Man-in-the-Middle Attack

At EuroCrypt ’08, Gilbert, Robshaw and Seurin proposed HB# to improve on HB+ in terms of transmission cost and security against man-in-the-middle attacks. Although the security of HB# is formally proven against a certain class of man-in-the-middle adversaries, it is only conjectured for the general case. In this paper, we present a general man-in-the-middle attack against HB# and Random-HB#, which can also be applied to all anterior HB-like protocols, that recovers the shared secret in $2^{25}$ or $2^{20}$ authentication rounds for HB# and $2^{34}$ or $2^{28}$ for Random-HB#, depending on the parameter set. We further show that the asymptotic complexity of our attack is polynomial under some conditions on the parameter set which are met by one of those proposed in [8].

Khaled Ouafi, Raphael Overbeck, Serge Vaudenay

Cryptographic Hash Functions I

Hash Functions from Sigma Protocols and Improvements to VSH

We present a general way to get a provably collision-resistant hash function from any (suitable) Σ-protocol. This enables us both to get new designs and to unify and improve previous work. In the first category, we obtain, via a modified version of the Fiat-Shamir protocol, the fastest known hash function that is provably collision-resistant based on the standard factoring assumption. In the second category, we provide a modified version VSH* of VSH which is faster when hashing short messages. (Most Internet packets are short.) We also show that Σ-hash functions are chameleon, thereby obtaining several new and efficient chameleon hash functions with applications to on-line/off-line signing, chameleon signatures and designated-verifier signatures.

Mihir Bellare, Todor Ristov
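Since the abstract highlights that Σ-hash functions are chameleon, a minimal sketch of the classical discrete-log-based chameleon hash may help fix ideas. This is the textbook construction with tiny toy parameters, not the paper’s Σ-hash design:

```python
# Classical chameleon hash CH(m, r) = g^m * h^r mod p, with h = g^x and
# trapdoor x. Anyone can hash; only the trapdoor holder finds collisions.
# Toy parameters only (p = 2q + 1; g generates the order-q subgroup).

p, q = 227, 113
g = 4
x = 29                         # the trapdoor
h = pow(g, x, p)

def ch(m, r):
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p

# Trapdoor collision: for any target m2, solve m + x*r = m2 + x*r2 (mod q).
m, r, m2 = 17, 5, 88
r2 = (r + (m - m2) * pow(x, -1, q)) % q
assert ch(m, r) == ch(m2, r2)  # collision, found only via the trapdoor
```

The collision-finding step is exactly the "chameleon" property: without x, finding such an (m2, r2) would require solving a discrete logarithm.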
Slide Attacks on a Class of Hash Functions

This paper studies the application of slide attacks to hash functions. Slide attacks have mostly been used for block cipher cryptanalysis. But, as shown in the current paper, they also form a potential threat for hash functions, namely for sponge-function-like structures. As it turns out, certain constructions for hash-function-based MACs can be vulnerable to forgery and even to key recovery attacks. In other cases, we can at least distinguish a given hash function from a random oracle.

To illustrate our results, we describe attacks against the

Grindahl

-256 and

Grindahl

-512 hash functions. To the best of our knowledge, this is the first cryptanalytic result on

Grindahl

-512. Furthermore, we point out a slide-based distinguisher attack on a slightly modified version of

RadioGatún

. We finally discuss simple countermeasures as a defense against slide attacks.

Michael Gorski, Stefan Lucks, Thomas Peyrin
Basing PRFs on Constant-Query Weak PRFs: Minimizing Assumptions for Efficient Symmetric Cryptography

Although it is well known that all basic private-key cryptographic primitives can be built from one-way functions, finding weak assumptions from which practical implementations of such primitives exist remains a challenging task. Towards this goal, this paper introduces the notion of a constant-query weak PRF, a function with a secret key which is computationally indistinguishable from a truly random function when evaluated at a constant number s of known random inputs, where s can be as small as two.

We provide iterated constructions of (arbitrary-input-length) PRFs from constant-query weak PRFs that even improve the efficiency of previous constructions based on the stronger assumption of a weak PRF (where polynomially many evaluations are allowed).

One of our constructions directly provides a new mode of operation using a constant-query weak PRF for IND-CPA symmetric encryption which is essentially as efficient as conventional PRF-based counter-mode encryption. Furthermore, our constructions yield efficient modes of operation for keying hash functions (such as MD5 and SHA-1) to obtain iterated PRFs (and hence MACs) which rely solely on the assumption that the underlying compression function is a constant-query weak PRF, which is the weakest assumption ever considered in this context.

Ueli Maurer, Stefano Tessaro

Cryptographic Protocols II

Universally Composable Adaptive Oblivious Transfer

In an oblivious transfer (OT) protocol, a Sender with messages $M_1, \ldots, M_N$ and a Receiver with indices $\sigma_1, \ldots, \sigma_k \in [1, N]$ interact in such a way that at the end the Receiver obtains $M_{\sigma_1},\dots,M_{\sigma_k}$ without learning anything about the other messages and the Sender does not learn anything about $\sigma_1, \ldots, \sigma_k$. In an adaptive protocol, the Receiver may obtain $M_{\sigma_{i-1}}$ before deciding on $\sigma_i$. Efficient adaptive OT protocols are interesting as a building block for secure multiparty computation and for enabling oblivious searches on medical and patent databases.

Historically, adaptive OT protocols were analyzed with respect to a “half-simulation” definition which Naor and Pinkas showed to be flawed. In 2007, Camenisch, Neven, and shelat, and subsequent other works, demonstrated efficient adaptive protocols in the full-simulation model. These protocols, however, all use standard rewinding techniques in their proofs of security and thus are not universally composable. Recently, Peikert, Vaikuntanathan and Waters presented universally composable (UC) non-adaptive OT protocols for the 1-out-of-2 variant, in the static corruption model using certain trusted setup assumptions. However, it is not clear how to preserve UC security while extending these protocols to the adaptive k-out-of-N setting. Further, any such attempt would seem to require O(N) computation per transfer for a database of size N. In this work, we present an efficient and UC-secure adaptive k-out-of-N OT protocol in the same model as Peikert et al., where after an initial commitment to the database, the cost of each transfer is constant. Our construction is secure under bilinear assumptions in the standard model.

Matthew Green, Susan Hohenberger
A Linked-List Approach to Cryptographically Secure Elections Using Instant Runoff Voting

Numerous methods have been proposed to conduct cryptographically secure elections. Most of these protocols focus on 1-out-of-n voting schemes. Few protocols have been devised for preferential voting systems, in which voters provide a list of rankings of the candidates, and many of those treat ballots as if they were ballots in a 1-out-of-n voting scheme. We propose a linked-list-based scheme that provides improved privacy over current schemes, hiding voter preferences that should not be revealed. For large lists of candidates we achieve improved asymptotic performance.

Jason Keller, Joe Kilian
Towards Robust Computation on Encrypted Data

Encryption schemes that support computation on encrypted data are useful in constructing efficient and intuitively simple cryptographic protocols. However, the approach was previously limited to stand-alone and/or honest-but-curious security. In this work, we apply recent results on “non-malleable homomorphic encryption” to construct new protocols with Universally Composable security against active corruption, for certain interesting tasks. Also, we use our techniques to develop non-malleable homomorphic encryption that can handle homomorphic operations involving more than one ciphertext.

Manoj Prabhakaran, Mike Rosulek
Efficient Protocols for Set Membership and Range Proofs

We consider the following problem: given a commitment to a value σ, prove in zero-knowledge that σ belongs to some discrete set Φ. The set Φ can perhaps be a list of cities or clubs; often Φ can be a numerical range such as $[1, 2^{20}]$. This problem arises in e-cash systems, anonymous credential systems, and various other practical uses of zero-knowledge protocols.

When using commitment schemes relying on RSA-like assumptions, there are solutions to this problem which require only a constant number of RSA-group elements to be exchanged between the prover and verifier [5, 15, 16]. However, for many commitment schemes based on bilinear group assumptions, these techniques do not work, and the best known protocols require O(k) group elements to be exchanged, where k is a security parameter.

In this paper, we present two new approaches to building set-membership proofs. The first is based on bilinear group assumptions. When applied to the case where Φ is a range of integers, our protocols require $O(\frac{k}{\log k - \log\log k})$ group elements to be exchanged. Not only is this result asymptotically better, but the constants are small enough to provide significant improvements even for small ranges. Indeed, for a discrete-logarithm-based setting, our new protocol is an order of magnitude more efficient than previously known ones.

We also discuss alternative implementations of our membership proof based on the strong RSA assumption. Depending on the application, e.g., when Φ is a published set of values such as frequent flyer clubs, cities, or other ad hoc collections, these alternatives also outperform prior solutions.

Jan Camenisch, Rafik Chaabouni, abhi shelat

Cryptographic Hash Functions II

Preimage Attacks on 3, 4, and 5-Pass HAVAL

This paper proposes preimage attacks on the hash function HAVAL, whose output length is 256 bits. This paper has three main contributions: a preimage attack on 3-pass HAVAL at the complexity of $2^{225}$, a preimage attack on 4-pass HAVAL at the complexity of $2^{241}$, and a preimage attack on 5-pass HAVAL reduced to 151 steps at the complexity of $2^{241}$. Moreover, we optimize the computational order for brute-force attack on full 5-pass HAVAL and its complexity is $2^{254.89}$. As far as we know, the proposed attack on 3-pass HAVAL is the best attack and there is no preimage attack so far on 4-pass and 5-pass HAVAL. Note that the complexity of the previous best attack on 3-pass HAVAL is $2^{230}$. Technically, our attacks find pseudo-preimages of HAVAL by combining the meet-in-the-middle and local-collision approaches, then convert the pseudo-preimages to a preimage by using a generic algorithm.

Yu Sasaki, Kazumaro Aoki
How to Fill Up Merkle-Damgård Hash Functions

Many of the popular Merkle-Damgård hash functions have turned out to be not collision-resistant (CR). The problem is that we no longer know if these hash functions are even second-preimage-resistant (SPR) or one-way (OW), without the underlying compression functions being CR. We remedy this situation by introducing the “split padding” into a current Merkle-Damgård hash function H. The patched hash function $\bar{H}$ resolves the problem in the following ways: (i) $\bar{H}$ is SPR if the underlying compression function h satisfies an “SPR-like” property, and (ii) $\bar{H}$ is OW if h satisfies an “OW-like” property. The assumptions we make about h are provided with simple definitions and clear relations to other security notions. In particular, they belong to the class whose existence is ensured by that of OW functions, revealing an evident separation from the strong CR requirement. Furthermore, we get the full benefit from the patch at almost no expense: The new scheme requires no change in the internals of a hash function, runs as efficiently as the original, and as usual inherits CR from h. Thus the patch has significant effects on systems and applications whose security relies heavily on the SPR or OW property of Merkle-Damgård hash functions.

Kan Yasuda
Limits of Constructive Security Proofs

The collision-resistance of hash functions is an important foundation of many cryptographic protocols. Formally, collision-resistance can only be expected if the hash function in fact constitutes a parametrized family of functions, since for a single function, the adversary could simply know a single hard-coded collision. In practical applications, however, unkeyed hash functions are a common choice, creating a gap between the practical application and the formal proof, and, even more importantly, the concise mathematical definitions.

A pragmatic way out of this dilemma was recently formalized by Rogaway: instead of requiring that no adversary exists that breaks the protocol (existential security), one requires that given an adversary that breaks the protocol, we can efficiently construct a collision of the hash function

using an explicitly given reduction

(constructive security).

In this paper, we show the limits of this approach: We give a protocol that is existentially secure, but that provably cannot be proven secure using a constructive security proof.

Consequently, constructive security—albeit constituting a useful improvement over the state of the art—is not comprehensive enough to encompass all protocols that can be dealt with using existential security proofs.

Michael Backes, Dominique Unruh

Public-Key Cryptography I

Efficient Chosen Ciphertext Secure Public Key Encryption under the Computational Diffie-Hellman Assumption

Recently Cash, Kiltz, and Shoup [13] showed a variant of the Cramer-Shoup (CS) scheme [14] whose chosen-ciphertext (CCA) security relies on the computational Diffie-Hellman (CDH) assumption. The cost for this high security is that the size of ciphertexts is much longer than in the CS scheme (which is based on the decisional Diffie-Hellman assumption). In this paper, we show how to achieve CCA-security under the CDH assumption without increasing the size of ciphertexts. We also show a more efficient scheme under the hashed Diffie-Hellman assumption.

Both of our schemes are based on a certain broadcast encryption (BE) scheme while the Cash-Kiltz-Shoup scheme is based on the Twin DH problem. Of independent interest, we also show a generic method of constructing CCA-secure PKE schemes from BE schemes.

Goichiro Hanaoka, Kaoru Kurosawa
Twisted Edwards Curves Revisited

This paper introduces fast algorithms for performing group operations on twisted Edwards curves, pushing the recent speed limits of Elliptic Curve Cryptography (ECC) forward in a wide range of applications. Notably, the new addition algorithm uses $8\mathrm{\textbf{M}}$ for suitably selected curve constants. In comparison, the fastest point addition algorithms for (twisted) Edwards curves stated in the literature use $9\mathrm{\textbf{M}} + 1\mathrm{\textbf{S}}$. It is also shown that the new addition algorithm can be implemented with four processors, dropping the effective cost to $2\mathrm{\textbf{M}}$. This implies an effective speed increase by the full factor of 4 over the sequential case. Our results allow faster implementation of elliptic curve scalar multiplication. In addition, the new point addition algorithm can be used to provide a natural protection from side channel attacks based on simple power analysis (SPA).

Huseyin Hisil, Kenneth Koon-Ho Wong, Gary Carter, Ed Dawson
On the Validity of the Φ-Hiding Assumption in Cryptographic Protocols

Most cryptographic protocols, in particular asymmetric protocols, are based on assumptions about the computational complexity of mathematical problems. The Φ-Hiding assumption is such an assumption. It states that if $p_1$ and $p_2$ are small primes exactly one of which divides $\varphi(N)$, where N is a number whose factorization is unknown and $\varphi$ is Euler’s totient function, then there is no polynomial-time algorithm to distinguish which of the primes $p_1$ and $p_2$ divides $\varphi(N)$ with a probability significantly greater than 1/2. In this paper, it will be shown that the Φ-Hiding assumption is not valid when applied to a modulus $N = PQ^{2e}$, where P, Q > 2 are primes, e > 0 is an integer and P hides the prime in question. This indicates that cryptographic protocols using such moduli and relying on the Φ-Hiding assumption must be handled with care.

Christian Schridde, Bernd Freisleben
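The divisibility condition at the heart of the assumption is easy to state in code. The toy numbers below are invented for illustration (real Φ-hiding moduli are far larger, and the point of the assumption is that the check below is infeasible when only N is known):

```python
# Toy illustration of the Phi-Hiding setting. All numbers are made up
# and tiny; with the factorization in hand, spotting the hidden prime
# is trivial, which is exactly what the assumption forbids given only N.

P, Q = 13, 23                  # secret prime factors
N = P * Q                      # public modulus
phi = (P - 1) * (Q - 1)        # Euler's totient; secret without P, Q

p1, p2 = 3, 5                  # candidate small primes
# Phi-Hiding setup: exactly one of p1, p2 divides phi(N)
assert (phi % p1 == 0) != (phi % p2 == 0)

hidden = p1 if phi % p1 == 0 else p2   # the prime N "hides"
```

Here φ(N) = 12 · 22 = 264, which 3 divides but 5 does not, so N hides the prime 3.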
Chosen Ciphertext Security with Optimal Ciphertext Overhead

Every public-key encryption scheme has to incorporate a certain amount of randomness into its ciphertexts to provide semantic security against chosen ciphertext attacks (IND-CCA). The difference between the length of a ciphertext and the embedded message is called the ciphertext overhead. While a generic brute-force adversary running in $2^t$ steps gives a theoretical lower bound of t bits on the ciphertext overhead for IND-CPA security, the best known IND-CCA secure schemes demand roughly 2t bits even in the random oracle model. Is the t-bit gap essential for achieving IND-CCA security?

We close the gap by proposing an IND-CCA secure scheme whose ciphertext overhead matches the generic lower bound up to a small constant. Our scheme uses a variation of a four-round Feistel network in the random oracle model and hence belongs to the family of OAEP-based schemes. Maybe of independent interest is a new efficient method to encrypt long messages exceeding the length of the permutation while retaining the minimal overhead.

Masayuki Abe, Eike Kiltz, Tatsuaki Okamoto

Lattice-Based Cryptography

Concurrently Secure Identification Schemes Based on the Worst-Case Hardness of Lattice Problems

In this paper, we show that two variants of Stern’s identification scheme [IEEE Transactions on Information Theory ’96] are provably secure against concurrent attack under assumptions on the worst-case hardness of lattice problems. These assumptions are weaker than those for the previous lattice-based identification schemes of Micciancio and Vadhan [CRYPTO ’03] and of Lyubashevsky [PKC ’08]. We also construct efficient ad hoc anonymous identification schemes based on the lattice problems by modifying the variants.

Akinori Kawachi, Keisuke Tanaka, Keita Xagawa
Rigorous and Efficient Short Lattice Vectors Enumeration

The Kannan-Fincke-Pohst enumeration algorithm for the shortest and closest lattice vector problems is the keystone of all strong lattice reduction algorithms and their implementations. In the context of the fast developing lattice-based cryptography, the practical security estimates derive from floating-point implementations of these algorithms. However, these implementations behave very unexpectedly and make these security estimates debatable. Among others, numerical stability issues seem to occur and raise doubts on what is actually computed. We give here the first results on the numerical behavior of the floating-point enumeration algorithm. They provide a theoretical and practical framework for the use of floating-point numbers within strong reduction algorithms, which could lead to more sensible hardness estimates.

Xavier Pujol, Damien Stehlé
Solving Linear Equations Modulo Divisors: On Factoring Given Any Bits

We study the problem of finding solutions to linear equations modulo an unknown divisor p of a known composite integer N. An important application of this problem is factorization of N with given bits of p. It is well-known that this problem is polynomial-time solvable if at most half of the bits of p are unknown and if the unknown bits are located in one consecutive block. We introduce a heuristic algorithm that extends factoring with known bits to an arbitrary number n of blocks. Surprisingly, we are able to show that ln(2) ≈ 70% of the bits are sufficient for any n in order to find the factorization. The algorithm’s running time is however exponential in the parameter n. Thus, our algorithm is polynomial time only for $n = {\mathcal O}(\log\log N)$ blocks.

Mathias Herrmann, Alexander May

Private-Key Cryptography

An Infinite Class of Balanced Functions with Optimal Algebraic Immunity, Good Immunity to Fast Algebraic Attacks and Good Nonlinearity

After the improvement by Courtois and Meier of the algebraic attacks on stream ciphers and the introduction of the related notion of algebraic immunity, several constructions of infinite classes of Boolean functions with optimum algebraic immunity have been proposed. All of them gave functions whose algebraic degrees are high enough for resisting the Berlekamp-Massey attack and the recent Rønjom-Helleseth attack, but whose nonlinearities either achieve the worst possible value (given by Lobanov’s bound) or are slightly superior to it. Hence, these functions do not allow resistance to fast correlation attacks. Moreover, they do not behave well with respect to fast algebraic attacks. In this paper, we study an infinite class of functions which achieve an optimum algebraic immunity. We prove that they have an optimum algebraic degree and a much better nonlinearity than all the previously obtained infinite classes of functions. We check that, at least for small values of the number of variables, the functions of this class have in fact a very good nonlinearity and also a good behavior against fast algebraic attacks.

Claude Carlet, Keqin Feng
An Improved Impossible Differential Attack on MISTY1

MISTY1 is a Feistel block cipher that received a great deal of cryptographic attention. Its recursive structure, as well as the added FL layers, have been successful in thwarting various cryptanalytic techniques. The best known attacks on reduced variants of the cipher are on either a 4-round variant with the FL functions, or a 6-round variant without the FL functions (out of the 8 rounds of the cipher).

In this paper we combine the generic impossible differential attack against 5-round Feistel ciphers with the dedicated Slicing attack to mount an attack on 5-round MISTY1 with all the FL functions with time complexity of $2^{46.45}$ simple operations. We then extend the attack to 6-round MISTY1 with the FL functions present, leading to the best known cryptanalytic result on the cipher. We also present an attack on 7-round MISTY1 without the FL layers.

Orr Dunkelman, Nathan Keller

Public-Key Cryptography II

Generalized Identity Based and Broadcast Encryption Schemes

We provide a general framework for constructing identity-based and broadcast encryption systems. In particular, we construct a general encryption system called spatial encryption, from which many systems with a variety of properties follow. The ciphertext size in all these systems is independent of the number of users involved and is just three group elements. Private-key size grows with the complexity of the system. One application of these results gives the first broadcast HIBE system with short ciphertexts. Broadcast HIBE solves a natural problem having to do with identity-based encrypted email.

Dan Boneh, Michael Hamburg
Speeding Up the Pollard Rho Method on Prime Fields

We propose a method to speed up the r-adding walk on multiplicative subgroups of the prime field. The r-adding walk is an iterating function used with the Pollard rho algorithm and is known to require fewer iterations than Pollard’s original iterating function in reaching a collision. Our main idea is to follow through the r-adding walk with only partial information about the nodes reached.

The trail traveled by the proposed method is a normal r-adding walk, but with significantly reduced execution time for each iteration. While a single iteration of most r-adding walks on F_p requires a multiplication of two integers of size log p, the proposed method requires an operation of complexity only linear in log p, using a pre-computed table of size O((log p)^(r+1) · loglog p). In practice, our rudimentary implementation of the proposed method increased the speed of Pollard rho with r-adding walks by a factor of more than 10 for 1024-bit random primes p.

Jung Hee Cheon, Jin Hong, Minkyu Kim
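For context, the classical r-adding walk that the paper accelerates can be sketched as follows. This is a minimal illustration on a small prime field, with illustrative parameter choices; it implements only the standard walk with Floyd cycle finding and omits the paper's table-based per-iteration speedup:

```python
import random

def r_adding_dlog(g, h, p, order, r=8, tries=50):
    """Solve g^x = h in the subgroup of F_p^* of the given (prime) order
    generated by g, using an r-adding walk with Floyd cycle finding."""
    for seed in range(tries):
        rng = random.Random(seed)
        # r random multipliers M_i = g^a_i * h^b_i (mod p); each step
        # multiplies by the one selected by the index function x mod r.
        mults = []
        for _ in range(r):
            a, b = rng.randrange(order), rng.randrange(order)
            mults.append((pow(g, a, p) * pow(h, b, p) % p, a, b))

        def step(x, a, b):
            m, da, db = mults[x % r]
            return x * m % p, (a + da) % order, (b + db) % order

        tort = step(1, 0, 0)          # x_1
        hare = step(*tort)            # x_2
        while tort[0] != hare[0]:     # compare x_i with x_2i
            tort = step(*tort)
            hare = step(*step(*hare))
        # Collision: g^a * h^b = g^A * h^B, so x*(b-B) = A-a (mod order).
        (_, a, b), (_, A, B) = tort, hare
        if (b - B) % order == 0:
            continue                  # degenerate collision, retry new walk
        x = (A - a) * pow(b - B, -1, order) % order
        if pow(g, x, p) == h:
            return x
    return None

p, order = 1019, 509   # 4 generates the subgroup of order 509 in F_1019^*
g = 4
h = pow(g, 123, p)
print(r_adding_dlog(g, h, p, order))   # 123
```

Each iteration here costs one full modular multiplication of log p-size integers; the paper's contribution is precisely to replace that per-step cost by a table lookup of complexity linear in log p.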
Sufficient Conditions for Intractability over Black-Box Groups: Generic Lower Bounds for Generalized DL and DH Problems

The generic group model is a valuable methodology for analyzing the computational hardness of number-theoretic problems used in cryptography. Although generic hardness proofs exhibit many similarities, the computational intractability of every newly introduced problem still needs to be proven from scratch, a task that can easily become complicated and cumbersome when done rigorously. In this paper we make the first steps towards overcoming this problem by identifying criteria which guarantee the hardness of a problem in an extended generic model where algorithms are allowed to perform any operation representable by a polynomial function.

Andy Rupp, Gregor Leander, Endre Bangerter, Alexander W. Dent, Ahmad-Reza Sadeghi
OAEP Is Secure under Key-Dependent Messages

Key-dependent message security, KDM security for short, was introduced by Black, Rogaway and Shrimpton to address the case where key cycles occur among encryptions, e.g., a key is encrypted with itself. We extend this definition to cover adaptive corruptions and arbitrary active attacks, yielding a notion we call adKDM security, which incorporates several novel design choices and differs substantially from prior definitions of public-key security. We also show that the OAEP encryption scheme (using a partial-domain one-way function) satisfies the strong notion of adKDM security in the random oracle model. The OAEP construction thus constitutes a suitable candidate for implementing symbolic abstractions of encryption schemes in a computationally sound manner under active adversaries.

Michael Backes, Markus Dürmuth, Dominique Unruh

Analysis of Stream Ciphers

Cryptanalysis of Sosemanuk and SNOW 2.0 Using Linear Masks

In this paper, we present a correlation attack on Sosemanuk with complexity less than 2^150. Sosemanuk is a software-oriented stream cipher proposed by Berbain et al. to the eSTREAM call for stream ciphers and has been selected in the final portfolio. Sosemanuk consists of a linear feedback shift register (LFSR) of ten 32-bit words and a finite state machine (FSM) of two 32-bit words. By combining linear approximation relations regarding the FSM update function, the FSM output function and the keystream output function, it is possible to derive linear approximation relations with correlation −2^−21.41 involving only the keystream words and the LFSR initial state. Using such linear approximation relations, we mount a correlation attack with complexity 2^147.88 and success probability 99% to recover the initial internal state of 384 bits. We also mount a correlation attack on SNOW 2.0 with complexity 2^204.38.

Jung-Keun Lee, Dong Hoon Lee, Sangwoo Park
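The way several small correlations combine into one overall correlation, as in the −2^−21.41 figure above, is governed by the piling-up lemma: XORing independent approximations multiplies their correlations. A quick empirical check (illustrative only, not the paper's analysis; the probabilities are made up for the demonstration):

```python
import random

def correlation(bits):
    # corr(b) = Pr[b = 0] - Pr[b = 1]
    return 1 - 2 * sum(bits) / len(bits)

rng = random.Random(1)
n = 200_000
# two independent approximations, each holding with probability 0.6
# (correlation 0.2 each)
a = [0 if rng.random() < 0.6 else 1 for _ in range(n)]
b = [0 if rng.random() < 0.6 else 1 for _ in range(n)]
# piling-up lemma: corr(a XOR b) = corr(a) * corr(b) = 0.04
print(correlation([u ^ v for u, v in zip(a, b)]))
```

Detecting a correlation c reliably requires on the order of c^−2 samples, which is why even correlations as small as 2^−21.41 still lead to attacks with the large but sub-exhaustive complexities quoted above.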
A New Attack on the LEX Stream Cipher

In [6], Biryukov presented a new methodology of stream cipher design, called leak extraction. The stream cipher LEX, based on this methodology and on the AES block cipher, was selected to phase 3 of the eSTREAM competition. The suggested methodology seemed promising, and LEX, due to its elegance, simplicity and performance, was expected to be selected to the eSTREAM portfolio.

In this paper we present a key recovery attack on LEX. The attack requires about 2^36.3 bytes of key-stream produced by the same key (possibly under many different IVs), and retrieves the secret key in time of 2^112 simple operations. Following a preliminary version of our attack, LEX was discarded from the final portfolio of eSTREAM.

Orr Dunkelman, Nathan Keller
Breaking the F-FCSR-H Stream Cipher in Real Time

The F-FCSR stream cipher family was presented a few years ago. Apart from some flaws in the initial propositions, corrected at a later stage, there are no known weaknesses in the core of these algorithms. The hardware-oriented version, called F-FCSR-H, is one of the ciphers selected for the eSTREAM portfolio.

In this paper we present a new and severe cryptanalytic attack on the F-FCSR stream cipher family. We give the details of the attack when applied on F-FCSR-H. The attack requires a few Mbytes of received sequence and the complexity is low enough to allow the attack to be performed on a single PC within seconds.

Martin Hell, Thomas Johansson
Backmatter
Metadata
Title
Advances in Cryptology - ASIACRYPT 2008
Edited by
Josef Pieprzyk
Copyright Year
2008
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-89255-7
Print ISBN
978-3-540-89254-0
DOI
https://doi.org/10.1007/978-3-540-89255-7