
About this Book

This book constitutes the refereed proceedings of the 18th International Conference on the Theory and Application of Cryptology and Information Security, Asiacrypt 2012, held in Beijing, China, in December 2012. The 43 full papers presented were carefully reviewed and selected from 241 submissions. They are organized in topical sections named: public-key cryptography, foundation, symmetric cipher, security proof, lattice-based cryptography and number theory, hash function, cryptographic protocol, and implementation issues.

Table of Contents

Frontmatter

Invited Talks

Pairing-Based Cryptography: Past, Present, and Future

While pairings were first introduced in cryptography as a tool to attack the discrete-log problem on certain elliptic curves, they have since found numerous applications in the construction of cryptographic systems. To this day many problems can only be solved using pairings. A few examples include collusion-resistant broadcast encryption and traitor tracing with short keys, 3-way Diffie-Hellman, and short signatures.

In this talk we survey some of the existing applications of pairings to cryptography, but mostly focus on open problems that cannot currently be solved using pairings. In particular we explain where the current techniques fail and outline a few potential directions for future progress.

One of the central applications of pairings is identity-based encryption and its generalization to functional encryption. While identity-based encryption can be built using arithmetic modulo composites and using lattices, constructions based on pairings currently provide the most expressive functional encryption systems. Constructing comparable functional encryption systems from lattices and composite arithmetic is a wonderful open problem. Again we survey the state of the art and outline a few potential directions for further progress.

Going beyond pairings (a.k.a. bilinear maps), a central open problem in public-key cryptography is constructing a secure tri-linear or, more generally, a secure $n$-linear map. That is, construct groups $G$ and $G_{\scriptscriptstyle\mathrm{T}}$ where discrete-log in $G$ is intractable and yet there is an efficiently computable non-degenerate $n$-linear map $e:G^n \to G_{\scriptscriptstyle\mathrm{T}}$. Such a construct can lead to powerful solutions to the problems mentioned in the first paragraph as well as to new functional encryption and homomorphic encryption systems. Currently, no such construct is known and we hope this talk will encourage further research on this problem.
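To make the interface concrete, here is a minimal, deliberately insecure Python toy of the syntax such an $n$-linear map must provide. The "groups" are a small prime-order subgroup in which discrete logs are trivially computable, so no security is claimed; the sketch only illustrates the multilinearity property $e(g^{a_1},\ldots,g^{a_n}) = e(g,\ldots,g)^{a_1\cdots a_n}$.

```python
# Toy (insecure) n-linear map: G = <g> subgroup of Z_p^*, with G_T = G.
# Multilinearity is implemented by taking discrete logs, which is exactly
# why this toy offers no security; it only demonstrates the interface.

p = 1019                      # small prime, p = 2q + 1
q = 509                       # prime order of the subgroup G
g = pow(2, (p - 1) // q, p)   # generator of the order-q subgroup

def dlog(h):
    """Brute-force discrete log base g (feasible only in this toy group)."""
    x, acc = 0, 1
    while acc != h:
        acc = acc * g % p
        x += 1
    return x

def e(*elems):
    """Toy n-linear map e: G^n -> G_T."""
    prod = 1
    for h in elems:
        prod = prod * dlog(h) % q
    return pow(g, prod, p)

# Check 3-linearity: e(g^a, g^b, g^c) == e(g, g, g)^(a*b*c)
a, b, c = 5, 7, 11
assert e(pow(g, a, p), pow(g, b, p), pow(g, c, p)) == pow(e(g, g, g), a * b * c, p)
```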

Dan Boneh

Some Mathematical Mysteries in Lattices

The lattice, as a basic object in mathematics, has been studied by many prominent figures, including Gauss, Hermite, Voronoi, Minkowski, Davenport, Hlawka, Rogers and many others still active today. It is one of the most important cornerstones of the Geometry of Numbers, a classic branch of Number Theory. During recent decades, this pure mathematical concept has found remarkable applications in Cryptography, in particular through algorithmic approaches. The main purpose of this talk is to present some basic mathematical problems and results (old and new) about lattices that may prove useful in Cryptography in the future. These problems reflect some of the main interests of mathematicians in lattices.

Chuanming Zong

Public-Key Cryptography I

Constant-Size Structure-Preserving Signatures: Generic Constructions and Simple Assumptions

This paper presents efficient structure-preserving signature schemes based on assumptions as simple as Decisional-Linear. We first give two general frameworks for constructing fully secure signature schemes from weaker building blocks such as variations of one-time signatures and random-message secure signatures. They can be seen as refinements of the Even-Goldreich-Micali framework, and preserve many desirable properties of the underlying schemes such as constant signature size and structure preservation. We then instantiate them based on simple (i.e., not q-type) assumptions over symmetric and asymmetric bilinear groups. The resulting schemes are structure-preserving and yield constant-size signatures consisting of 11 to 17 group elements, which compares favorably to existing schemes relying on q-type assumptions for their security.

Masayuki Abe, Melissa Chase, Bernardo David, Markulf Kohlweiss, Ryo Nishimaki, Miyako Ohkubo

Dual Form Signatures: An Approach for Proving Security from Static Assumptions

In this paper, we introduce the abstraction of Dual Form Signatures as a useful framework for proving security (existential unforgeability) from static assumptions for schemes with special structure that are used as a basis of other cryptographic protocols and applications. We demonstrate the power of this framework by proving security under static assumptions for close variants of pre-existing schemes: the LRSW-based Camenisch-Lysyanskaya signature scheme, and the identity-based sequential aggregate signatures of Boldyreva, Gentry, O’Neill, and Yum. The Camenisch-Lysyanskaya signature scheme was previously proven only under the interactive LRSW assumption, and our result can be viewed as a static replacement for the LRSW assumption. The scheme of Boldyreva, Gentry, O’Neill, and Yum was also previously proven only under an interactive assumption that was shown to hold in the generic group model. The structure of the public key signature scheme underlying the BGOY aggregate signatures is quite distinctive, and our work presents the first security analysis of this kind of structure under static assumptions.

Michael Gerbush, Allison Lewko, Adam O’Neill, Brent Waters

Breaking Pairing-Based Cryptosystems Using $\eta_T$ Pairing over $GF(3^{97})$

In this paper, we discuss solving the DLP over $GF(3^{6\cdot 97})$ by using the function field sieve (FFS) for breaking pairing-based cryptosystems that use the $\eta_T$ pairing over $GF(3^{97})$. The extension degree 97 has been intensively used in benchmarking tests for implementations of the $\eta_T$ pairing, and the order (923-bit) of $GF(3^{6\cdot 97})$ is substantially larger than the previous world record (676-bit) for solving the DLP by using the FFS. We implemented the FFS for the medium prime case and proposed several improvements to the FFS. Finally, we succeeded in solving the DLP over $GF(3^{6\cdot 97})$. The entire computation took about 148.2 days using 252 CPU cores.

Takuya Hayashi, Takeshi Shimoyama, Naoyuki Shinohara, Tsuyoshi Takagi

On the (Im)possibility of Projecting Property in Prime-Order Setting

Projecting bilinear pairings have frequently been used for designing cryptosystems since they were first derived from composite order bilinear groups. There have been only a few studies on the (im)possibility of projecting bilinear pairings. Groth and Sahai showed that projecting bilinear pairings can be achieved in the prime-order group setting. They constructed both projecting asymmetric bilinear pairings and projecting symmetric bilinear pairings, where a bilinear pairing $e$ is symmetric if it satisfies $e(g,h) = e(h,g)$ for any group elements $g$ and $h$; otherwise, it is asymmetric.

In this paper, we provide impossibility results on projecting bilinear pairings in a prime-order group setting. More precisely, we specify lower bounds on (i) the image size of a projecting asymmetric bilinear pairing, (ii) the image size of a projecting symmetric bilinear pairing, (iii) the computational cost of a projecting asymmetric bilinear pairing, and (iv) the computational cost of a projecting symmetric bilinear pairing, in a prime-order group setting naturally induced from the $k$-linear assumption, where the computational cost means the number of generic operations.

Our lower bounds for projecting asymmetric bilinear pairings are tight, i.e., it is impossible to construct a projecting asymmetric bilinear pairing more efficient than the constructions of Groth-Sahai and Freeman. However, our lower bounds for projecting symmetric bilinear pairings differ from the parameters achieved by Groth and Sahai's symmetric construction; we fill these gaps by constructing new projecting symmetric bilinear pairings.

In addition, on the basis of the proposed symmetric bilinear pairings, we construct more efficient instantiations of cryptosystems that essentially use the projecting symmetric bilinear pairings in a modular fashion. Example applications include new instantiations of the Boneh-Goh-Nissim cryptosystem, the Groth-Sahai non-interactive proof system, and Seo-Cheon round optimal blind signatures proven secure under the DLIN assumption. These new instantiations are more efficient than the previous ones, which are also provably secure under the DLIN assumption. These applications are of independent interest.

Jae Hong Seo

Foundation

Optimal Reductions of Some Decisional Problems to the Rank Problem

In recent years, the use of large matrices and their algebraic properties has proved useful for instantiating new cryptographic primitives like Lossy Trapdoor Functions and encryption schemes with improved security, like Key Dependent Message resilience. In these constructions the rank of a matrix is assumed to be hard to guess when the matrix is hidden by elementwise exponentiation. This problem, which we call the Rank Problem here, is known to be related to the Decisional Diffie-Hellman problem, but the known reductions between the two problems incur a loss factor in the advantage that grows linearly with the rank of the matrix.

In this paper, we give a new and better reduction between the Rank Problem and the Decisional Diffie-Hellman problem, in which the loss factor of the reduction depends only logarithmically on the rank. This new reduction can be applied to a number of cryptographic constructions, improving their efficiency. The main idea in the reduction is to build, from a DDH tuple, a matrix whose rank shifts from $r$ to $2r$, and then to apply a hybrid argument to handle the general case. In particular, this technique widens the range of rank values that are tightly related to DDH.

On the other hand, the new reduction is optimal, as we show the nonexistence of more efficient reductions in a wide class containing all the “natural” ones (i.e., black-box and algebraic). The result is twofold: there is no (natural) way to build a matrix whose rank shifts from $r$ to $2r + \alpha$ for $\alpha > 0$, and no hybrid argument can improve on the logarithmic loss factor obtained in the new reduction.

The techniques used in the paper extend naturally to other “algebraic” problems like the Decisional Linear or the Decisional 3-Party Diffie-Hellman problems, also yielding reductions of logarithmic complexity.
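As a concrete illustration of the underlying connection, the following Python sketch (a toy with a small prime, carrying no security) builds the exponent matrix of the classical DDH-to-rank embedding: viewing a DDH tuple $(g, g^a, g^b, g^c)$ as the hidden $2\times 2$ matrix with rows $(1, a)$ and $(b, c)$ gives rank 1 when $c = ab$ and rank 2 otherwise.

```python
import random

q = 509  # prime order of the exponent group (toy size)

def rank_mod_q(M):
    """Gaussian elimination over Z_q; returns the rank of M."""
    M = [row[:] for row in M]
    rank, rows, cols = 0, len(M), len(M[0])
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r][col] % q), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        inv = pow(M[rank][col], -1, q)
        M[rank] = [x * inv % q for x in M[rank]]
        for r in range(rows):
            if r != rank:
                M[r] = [(x - M[r][col] * y) % q for x, y in zip(M[r], M[rank])]
        rank += 1
    return rank

a, b = random.randrange(1, q), random.randrange(1, q)

ddh_matrix  = [[1, a], [b, a * b % q]]            # c = ab: rank 1
rand_matrix = [[1, a], [b, random.randrange(q)]]  # random c: rank 2 w.h.p.

print(rank_mod_q(ddh_matrix))   # 1
print(rank_mod_q(rand_matrix))  # 2 with probability 1 - 1/q
```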

Jorge Luis Villar

Signature Schemes Secure against Hard-to-Invert Leakage

In the auxiliary input model an adversary is allowed to see a computationally hard-to-invert function of the secret key. The auxiliary input model weakens the bounded leakage assumption commonly made in leakage resilient cryptography, as the hard-to-invert function may information-theoretically reveal the entire secret key. In this work, we propose the first constructions of digital signature schemes that are secure in the auxiliary input model. Our main contribution is a digital signature scheme that is secure against chosen message attacks when given an exponentially hard-to-invert function of the secret key. As a second contribution, we construct a signature scheme that achieves security for random messages assuming that the adversary is given a polynomially hard-to-invert function. Here, polynomial hardness is required even when given the entire public key (so-called weak auxiliary input security). We show that such signature schemes readily give us auxiliary input secure identification schemes.

Sebastian Faust, Carmit Hazay, Jesper Buus Nielsen, Peter Sebastian Nordholt, Angela Zottarel

Completeness for Symmetric Two-Party Functionalities - Revisited

Understanding the minimal assumptions required for carrying out cryptographic tasks is one of the fundamental goals of theoretical cryptography. A rich body of work has been dedicated to understanding the complexity of cryptographic tasks in the context of (semi-honest) secure two-party computation. Much of this work has focused on the characterization of trivial and complete functionalities (resp., functionalities that can be securely implemented unconditionally, and functionalities that can be used to securely compute all functionalities).

All previous works define reductions via an ideal implementation of the functionality; i.e., $f$ reduces to $g$ if one can implement $f$ using an ideal box (or oracle) that computes the function $g$ and returns the output to both parties. Such a reduction models the computation of $f$ as an atomic operation. However, in the real world, protocols proceed in rounds, and the output is not learned by the parties simultaneously. In this paper we show that this distinction is significant. Specifically, we show that there exist symmetric functionalities (where both parties receive the same outcome) that are neither trivial nor complete under “ideal-box reductions”, and yet the existence of a constant-round protocol for securely computing such a functionality implies infinitely-often oblivious transfer (meaning that it is secure for infinitely many $n$'s). In light of the above, we propose an alternative definitional infrastructure for studying the triviality and completeness of functionalities.

Yehuda Lindell, Eran Omri, Hila Zarosim

Adaptively Secure Garbling with Applications to One-Time Programs and Secure Outsourcing

Standard constructions of garbled circuits provide only static security, meaning the input $x$ is not allowed to depend on the garbled circuit $F$. But some applications, notably one-time programs (Goldwasser, Kalai, and Rothblum 2008) and secure outsourcing (Gennaro, Gentry, Parno 2010), need adaptive security, where $x$ may depend on $F$. We identify gaps in proofs from these papers with regard to adaptive security and suggest the need for a better abstraction boundary. To this end we investigate the adaptive security of garbling schemes, an abstraction of Yao's garbled-circuit technique that we recently introduced (Bellare, Hoang, Rogaway 2012). Building on that framework, we give definitions encompassing privacy, authenticity, and obliviousness, with either coarse-grained or fine-grained adaptivity. We show how adaptively secure garbling schemes support simple solutions for one-time programs and secure outsourcing, with privacy being the goal in the first case and obliviousness and authenticity the goal in the second. We give transforms that promote static-secure garbling schemes to adaptive-secure ones. Our work advances the thesis that conceptualizing garbling schemes as a first-class cryptographic primitive can simplify, unify, or improve treatments for higher-level protocols.

Mihir Bellare, Viet Tung Hoang, Phillip Rogaway

The Generalized Randomized Iterate and Its Application to New Efficient Constructions of UOWHFs from Regular One-Way Functions

This paper presents the Generalized Randomized Iterate of a (regular) one-way function $f$ and shows that it can be used to build Universal One-Way Hash Function (UOWHF) families with $O(n^2)$ key length.

We then show that Shoup's technique for UOWHF domain extension can be used to improve the efficiency of the previous construction. We present the Reusable Generalized Randomized Iterate, which consists of $k \geq n+1$ iterations of a regular one-way function composed at each iteration with a pairwise-independent hash function, where we use only $\log k$ such hash functions and “schedule” them according to the scheduling of Shoup's domain extension technique. The end result is a UOWHF construction from regular one-way functions with an $O(n \log n)$ key. These are the first such efficient constructions of UOWHFs from regular one-way functions of unknown regularity.

Finally, we show that Shoup's domain extension technique can also be used in lieu of derandomization techniques to improve the efficiency of PRGs and of hardness amplification constructions for regular one-way functions.
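As a rough illustration of the iteration pattern (not the paper's exact construction: a hash stands in for the one-way function and for the pairwise-independent family), a randomized-iterate round alternates applying $f$ and a keyed hash, $x_{i+1} = h_i(f(x_i))$. A minimal Python sketch, with key reuse scheduled by the ruler sequence, which is assumed here as a stand-in for Shoup's schedule:

```python
import hashlib

N = 16  # state size in bytes (toy parameter)

def f(x: bytes) -> bytes:
    """Stand-in for a regular one-way function (illustrative only)."""
    return hashlib.sha256(b"f" + x).digest()[:N]

def h(key: bytes, x: bytes) -> bytes:
    """Stand-in for a pairwise-independent hash with description `key`."""
    return hashlib.sha256(key + x).digest()[:N]

def randomized_iterate(x0: bytes, keys: list[bytes], k: int) -> bytes:
    """k iterations of x -> h_i(f(x)), reusing only ~log2(k) distinct keys.

    Key index for step i is the 2-adic valuation of i (the ruler sequence),
    mirroring the reuse pattern of Shoup's domain-extension schedule.
    """
    x = x0
    for i in range(1, k + 1):
        idx = (i & -i).bit_length() - 1   # 2-adic valuation of i
        x = h(keys[idx], f(x))
    return x

k = 20
keys = [hashlib.sha256(bytes([j])).digest()[:N] for j in range(k.bit_length())]
print(randomized_iterate(b"\x00" * N, keys, k).hex())
```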

Scott Ames, Rosario Gennaro, Muthuramakrishnan Venkitasubramaniam

Symmetric Cipher

Perfect Algebraic Immune Functions

A perfect algebraic immune function is a Boolean function with perfect immunity against algebraic and fast algebraic attacks. The main results are that for a perfect algebraic immune balanced function the number of input variables is one more than a power of two, while for a perfect algebraic immune unbalanced function the number of input variables is a power of two. Also, for $n$ equal to a power of two, the Carlet-Feng functions on $n+1$ variables and the modified Carlet-Feng functions on $n$ variables are shown to be perfect algebraic immune functions.

Meicheng Liu, Yin Zhang, Dongdai Lin

Differential Analysis of the LED Block Cipher

In this paper, we present a security analysis of the lightweight block cipher LED proposed by Guo et al. at CHES 2011. Since the design of LED is very similar to the Even-Mansour scheme, we first review existing attacks on this scheme and extend them to related-key and related-key-cipher settings before we apply them to LED. We obtain results for 12 and 16 rounds (out of 32) for LED-64 and 16 and 24 rounds (out of 48) for LED-128. Furthermore, we present an observation on full LED in the related-key-cipher setting. For all these attacks we need to find good differentials for one step (4 rounds) of LED. Therefore, we extend the study of plateau characteristics for AES-like structures from two rounds to four rounds when the key addition is replaced with a constant addition. We introduce an algorithm that can be used to find good differentials and right pairs for one step of LED. To be more precise, we can find more than $2^{10}$ right pairs for one step of LED with complexity $2^{16}$ and memory requirement of $5 \times 2^{17}$. Moreover, a similar algorithm can also be used to find iterative characteristics for LED.

Florian Mendel, Vincent Rijmen, Deniz Toz, Kerem Varıcı

PRINCE – A Low-Latency Block Cipher for Pervasive Computing Applications

Extended Abstract

This paper presents a block cipher that is optimized with respect to latency when implemented in hardware. Such ciphers are desirable for many future pervasive applications with real-time security needs. Our cipher, named PRINCE, allows encryption of data within one clock cycle with a very competitive chip area compared to known solutions. The fully unrolled fashion in which such algorithms need to be implemented calls for innovative design choices. The number of rounds must be moderate and rounds must have short delays in hardware. At the same time, the traditional requirement that a cipher be iterative with very similar round functions disappears, an observation that increases the design space for the algorithm. A further important requirement is that realizing decryption on top of encryption incurs minimal additional cost. PRINCE is designed in such a way that the overhead for decryption on top of encryption is negligible. More precisely, for our cipher it holds that decryption for one key corresponds to encryption with a related key. This property, which we refer to as $\alpha$-reflection, is of independent interest and we prove its soundness against generic attacks.
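To see what such a reflection property means operationally, here is a deliberately weak Python toy (not PRINCE; a single bitwise involution stands in for the cipher core) in which decryption under key $k$ equals encryption under the related key $k \oplus \alpha$:

```python
MASK  = (1 << 64) - 1
ALPHA = 0xC0AC29B7C97C50DD  # a fixed 64-bit constant (the value here is
                            # only for illustration)

def involution(x):
    """A toy involutive 'core': applying it twice gives the identity."""
    return ~x & MASK

def encrypt(k, x):
    return involution(x ^ k) ^ k ^ ALPHA

# alpha-reflection: decrypting under k is encrypting under k ^ ALPHA.
k, x = 0x0123456789ABCDEF, 0xDEADBEEF00C0FFEE
assert encrypt(k ^ ALPHA, encrypt(k, x)) == x
```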

Julia Borghoff, Anne Canteaut, Tim Güneysu, Elif Bilge Kavun, Miroslav Knezevic, Lars R. Knudsen, Gregor Leander, Ventzislav Nikov, Christof Paar, Christian Rechberger, Peter Rombouts, Søren S. Thomsen, Tolga Yalçın

Analysis of Differential Attacks in ARX Constructions

In this paper, we study differential attacks against ARX schemes. We build upon the generalized characteristics of de Cannière and Rechberger; we introduce new multi-bit constraints to describe differential characteristics in ARX designs more accurately, and quartet constraints to analyze boomerang attacks. We also describe how to propagate those constraints; this can be used either to assist manual construction of a differential characteristic, or to extract more information from an already built characteristic. We show that our new constraints are more precise than what was used in previous works, and can detect more cases of incompatibility.

In particular, we show that several published attacks are in fact invalid because the differential characteristics cannot be satisfied. This highlights the importance of verifying differential attacks more thoroughly.

Gaëtan Leurent

Integral and Multidimensional Linear Distinguishers with Correlation Zero

Zero-correlation cryptanalysis uses linear approximations holding with probability exactly 1/2. In this paper, we reveal fundamental links of zero-correlation distinguishers to integral distinguishers and multidimensional linear distinguishers. We show that an integral implies zero-correlation linear approximations and that a zero-correlation linear distinguisher is actually a special case of multidimensional linear distinguishers. These observations provide new insight into zero-correlation cryptanalysis which is illustrated by attacking a Skipjack variant and round-reduced CAST-256 without weak key assumptions.
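As a small worked example of the central notion, the correlation of a linear approximation $(a, b)$ of a map $S$ is $c = 2^{-n}(\#\{x : \langle a,x\rangle = \langle b,S(x)\rangle\} - \#\{x : \langle a,x\rangle \neq \langle b,S(x)\rangle\})$; a zero-correlation approximation is one with $c$ exactly 0, i.e. probability exactly 1/2. The Python sketch below exhaustively measures correlations for a toy 4-bit S-box (an arbitrary permutation chosen purely for illustration) and lists zero-correlation mask pairs:

```python
# Toy 4-bit S-box: an arbitrary permutation, chosen only for this demo.
S = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
     0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]

def parity(v):
    return bin(v).count("1") & 1

def correlation(a, b):
    """Correlation of the linear approximation <a,x> = <b,S(x)>."""
    agree = sum(parity(a & x) == parity(b & S[x]) for x in range(16))
    return (2 * agree - 16) / 16

zero_corr = [(a, b) for a in range(1, 16) for b in range(1, 16)
             if correlation(a, b) == 0]
print(f"{len(zero_corr)} zero-correlation mask pairs, e.g. {zero_corr[:3]}")
```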

Andrey Bogdanov, Gregor Leander, Kaisa Nyberg, Meiqin Wang

Differential Attacks against Stream Cipher ZUC

Stream cipher ZUC is the core component in the 3GPP confidentiality and integrity algorithms 128-EEA3 and 128-EIA3. In this paper, we present the details of our differential attacks against ZUC 1.4. The vulnerability in ZUC 1.4 is due to a non-injective step in the initialization, which allows a difference in the initialization vector to be cancelled. In the first attack, the difference is injected into the first byte of the initialization vector, and one out of $2^{15.4}$ random keys results in two identical keystreams after testing $2^{13.3}$ IV pairs for each key. Identical keystreams pose a serious threat to the use of ZUC 1.4 in applications, since the situation is similar to reusing a key in a one-time pad. Once identical keystreams are detected, the key can be recovered with average complexity $2^{99.4}$. In the second attack, the difference is injected into the second byte of the initialization vector, and every key can result in two identical keystreams with about $2^{54}$ IVs. Once identical keystreams are detected, the key can be recovered with complexity $2^{67}$. We present a method to fix the flaw by updating the LFSR in an injective way during the initialization. Our suggested method is used in the later versions of ZUC, and the latest ZUC 1.6 is secure against our attacks.

Hongjun Wu, Tao Huang, Phuong Ha Nguyen, Huaxiong Wang, San Ling

Security Proof

An Asymptotically Tight Security Analysis of the Iterated Even-Mansour Cipher

We analyze the security of the iterated Even-Mansour cipher (a.k.a. key-alternating cipher), a very simple and natural construction of a blockcipher in the random permutation model. This construction, first considered by Even and Mansour (J. Cryptology, 1997) with a single permutation, was recently generalized to use $t$ permutations in the work of Bogdanov et al. (EUROCRYPT 2012). They proved that the construction is secure up to $\mathcal{O}(N^{2/3})$ queries (where $N$ is the domain size of the permutations) as soon as the number $t$ of rounds is 2 or more. This is tight for $t = 2$; however, in the general case the best known attack requires $\Omega(N^{t/(t+1)})$ queries. In this paper, we give asymptotically tight security proofs for two types of adversaries:

1. for non-adaptive chosen-plaintext adversaries, we prove that the construction achieves an optimal security bound of $\mathcal{O}(N^{t/(t+1)})$ queries;

2. for adaptive chosen-plaintext and ciphertext adversaries, we prove that the construction achieves security up to $\mathcal{O}(N^{t/(t+2)})$ queries (for $t$ even). This improves previous results for $t \geq 6$.

Our proof crucially relies on the use of a coupling to upper-bound the statistical distance between the outputs of the iterated Even-Mansour cipher and the uniform distribution.
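For concreteness, a minimal Python sketch of the $t$-round iterated Even-Mansour construction follows (toy 8-bit blocks; the public random permutations are modeled by seeded shuffles, and no security is implied at this size):

```python
import random

N_BITS = 8                 # toy block size; domain of N = 2^8 points
DOMAIN = 1 << N_BITS

def random_permutation(seed):
    """A public random permutation of the domain, with its inverse."""
    rng = random.Random(seed)
    perm = list(range(DOMAIN))
    rng.shuffle(perm)
    inv = [0] * DOMAIN
    for x, y in enumerate(perm):
        inv[y] = x
    return perm, inv

t = 3                                                    # number of rounds
perms = [random_permutation(i) for i in range(t)]
keys = [random.randrange(DOMAIN) for _ in range(t + 1)]  # k_0, ..., k_t

def encrypt(x):
    """x -> k_t XOR P_t(... P_1(x XOR k_0))."""
    x ^= keys[0]
    for i, (perm, _) in enumerate(perms):
        x = perm[x] ^ keys[i + 1]
    return x

def decrypt(y):
    for i in range(t - 1, -1, -1):
        y = perms[i][1][y ^ keys[i + 1]]
    return y ^ keys[0]

x = 0xA5
assert decrypt(encrypt(x)) == x
```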

Rodolphe Lampe, Jacques Patarin, Yannick Seurin

3kf9: Enhancing 3GPP-MAC beyond the Birthday Bound

Among various cryptographic schemes, CBC-based MACs belong to the few most widely used in practice. Such MACs iterate a blockcipher $E_K$ in the so-called cipher-block-chaining way, i.e. $C_i = E_K(M_i \oplus C_{i-1})$, offering high efficiency in practical applications. In this paper, we propose a new deterministic variant of CBC-based MACs that is provably secure beyond the birthday bound. The new MAC 3kf9 is obtained by combining $f9$ (3GPP-MAC) and EMAC, which share the same internal structure, and so it is almost as efficient as the original CBC MAC. 3kf9 offers $O(\frac{l^3q^3}{2^{2n}}+\frac{lq}{2^n})$ PRF-security when its underlying $n$-bit blockcipher is pseudorandom with three independent keys. This makes it more secure than traditional CBC-based MACs, especially when they are instantiated with lightweight blockciphers. Therefore, 3kf9 is expected to be a possible candidate MAC in resource-restricted environments.

Liting Zhang, Wenling Wu, Han Sui, Peng Wang

Understanding Adaptivity: Random Systems Revisited

We develop a conceptual approach for probabilistic analysis of adaptive adversaries via Maurer’s methodology of random systems (Eurocrypt’02). We first consider a well-known comparison theorem of Maurer according to which, under certain hypotheses, adaptivity does not help for achieving a certain event. This theorem has subsequently been misinterpreted, leading to a misrepresentation with one of Maurer’s hypotheses being omitted in various applications. In particular, the only proof of (a misrepresentation of) the theorem available in the literature contained a flaw. We clarify the theorem by pointing out a simple example illustrating why the hypothesis of Maurer is necessary for the comparison statement to hold and provide a correct proof. Furthermore, we prove several technical statements applicable in more general settings where adaptivity might be helpful, which can be seen as the random system analogue of the game-playing arguments recently proved by Jetchev, Özen and Stam (TCC’12).

Dimitar Jetchev, Onur Özen, Martijn Stam

RKA Security beyond the Linear Barrier: IBE, Encryption and Signatures

We provide a framework enabling the construction of IBE schemes that are secure under related-key attacks (RKAs). Specific instantiations of the framework yield RKA-secure IBE schemes for sets of related key derivation functions that are non-linear, thus overcoming a current barrier in RKA security. In particular, we obtain IBE schemes that are RKA secure for sets consisting of all affine functions and all polynomial functions of bounded degree. Based on this we obtain the first constructions of RKA-secure schemes for the same sets for the following primitives: CCA-secure public-key encryption, CCA-secure symmetric encryption, and signatures. All our results are in the standard model and hold under reasonable hardness assumptions.

Mihir Bellare, Kenneth G. Paterson, Susan Thomson

Public-Key Cryptography II

Fully Secure Unbounded Inner-Product and Attribute-Based Encryption

In this paper, we present the first inner-product encryption (IPE) schemes that are unbounded in the sense that the public parameters do not impose additional limitations on the predicates and attributes used for encryption and decryption keys. All previous IPE schemes were bounded, i.e., there was a bound on the size of predicates and attributes given the public parameters fixed at setup. The proposed unbounded IPE schemes are fully (adaptively) secure and fully attribute-hiding in the standard model under a standard assumption, the decisional linear (DLIN) assumption. In our unbounded IPE schemes, the inner-product relation is generalized so that the two vectors of the inner product can have different sizes, which provides a great improvement in efficiency in many applications. We also present the first fully secure unbounded attribute-based encryption (ABE) schemes, whose security is proven under the DLIN assumption in the standard model. To achieve these results, we develop novel techniques, indexing and consistent randomness amplification, on top of the (extended) dual system encryption technique and dual pairing vector spaces (DPVS).

Tatsuaki Okamoto, Katsuyuki Takashima

Computing on Authenticated Data: New Privacy Definitions and Constructions

Homomorphic signatures are primitives that allow for public computations on authenticated data. At TCC 2012, Ahn et al. defined a framework and security notions for such systems. For a predicate $P$, their notion of $P$-homomorphic signature makes it possible, given signatures on a message set $M$, to publicly derive a signature on any message $m'$ such that $P(M, m') = 1$. Beyond unforgeability, Ahn et al. considered a strong notion of privacy, called strong context hiding, requiring that derived signatures be perfectly indistinguishable from signatures newly generated by the signer. In this paper, we first note that the definition of strong context hiding may not imply the unlinkability properties that can be expected of homomorphic signatures in certain situations. We then suggest other definitions of privacy and discuss the relations among them. Our strongest definition, called complete context hiding security, is shown to imply the previous ones. In the case of linearly homomorphic signatures, we only attain a slightly weaker level of privacy, which is nevertheless stronger than in previous realizations in the standard model. For subset predicates, we prove that our strongest notion of privacy is satisfiable and describe a completely context hiding system with constant-size public keys. In the standard model, this construction is the first one that allows signing messages of arbitrary length. The scheme builds on techniques that are very different from those of Ahn et al.

Nuttapong Attrapadung, Benoît Libert, Thomas Peters

A Coding-Theoretic Approach to Recovering Noisy RSA Keys

Inspired by cold boot attacks, Heninger and Shacham (Crypto 2009) initiated the study of the problem of how to recover an RSA private key from a noisy version of that key. They gave an algorithm for the case where some bits of the private key are known with certainty. Their ideas were extended by Henecka, May and Meurer (Crypto 2010) to produce an algorithm that works when all the key bits are subject to error. In this paper, we bring a coding-theoretic viewpoint to bear on the problem of noisy RSA key recovery. This viewpoint allows us to cast the previous work as part of a more general framework. In turn, this enables us to explain why the previous algorithms do not solve the motivating cold boot problem, and to design a new algorithm that does (and more). In addition, we are able to use concepts and tools from coding theory – channel capacity, list decoding algorithms, and random coding techniques – to derive bounds on the performance of the previous and our new algorithm.

Kenneth G. Paterson, Antigoni Polychroniadou, Dale L. Sibborn

Certifying RSA

We propose an algorithm that, given an arbitrary $N$ of unknown factorization and a prime $e \geq N^{\frac{1}{4}+\varepsilon}$, certifies whether the RSA function $RSA_{N,e}(x) := x^e \bmod N$ defines a permutation over ${\mathbb Z}_{N}^{*}$ or not. The algorithm uses Coppersmith's method for finding small solutions of polynomial equations and runs in time $O(\varepsilon^{-8} \log^2 N)$. Previous certification techniques required $e > N$.

Saqib A. Kakvi, Eike Kiltz, Alexander May

Lattice-Based Cryptography and Number Theory

Faster Gaussian Lattice Sampling Using Lazy Floating-Point Arithmetic

Many lattice cryptographic primitives require an efficient algorithm to sample lattice points according to some Gaussian distribution. All algorithms known for this task require long-integer arithmetic at some point, which may be problematic in practice. We study how much lattice sampling can be sped up using floating-point arithmetic. First, we show that a direct floating-point implementation of these algorithms does not give any asymptotic speedup: the floating-point precision needs to be greater than the security parameter, leading to an overall complexity $\tilde{O}(n^3)$, where $n$ is the lattice dimension. However, we introduce a laziness technique that can significantly speed up these algorithms. Namely, in certain cases such as NTRUSign lattices, laziness can decrease the complexity to $\tilde{O}(n^2)$ or even $\tilde{O}(n)$. Furthermore, our analysis is practical: for typical parameters, most of the floating-point operations only require the double-precision IEEE standard.

Léo Ducas, Phong Q. Nguyen

Learning a Zonotope and More: Cryptanalysis of NTRUSign Countermeasures

NTRUSign is the most practical lattice signature scheme. Its basic version was broken by Nguyen and Regev in 2006: one can efficiently recover the secret key from about 400 signatures. However, countermeasures have been proposed to repair the scheme, such as the perturbation used in NTRUSign standardization proposals and the deformation proposed by Hu et al. in IEEE Trans. Inform. Theory in 2008. These two countermeasures were claimed to prevent the NR attack. Surprisingly, we show that both claims are incorrect by revisiting the NR gradient-descent attack: the attack is more powerful than previously expected, and actually breaks both countermeasures in practice, e.g. 8,000 signatures suffice to break NTRUSign-251 with one perturbation as submitted to IEEE P1363 in 2003. More precisely, we explain why the Nguyen-Regev algorithm for learning a parallelepiped is heuristically able to learn more complex objects, such as zonotopes and deformed parallelepipeds.

Léo Ducas, Phong Q. Nguyen

On Polynomial Systems Arising from a Weil Descent

In the last two decades, many computational problems arising in cryptography have been successfully reduced to various systems of polynomial equations. In this paper, we revisit a class of polynomial systems introduced by Faugère, Perret, Petit and Renault. Based on new experimental results and heuristic evidence, we conjecture that their degrees of regularity are only slightly larger than the original degrees of the equations, resulting in a very low complexity compared to generic systems. We then revisit the application of these systems to the elliptic curve discrete logarithm problem (ECDLP) for binary curves. Our heuristic analysis suggests that an index calculus variant due to Diem requires a subexponential number of bit operations $O(2^{c\,n^{2/3}\log n})$ over the binary field ${\mathbb F}_{2^n}$, where $c$ is a constant smaller than 2. According to our estimates, generic discrete logarithm methods are outperformed for any $n > N$ where $N \approx 2000$, but elliptic curves of currently recommended key sizes ($n \approx 160$) are not immediately threatened. The analysis can easily be generalized to other extension fields.

Christophe Petit, Jean-Jacques Quisquater

Public-Key Cryptography III

ECM at Work

The performance of the elliptic curve method (ECM) for integer factorization plays an important role in the security assessment of RSA-based protocols, as ECM is used as a cofactorization tool inside the number field sieve. The efficient arithmetic for Edwards curves found an application in speeding up ECM. We propose techniques based on generating and combining addition-subtraction chains to optimize Edwards ECM in terms of both performance and memory requirements. This makes our approach very suitable for memory-constrained devices such as graphics processing units (GPUs). For commonly used ECM parameters we are able to lower the required memory by up to a factor of 55 compared to the state-of-the-art Edwards ECM approach. Our ECM implementation on a GTX 580 GPU sets a new throughput record, outperforming the best GPU, CPU and FPGA results reported in the literature.
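To illustrate the kind of chain these optimizations build on, signed-digit recodings such as the non-adjacent form (NAF) turn scalar multiplication into an addition-subtraction chain, exploiting the fact that point negation on elliptic curves is essentially free. A minimal Python sketch of NAF recoding (standard textbook material, not the paper's specific chain-generation technique):

```python
def naf(k: int) -> list[int]:
    """Non-adjacent form of k: digits in {-1,0,1}, no two adjacent nonzero."""
    digits = []
    while k > 0:
        if k & 1:
            d = 2 - (k % 4)      # +1 if k = 1 mod 4, -1 if k = 3 mod 4
            k -= d
        else:
            d = 0
        digits.append(d)
        k //= 2
    return digits                # least significant digit first

# 375 = 0b101110111 has 7 nonzero bits; its NAF has only 4 nonzero digits,
# i.e. 4 point additions/subtractions instead of 7 additions.
d = naf(375)
assert sum(di * 2**i for i, di in enumerate(d)) == 375
print(d, sum(1 for di in d if di))
```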

Joppe W. Bos, Thorsten Kleinjung

IND-CCA Secure Cryptography Based on a Variant of the LPN Problem

Michael Alekhnovich (FOCS 2003) introduced a novel variant of the learning parity with noise problem and showed that it implies IND-CPA secure public-key cryptography. In this paper we introduce the first public-key encryption scheme based on this assumption which is IND-CCA secure in the standard model. Our main technical tool to achieve this is a novel all-but-one simulation technique based on the correlated products approach of Rosen and Segev (TCC 2009). Our IND-CCA1 secure scheme is asymptotically optimal with respect to ciphertext expansion. To achieve IND-CCA2 security we use a technique of Dolev, Dwork and Naor (STOC 1991) based on one-time signatures. For practical purposes, the efficiency of the IND-CCA2 scheme can be substantially improved by the use of additional assumptions that allow for more efficient signature schemes. Our results make Alekhnovich's variant of the learning parity with noise problem a promising candidate for achieving post-quantum cryptography.

Nico Döttling, Jörn Müller-Quade, Anderson C. A. Nascimento

Hash Function

Provable Security of the Knudsen-Preneel Compression Functions

This paper discusses the provable security of the compression functions introduced by Knudsen and Preneel [11,12,13] that use linear error-correcting codes to build wide-pipe compression functions from underlying blockciphers operating in Davies-Meyer mode. In the information theoretic model, we prove that the Knudsen-Preneel compression function based on an $[r,k,d]_{2^e}$ code is collision resistant up to $2^{\frac{(r-d+1)n}{2r-3d+3}}$ query complexity if $2d \leq r+1$, and collision resistant up to $2^{\frac{rn}{2r-2d+2}}$ query complexity if $2d > r+1$. For MDS-code-based Knudsen-Preneel compression functions, this lower bound matches the upper bound recently given by Özen and Stam [23].

A preimage security proof of the Knudsen-Preneel compression functions was first presented by Özen et al. (FSE '10). In this paper, we present two alternative proofs that the Knudsen-Preneel compression functions are preimage resistant up to $2^{\frac{rn}{k}}$ query complexity. While the first proof, using a wish list argument, is presented primarily to illustrate an idea behind our collision security proof, the second proof provides a tighter security bound than the original one.

Jooyoung Lee

Optimal Collision Security in Double Block Length Hashing with Single Length Key

The idea of double block length hashing is to construct a compression function on $2n$ bits using a block cipher with an $n$-bit block size. All optimally secure double length hash functions known in the literature employ a cipher with a key space of double block size, i.e., $2n$ bits. On the other hand, no optimally secure compression functions built from a cipher with an $n$-bit key space are known. Our work deals with this problem. Firstly, we prove that for a wide class of compression functions with two calls to an underlying $n$-bit keyed block cipher, collisions can be found in about $2^{n/2}$ queries. This attack applies, among others, to functions where the output is derived from the block cipher outputs in a linear way. This observation demonstrates that all security results for designs using a cipher with a $2n$-bit key space crucially rely on the presence of these extra $n$ key bits. The main contribution of this work is a proof that this issue can be resolved by allowing the compression function to make one extra call to the cipher. We propose a family of compression functions making three block cipher calls that asymptotically achieves optimal collision resistance up to $2^{n(1-\varepsilon)}$ queries and preimage resistance up to $2^{3n(1-\varepsilon)/2}$ queries, for any $\varepsilon > 0$. To our knowledge, this is the first optimally collision secure double block length construction using a block cipher with a single length key space.

Bart Mennink

Bicliques for Permutations: Collision and Preimage Attacks in Stronger Settings

We extend and improve biclique attacks, which were recently introduced for the cryptanalysis of block ciphers and hash functions. While previous attacks required a primitive to have a key or a message schedule, we show how to mount attacks on the primitives with these parameters fixed, i.e. on permutations. We introduce the concept of sliced bicliques, which is a translation of regular bicliques to the framework with permutations.

The new framework allows us to convert preimage attacks into collision attacks and to derive the first collision attacks on the reduced SHA-3 finalist Skein in the hash function setting, up to 11 rounds. We also demonstrate new preimage attacks on reduced Skein and on the output transformation of reduced Grøstl. Finally, the sophisticated technique of message compensation gets a simple explanation with bicliques.

Dmitry Khovratovich

Investigating Fundamental Security Requirements on Whirlpool: Improved Preimage and Collision Attacks

In this paper, improved cryptanalysis of the ISO standard hash function Whirlpool is presented with respect to the fundamental security notions. While a subspace distinguisher on the full version (10 rounds) of the compression function has been presented, its impact on the security of the hash function seems limited. In this paper, we discuss (second) preimage and collision attacks for the hash function and the compression function of Whirlpool. Regarding the preimage attack, 6 rounds of the hash function are attacked with $2^{481}$ computations, while the previous best attack reaches 5 rounds with $2^{481.5}$ computations. Regarding the collision attack, 8 rounds of the compression function are attacked with $2^{120}$ computations, while the previous best attack reaches 7 rounds with $2^{184}$ computations. To verify the correctness, especially of the rebound attack on an S-box with an unbalanced Differential Distribution Table (DDT), the attack is partially implemented, and the differences from attacking an S-box with a balanced DDT are reported.

Yu Sasaki, Lei Wang, Shuang Wu, Wenling Wu

Generic Related-Key Attacks for HMAC

In this article we describe new generic distinguishing and forgery attacks in the related-key scenario (using only a single related key) for the HMAC construction. When HMAC uses a $k$-bit key, outputs an $n$-bit MAC, and is instantiated with an $l$-bit inner iterative hash function processing $m$-bit message blocks where $m = k$, our distinguishing-R attack requires about $2^{n/2}$ queries, which improves over the currently best known generic attack complexity of $2^{l/2}$ as soon as $l > n$. This means that, contrary to the general belief, using wide-pipe hash functions as the internal primitive will not increase the overall security of HMAC in the related-key model when the key size is equal to the message block size. We also present generic related-key distinguishing-H, internal state recovery and forgery attacks. Our method is new and elegant, and uses a simple cycle-size detection criterion. The issue in the HMAC construction (not present in the NMAC construction) comes from the non-independence of the two inner hash layers, and we provide a simple patch in order to avoid this generic attack. Our work finally shows that the choice of the values of the opad and ipad constants in HMAC is important.

Thomas Peyrin, Yu Sasaki, Lei Wang

Cryptographic Protocol I

The Five-Card Trick Can Be Done with Four Cards

The “five-card trick” invented by den Boer allows Alice and Bob to securely compute the AND function of their secret inputs using five cards (three black cards and two red cards) with identical backs. This paper shows that such a secure computation can be done with only four cards. Specifically, we give a protocol to achieve a secure computation of AND using only four cards (two black and two red). Our protocol is optimal in the sense that the number of required cards is the minimum possible.

Takaaki Mizuki, Michihito Kumamoto, Hideaki Sone

A Mix-Net from Any CCA2 Secure Cryptosystem

We construct a provably secure mix-net from any CCA2 secure cryptosystem. The mix-net is secure against active adversaries that statically corrupt fewer than $\lambda$ out of $k$ mix-servers, where $\lambda$ is a threshold parameter, and it is robust provided that at most $\min(\lambda - 1, k - \lambda)$ mix-servers are corrupted.

The main component of our construction is a mix-net that outputs the correct result if all mix-servers behaved honestly, and aborts with probability $1 - O(H^{-(t-1)})$ otherwise (without disclosing anything about the inputs), where $t$ is an auxiliary security parameter and $H$ is the number of honest parties. The running time of this protocol for long messages is roughly $3tc$, where $c$ is the running time of Chaum's mix-net (1981).

Shahram Khazaei, Tal Moran, Douglas Wikström

How Not to Prove Yourself: Pitfalls of the Fiat-Shamir Heuristic and Applications to Helios

The Fiat-Shamir transformation is the most efficient construction of non-interactive zero-knowledge proofs.

This paper is concerned with two variants of the transformation that appear in, but have not been clearly delineated by, the existing literature. Both variants start with the prover making a commitment. The strong variant then hashes both the commitment and the statement to be proved, whereas the weak variant hashes only the commitment. This minor change yields dramatically different security guarantees: in situations where malicious provers can select their statements adaptively, the weak Fiat-Shamir transformation yields unsound/unextractable proofs. Yet such settings naturally occur in systems where zero-knowledge proofs are used to enforce honest behavior. We illustrate this point by showing that the use of the weak Fiat-Shamir transformation in the Helios cryptographic voting system leads to several possible security breaches: for some standard types of elections, under plausible circumstances, malicious parties can cause the tallying procedure to run indefinitely and even tamper with the result of the election.

On the positive side, we define a form of adaptive security for zero-knowledge proofs in the random oracle model (essentially simulation-sound extractability), and show that a variant which we call strong Fiat-Shamir yields secure non-interactive proofs. This level of security was assumed in previous works on Helios, so our results are necessary for those analyses to be valid. Additionally, we show that strong proofs in Helios achieve non-malleable encryption and satisfy ballot privacy, improving on previous results that required CCA security.

David Bernhard, Olivier Pereira, Bogdan Warinschi

Cryptographic Protocol II

Sequential Aggregate Signatures with Lazy Verification from Trapdoor Permutations

(Extended Abstract)

Sequential aggregate signature schemes allow $n$ signers, in order, to sign a message each, at a lower total cost than the cost of $n$ individual signatures. We present a sequential aggregate signature scheme based on trapdoor permutations (e.g., RSA). Unlike prior such proposals, our scheme does not require a signer to retrieve the keys of other signers and verify the aggregate-so-far before adding its own signature. Indeed, we do not even require a signer to know the public keys of other signers!

Moreover, for applications that require signers to verify the aggregate anyway, our schemes support lazy verification: a signer can add its own signature to an unverified aggregate and forward it along immediately, postponing verification until load permits or the necessary public keys are obtained. This is especially important for applications where signers must access a large, secure, and current cache of public keys in order to verify messages. The price we pay is that our signature grows slightly with the number of signers.

We report a technical analysis of our scheme (which is provably secure in the random oracle model), a detailed implementation-level specification, and implementation results based on RSA and OpenSSL. To evaluate the performance of our scheme, we focus on the target application of BGPsec (formerly known as Secure BGP), a protocol designed for securing the global Internet routing system. There is a particular need for lazy verification with BGPsec, since it is run on routers that must process signatures extremely quickly, while being able to access tens of thousands of public keys. We compare our scheme to the algorithms currently proposed for use in BGPsec, and find that our signatures are considerably shorter than nonaggregate RSA (with the same sign and verify times) and have an order of magnitude faster verification than nonaggregate ECDSA, although ECDSA has shorter signatures when the number of signers is small.

Kyle Brogle, Sharon Goldberg, Leonid Reyzin

Commitments and Efficient Zero-Knowledge Proofs from Learning Parity with Noise

We construct a perfectly binding string commitment scheme whose security is based on the learning parity with noise (LPN) assumption, or equivalently, the hardness of decoding random linear codes. Our scheme not only allows for a simple and efficient zero-knowledge proof of knowledge for committed values (essentially a Σ-protocol), but also for such proofs showing any kind of relation amongst committed values, i.e., proving that messages $m_0, \ldots, m_u$ are such that $m_0 = C(m_1, \ldots, m_u)$ for any circuit $C$.

To get soundness which is exponentially small in a security parameter $t$, and when the zero-knowledge property relies on the LPN problem with secrets of length $\ell$, our 3-round protocol has communication complexity ${\mathcal O}(t|C|\ell\log\ell)$ and computational complexity of ${\mathcal O}(t|C|\ell)$ bit operations. The hidden constants are small, and the computation consists mostly of computing inner products of bit-vectors.
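For intuition about the underlying assumption, an LPN instance hands out noisy inner products of a secret bit-vector: samples $(a, \langle a, s\rangle \oplus e)$ with $a$ uniform and $e$ Bernoulli-distributed noise. A minimal Python sketch of the sample distribution (just the assumption, not the paper's commitment scheme or Σ-protocol):

```python
import random

ELL = 128     # secret length (the parameter "l" in the abstract)
TAU = 0.125   # Bernoulli noise rate

def lpn_samples(s, n):
    """Return n LPN samples (a, <a,s> xor e) for the secret bit-vector s."""
    out = []
    for _ in range(n):
        a = [random.randrange(2) for _ in range(ELL)]
        e = 1 if random.random() < TAU else 0          # Bernoulli(TAU) noise
        b = (sum(ai & si for ai, si in zip(a, s)) % 2) ^ e
        out.append((a, b))
    return out

secret = [random.randrange(2) for _ in range(ELL)]
samples = lpn_samples(secret, 10)
# Distinguishing such pairs from uniform (a, b), or recovering `secret`,
# is the LPN problem; the noise defeats plain Gaussian elimination.
print(len(samples), samples[0][1])
```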

Abhishek Jain, Stephan Krenn, Krzysztof Pietrzak, Aris Tentes

Calling Out Cheaters: Covert Security with Public Verifiability

We introduce the notion of covert security with public verifiability, building on the covert security model introduced by Aumann and Lindell (TCC 2007). Protocols that satisfy covert security guarantee that the honest parties involved in the protocol will notice any cheating attempt with some constant probability $\varepsilon$. The idea behind the model is that the fear of being caught cheating will be enough of a deterrent to prevent any cheating attempt. However, in the basic covert security model, the honest parties are not able to persuade any third party (say, a judge) that cheating occurred.

We propose (and formally define) an extension of the model where, when an honest party detects cheating, it also receives a certificate that can be published and used to persuade other parties, without revealing any information about the honest party's input. In addition, malicious parties cannot create fake certificates in an attempt to frame innocent parties.

Finally, we construct a secure two-party computation protocol for any functionality $f$ that satisfies our definition, and our protocol is almost as efficient as that of Aumann and Lindell. We believe that the fear of public humiliation or even legal consequences vastly exceeds the deterrent given by standard covert security. Therefore, even a small value of the deterrent factor $\varepsilon$ will suffice to discourage any cheating attempt.

Gilad Asharov, Claudio Orlandi

A Unified Framework for UC from Only OT

In [1], the authors presented a unified framework for constructing Universally Composable (UC) secure computation protocols, assuming only enhanced trapdoor permutations. In this work, we weaken the hardness assumption underlying the unified framework to only the existence of a stand-alone secure semi-honest Oblivious Transfer (OT) protocol. The new framework directly implies new and improved UC feasibility results in various models from only the existence of a semi-honest OT protocol, since in many models the existence of UC-OT implies the existence of a semi-honest OT protocol.

Furthermore, we show that by relying on a more fine-grained analysis of the unified framework, we obtain concurrently secure computation protocols with super-polynomial-time simulation (SPS), based on the

necessary

assumption of the existence of a semi-honest OT protocol that can be simulated in super-polynomial times. When the underlying OT protocol has constant rounds, the SPS secure protocols constructed also have constant rounds. This yields the first construction of constant-round secure computation protocols that satisfy a meaningful notions of concurrent security (i.e., SPS security) based on tight assumptions.

A notable corollary following from our new unifed framwork is that stand-alone (or bounded-concurrent) password authenticated key-exchange protocols (PAKE) can be constructed from only semi-honest OT protocols; combined with the result of [2] that the existence of PAKE protocols implies that of OT, we derive a tight characterization of PAKE protocols.

Rafael Pass, Huijia Lin, Muthuramakrishnan Venkitasubramaniam

Implementation Issues

Four-Dimensional Gallant-Lambert-Vanstone Scalar Multiplication

The GLV method of Gallant, Lambert and Vanstone (CRYPTO 2001) computes any multiple $kP$ of a point $P$ of prime order $n$ lying on an elliptic curve with a low-degree endomorphism $\Phi$ (called a GLV curve) over $\mathbb{F}_p$ as $kP = k_1P + k_2\Phi(P)$, with $\max\{|k_1|,|k_2|\}\leq C_1\sqrt n$ for some explicit constant $C_1 > 0$. Recently, Galbraith, Lin and Scott (EUROCRYPT 2009) extended this method to all curves over $\mathbb{F}_{p^2}$ which are twists of curves defined over $\mathbb{F}_p$. We show in this work how to merge the two approaches in order to get, for twists of any GLV curve over $\mathbb{F}_{p^2}$, a four-dimensional decomposition together with fast endomorphisms $\Phi, \Psi$ over $\mathbb{F}_{p^2}$ acting on the group generated by a point $P$ of prime order $n$, resulting in a proven decomposition for any scalar $k \in [1,n]$ given by $kP = k_1P + k_2\Phi(P) + k_3\Psi(P) + k_4\Psi\Phi(P)$ with $\max_i(|k_i|) < C_2\, n^{1/4}$ for some explicit $C_2 > 0$. Remarkably, taking the best $C_1, C_2$, we obtain $C_2/C_1 < 412$, independently of the curve, ensuring in theory an almost constant relative speedup. In practice, our experiments reveal that the use of the merged GLV-GLS approach supports a scalar multiplication that runs up to 50% faster than the original GLV method. We then improve this performance even further by exploiting the Twisted Edwards model and show that curves originally slower may become extremely efficient in this model. In addition, we analyze the performance of the method in a multicore setting and describe how to efficiently protect GLV-based scalar multiplication against several side-channel attacks. Our implementations improve the state-of-the-art performance of point multiplication for a variety of scenarios including side-channel protected and unprotected cases with sequential and multicore execution.
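The heart of such methods is the scalar decomposition, a closest-vector computation in the lattice $L = \{(a,b) : a + b\lambda \equiv 0 \pmod n\}$, where $\lambda$ is the eigenvalue of the endomorphism on the order-$n$ subgroup. A minimal two-dimensional Python sketch with toy illustrative values (the paper's decomposition is four-dimensional), using Lagrange-Gauss reduction and Babai round-off:

```python
from fractions import Fraction

def lagrange_gauss(u, v):
    """Reduce a rank-2 lattice basis (the 2D analogue of LLL)."""
    def norm2(w): return w[0] * w[0] + w[1] * w[1]
    while True:
        if norm2(u) > norm2(v):
            u, v = v, u
        m = round(Fraction(u[0] * v[0] + u[1] * v[1], norm2(u)))
        if m == 0:
            return u, v
        v = (v[0] - m * u[0], v[1] - m * u[1])

def glv_decompose(k, lam, n):
    """Write k = k1 + k2*lam (mod n) with k1, k2 of size about sqrt(n)."""
    u, v = lagrange_gauss((n, 0), (-lam, 1))   # short basis of the lattice
    # Babai round-off: closest lattice vector to (k, 0).
    det = u[0] * v[1] - u[1] * v[0]
    c1 = round(Fraction(k * v[1], det))
    c2 = round(Fraction(-k * u[1], det))
    k1 = k - c1 * u[0] - c2 * v[0]
    k2 = -c1 * u[1] - c2 * v[1]
    return k1, k2

n, lam = 1000003, 654321       # toy modulus and eigenvalue (illustrative)
k = 123456
k1, k2 = glv_decompose(k, lam, n)
assert (k1 + k2 * lam - k) % n == 0
print(k1, k2)                  # both of magnitude roughly sqrt(n) ~ 1000
```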

Patrick Longa, Francesco Sica

Shuffling against Side-Channel Attacks: A Comprehensive Study with Cautionary Note

Together with masking, shuffling is one of the most frequently considered solutions to improve the security of small embedded devices against side-channel attacks. In this paper, we provide a comprehensive study of this countermeasure, including improved implementations and a careful information theoretic and security analysis of its different variants. Our analyses lead to important conclusions as they moderate the strong security improvements claimed in previous works. They suggest that simplified versions of shuffling (e.g. using random start indexes) can be significantly weaker than their counterparts using full permutations. We further show with an experimental case study that such simplified versions can be as easy to attack as unprotected implementations. We finally exhibit the existence of “indirect leakages” in shuffled implementations that can be exploited due to the different leakage models of the different resources used in cryptographic implementations. This suggests the design of fully shuffled (and efficient) implementations, where both the execution order of the instructions and the physical resources used are randomized, as an interesting scope for further research.
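The two variants compared here differ only in the permutation they draw. A minimal Python sketch of both, illustrated on a 16-byte S-box layer (toy code whose only purpose is to show the difference in randomization: the full shuffle draws one of $16!$ execution orders, the start-index variant only one of 16 rotations):

```python
import random

rng = random.Random(0)
SBOX = list(range(256))
rng.shuffle(SBOX)                 # a fixed, arbitrary 8-bit S-box for the demo

def sbox_layer_start_index(state):
    """Weaker variant: random start index, i.e. one of only 16 rotations."""
    start = random.randrange(16)
    out = list(state)
    for i in range(16):
        j = (start + i) % 16      # execution order has just 4 bits of entropy
        out[j] = SBOX[state[j]]
    return out

def sbox_layer_full_shuffle(state):
    """Stronger variant: a uniformly random permutation of the 16 indices."""
    order = list(range(16))
    random.shuffle(order)         # one of 16! ~ 2^44 execution orders
    out = list(state)
    for j in order:
        out[j] = SBOX[state[j]]
    return out

state = list(range(16))
# Both variants compute the same function; only the (side-channel visible)
# order of the 16 S-box lookups differs.
assert sbox_layer_start_index(state) == sbox_layer_full_shuffle(state)
```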

Nicolas Veyrat-Charvillon, Marcel Medwed, Stéphanie Kerckhof, François-Xavier Standaert

Theory and Practice of a Leakage Resilient Masking Scheme

A recent trend in cryptography is to formally prove the leakage resilience of cryptographic implementations; that is, one formally shows that a scheme remains provably secure even in the presence of side channel leakage. Although many of the proposed schemes are secure in a surprisingly strong model, most of them are unfortunately rather inefficient and come without practical security evaluations or implementation attempts. In this work, we take a further step towards closing the gap between theoretical leakage resilient cryptography and more practice-oriented research. In particular, we show that masking countermeasures based on the inner product not only exhibit strong theoretical leakage resilience, but moreover provide better practical security or efficiency than earlier masking countermeasures. We demonstrate the feasibility of inner product masking by giving a secured implementation of the AES for an 8-bit processor.
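In inner-product masking, a secret $x$ is split into two share vectors $L, R$ with $\langle L, R\rangle = x$, and computation proceeds on the shares so that no single intermediate value reveals $x$. A minimal Python sketch of the encoding (over a small prime field chosen here for readability; the AES implementation in the paper works over $GF(2^8)$):

```python
import random

P = 257      # toy prime field; the paper's implementation uses GF(2^8)
N = 4        # number of shares per vector

def ip_mask(x):
    """Encode x as (L, R) with <L, R> = x (mod P), L_i nonzero."""
    L = [random.randrange(1, P) for _ in range(N)]
    R = [random.randrange(P) for _ in range(N)]
    partial = sum(l * r for l, r in zip(L[1:], R[1:])) % P
    R[0] = (x - partial) * pow(L[0], -1, P) % P   # fix R[0] so the IP is x
    return L, R

def ip_unmask(L, R):
    return sum(l * r for l, r in zip(L, R)) % P

x = 0xAB
L, R = ip_mask(x)
assert ip_unmask(L, R) == x
# No single share by itself determines x; all shares are needed to unmask,
# which is the masking property the countermeasure relies on.
```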

Josep Balasch, Sebastian Faust, Benedikt Gierlichs, Ingrid Verbauwhede

Erratum: Investigating Fundamental Security Requirements on Whirlpool: Improved Preimage and Collision Attacks

The following acknowledgement was omitted from the paper:

Acknowledgement:

Shuang Wu is supported by the National Natural Science Foundation of China Under Grant No.61202421.

Yu Sasaki, Lei Wang, Shuang Wu, Wenling Wu

Backmatter
