2011 | Book

Advances in Cryptology – CRYPTO 2011

31st Annual Cryptology Conference, Santa Barbara, CA, USA, August 14-18, 2011. Proceedings

About this Book

This book constitutes the refereed proceedings of the 31st Annual International Cryptology Conference, CRYPTO 2011, held in Santa Barbara, CA, USA, in August 2011. The 42 revised full papers presented were carefully reviewed and selected from 230 submissions. The volume also contains the abstract of one invited talk. The papers are organized in topical sections on randomness and its use; computer-assisted cryptographic proofs; outsourcing and delegating computation; symmetric cryptanalysis and constructions; secure computation; leakage and side channels; quantum cryptography; lattices and knapsacks; public-key encryption; symmetric schemes; signatures; oblivious transfer and secret sharing; and multivariate and coding-based schemes.

Table of Contents

Frontmatter

Randomness and Its Use

Leftover Hash Lemma, Revisited

The famous Leftover Hash Lemma (LHL) states that (almost) universal hash functions are good randomness extractors. Despite its numerous applications, LHL-based extractors suffer from the following two limitations:

Large Entropy Loss: to extract v bits from distribution X of min-entropy m which are ε-close to uniform, one must set v ≤ m − 2 log(1/ε), meaning that the entropy loss $L \stackrel{\mathrm{def}}{=} m-v \ge 2\log(1/\epsilon)$. For many applications, such entropy loss is too large.

Large Seed Length: the seed length n of the (almost) universal hash function required by the LHL must be at least n ≥ min(u − v, v + 2 log(1/ε)) − O(1), where u is the length of the source, and must grow with the number of extracted bits.

Quite surprisingly, we show that both limitations of the LHL (large entropy loss and large seed) can be overcome, or at least mitigated, in various important scenarios. First, we show that entropy loss could be reduced to L = log(1/ε) for the setting of deriving secret keys for a wide range of cryptographic applications.

Specifically, the security of these schemes with an LHL-derived key gracefully degrades from ε to at most $\epsilon + \sqrt{\epsilon 2^{-L}}$. (Notice that, unlike standard LHL, this bound is meaningful even when one extracts more bits than the min-entropy we have!) Based on these results we build a general computational extractor that enjoys low entropy loss and can be used to instantiate a generic key derivation function for any cryptographic application.

Second, we study the soundness of the natural expand-then-extract approach, where one uses a pseudorandom generator (PRG) to expand a short "input seed" S into a longer "output seed" S′, and then uses the resulting S′ as the seed required by the LHL (or, more generally, by any randomness extractor). We show that, in general, the expand-then-extract approach is not sound if the Decisional Diffie-Hellman assumption is true. Despite that, we show that it is sound either: (1) when extracting a "small" (logarithmic in the security of the PRG) number of bits; or (2) in minicrypt. Implication (2) suggests that the expand-then-extract approach is likely secure when used with "practical" PRGs, despite lacking a reductionist proof of security!

Boaz Barak, Yevgeniy Dodis, Hugo Krawczyk, Olivier Pereira, Krzysztof Pietrzak, François-Xavier Standaert, Yu Yu
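
To make these parameter trade-offs concrete, here is a minimal numerical sketch (function names are ours, purely illustrative) of the standard LHL extraction bound next to the abstract's improved degradation bound for derived keys:

```python
import math

def lhl_extractable_bits(m: int, eps: float) -> int:
    """Standard LHL: from min-entropy m, at most v = m - 2*log2(1/eps)
    bits can be extracted eps-close to uniform (entropy loss 2*log2(1/eps))."""
    return m - math.ceil(2 * math.log2(1 / eps))

def derived_key_security(eps: float, loss: int) -> float:
    """Improved bound from the abstract: with entropy loss L, application
    security degrades from eps to at most eps + sqrt(eps * 2**(-L))."""
    return eps + math.sqrt(eps * 2.0 ** (-loss))

m, eps = 160, 2.0 ** -40
print(lhl_extractable_bits(m, eps))        # 80: the standard LHL loses 80 bits
print(derived_key_security(eps, loss=40))  # ~2^-39: loss log2(1/eps) suffices
```
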
Random Oracle Reducibility

We discuss a reduction notion relating the random oracles in two cryptographic schemes A and B. Basically, the random oracle of scheme B reduces to the one of scheme A if any hash function instantiation of the random oracle (possibly still oracle based) which makes A secure also makes B secure. In a sense, instantiating the random oracle in scheme B is thus no more demanding than doing so for scheme A. If, in addition, the standard cryptographic assumptions for scheme B are implied by the ones for scheme A, we can conclude that scheme B actually relies on weaker assumptions. Technically, such a conclusion cannot be made given only individual proofs in the random oracle model for each scheme.

The notion of random oracle reducibility immediately allows one to transfer an uninstantiability result from an uninstantiable scheme B to a scheme A to which the random oracle reduces. We are nonetheless mainly interested in the other direction, as a means to establish hierarchically ordered random-oracle based schemes in terms of security assumptions. As a positive example, we consider the twin Diffie-Hellman (DH) encryption scheme of Cash et al. (Journal of Cryptology, 2009), which has been shown to be secure under the DH assumption in the random oracle model. It thus appears to improve over the related hashed ElGamal encryption scheme, which relies on the random oracle model and the strong DH assumption, where the adversary also gets access to a decisional DH oracle. As explained above, we complement this belief by showing that the random oracle in the twin DH scheme actually reduces to the one of the hashed ElGamal encryption scheme. We finally discuss further random oracle reductions between common signature schemes like GQ, PSS, and FDH.

Paul Baecher, Marc Fischlin
Time-Lock Puzzles in the Random Oracle Model

A time-lock puzzle is a mechanism for sending messages “to the future”. The sender publishes a puzzle whose solution is the message to be sent, thus hiding it until enough time has elapsed for the puzzle to be solved. For time-lock puzzles to be useful, generating a puzzle should take less time than solving it. Since adversaries may have access to many more computers than honest solvers, massively parallel solvers should not be able to produce a solution much faster than serial ones.

To date, we know of only one mechanism that is believed to satisfy these properties: the one proposed by Rivest, Shamir and Wagner (1996), who originally introduced the notion of time-lock puzzles. Their puzzle is based on the serial nature of exponentiation and the hardness of factoring, and is therefore vulnerable to advances in factoring techniques (as well as to quantum attacks).

In this work, we study the possibility of constructing time-lock puzzles in the random-oracle model. Our main result is negative, ruling out time-lock puzzles that require more parallel time to solve than the total work required to generate a puzzle. In particular, this should rule out black-box constructions of such time-lock puzzles from one-way permutations and collision-resistant hash-functions. On the positive side, we construct a time-lock puzzle with a linear gap in parallel time: a new puzzle can be generated with one round of n parallel queries to the random oracle, but n rounds of serial queries are required to solve it (even for massively parallel adversaries).

Mohammad Mahmoody, Tal Moran, Salil Vadhan
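
For orientation, the Rivest-Shamir-Wagner puzzle described in the second paragraph (not the paper's random-oracle construction) fits in a few lines; this toy sketch assumes sympy for prime generation and uses parameters far too small to be secure:

```python
from sympy import randprime  # assumed available; any RSA-modulus generator works

def generate_puzzle(t: int, bits: int = 256):
    """Knowing phi(N), the creator computes y = 2^(2^t) mod N with two
    cheap exponentiations; without the factors, t sequential squarings
    seem necessary, which is the source of the time-lock."""
    p = randprime(2 ** (bits - 1), 2 ** bits)
    q = randprime(2 ** (bits - 1), 2 ** bits)
    N, phi = p * q, (p - 1) * (q - 1)
    e = pow(2, t, phi)            # creator's shortcut: reduce the exponent mod phi(N)
    return N, 2, t, pow(2, e, N)  # y can mask a message, e.g. via a hash of y

def solve_puzzle(N: int, x: int, t: int) -> int:
    """Solver's view: t modular squarings that resist parallelization."""
    for _ in range(t):
        x = pow(x, 2, N)
    return x

N, x, t, y = generate_puzzle(t=10_000)
assert solve_puzzle(N, x, t) == y
```
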
Physically Uncloneable Functions in the Universal Composition Framework

Recently, there have been numerous works about hardware-assisted cryptographic protocols, either improving previous constructions in terms of efficiency, or in terms of security. In particular, many suggestions use Canetti's universal composition (UC) framework to model hardware tokens and to derive schemes with strong security guarantees in the UC framework. In this paper, we augment this approach by considering Physically Uncloneable Functions (PUFs) in the UC framework. Interestingly, when doing so, one encounters several peculiarities specific to PUFs, such as the intrinsic non-programmability of such functions. Using our UC notion of PUFs, we then devise efficient UC-secure protocols for basic tasks like oblivious transfer, commitments, and key exchange. It turns out that designing PUF-based protocols is fundamentally different than for other hardware tokens. In part this is because of the non-programmability; but also, since the functional behavior is unpredictable even for the creator of the PUF, there is an asymmetric situation in which only the party in possession of the PUF has full access to the secrets.

Christina Brzuska, Marc Fischlin, Heike Schröder, Stefan Katzenbeisser

Computer-Assisted Cryptographic Proofs

Computer-Aided Security Proofs for the Working Cryptographer

We present EasyCrypt, an automated tool for elaborating security proofs of cryptographic systems from proof sketches (compact, formal representations of the essence of a proof as a sequence of games and hints). Proof sketches are checked automatically using off-the-shelf SMT solvers and automated theorem provers, and then compiled into verifiable proofs in the CertiCrypt framework. The tool supports most common reasoning patterns and is significantly easier to use than its predecessors. We argue that EasyCrypt is a plausible candidate for adoption by working cryptographers and illustrate its application to security proofs of the Cramer-Shoup and Hashed ElGamal cryptosystems.

Gilles Barthe, Benjamin Grégoire, Sylvain Heraud, Santiago Zanella Béguelin

Outsourcing and Delegating Computation

Optimal Verification of Operations on Dynamic Sets

We study the design of protocols for set-operation verification, namely the problem of cryptographically checking the correctness of outsourced set operations performed by an untrusted server over a dynamic collection of sets that are owned (and updated) by a trusted source. We present new authenticated data structures that allow any entity to publicly verify a proof attesting the correctness of primitive set operations such as intersection, union, subset and set difference. Based on a novel extension of the security properties of bilinear-map accumulators as well as on a primitive called accumulation tree, our protocols achieve optimal verification and proof complexity (i.e., only proportional to the size of the query parameters and the answer), as well as optimal update complexity (i.e., constant), while incurring no extra asymptotic space overhead. The proof construction is also efficient, adding a logarithmic overhead to the computation of the answer of a set-operation query. In contrast, existing schemes entail high communication and verification costs or high storage costs. Applications of interest include efficient verification of keyword search and database queries. The security of our protocols is based on the bilinear q-strong Diffie-Hellman assumption.

Charalampos Papamanthou, Roberto Tamassia, Nikos Triandopoulos
Verifiable Delegation of Computation over Large Datasets

We study the problem of computing on large datasets that are stored on an untrusted server. We follow the approach of amortized verifiable computation introduced by Gennaro, Gentry, and Parno at CRYPTO 2010. We present the first practical verifiable computation scheme for high degree polynomial functions. Such functions can be used, for example, to make predictions based on polynomials fitted to a large number of sample points in an experiment. In addition to the many non-cryptographic applications of delegating high degree polynomials, we use our verifiable computation scheme to obtain new solutions for verifiable keyword search and proofs of retrievability. Our constructions are based on the DDH assumption and its variants, and achieve adaptive security, which was left as an open problem by Gennaro et al. (albeit for general functionalities).

Our second result is a primitive which we call a verifiable database (VDB). Here, a weak client outsources a large table to an untrusted server, and makes retrieval and update queries. For each query, the server provides a response and a proof that the response was computed correctly. The goal is to minimize the resources required by the client. This is made particularly challenging if the number of update queries is unbounded. We present a VDB scheme based on the hardness of the subgroup membership problem in composite order bilinear groups.

Siavosh Benabbas, Rosario Gennaro, Yevgeniy Vahlis
Secure Computation on the Web: Computing without Simultaneous Interaction

Secure computation enables mutually suspicious parties to compute a joint function of their private inputs while providing strong security guarantees. However, its use in practice seems limited. We argue that one of the reasons for this is that the model of computation on the web is not suited to the type of communication patterns needed for secure computation. Specifically, in most web scenarios clients independently connect to servers, interact with them and then leave. This rules out the use of secure computation protocols that require that all participants interact simultaneously.

We initiate a study of secure computation in a client-server model where each client connects to the server once and interacts with it, without any other client necessarily being connected at the same time. We point out some inherent limitations in this model and present definitions that capture what can be done. We also present a general feasibility result and several truly practical protocols for a number of functions of interest. All our protocols are based on standard assumptions, and we achieve security both in the semi-honest and malicious adversary models.

Shai Halevi, Yehuda Lindell, Benny Pinkas
Memory Delegation

We consider the problem of delegating computation, where the delegator doesn’t even know the input to the function being delegated, and runs in time significantly smaller than the input length.

For example, consider the setting of memory delegation, where a delegator wishes to delegate her entire memory to the cloud. The delegator may want the cloud to compute functions on this memory, and prove that the functions were computed correctly. As another example, consider the setting of streaming delegation, where a stream of data goes by, and a delegator, who cannot store this data, delegates this task to the cloud. Later the delegator may ask the cloud to compute statistics on this streaming data, and prove the correctness of the computation. We note that in both settings the delegator must keep a (short) certificate of the data being delegated, in order to later verify the correctness of the computations. Moreover, in the streaming setting, this certificate should be computed in a streaming manner.

We construct both memory and streaming delegation schemes. We present non-interactive constructions based on the (standard) delegation scheme of Goldwasser et al. [GKR08]. These schemes allow the delegation of any function computable by an ${\cal L}$-uniform circuit of low depth (the complexity of the delegator depends linearly on the depth). For memory delegation, we rely on the existence of a polylog PIR scheme, and for streaming, we rely on the existence of a fully homomorphic encryption scheme.

We also present constructions based on the CS-proofs of Micali. These schemes allow the delegation of any function in P. However, they are interactive (i.e., consist of 4 messages), or are non-interactive in the Random Oracle Model.

Kai-Min Chung, Yael Tauman Kalai, Feng-Hao Liu, Ran Raz

Symmetric Cryptanalysis and Constructions

Automatic Search of Attacks on Round-Reduced AES and Applications

In this paper, we describe versatile and powerful algorithms for searching guess-and-determine and meet-in-the-middle attacks on byte-oriented symmetric primitives. To demonstrate the strength of these tools, we show that they allow us to automatically discover new attacks on round-reduced AES with very low data complexity, and to find improved attacks on the AES-based MACs Alpha-MAC and Pelican-MAC, and also on the AES-based stream cipher LEX. Finally, the tools can be used in the context of fault attacks. These algorithms exploit the algebraically simple byte-oriented structure of the AES. When the attacks found by the tools are practical, we have implemented and validated them.

Charles Bouillaguet, Patrick Derbez, Pierre-Alain Fouque
How to Improve Rebound Attacks

Rebound attacks are a state-of-the-art analysis method for hash functions. These cryptanalysis methods are based on a well-chosen differential path and have been applied to several hash functions from the SHA-3 competition, providing the best known analysis in these cases. In this paper we study rebound attacks in detail and find, for a large number of cases, that the complexities of existing attacks can be improved.

This is done by identifying problems that optimally adapt to the cryptanalytic situation, and by using better algorithms to find solutions for the differential path. Our improvements affect one particular operation that appears in most rebound attacks and which is often the bottleneck of the attacks. This operation, which varies depending on the attack, can be roughly described as merging large lists. As a result, we introduce new general purpose algorithms for enabling further rebound analysis to be as performant as possible. We illustrate our new algorithms on real hash functions.

María Naya-Plasencia
A Cryptanalysis of PRINTcipher: The Invariant Subspace Attack

At CHES 2010, the new block cipher PRINTcipher was presented as a lightweight encryption solution for printable circuits [15]. The best attack to date is a differential attack [1] that breaks less than half of the rounds. In this paper, we will present a new attack called invariant subspace attack that breaks the full cipher for a significant fraction of its keys. This attack can be seen as a weak-key variant of a statistical saturation attack. For such weak keys, a chosen plaintext distinguishing attack can be mounted in unit time. In addition to breaking PRINTcipher, the new attack also gives us new insights into other, more well-established attacks. We derive a truncated differential characteristic with a round-independent but highly key-dependent probability. In addition, we also show that for weak keys, strongly biased linear approximations exist for any number of rounds. In this sense, PRINTcipher behaves very differently from what is usually (often implicitly) assumed.

Gregor Leander, Mohamed Ahmed Abdelraheem, Hoda AlKhzaimi, Erik Zenner
The PHOTON Family of Lightweight Hash Functions

RFID security is currently one of the major challenges cryptography has to face, often solved by protocols assuming that an on-tag hash function is available. In this article we present the PHOTON lightweight hash-function family, available in many different flavors and suitable for extremely constrained devices such as passive RFID tags. Our proposal uses a sponge-like construction as domain extension algorithm and an AES-like primitive as internal unkeyed permutation. This allows us to obtain the most compact hash function known so far (about 1120 GE for 64-bit collision resistance security), reaching areas very close to the theoretical optimum (derived from the minimal internal state memory size). Moreover, the speed achieved by PHOTON also compares quite favorably to its competitors. This is mostly because, unlike for previously proposed schemes, our proposal is very simple to analyze, and one can derive tight AES-like bounds on the number of active Sboxes. This kind of AES-like primitive is usually not well suited for ultra-constrained environments, but we describe in this paper a new method for generating the column mixing layer in a serial way, drastically lowering the required area. Finally, we slightly extend the sponge framework in order to offer interesting trade-offs between speed and preimage security for small messages, the classical use case in hardware.

Jian Guo, Thomas Peyrin, Axel Poschmann

Secure Computation

Perfectly-Secure Multiplication for Any t < n/3

In the setting of secure multiparty computation, a set of n parties with private inputs wish to jointly compute some functionality of their inputs. One of the most fundamental results of information-theoretically secure computation was presented by Ben-Or, Goldwasser and Wigderson (BGW) in 1988. They demonstrated that any n-party functionality can be computed with perfect security, in the private channels model. The most technically challenging part of this result is a protocol for multiplying two shared values, with perfect security in the presence of up to t < n/3 malicious adversaries.

In this paper we provide a full specification of the BGW perfect multiplication protocol and prove its security. This includes one new step for the perfect multiplication protocol in the case of n/4 ≤ t < n/3. As in the original BGW protocol, this protocol works whenever the parties hold univariate (Shamir) shares of the input values. In addition, we present a new multiplication protocol that utilizes bivariate secret sharing in order to achieve higher efficiency while maintaining a round complexity that is constant per multiplication. Both of our protocols are presented with full proofs of security.

Gilad Asharov, Yehuda Lindell, Tal Rabin
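
The univariate (Shamir) sharing from which both protocols start is easy to state in code. Below is a minimal sketch over a toy prime field (the multiplication protocol itself, the paper's actual contribution, is far more involved):

```python
import secrets

P = 2**127 - 1  # a Mersenne prime; any prime larger than n works

def share(secret: int, t: int, n: int):
    """Sample a random degree-t polynomial f with f(0) = secret and
    hand party i the point (i, f(i)); any t shares reveal nothing."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t)]
    f = lambda x: sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at x = 0 from any t+1 shares."""
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = share(secret=42, t=2, n=7)   # e.g. n = 7 parties, t = 2 < n/3
assert reconstruct(shares[:3]) == 42  # any t+1 = 3 shares suffice
```
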
The IPS Compiler: Optimizations, Variants and Concrete Efficiency

In recent work, Ishai, Prabhakaran and Sahai (CRYPTO 2008) presented a new compiler (hereafter the IPS compiler) for constructing protocols that are secure in the presence of malicious adversaries without an honest majority, from protocols that are only secure in the presence of semi-honest adversaries. The IPS compiler has many important properties: it provides a radically different way of obtaining security in the presence of malicious adversaries with no honest majority, it is black-box in the underlying semi-honest protocol, and it has excellent asymptotic efficiency.

In this paper, we study the IPS compiler from a number of different angles. We present an efficiency improvement of the "watchlist setup phase" of the compiler that also facilitates a simpler and tighter analysis of the cheating probability. In addition, we present a conceptually simpler variant that uses protocols that are secure in the presence of covert adversaries as its basic building block. This variant can be used to achieve more efficient asymptotic security, as we show regarding black-box constructions of malicious oblivious transfer from semi-honest oblivious transfer. In addition, it deepens our understanding of the model of security in the presence of covert adversaries. Finally, we analyze the IPS compiler from a concrete efficiency perspective and demonstrate that in some cases it can be competitive with the best efficient protocols currently known.

Yehuda Lindell, Eli Oxman, Benny Pinkas
1/p-Secure Multiparty Computation without Honest Majority and the Best of Both Worlds

A protocol for computing a functionality is secure if an adversary in this protocol cannot cause more harm than in an ideal computation, where parties give their inputs to a trusted party which returns the output of the functionality to all parties. In particular, in the ideal model such computation is fair: all parties get the output. Cleve (STOC 1986) proved that, in general, fairness is not possible without an honest majority. To overcome this impossibility, Gordon and Katz (Eurocrypt 2010) suggested a relaxed definition, 1/p-secure computation, which guarantees partial fairness. For two parties, they construct 1/p-secure protocols for functionalities for which the size of either their domain or their range is polynomial (in the security parameter). Gordon and Katz ask whether their results can be extended to multiparty protocols.

We study 1/p-secure protocols in the multiparty setting for general functionalities. Our main result is constructions of 1/p-secure protocols that are resilient against any number of corrupt parties provided that the number of parties is constant and the size of the range of the functionality is at most polynomial (in the security parameter n). If less than 2/3 of the parties are corrupt, the size of the domain is constant, and the functionality is deterministic, then our protocols are efficient even when the number of parties is log log n. On the negative side, we show that when the number of parties is super-constant, 1/p-secure protocols are not possible when the size of the domain is polynomial. Thus, our feasibility results for 1/p-secure computation are essentially tight.

We further motivate our results by constructing protocols with stronger guarantees: if in the execution of the protocol there is a majority of honest parties, then our protocols provide full security. However, if only a minority of the parties are honest, then our protocols are 1/p-secure. Thus, our protocols provide the best of both worlds, where 1/p-security is only a fall-back option if there is no honest majority.

Amos Beimel, Yehuda Lindell, Eran Omri, Ilan Orlov

Leakage and Side Channels

Leakage-Resilient Zero Knowledge

In this paper, we initiate a study of zero knowledge proof systems in the presence of side-channel attacks. Specifically, we consider a setting where a cheating verifier is allowed to obtain arbitrary bounded leakage on the entire state (including the witness and the random coins) of the prover during the entire protocol execution. We formalize a meaningful definition of a leakage-resilient zero knowledge (LR-ZK) proof system, which intuitively guarantees that the protocol does not yield anything beyond the validity of the statement and the leakage obtained by the verifier.

We give a construction of an LR-ZK interactive proof system based on standard general assumptions. To the best of our knowledge, this is the first instance of a cryptographic interactive protocol where the adversary is allowed to perform leakage attacks during the protocol execution on the entire state of the honest party (in contrast, prior work only considered leakage prior to the protocol execution, or very limited leakage during the protocol execution). Next, we give an LR-NIZK proof system based on standard number-theoretic assumptions.

Finally, we demonstrate the usefulness of our notions by giving two concrete applications:

We initiate a new line of research to relax the assumption on the "tamper-proofness" of hardware tokens used in the design of various cryptographic protocols. In particular, we give a construction of a universally composable multiparty computation protocol in the leaky token model (where an adversary in possession of a token is allowed to obtain arbitrary bounded leakage on the entire state of the token) based on standard general assumptions.

Next, we give simple, generic constructions of fully leakage-resilient signatures in the bounded leakage model as well as the continual leakage model. Unlike the recent constructions of such schemes, we also obtain security in the "noisy leakage" model.

Sanjam Garg, Abhishek Jain, Amit Sahai
A Comprehensive Evaluation of Mutual Information Analysis Using a Fair Evaluation Framework

The resistance of cryptographic implementations to side-channel analysis is a matter of considerable interest to those concerned with information security. It is particularly desirable to identify the attack methodology (e.g. differential power analysis using correlation or distance-of-means as the distinguisher) able to produce the best results. Such attempts are complicated by the many and varied factors contributing to attack success: the device power consumption characteristics, an attacker's power model, the distinguisher by which measurements and model predictions are compared, the quality of the estimations, and so on. Previous work has delivered partial answers for certain restricted scenarios. In this paper we assess the effectiveness of mutual information-based differential power analysis within a generic and comprehensive evaluation framework. Complementary to existing work, we present several notions/characterisations of attack success with direct implications for the amount of data required. We are thus able to identify scenarios in which mutual information offers performance advantages over other distinguishers. Furthermore we observe an interesting feature, unique to the mutual information-based distinguisher, resembling a type of stochastic resonance, which could potentially enhance the effectiveness of such attacks over other methods in certain noisy scenarios.

Carolyn Whitnall, Elisabeth Oswald
Key-Evolution Schemes Resilient to Space-Bounded Leakage

Much recent work in cryptography attempts to build secure schemes in the presence of side-channel leakage or leakage caused by malicious software, like computer viruses. In this setting, the adversary may obtain some additional information (beyond the control of the scheme designer) about the internal secret state of a cryptographic scheme. Here, we consider key-evolution schemes that allow a user to evolve a secret key K_1 via a deterministic function f, to get updated keys K_2 = f(K_1), K_3 = f(K_2), .... Such a scheme is leakage-resilient if an adversary that can leak on the first i steps of the evolution process does not get any useful information about any future keys. For such schemes, one must assume some restriction on the complexity of the leakage to prevent pre-computation attacks, where the leakage on a key K_i simply pre-computes a future key K_{i+t} and leaks even a single bit on it.

Much of the prior work on this problem, and the restrictions made therein, can be divided into two types. Theoretical work offers rigor and provable security, but at the cost of having to make strong restrictions on the type of leakage and designing complicated schemes to make standard reduction-based proof techniques go through (an example of such an assumption is the “only computation leaks” axiom). On the other hand, practical work focuses on simple and efficient schemes, often at the cost of only achieving an intuitive notion of security without formal well-specified guarantees.

In this paper, we complement the two tracks via a middle-of-the-road approach. On one hand, we rely on the random-oracle model. On the other hand, we show that even in the random-oracle model, designing secure leakage-resilient schemes is susceptible to pitfalls. For example, just assuming that leakage "cannot evaluate the random oracle" can be misleading. Instead, we define a new model in which we assume that the "leakage" can be any arbitrary space-bounded computation that can make random oracle calls itself. We connect the space complexity of a computation in the random-oracle model to the pebbling complexity on graphs. Using this connection, we derive meaningful guarantees for relatively simple key-evolution constructions.

Our scheme is also secure against a large and natural class of active attacks, where an attacker can leak as well as tamper with the internals of a device. This is especially important if the key evolution is performed on a PC that can be attacked by a virus, a setting considered by prior work in the bounded retrieval model (BRM). This paper provides the first scheme where the adversary in the BRM can also modify the data stored on the machine.

Stefan Dziembowski, Tomasz Kazana, Daniel Wichs
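
The evolution interface itself is tiny; what the paper supplies is the pebbling-based security argument, which the following hash-based stand-in (our illustration, with SHA-256 playing the random oracle) does not capture:

```python
import hashlib

def evolve(key: bytes) -> bytes:
    """One deterministic step K_{i+1} = f(K_i), with f modeled as a hash."""
    return hashlib.sha256(b"evolve:" + key).digest()

keys = [bytes(32)]                 # K_1 (all-zero placeholder for a random key)
for _ in range(5):
    keys.append(evolve(keys[-1]))  # K_2 = f(K_1), K_3 = f(K_2), ...
```
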
Generic Side-Channel Distinguishers: Improvements and Limitations

The goal of generic side-channel distinguishers is to allow key recoveries against any type of implementation, under minimal assumptions on the underlying hardware. Such distinguishers are particularly interesting in view of recent technological advances. Indeed, the traditional leakage models used in side-channel attacks, based on the Hamming weight or distance of the data contained in an implementation, are progressively invalidated by the increased variability in nanoscale electronic devices. In this paper, we consequently provide two contributions related to the application of side-channel analysis against emerging cryptographic implementations. First, we describe a new statistical test that aims to be generic and efficient when exploiting high-dimensional leakages. The proposed distinguisher is fully non-parametric. It formulates the leakage distributions using a copula and discriminates keys based on the detection of an "outlier behavior". Next, we provide experiments putting forward the limitations of generic side-channel analysis in advanced scenarios, where leaking devices are protected with countermeasures. Our results show that all non-profiled attacks published so far can sometimes give a false sense of security, due to incorrect leakage models. That is, there exist settings in which an implementation is secure against such non-profiled attacks and can be defeated with profiling. This confirms that the evaluations of cryptographic implementations should always consider profiling, as a worst-case scenario.

Nicolas Veyrat-Charvillon, François-Xavier Standaert
Cryptography with Tamperable and Leaky Memory

A large and growing body of research has sought to secure cryptographic systems against physical attacks. Motivated by a large variety of real-world physical attacks on memory, an important line of work was initiated by Akavia, Goldwasser, and Vaikuntanathan [1] where security is sought under the assumptions that: (1) all memory is leaky, and (2) leakage can be an arbitrarily chosen (efficient) function of the memory.

However, physical attacks on memory are not limited to leakage through side-channels, but can also include active tampering attacks through a variety of physical attacks, including heat and EM radiation. Nevertheless, protection against the analogous model for tampering, where (1) all memory is tamperable, and (2) the tampering can be an arbitrarily chosen (efficient) function applied to the memory, has remained an elusive target, despite significant effort on tampering-related questions.

In this work, we tackle this question by considering a model where we assume that both of these pairs of statements are true: that all memory is both leaky and (arbitrarily) tamperable. Furthermore, we assume that this leakage and tampering can happen repeatedly and continually (extending the model of [10,7] in the context of leakage). We construct a signature scheme and an encryption scheme that are provably secure against such attacks, assuming that memory can be updated in a randomized fashion between episodes of tampering and leakage. In both schemes we rely on the linear assumption over bilinear groups.

We also separately consider a model where only continual and repeated tampering (but only bounded leakage) is allowed, and we are able to obtain positive results assuming only that “self-destruct” is possible, without the need for memory updates.

Our results also improve previous results in the continual leakage regime without tampering [10,7]. Whereas previous schemes secure against continual leakage (of arbitrary bounded functions of the secret key) could tolerate only a 1/2 − ε leakage rate between key updates under the linear assumption over bilinear groups, our schemes can tolerate a 1 − ε leakage rate between key updates, under the same assumption.

Yael Tauman Kalai, Bhavana Kanukurthi, Amit Sahai

Quantum Cryptography

Merkle Puzzles in a Quantum World

In 1974, Ralph Merkle proposed the first unclassified scheme for secure communications over insecure channels. When legitimate communicating parties are willing to spend an amount of computational effort proportional to some parameter N, an eavesdropper cannot break into their communication without spending a time proportional to $N^2$, which is quadratically more than the legitimate effort. We showed in an earlier paper that Merkle's schemes are completely insecure against a quantum adversary, but that their security can be partially restored if the legitimate parties are also allowed to use quantum computation: the eavesdropper needed to spend a time proportional to $N^{3/2}$ to break our earlier quantum scheme. Furthermore, all previous classical schemes could be broken completely by the onslaught of a quantum eavesdropper and we conjectured that this is unavoidable.

We give two novel key agreement schemes in the spirit of Merkle's. The first one can be broken by a quantum adversary that makes an effort proportional to $N^{5/3}$ to implement a quantum random walk in a Johnson graph reminiscent of Andris Ambainis' quantum algorithm for the element distinctness problem. This attack is optimal up to logarithmic factors. Our second scheme is purely classical, yet it cannot be broken by a quantum eavesdropper who is only willing to expend effort proportional to that of the legitimate parties.

Gilles Brassard, Peter Høyer, Kassem Kalach, Marc Kaplan, Sophie Laplante, Louis Salvail
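
Merkle's original quadratic-gap idea from the first paragraph (not the paper's new schemes) can be sketched as follows; the marker string and the parameter N here are purely illustrative:

```python
import hashlib, secrets

N = 2**12  # legitimate effort ~N; a classical eavesdropper needs ~N^2

def pad(sol: int) -> bytes:
    return hashlib.sha256(sol.to_bytes(4, "big")).digest()

def make_puzzle(pid: int, key: bytes) -> bytes:
    """Encrypt (marker, id, key) under a secret drawn from a space of
    size N, so opening one puzzle costs O(N) brute force."""
    msg = b"PZZL" + pid.to_bytes(4, "big") + key
    return bytes(a ^ b for a, b in zip(msg, pad(secrets.randbelow(N))))

def solve_puzzle(ct: bytes):
    for sol in range(N):  # O(N) work to open a single puzzle
        msg = bytes(a ^ b for a, b in zip(ct, pad(sol)))
        if msg[:4] == b"PZZL":
            return int.from_bytes(msg[4:8], "big"), msg[8:]

keys = [secrets.token_bytes(16) for _ in range(N)]
puzzles = [make_puzzle(i, keys[i]) for i in range(N)]  # Alice: ~N work
pid, k = solve_puzzle(secrets.choice(puzzles))         # Bob: ~N work
assert k == keys[pid]  # Bob announces pid; Eve expects to open ~N/2 puzzles
```
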
Classical Cryptographic Protocols in a Quantum World

Cryptographic protocols, such as protocols for secure function evaluation (SFE), have played a crucial role in the development of modern cryptography. The extensive theory of these protocols, however, deals almost exclusively with classical attackers. If we accept that quantum information processing is the most realistic model of physically feasible computation, then we must ask: what classical protocols remain secure against quantum attackers?

Our main contribution is showing the existence of classical two-party protocols for the secure evaluation of any polynomial-time function under reasonable computational assumptions (for example, it suffices that the learning with errors problem be hard for quantum polynomial time). Our result shows that the basic two-party feasibility picture from classical cryptography remains unchanged in a quantum world.

Sean Hallgren, Adam Smith, Fang Song
Position-Based Quantum Cryptography: Impossibility and Constructions

The aim of position-based cryptography is to use the geographical position of a party as its only credential. In this work, we study position-based cryptography in the quantum setting.

We show that if collaborating adversaries are allowed to pre-share an arbitrarily large entangled quantum state, then position-verification, and as a consequence position-based cryptography in general, is impossible (also) in the quantum setting.

To this end, we prove that with the help of sufficient pre-shared entanglement, any non-local quantum computation, i.e., any computation that involves quantum inputs from two parties at different locations, can be performed instantaneously and without any communication, up to local corrections that need to be applied to the outputs. The latter can be understood in that the parties obtain their respective outputs "encrypted", where each corresponding encryption key is known by the opposite party. This result generalizes to any number of parties, and it implies that any non-local quantum computation can be performed using a single round of mutual communication (in which the parties exchange the encryption keys), and that any position-verification scheme can be broken, assuming sufficient pre-shared entanglement among the adversaries.

On the positive side, we show that for adversaries that are restricted to not share any entangled quantum states, secure position-verification is achievable. Jointly, these results suggest the interesting question whether secure position-verification is possible in case of a bounded amount of entanglement. Our positive result can be interpreted as resolving this question in the simplest case, where the bound is set to zero.

Harry Buhrman, Nishanth Chandran, Serge Fehr, Ran Gelles, Vipul Goyal, Rafail Ostrovsky, Christian Schaffner

Lattices and Knapsacks

Analyzing Blockwise Lattice Algorithms Using Dynamical Systems

Strong lattice reduction is the key element for most attacks against lattice-based cryptosystems. Between the strongest but impractical HKZ reduction and the weak but fast LLL reduction, there have been several attempts to find efficient trade-offs. Among them, the BKZ algorithm introduced by Schnorr and Euchner [FCT'91] seems to achieve the best time/quality compromise in practice. However, no reasonable complexity upper bound is known for BKZ, and Gama and Nguyen [Eurocrypt'08] observed experimentally that its practical runtime seems to grow exponentially with the lattice dimension. In this work, we show that BKZ can be terminated long before its completion, while still providing bases of excellent quality. More precisely, we show that if given as input a basis $(b_i)_{i \le n} \in \mathbb{Q}^{n \times n}$ of a lattice L and a block-size β, and if terminated after $\Omega\left(\frac{n^3}{\beta^2}(\log n + \log \log \max_i \|{b}_i\|)\right)$ calls to a β-dimensional HKZ-reduction (or SVP) subroutine, then BKZ returns a basis whose first vector has norm $\leq 2 \nu _{\beta}^{\frac{n-1}{2(\beta-1)}+\frac{3}{2}} \cdot (\det L )^{\frac{1}{n}}$, where $\nu_{\beta} \leq \beta$ is the maximum of Hermite's constants in dimensions ≤ β. To obtain this result, we develop a completely new elementary technique based on discrete-time affine dynamical systems, which could lead to the design of improved lattice reduction algorithms.

Guillaume Hanrot, Xavier Pujol, Damien Stehlé
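
To get a feel for the guarantee, the two quantities in the abstract can be evaluated directly; a small calculator of ours, with Hermite's constant crudely replaced by its stated upper bound β:

```python
import math

def bkz_norm_bound(n: int, beta: int, det_L: float) -> float:
    """Bound from the abstract on the first output vector:
    ||b_1|| <= 2 * nu_beta^((n-1)/(2(beta-1)) + 3/2) * det(L)^(1/n),
    here with nu_beta replaced by its upper bound beta."""
    return 2 * beta ** ((n - 1) / (2 * (beta - 1)) + 1.5) * det_L ** (1 / n)

def bkz_svp_calls(n: int, beta: int, max_norm: float) -> float:
    """Order of the SVP-subroutine calls after which the bound holds,
    Omega(n^3/beta^2 * (log n + log log max_i ||b_i||)), constants dropped."""
    return n**3 / beta**2 * (math.log(n) + math.log(math.log(max_norm)))

print(bkz_norm_bound(200, 40, 1.0))      # quality after early termination
print(bkz_svp_calls(200, 40, 2.0**200))  # SVP-call count, up to the constant
```
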
Pseudorandom Knapsacks and the Sample Complexity of LWE Search-to-Decision Reductions

We study the pseudorandomness of bounded knapsack functions over arbitrary finite abelian groups. Previous works consider only specific families of finite abelian groups and 0-1 coefficients. The main technical contribution of our work is a new, general theorem that provides sufficient conditions under which pseudorandomness of bounded knapsack functions follows directly from their one-wayness. Our results generalize and substantially extend previous work of Impagliazzo and Naor (J. Cryptology 1996).

As an application of the new theorem, we give sample-preserving search-to-decision reductions for the Learning With Errors (LWE) problem, introduced by Regev (STOC 2005) and widely used in lattice-based cryptography. Concretely, we show that, for a wide range of parameters, m LWE samples can be proved indistinguishable from random just under the hypothesis that search LWE is a one-way function for the same number m of samples.

Daniele Micciancio, Petros Mol

Invited Talk

Tor and Circumvention: Lessons Learned
(Abstract to Go with Invited Talk)

Tor is a free-software anonymizing overlay network that helps people around the world use the Internet in safety. Tor’s 2500 volunteer relays carry almost 10Gb/s of traffic for several hundred thousand users each day.

Roger Dingledine

Public-Key Encryption

Fully Homomorphic Encryption over the Integers with Shorter Public Keys

At Eurocrypt 2010, van Dijk et al. described a fully homomorphic encryption scheme over the integers. The main appeal of this scheme (compared to Gentry's) is its conceptual simplicity. This simplicity comes at the expense of a public key size in ${\cal \tilde O}(\lambda^{10})$, which is too large for any practical system. In this paper we reduce the public key size to ${\cal \tilde O}(\lambda^{7})$ by encrypting with a quadratic form in the public key elements, instead of a linear form. We prove that the scheme remains semantically secure, based on a stronger variant of the approximate-GCD problem, already considered by van Dijk et al.

We also describe the first implementation of the resulting fully homomorphic scheme. Borrowing some optimizations from the recent Gentry-Halevi implementation of Gentry’s scheme, we obtain roughly the same level of efficiency. This shows that fully homomorphic encryption can be implemented using simple arithmetic operations.

Jean-Sébastien Coron, Avradip Mandal, David Naccache, Mehdi Tibouchi
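
The scheme being compressed here is easiest to appreciate in its toy symmetric form: ciphertexts are near-multiples of a secret odd integer p, and homomorphism is plain integer arithmetic. A sketch with insecure, purely illustrative parameters (the paper's actual contribution, the shorter public key, is not shown):

```python
import secrets

def keygen(eta: int = 128) -> int:
    """Secret key: a random odd eta-bit integer p."""
    return secrets.randbits(eta) | (1 << (eta - 1)) | 1

def encrypt(p: int, m: int, rho: int = 16, gamma: int = 512) -> int:
    """c = q*p + 2r + m for random large q and small signed noise r."""
    q = secrets.randbits(gamma)
    r = secrets.randbits(rho) - (1 << (rho - 1))
    return q * p + 2 * r + m

def decrypt(p: int, c: int) -> int:
    cp = c % p
    if cp > p // 2:   # centered reduction into (-p/2, p/2]
        cp -= p
    return cp % 2     # strip the even noise 2r

p = keygen()
c0, c1 = encrypt(p, 0), encrypt(p, 1)
assert decrypt(p, c0 + c1) == 1  # integer addition XORs the plaintext bits
assert decrypt(p, c0 * c1) == 0  # integer multiplication ANDs them
```
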
Fully Homomorphic Encryption from Ring-LWE and Security for Key Dependent Messages

We present a somewhat homomorphic encryption scheme that is both very simple to describe and analyze, and whose security (quantumly) reduces to the worst-case hardness of problems on ideal lattices. We then transform it into a fully homomorphic encryption scheme using standard “squashing” and “bootstrapping” techniques introduced by Gentry (STOC 2009).

One of the obstacles in going from “somewhat” to full homomorphism is the requirement that the somewhat homomorphic scheme be circular secure, namely, the scheme can be used to securely encrypt its own secret key. For all known somewhat homomorphic encryption schemes, this requirement was not known to be achievable under any cryptographic assumption, and had to be explicitly assumed. We take a step forward towards removing this additional assumption by proving that our scheme is in fact secure when encrypting polynomial functions of the secret key.

Our scheme is based on the ring learning with errors (RLWE) assumption that was recently introduced by Lyubashevsky, Peikert and Regev (Eurocrypt 2010). The RLWE assumption is reducible to worst-case problems on ideal lattices, and allows us to completely abstract out the lattice interpretation, resulting in an extremely simple scheme. For example, our secret key is s, and our public key is (a, b = as + 2e), where s, a, e are all degree (n − 1) integer polynomials whose coefficients are independently drawn from easy-to-sample distributions.

Zvika Brakerski, Vinod Vaikuntanathan
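
The public-key shape quoted above, (a, b = as + 2e) over a polynomial ring, can be reproduced with toy parameters. A minimal sketch of arithmetic in Z_q[x]/(x^n + 1); nothing here is a secure parameter choice:

```python
import random

n, q = 16, 7681  # toy parameters only

def polymul(f, g):
    """Multiply in Z_q[x]/(x^n + 1): negacyclic convolution."""
    res = [0] * n
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            if i + j < n:
                res[i + j] = (res[i + j] + fi * gj) % q
            else:
                res[i + j - n] = (res[i + j - n] - fi * gj) % q
    return res

def small():
    """Degree-(n-1) polynomial with small coefficients (secret/noise)."""
    return [random.randint(-1, 1) % q for _ in range(n)]

s, e = small(), small()                      # secret key s, noise e
a = [random.randrange(q) for _ in range(n)]  # uniformly random a
b = [(x + 2 * y) % q for x, y in zip(polymul(a, s), e)]
public_key = (a, b)                          # b = a*s + 2e, as in the abstract
```
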
Bi-Deniable Public-Key Encryption

In 1997, Canetti et al. (CRYPTO 1997) put forward the intriguing notion of deniable encryption, which (informally) allows a sender and/or receiver, having already performed some encrypted communication, to produce 'fake' (but legitimate-looking) random coins that open the ciphertext to another message. Deniability is a powerful notion for both practice and theory: apart from its inherent utility for resisting coercion, a deniable scheme is also noncommitting (a useful property in constructing adaptively secure protocols) and secure under selective-opening attacks on whichever parties can equivocate. To date, however, known constructions have achieved only limited forms of deniability, requiring at least one party to withhold its randomness, and in some cases using an interactive protocol or external parties.

In this work we construct bi-deniable public-key cryptosystems, in which both the sender and receiver can simultaneously equivocate; we stress that the schemes are noninteractive and involve no third parties. One of our systems is based generically on "simulatable encryption" as defined by Damgård and Nielsen (CRYPTO 2000), while the other is lattice-based and builds upon the results of Gentry, Peikert and Vaikuntanathan (STOC 2008) with techniques that may be of independent interest. Both schemes work in the so-called "multi-distributional" model, in which the parties run alternative key-generation and encryption algorithms for equivocable communication, but claim under coercion to have run the prescribed algorithms. Although multi-distributional deniability has not attracted much attention, we argue that it is meaningful and useful because it provides credible coercion resistance in certain settings, and suffices for all of the related properties mentioned above.

Adam O’Neill, Chris Peikert, Brent Waters
Better Security for Deterministic Public-Key Encryption: The Auxiliary-Input Setting

Deterministic public-key encryption, introduced by Bellare, Boldyreva, and O’Neill (CRYPTO ’07), provides an alternative to randomized public-key encryption in various scenarios where the latter exhibits inherent drawbacks. A deterministic encryption algorithm, however, cannot satisfy any meaningful notion of security when the plaintext is distributed over a small set. Bellare et al. addressed this difficulty by requiring semantic security to hold only when the plaintext has high min-entropy from the adversary’s point of view.

In many applications, however, an adversary may obtain auxiliary information that is related to the plaintext. Specifically, when deterministic encryption is used as a building block of a larger system, it is rather likely that plaintexts do not have high min-entropy from the adversary's point of view. In such cases, the framework of Bellare et al. might fall short of providing robust security guarantees.

We formalize a framework for studying the security of deterministic public-key encryption schemes with respect to auxiliary inputs. Given the trivial requirement that the plaintext should not be efficiently recoverable from the auxiliary input, we focus on hard-to-invert auxiliary inputs. Within this framework, we propose two schemes: the first is based on the decisional Diffie-Hellman (and, more generally, on the d-linear) assumption, and the second is based on a rather general class of subgroup indistinguishability assumptions (including, in particular, quadratic residuosity and Paillier's composite residuosity). Our schemes are secure with respect to any auxiliary input that is subexponentially hard to invert (assuming the standard hardness of the underlying computational assumptions). In addition, our first scheme is secure even in the multi-user setting where related plaintexts may be encrypted under multiple public keys. Constructing a scheme that is secure in the multi-user setting (even without considering auxiliary inputs) was identified by Bellare et al. as an important open problem.

Zvika Brakerski, Gil Segev

Symmetric Schemes

The Collision Security of Tandem-DM in the Ideal Cipher Model

We prove that Tandem-DM, one of the two "classical" schemes for turning a blockcipher with a 2n-bit key into a double-block-length hash function, has birthday-type collision resistance in the ideal cipher model. A collision resistance analysis for Tandem-DM achieving a similar birthday-type bound was already proposed by Fleischmann, Gorski and Lucks at FSE 2009 [3]. As we detail, however, the latter analysis is wrong, thus leaving the collision resistance of Tandem-DM as an open problem until now. Our analysis exhibits a novel feature in that we introduce a trick not used before in ideal cipher proofs.

Jooyoung Lee, Martijn Stam, John Steinberger
Order-Preserving Encryption Revisited: Improved Security Analysis and Alternative Solutions

We further the study of order-preserving symmetric encryption (OPE), a primitive for allowing efficient range queries on encrypted data, recently initiated (from a cryptographic perspective) by Boldyreva et al. (Eurocrypt '09). First, we address the open problem of characterizing what encryption via a random order-preserving function (ROPF) leaks about underlying data (ROPF being the "ideal object" in the security definition, POPF, satisfied by their scheme). In particular, we show that, for a database of randomly distributed plaintexts and appropriate choice of parameters, ROPF encryption leaks neither the precise value of any plaintext nor the precise distance between any two of them. The analysis here is quite technically non-trivial and introduces useful new techniques. On the other hand, we also show that ROPF encryption does leak both the value of any plaintext and the distance between any two plaintexts to within a range of possibilities roughly the square root of the domain size. We then study schemes that are not order-preserving, but which nevertheless allow efficient range queries and achieve security notions stronger than POPF. In a setting where the entire database is known in advance of key-generation (considered in several prior works), we show that recent constructions of "monotone minimal perfect hash functions" allow one to efficiently achieve (an adaptation of) the notion of IND-O(rdered) CPA, also considered by Boldyreva et al., which asks that only the order relations among the plaintexts are leaked. Finally, we introduce modular order-preserving encryption (MOPE), in which the scheme of Boldyreva et al. is prepended with a shift cipher. MOPE improves the security of OPE in a sense, as it does not leak any information about plaintext location. We clarify that our work should not be interpreted as saying the original scheme of Boldyreva et al., or the variants that we introduce, are "secure" or "insecure." Rather, the goal of this line of research is to help practitioners decide whether the options provide a suitable security-functionality tradeoff for a given application.

Alexandra Boldyreva, Nathan Chenette, Adam O’Neill
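
The "ideal object" of the POPF definition, a uniformly random order-preserving function from a domain of size M into a range of size N, is just a random M-subset of the range listed in sorted order. The real scheme samples such a function lazily with a PRF; the following toy sketch of ours samples it in full and shows what any such function inevitably leaks:

```python
import random

def sample_ropf(M: int, N: int):
    """A uniformly random order-preserving injection [M] -> [N]:
    pick a random M-element subset of [N] and list it in order."""
    image = sorted(random.sample(range(1, N + 1), M))
    return lambda x: image[x - 1]

f = sample_ropf(M=100, N=10**6)
assert f(1) < f(2) < f(50)  # order relations among plaintexts are leaked
```
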
A New Variant of PMAC: Beyond the Birthday Bound

We propose a PMAC-type mode of operation that can be used as a highly secure MAC (Message Authentication Code) or PRF (Pseudo-Random Function). Our scheme is based on the assumption that the underlying n-bit blockcipher is a pseudo-random permutation. Our construction, which we call PMAC_Plus, involves extensive modification to PMAC, requiring three blockcipher keys. The PMAC_Plus algorithm is the first rate-1 (i.e., one blockcipher call per n-bit message block) blockcipher-based MAC secure against $O\bigl(2^{2n/3}\bigr)$ queries, increasing the $O\bigl(2^{n/2}\bigr)$ security of PMAC at a low additional cost. Our analysis uses some of the security-proof techniques developed with the sum construction (Eurocrypt 2000) and with the encrypted-CBC sum construction (CT-RSA 2010).

Kan Yasuda
Authenticated and Misuse-Resistant Encryption of Key-Dependent Data

This paper provides a comprehensive treatment of the security of authenticated encryption (AE) in the presence of key-dependent data, considering the four variants of the goal arising from the choice of universal nonce or random nonce security and the presence or absence of a header. We present attacks showing that universal-nonce security for key-dependent messages is impossible, as is security for key-dependent headers, not only ruling out security for three of the four variants but showing that currently standardized and used schemes (all these target universal nonce security in the presence of headers) fail to provide security for key-dependent data. To complete the picture we show that the final variant (random-nonce security in the presence of key-dependent messages but key-independent headers) is efficiently achievable. Rather than a single dedicated scheme, we present a RO-based transform RHtE that endows any AE scheme with this security, so that existing implementations may be easily upgraded to have the best possible security in the presence of key-dependent data. RHtE is cheap, software-friendly, and continues to provide security when the key is a password, a setting in which key-dependent data is particularly likely. We go on to give a key-dependent data treatment of the goal of misuse-resistant AE. Implementations are provided and show that RHtE has small overhead.

Mihir Bellare, Sriram Keelveedhi

Signatures

Round Optimal Blind Signatures

Constructing round-optimal blind signatures in the standard model has been a long-standing open problem. In particular, Fischlin and Schröder recently ruled out a large class of three-move blind signatures in the standard model (Eurocrypt '10). Their result shows that finding security proofs for the well-known blind signature schemes by Chaum, and by Pointcheval and Stern, in the standard model via black-box reductions is hard. In this work we propose the first round-optimal, i.e., two-move, blind signature scheme in the standard model (i.e., without assuming random oracles or the existence of a common reference string). Our scheme relies on the Decisional Diffie-Hellman assumption and the existence of sub-exponentially hard 1-to-1 one-way functions. This scheme is also secure in the concurrent setting.

Sanjam Garg, Vanishree Rao, Amit Sahai, Dominique Schröder, Dominique Unruh
Optimal Structure-Preserving Signatures in Asymmetric Bilinear Groups

Structure-preserving signatures are signatures defined over bilinear groups that rely on generic group operations. In particular, the messages and signatures consist of group elements and the verification of signatures consists of evaluating pairing product equations. Due to their purist nature, structure-preserving signatures blend well with other pairing-based protocols.

We show that structure-preserving signatures must consist of at least 3 group elements when the signer uses generic group operations. Usually, the generic group model is used to rule out classes of attacks by an adversary trying to break a cryptographic assumption. In contrast, here we use the generic group model to prove a lower bound on the complexity of digital signature schemes.

We also give constructions of structure-preserving signatures that consist of 3 group elements only. This improves significantly on previous structure-preserving signatures that used 7 group elements and matches our lower bound. Our structure-preserving signatures have additional nice properties such as strong existential unforgeability and can sign multiple group elements at once.

Masayuki Abe, Jens Groth, Kristiyan Haralambiev, Miyako Ohkubo

Oblivious Transfer and Secret Sharing

Constant-Rate Oblivious Transfer from Noisy Channels

A binary symmetric channel (BSC) is a noisy communication channel that flips each bit independently with some fixed error probability 0 < p < 1/2. Crépeau and Kilian (FOCS 1988) showed that oblivious transfer, and hence general secure two-party computation, can be unconditionally realized by communicating over a BSC. There has been a long line of works on improving the efficiency and generality of this construction. However, all known constructions that achieve security against malicious parties require the parties to communicate poly(k) bits over the channel for each instance of oblivious transfer (more precisely, ${2\choose 1}$-bit-OT) being realized, where k is a statistical security parameter. The question of achieving a constant (positive) rate was left open, even in the easier case of realizing a single oblivious transfer of a long string.

We settle this question in the affirmative by showing how to realize n independent instances of oblivious transfer, with statistical error that vanishes with n, by communicating just O(n) bits over a BSC. As a corollary, any boolean circuit of size s can be securely evaluated by two parties with O(s) + poly(k) bits of communication over a BSC, improving over the O(s · poly(k)) complexity of previous constructions.
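The channel itself is trivial to simulate, which makes the model easy to experiment with; a minimal Python sketch (illustrative, not from the paper):

import random

def bsc_transmit(bits, p):
    # Binary symmetric channel: each bit flips independently with probability p.
    return [b ^ (random.random() < p) for b in bits]

noisy = bsc_transmit([0, 1, 1, 0, 1], p=0.1)  # roughly one flip in ten, on average

The cryptographic content of the paper is, of course, in what the parties do around such a channel, not in the channel itself.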

Yuval Ishai, Eyal Kushilevitz, Rafail Ostrovsky, Manoj Prabhakaran, Amit Sahai, Jürg Wullschleger
The Torsion-Limit for Algebraic Function Fields and Its Application to Arithmetic Secret Sharing

An (n, t, d, n − t)-arithmetic secret sharing scheme (with uniformity) for $\mathbb F_{q}^k$ over $\mathbb F_{q}$ is an $\mathbb F_{q}$-linear secret sharing scheme where the secret is selected from $\mathbb F_{q}^k$ and each of the n shares is an element of $\mathbb F_{q}$. Moreover, there is t-privacy (in addition, any t shares are uniformly random in $\mathbb F_{q}^t$) and, if one considers the d-fold “component-wise” product of any d sharings, then the d-fold component-wise product of the d respective secrets is (n − t)-wise uniquely determined by it. Such schemes are a fundamental primitive in information-theoretically secure multi-party computation. Perhaps counter-intuitively, secure multi-party computation is a very powerful primitive for communication-efficient two-party cryptography, as shown recently in a series of surprising results from 2007 on. Moreover, the existence of asymptotically good arithmetic secret sharing schemes plays a crucial role in their communication efficiency: for each d ≥ 2, if A(q) > 2d, where A(q) is Ihara’s constant, then there exists an infinite family of such schemes over $\mathbb F_{q}$ such that n is unbounded, k = Ω(n) and t = Ω(n), as follows from a result at CRYPTO’06. Our main contribution is a novel paradigm for constructing asymptotically good arithmetic secret sharing schemes from towers of algebraic function fields. It is based on a new limit that, for a tower with a given Ihara limit and given positive integer ℓ, gives information on the cardinality of the ℓ-torsion subgroups of the associated degree-zero divisor class groups and that we believe is of independent interest. As an application of the bounds we obtain, we relax the condition A(q) > 2d from the CRYPTO’06 result substantially in terms of our torsion-limit. As a consequence, this result now holds over nearly all finite fields $\mathbb F_{q}$. For example, if d = 2, it is sufficient that q = 8, 9, or q ≥ 16.
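Shamir's scheme over a prime field is the textbook instance of this definition for k = 1, and already exhibits the component-wise product property; the Python sketch below is purely illustrative (it is not the function-field construction of the paper, and achieves nothing asymptotically good):

import random

P = 97  # a small prime field F_p standing in for F_q

def share(secret, t, n):
    # Shamir: evaluations of a random degree-t polynomial with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            for x in range(1, n + 1)]

def reconstruct(shares, points):
    # Lagrange interpolation at 0 from (point, share) pairs.
    total = 0
    for xi, yi in zip(points, shares):
        num = den = 1
        for xj in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

# The component-wise product of two degree-t sharings is a degree-2t sharing
# of the product of the secrets, so it reconstructs from any 2t + 1 shares.
n, t = 7, 2
a, b = share(5, t, n), share(7, t, n)
prod = [x * y % P for x, y in zip(a, b)]
assert reconstruct(prod[:2 * t + 1], list(range(1, 2 * t + 2))) == 35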

Ignacio Cascudo, Ronald Cramer, Chaoping Xing

Multivariate and Coding-Based Schemes

Public-Key Identification Schemes Based on Multivariate Quadratic Polynomials

The problem of solving a system of multivariate quadratic polynomials over a finite field, known as the MQ problem, is a promising foundation for cryptography. A number of studies have been conducted on designing public-key schemes using the MQ problem, known as multivariate public-key cryptography (MPKC). However, the security of the existing schemes in MPKC relies not only on the MQ problem but also on an Isomorphism of Polynomials (IP) problem. In this paper, we propose public-key identification schemes based on the conjectured intractability of the MQ problem, under the assumption of the existence of a non-interactive commitment scheme. Our schemes do not rely on the IP problem, and they consist of an identification protocol which is a zero-knowledge argument of knowledge for the MQ problem. For a practical parameter choice, the efficiency of our schemes is highly comparable to that of identification schemes based on other problems, including Permuted Kernels, Syndrome Decoding, Constrained Linear Equations, and Permuted Perceptrons. Furthermore, even if the protocol is repeated in parallel, our scheme achieves security under active attack at some additional cost.
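To make the underlying problem concrete, an MQ instance over F_2 is simply a list of random quadratic polynomials together with target values; a toy Python sketch (illustrative only, unrelated to the protocol itself):

import itertools, random

def random_mq_instance(n_vars, n_polys):
    # Each polynomial maps monomials (pairs, singletons, the empty constant) to F_2 coefficients.
    monomials = (list(itertools.combinations(range(n_vars), 2))
                 + [(i,) for i in range(n_vars)] + [()])
    return [{m: random.randrange(2) for m in monomials} for _ in range(n_polys)]

def evaluate(poly, x):
    return sum(c * all(x[i] for i in m) for m, c in poly.items()) % 2

# Plant a solution: given polys and targets, recovering some valid x is the MQ problem.
polys = random_mq_instance(5, 5)
x = [random.randrange(2) for _ in range(5)]
targets = [evaluate(p, x) for p in polys]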

Koichi Sakumoto, Taizo Shirai, Harunaga Hiwatari
Inverting HFE Systems Is Quasi-Polynomial for All Fields

In this paper, we present and prove the first closed formula bounding the degree of regularity of an HFE system over an arbitrary finite field. Though these bounds are not necessarily optimal, they can be used to deduce:

1. if D, the degree of the corresponding HFE polynomial, and q, the size of the corresponding finite field, are fixed, inverting HFE systems is polynomial for all fields;

2. if D is of the scale $O(n^{\alpha})$, where n is the number of variables in an HFE system, and q is fixed, inverting HFE systems is quasi-polynomial for all fields.

We generalize and rigorously prove similar results by Granboulan, Joux and Stern in the case q = 2 that were communicated at Crypto 2006.
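For background (standard HFE, not stated in the abstract): the secret map is a univariate polynomial over the extension field $\mathbb F_{q^n}$ of degree at most D,

$$F(X) = \sum_{\substack{i \leq j \\ q^i + q^j \leq D}} a_{ij}\, X^{q^i + q^j} + \sum_{q^i \leq D} b_i\, X^{q^i} + c,$$

whose coordinate functions, written over $\mathbb F_q$, are quadratic polynomials in n variables. The degree of regularity bounded in this paper governs the cost of Gröbner-basis attacks on those quadratic coordinates, which is why bounding it yields the complexity claims above.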

Jintai Ding, Timothy J. Hodges
Smaller Decoding Exponents: Ball-Collision Decoding

Very few public-key cryptosystems are known that can encrypt and decrypt in time $b^{2+o(1)}$ with conjectured security level $2^b$ against conventional computers and quantum computers. The oldest of these systems is the classic McEliece code-based cryptosystem.

The best attacks known against this system are generic decoding attacks that treat McEliece’s hidden binary Goppa codes as random linear codes. A standard conjecture is that the best possible w-error-decoding attacks against random linear codes of dimension k and length n take time $2^{(\alpha(R,W)+o(1))n}$ if k/n → R and w/n → W as n → ∞.

Before this paper, the best upper bound known on the exponent $\alpha(R,W)$ was the exponent of an attack introduced by Stern in 1989. This paper introduces “ball-collision decoding” and shows that it has a smaller exponent for each (R, W): the speedup from Stern’s algorithm to ball-collision decoding is exponential in n.
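The attack model can be stated very concretely; the Python sketch below generates an instance of the w-error-decoding problem for a random binary linear code (setup only, not the ball-collision algorithm, whose description is the subject of the paper):

import random

def decoding_instance(n, k, w):
    # The attacker sees a random k x n generator matrix G and a noisy codeword,
    # and must recover the weight-w error (equivalently, the message).
    G = [[random.randrange(2) for _ in range(n)] for _ in range(k)]
    msg = [random.randrange(2) for _ in range(k)]
    codeword = [sum(msg[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]
    flips = set(random.sample(range(n), w))
    received = [codeword[j] ^ (j in flips) for j in range(n)]
    return G, received

Since McEliece encryption hides a Goppa code behind a scrambled generator matrix, this generic problem is the one to which the conjectured $2^{(\alpha(R,W)+o(1))n}$ bound refers.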

Daniel J. Bernstein, Tanja Lange, Christiane Peters
McEliece and Niederreiter Cryptosystems That Resist Quantum Fourier Sampling Attacks

Quantum computers can break the RSA, El Gamal, and elliptic curve public-key cryptosystems, as they can efficiently factor integers and extract discrete logarithms. This motivates the development of post-quantum cryptosystems: classical cryptosystems that can be implemented with today’s computers but will remain secure even in the presence of quantum attacks.

In this article we show that the McEliece cryptosystem over rational Goppa codes and the Niederreiter cryptosystem over classical Goppa codes resist precisely the attacks to which the RSA and El Gamal cryptosystems are vulnerable, namely those based on generating and measuring coset states. This eliminates the approach of strong Fourier sampling, on which almost all known exponential speedups by quantum algorithms are based. Specifically, we show that the natural case of the Hidden Subgroup Problem to which McEliece-type cryptosystems reduce cannot be solved by strong Fourier sampling, or by any measurement of a coset state. To do this, we extend recent negative results on quantum algorithms for Graph Isomorphism to subgroups of the automorphism groups of linear codes.

This gives the first rigorous results on the security of the McEliece-type cryptosystems in the face of quantum adversaries, strengthening their candidacy for post-quantum cryptography. We also strengthen some results of Kempe, Pyber, and Shalev on the Hidden Subgroup Problem in $S_n$.
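The coset states in question are the standard objects of quantum Fourier sampling (background, not specific to this paper): for a hidden subgroup $H \leq G$ and a uniformly random $g \in G$, the attacker is handed

$$|gH\rangle = \frac{1}{\sqrt{|H|}} \sum_{h \in H} |gh\rangle,$$

and strong Fourier sampling measures such states in the Fourier basis of G. The negative result above says that no measurement of a coset state, Fourier or otherwise, reveals the hidden code structure in the McEliece setting.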

Hang Dinh, Cristopher Moore, Alexander Russell
Backmatter
Metadata
Title
Advances in Cryptology – CRYPTO 2011
Edited by
Phillip Rogaway
Copyright Year
2011
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-22792-9
Print ISBN
978-3-642-22791-2
DOI
https://doi.org/10.1007/978-3-642-22792-9