
2008 | Book

Theory of Cryptography

Fifth Theory of Cryptography Conference, TCC 2008, New York, USA, March 19-21, 2008. Proceedings


Table of Contents

Frontmatter

Technical Session 1

Incrementally Verifiable Computation or Proofs of Knowledge Imply Time/Space Efficiency

A probabilistically checkable proof (PCP) system enables proofs to be verified in time polylogarithmic in the length of a classical proof. Computationally sound (CS) proofs improve upon PCPs by additionally shortening the length of the transmitted proof to be polylogarithmic in the length of the classical proof.

In this paper we explore the ultimate limits of non-interactive proof systems with respect to time and space efficiency. We present a proof system where the prover uses space polynomial in the space of a classical prover and time essentially linear in the time of a classical prover, while the verifier uses time and space that are essentially constant. Further, this proof system is composable: there is an algorithm for merging two proofs of length k into a proof of the conjunction of the original two theorems in time polynomial in k, yielding a proof of length exactly k.

We deduce the existence of our proposed proof system by way of a natural new assumption about proofs of knowledge. In fact, a main contribution of our result is showing that knowledge can be “traded” for time and space efficiency in noninteractive proof systems. We motivate this result with an explicit construction of noninteractive CS proofs of knowledge in the random oracle model.
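
The composability interface the abstract describes can be pictured schematically; the following minimal Python sketch is purely illustrative (Proof, verify and merge are hypothetical names, and the blob manipulation stands in for the actual cryptographic merging):

    # Hypothetical interface of a composable proof system: merging two
    # proofs of length K yields a proof of the conjunction, again of
    # length exactly K, so merging can be iterated indefinitely.
    from dataclasses import dataclass

    K = 32  # fixed proof length, in bytes, for illustration

    @dataclass(frozen=True)
    class Proof:
        statement: str
        blob: bytes  # opaque proof data of length exactly K

    def verify(p: Proof) -> bool:
        # Verifier time is (essentially) independent of the proven computation.
        return len(p.blob) == K  # placeholder; the real check is cryptographic

    def merge(p1: Proof, p2: Proof) -> Proof:
        # Merge in time poly(K), independent of the two original computations.
        assert verify(p1) and verify(p2)
        blob = bytes(a ^ b for a, b in zip(p1.blob, p2.blob))  # stand-in
        return Proof(f"({p1.statement}) AND ({p2.statement})", blob)

    p = merge(Proof("step 1 ok", bytes(K)), Proof("step 2 ok", bytes(K)))
    print(verify(p), p.statement)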

Paul Valiant
On Seed-Incompressible Functions

We investigate a new notion of security for “cryptographic functions” that we term seed incompressibility (SI). We argue that this notion captures some of the intuition for the alleged security of constructions in the random-oracle model, and indeed we show that seed incompressibility suffices for some applications of the random oracle methodology. Very roughly, a function family f_s(·) with |s| = n is seed incompressible if given (say) n/2 bits of advice (that can depend on the seed s) and oracle access to f_s(·), an adversary cannot “break f_s(·)” any better than given only oracle access to f_s(·) and no advice.

The strength of this notion depends on what we mean by “breaking f_s(·)”. We first show that for any family f_s there exists an adversary that can distinguish f_s(·) from a random function using n/2 bits of advice, so seed-incompressible pseudo-random functions do not exist. Then we consider the weaker notion of seed-incompressible correlation intractability. We show that although the negative results can be partially extended also to this weaker notion, they cannot rule it out altogether. More importantly, the settings that we cannot rule out still suffice for many applications. In particular, we show that they suffice for constructing collision-resistant hash functions and for removing interaction from Σ-protocols (3-round honest-verifier zero-knowledge protocols).
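
Read as a definition, the notion above can be written out as follows; this is only a paraphrase of the informal statement (the paper's exact quantification may differ):

    % Seed incompressibility, paraphrased: for every efficient adversary A
    % and every advice (compression) function z with |z(s)| = n/2,
    \Bigl|\Pr_{s}\bigl[A^{f_s(\cdot)}(z(s)) \text{ breaks } f_s\bigr]
         -\Pr_{s}\bigl[A^{f_s(\cdot)} \text{ breaks } f_s\bigr]\Bigr|
      \le \mathrm{negl}(n), \qquad |s| = n.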

Shai Halevi, Steven Myers, Charles Rackoff

Technical Session 2

Asymptotically Efficient Lattice-Based Digital Signatures

We give a direct construction of digital signatures based on the complexity of approximating the shortest vector in ideal (e.g., cyclic) lattices. The construction is provably secure based on the worst-case hardness of approximating the shortest vector in such lattices within a polynomial factor, and it is also asymptotically efficient: the time complexity of the signing and verification algorithms, as well as the key and signature sizes, is almost linear (up to poly-logarithmic factors) in the dimension n of the underlying lattice. Since no sub-exponential (in n) time algorithm is known to solve lattice problems in the worst case, even when restricted to cyclic lattices, our construction gives a digital signature scheme with an essentially optimal performance/security trade-off.

Vadim Lyubashevsky, Daniele Micciancio
Basing Weak Public-Key Cryptography on Strong One-Way Functions

In one of the pioneering papers on public-key cryptography, Ralph Merkle suggested a heuristic protocol for exchanging a secret key over an insecure channel by using an idealized private-key encryption scheme. Merkle’s protocol is presumed to remain secure as long as the gap between the running time of the adversary and that of the honest parties is at most quadratic (rather than super-polynomial). In this work, we initiate an effort to base similar forms of public-key cryptography on well-founded assumptions.

We suggest a variant of Merkle’s protocol whose security can be based on the one-wayness of the underlying primitive. Specifically, using a one-way function of exponential strength, we obtain a key agreement protocol resisting adversaries whose running time is nearly quadratic in the running time of the honest parties. This protocol gives the adversary a small (but non-negligible) advantage in guessing the key. We show that the security of the protocol can be amplified by using a one-way function with a strong form of a hard-core predicate, whose existence follows from a conjectured “dream version” of Yao’s XOR lemma. On the other hand, we show that this type of hard-core predicate cannot be based on (even exponentially strong) one-wayness by using a black-box construction.

In establishing the above results, we reveal interesting connections between the problem under consideration and problems from other domains. In particular, we suggest a paradigm for converting (unconditionally) secure protocols in Maurer’s bounded storage model into (computationally) secure protocols in the random oracle model, translating storage advantage into computational advantage. Our main protocol can be viewed as an instance of this paradigm. Finally, we observe that a quantum adversary can completely break the security of our protocol (as well as Merkle’s heuristic protocol) by using the quadratic speedup of Grover’s quantum search algorithm. This raises a speculation that there might be a closer relation between (classical) public-key cryptography and quantum computing than is commonly believed.
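
For intuition, Merkle's original idea admits a toy rendering with a hash function standing in for the idealized primitive (parameters illustrative; this is the heuristic protocol, not the paper's variant): honest parties spend roughly N hash evaluations each, while a passive eavesdropper searching the domain spends roughly N^2.

    import hashlib, random

    def H(x: int) -> bytes:
        # Stand-in for the idealized primitive (modeled as a random oracle).
        return hashlib.sha256(x.to_bytes(8, "big")).digest()

    def merkle_key_agreement(N: int = 1000, seed: int = 0) -> int:
        domain = N * N              # eavesdropper must search ~N^2 values
        rng = random.Random(seed)
        # Alice hashes N random values and publishes the tags.
        alice = {}
        while len(alice) < N:
            x = rng.randrange(domain)
            alice[H(x)] = x
        # Bob samples until he hits one of Alice's tags (expected ~N tries)
        # and announces the tag; the preimage becomes the shared key. An
        # eavesdropper seeing only tags must invert H: ~N^2 work.
        while True:
            y = rng.randrange(domain)
            if H(y) in alice:
                return alice[H(y)]

    print(merkle_key_agreement())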

Eli Biham, Yaron J. Goren, Yuval Ishai

Technical Session 3

Which Languages Have 4-Round Zero-Knowledge Proofs?

We show that if a language L has a 4-round, black-box, computational zero-knowledge proof system with negligible soundness error, then $\bar L \in {\sf MA}$. Assuming the polynomial hierarchy does not collapse, this means in particular that NP-complete languages do not have 4-round zero-knowledge proofs (at least with respect to black-box simulation).

Jonathan Katz
How to Achieve Perfect Simulation and A Complete Problem for Non-interactive Perfect Zero-Knowledge

We study perfect zero-knowledge proofs (PZK). Unlike statistical zero-knowledge, where many fundamental questions have been answered, virtually nothing is known about these proofs.

We consider reductions that yield hard and complete problems in the statistical setting. The issue with these reductions is that they introduce errors into the simulation, and therefore they do not yield analogous problems in the perfect setting. We overcome this issue using an error shifting technique. This technique allows us to remove the error from the simulation. Consequently, we obtain the first complete problem for the class of problems possessing non-interactive perfect zero-knowledge proofs (NIPZK), and the first hard problem for the class of problems possessing public-coin PZK proofs.

We get the following applications. Using the error shifting technique, we show that the notion of zero-knowledge where the simulator is allowed to fail is equivalent to the one where it is not allowed to fail. Using our complete problem, we show that under certain restrictions NIPZK is closed under the OR operator. Using our hard problem, we show how a constant-round, perfectly hiding instance-dependent commitment may be obtained (this would collapse the round complexity of public-coin PZK proofs to a constant).

Lior Malka
General Properties of Quantum Zero-Knowledge Proofs

This paper studies general properties of quantum zero-knowledge proof systems. Among others, the following properties are proved on quantum computational zero-knowledge proofs:

Honest-verifier quantum zero-knowledge equals general quantum zero-knowledge.

Public-coin quantum zero-knowledge equals general quantum zero-knowledge.

Quantum zero-knowledge with perfect completeness equals general quantum zero-knowledge with imperfect completeness.

Any quantum zero-knowledge proof system can be transformed into a three-message public-coin quantum zero-knowledge proof system of perfect completeness with polynomially small error in soundness (hence with arbitrarily small constant error in soundness).

All the results proved in this paper are unconditional, i.e., they do not rely on any computational assumptions. The proofs for all the statements are direct and do not use complete promise problems, and thus essentially the same method works well even for quantum statistical and perfect zero-knowledge proofs. In particular, all four properties above also hold for the statistical zero-knowledge case (the first two were shown previously by Watrous), and the first two properties hold even for the perfect zero-knowledge case. It is also proved that allowing a simulator to output “FAIL” does not change the power of quantum perfect zero-knowledge proofs. The corresponding properties are not known to hold in the classical perfect zero-knowledge case.

Hirotada Kobayashi

Technical Session 4

The Layered Games Framework for Specifications and Analysis of Security Protocols

The layered games framework provides a solid foundation to the accepted methodology of building complex distributed systems as a ‘stack’ of independently-developed protocols. Each protocol in the stack realizes a corresponding ‘layer’ model over the ‘lower layer’. We define layers, protocols and related concepts. We then prove the fundamental lemma of layering. The lemma shows that given a stack of protocols $\{\pi_i\}_{i=1}^u$, s.t. for every i ∈ {1,...,u}, protocol π_i realizes layer i over layer i − 1, then the entire stack can be composed to a single protocol $\pi_{u||\ldots||1}$, which realizes layer u over layer 0.
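
The shape of the composition statement can be sketched in a few lines of Python; this is only a toy picture of stacking (the framework's actual notions of games, layers and realization are far richer):

    from typing import Callable

    Layer = Callable[[str], str]   # a "layer" here is just a message service

    def compose(stack: list, layer0: Layer) -> Layer:
        # Fold protocols pi_1..pi_u over the base layer; the result plays
        # the role of the single composed protocol pi_{u||...||1}.
        service = layer0
        for protocol in stack:     # pi_1 first, pi_u last
            service = protocol(service)
        return service

    raw = lambda m: m                                      # layer 0
    framing = lambda lower: (lambda m: lower(f"[{m}]"))    # pi_1
    tagging = lambda lower: (lambda m: lower(m + "|tag"))  # pi_2

    print(compose([framing, tagging], raw)("hello"))       # -> [hello|tag]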

The fundamental lemma of layering allows precise specification, design and analysis of each layer independently, and combining the results to ensure properties of the complete system. This is especially useful when considering (computationally-bounded) adversarial environments, as for security and cryptographic protocols.

Our specifications are based on games, following many works in applied cryptography. This differs from existing frameworks allowing compositions of cryptographic protocols, which are based on simulatability of ideal functionality.

Amir Herzberg, Igal Yoffe
Universally Composable Multi-party Computation with an Unreliable Common Reference String

Universally composable (UC) multi-party computation has been studied in two settings. When a majority of parties are honest, UC multi-party computation is possible without any assumptions. Without a majority of honest parties, UC multi-party computation is impossible in the plain model, but feasibility results have been obtained in various augmented models. The most popular such model posits a common reference string (CRS) available to parties executing the protocol.

In either of the above settings, some assumption regarding the protocol execution is made: i.e., that many parties are honest in the first case, or that a legitimately-chosen string is available in the second. If this assumption is incorrect then all security is lost.

A natural question is whether it is possible to design protocols secure if either one of these assumptions holds, i.e., a protocol which is secure if either at most s players are dishonest, or if up to t > s players are dishonest but the CRS is chosen in the prescribed manner. We show that such protocols exist if and only if s + t < n.

Vipul Goyal, Jonathan Katz
Efficient Protocols for Set Intersection and Pattern Matching with Security Against Malicious and Covert Adversaries

In this paper we construct efficient secure protocols for set intersection and pattern matching. Our protocols for securely computing the set intersection functionality are based on secure pseudorandom function evaluations, in contrast to previous protocols that used secure polynomial evaluation. In addition to the above, we also use secure pseudorandom function evaluation in order to achieve secure pattern matching. In this case, we utilize specific properties of the Naor-Reingold pseudorandom function in order to achieve high efficiency.

Our results are presented in two adversary models. Our protocol for secure pattern matching and one of our protocols for set intersection achieve security against malicious adversaries under a relaxed definition where one corruption case is simulatable and for the other only privacy (formalized through indistinguishability) is guaranteed. We also present a protocol for set intersection that is fully simulatable in the model of covert adversaries. Loosely speaking, this means that a malicious adversary can cheat, but will then be caught with good probability.
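
The pseudorandom-function approach to set intersection can be illustrated by stripping away the security: in the sketch below (Python, with HMAC as a stand-in PRF) the client calls F directly, whereas in the actual protocol it obtains F(k, x) for its own inputs only through a secure two-party evaluation, never learning k; the paper instantiates F with the Naor-Reingold PRF, whose algebraic structure supports such an evaluation.

    import hmac, hashlib

    def F(k: bytes, x: str) -> bytes:
        # Stand-in PRF; PRF values are pseudorandom, so publishing them
        # hides the underlying elements from anyone without matching inputs.
        return hmac.new(k, x.encode(), hashlib.sha256).digest()

    def psi(server_set, client_set, k=b"server-prf-key"):
        server_tags = {F(k, y) for y in server_set}
        # INSECURE SHORTCUT: the client evaluates F itself here; securely,
        # it would get F(k, x) for each of its x via oblivious evaluation.
        return {x for x in client_set if F(k, x) in server_tags}

    print(psi({"alice", "bob", "carol"}, {"bob", "dave"}))  # -> {'bob'}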

Carmit Hazay, Yehuda Lindell
Fast Private Norm Estimation and Heavy Hitters

We consider the problems of computing the Euclidean norm of the difference of two vectors and, as an application, computing the large components (Heavy Hitters) in the difference. We provide protocols that are approximate but private in the semi-honest model and efficient in terms of time and communication in the vector length N. We provide the following, which can serve as building blocks to other protocols:

Euclidean norm problem: we give a protocol with quasi-linear local computation and polylogarithmic communication in N, leaking only the true value of the norm. For processing massive datasets, the intended application, where N is typically huge, our improvement over a recent result with quadratic runtime is significant.

Heavy Hitters problem: suppose, for a prescribed B, we want the B largest components in the difference vector. We give a protocol with quasi-linear local computation and polylogarithmic communication, leaking only the set of the true B largest components and the Euclidean norm of the difference vector. We justify the leakage as (1) desirable, since it gives a measure of goodness of approximation; or (2) inevitable, since we show that there are contexts where linear communication is required for approximating the Heavy Hitters.
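
The non-private core of such norm estimation is sketching with shared randomness; the minimal (and deliberately non-private) Python sketch below shows how short random projections approximate the Euclidean norm of the difference, which the paper's protocols then compute privately with polylogarithmic communication:

    import random, math

    def sketch(v, m=256, seed=42):
        # m random +/-1 projections of v, with the randomness shared via seed.
        rng = random.Random(seed)
        signs = [[rng.choice((-1, 1)) for _ in v] for _ in range(m)]
        return [sum(s * x for s, x in zip(row, v)) for row in signs]

    def est_diff_norm(va, vb, m=256):
        sa, sb = sketch(va, m), sketch(vb, m)
        # E[(sa_i - sb_i)^2] = ||va - vb||^2, so average over the projections.
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(sa, sb)) / m)

    va = [1.0, 0.0, 3.0, 0.0] * 50
    vb = [1.0, 1.0, 0.0, 0.0] * 50
    print(est_diff_norm(va, vb))                                 # estimate
    print(math.sqrt(sum((a - b) ** 2 for a, b in zip(va, vb))))  # true norm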

Joe Kilian, André Madeira, Martin J. Strauss, Xuan Zheng

Technical Session 5

Matroids Can Be Far from Ideal Secret Sharing

In a secret-sharing scheme, a secret value is distributed among a set of parties by giving each party a share. The requirement is that only predefined subsets of parties can recover the secret from their shares. The family of the predefined authorized subsets is called the access structure. An access structure is ideal if there exists a secret-sharing scheme realizing it in which the shares have optimal length, that is, in which the shares are taken from the same domain as the secrets. Brickell and Davenport (J. of Cryptology, 1991) proved that ideal access structures are induced by matroids. Subsequently, ideal access structures and access structures induced by matroids have received a lot of attention. Seymour (J. of Combinatorial Theory, 1992) gave the first example of an access structure induced by a matroid, namely the Vamos matroid, that is non-ideal. Beimel and Livne (TCC 2006) presented the first non-trivial lower bounds on the size of the domain of the shares for secret-sharing schemes realizing an access structure induced by the Vamos matroid.

In this work, we substantially improve those bounds by proving that the size of the domain of the shares in every secret-sharing scheme for those access structures is at least $k^{1.1}$, where k is the size of the domain of the secrets (compared to $k+ \Omega(\sqrt{k})$ in previous works). Our bounds are obtained by using non-Shannon inequalities for the entropy function. The importance of our results is: (1) we present the first proof that there exists an access structure induced by a matroid which is not nearly ideal, and (2) we present the first proof that there is an access structure whose information rate is strictly between 2/3 and 1. In addition, we present a better lower bound that applies only to linear secret-sharing schemes realizing the access structures induced by the Vamos matroid.

Amos Beimel, Noam Livne, Carles Padró
Perfectly-Secure MPC with Linear Communication Complexity

Secure multi-party computation (MPC) allows a set of n players to securely compute an agreed function, even when up to t players are under the control of an adversary. Known perfectly secure MPC protocols require communication of at least $\Omega(n^3)$ field elements per multiplication, whereas cryptographic or unconditional security is possible with communication linear in the number of players. We present a perfectly secure MPC protocol communicating $\mathcal{O}(n)$ field elements per multiplication. Our protocol provides perfect security against an active, adaptive adversary corrupting t < n/3 players, which is optimal. Thus our protocol improves the security of the most efficient information-theoretically secure protocols at no extra cost, and improves the efficiency of perfectly secure MPC protocols by a factor of $\Omega(n^2)$. To achieve this, we introduce a novel technique – constructing detectable protocols with the help of so-called hyper-invertible matrices – which we believe to be of independent interest. Hyper-invertible matrices allow (among other things) performing efficient correctness checks of many instances in parallel, which was until now possible only if some error probability was allowed.
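
A hyper-invertible matrix is one all of whose square submatrices are invertible; one standard way to obtain such a matrix is through polynomial interpolation, mapping the values of a polynomial at points α_1,...,α_n to its values at disjoint points β_1,...,β_n. A small self-contained Python check over a prime field (parameters illustrative):

    p = 257  # small prime field, for illustration only

    def inv(x): return pow(x, p - 2, p)

    def him(n):
        # M[j][i] = Lagrange coefficient lambda_i(beta_j): M maps
        # (g(a_1),...,g(a_n)) to (g(b_1),...,g(b_n)) for any g of degree < n.
        a = list(range(1, n + 1))
        b = list(range(n + 1, 2 * n + 1))
        M = [[0] * n for _ in range(n)]
        for j in range(n):
            for i in range(n):
                num = den = 1
                for k in range(n):
                    if k != i:
                        num = num * (b[j] - a[k]) % p
                        den = den * (a[i] - a[k]) % p
                M[j][i] = num * inv(den) % p
        return M

    def det(M):
        # Gaussian elimination mod p; returns 0 iff M is singular.
        M = [row[:] for row in M]; n = len(M); d = 1
        for c in range(n):
            piv = next((r for r in range(c, n) if M[r][c]), None)
            if piv is None: return 0
            if piv != c: M[c], M[piv] = M[piv], M[c]; d = -d % p
            d = d * M[c][c] % p
            ic = inv(M[c][c])
            for r in range(c + 1, n):
                f = M[r][c] * ic % p
                for k in range(c, n):
                    M[r][k] = (M[r][k] - f * M[c][k]) % p
        return d

    M = him(4)
    sub = [[M[r][c] for c in (1, 3)] for r in (0, 2)]  # an arbitrary submatrix
    print(det(M) != 0, det(sub) != 0)                  # expect: True True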

Zuzana Beerliová-Trubíniová, Martin Hirt
MPC vs. SFE: Perfect Security in a Unified Corruption Model

Secure function evaluation (SFE) allows a set of players to compute an arbitrary agreed function of their private inputs, even if an adversary may corrupt some of the players. Secure multi-party computation (MPC) is a generalization allowing the players to perform an arbitrary on-going (also called reactive or stateful) computation, during which players can receive outputs and provide new inputs at intermediate stages.

At Crypto 2006, Ishai et al. considered mixed threshold adversaries that either passively corrupt some fixed number of players, or, alternatively, actively corrupt some (smaller) fixed number of players, and showed that for certain thresholds, cryptographic SFE is possible, whereas cryptographic MPC is not.

However, this separation does not occur when one considers perfect security. Actually, past work suggests that no such separation exists, as all known general protocols for perfectly secure SFE can also be used for MPC. Also, such a separation does not show up with general adversaries, characterized by a collection of corruptible subsets of the players, when considering passive and active corruption.

In this paper, we study the most general corruption model where the adversary is characterized by a collection of adversary classes, each specifying the subset of players that can be actively, passively, or fail-corrupted, respectively, and show that in this model, perfectly secure MPC separates from perfectly secure SFE. Furthermore, we derive the exact conditions on the adversary structure for the existence of perfectly secure SFE resp. MPC, and provide efficient protocols for both cases.

Zuzana Beerliová-Trubíniová, Matthias Fitzi, Martin Hirt, Ueli Maurer, Vassilis Zikas

Invited Talk

Bridging Game Theory and Cryptography: Recent Results and Future Directions

Motivated by the desire to develop more realistic models of, and protocols for, interactions between mutually distrusting parties, there has recently been significant interest in combining the approaches and techniques of game theory with those of cryptographic protocol design. Broadly speaking, two directions are currently being pursued:

Applying cryptography to game theory: Certain game-theoretic equilibria are achievable if a trusted mediator is available. The question here is: to what extent can this mediator be replaced by a distributed cryptographic protocol run by the parties themselves?

Applying game theory to cryptography: Traditional cryptographic models assume some honest parties who faithfully follow the protocol, and some arbitrarily malicious players against whom the honest players must be protected. Game-theoretic models propose instead that all players are simply self-interested (i.e., rational), and the question then is: how can we model and design meaningful protocols for such a setting?

In addition to surveying known results in each of the above areas, I suggest some new definitions along with avenues for future research.

Jonathan Katz

Technical Session 6

Verifiably Secure Devices

We put forward the notion of a verifiably secure device, in essence a stronger notion of secure computation, and achieve it in the ballot-box model. Verifiably secure devices:

1. Provide a perfect solution to the problem of achieving correlated equilibrium, an important and extensively investigated problem at the intersection of game theory, cryptography and efficient algorithms; and

2. Enable the secure evaluation of multiple interdependent functions.

Sergei Izmalkov, Matt Lepinski, Silvio Micali
Lower Bounds on Implementing Robust and Resilient Mediators

We provide new and tight lower bounds on the ability of players to implement equilibria using cheap talk, that is, just allowing communication among the players. One of our main results is that, in general, it is impossible to implement three-player Nash equilibria in a bounded number of rounds. We also give the first rigorous connection between Byzantine agreement lower bounds and lower bounds on implementation. To this end we consider a number of variants of Byzantine agreement and introduce reduction arguments. We also give lower bounds on the running time of two-player implementations. All our results extend to lower bounds on (k, t)-robust equilibria, a solution concept that tolerates deviations by coalitions of size up to k and deviations by up to t players with unknown utilities (who may be malicious).

Ittai Abraham, Danny Dolev, Joseph Y. Halpern
Cryptography and Game Theory: Designing Protocols for Exchanging Information

The goal of this paper is finding fair protocols for the secret sharing and secure multiparty computation (SMPC) problems, when players are assumed to be rational.

It was observed by Halpern and Teague (STOC 2004) that protocols with a bounded number of iterations are susceptible to backward induction and cannot be considered rational. Previously suggested cryptographic solutions all share the property of having an essential exponential upper bound on their running time, and hence they are also susceptible to backward induction.

Although it seems that this bound is an inherent property of every cryptography-based solution, we show that this is not the case. We suggest coalition-resilient secret sharing and SMPC protocols with the property that after any sequence of iterations it is still a computational best response to follow them. Therefore, the protocols can be run any number of iterations, and are immune to backward induction.

The means of communication assumed is a broadcast channel, and we consider both the simultaneous and non-simultaneous cases.

Gillat Kol, Moni Naor

Technical Session 7

Equivocal Blind Signatures and Adaptive UC-Security

We study the design of adaptively secure blind signatures in the universal composability (UC) setting. First, we introduce a new property for blind signature schemes that is suitable for arguing security against adaptive adversaries: an equivocal blind signature is a blind signature where there exists a simulator that has the power of making signing transcripts correspond to any message-signature pair. Second, we present a general construction methodology for building adaptively secure blind signatures: the starting point is a 2-move “equivocal lite blind signature”, a lightweight 2-party signature protocol that we formalize and implement both generically as well as concretely; formalizing a primitive as “lite” means that the adversary is required to show all private tapes of adversarially controlled parties; this enables us to conveniently separate zero-knowledge (ZK) related security requirements from the remaining security properties in the blind signature design methodology. Next, we focus on the suitable ZK protocols for blind signatures. We formalize two special ZK ideal functionalities, single-verifier-ZK (SVZK) and single-prover-ZK (SPZK), both special cases of multi-session ZK that may be of independent interest, and we investigate the requirements for realizing them in a commit-and-prove fashion as building blocks for adaptively secure UC blind signatures. Regarding SPZK we find the rather surprising result that realizing it only against static adversaries is sufficient to obtain adaptive security for UC blind signatures.

We instantiate all the building blocks of our design methodology both generically based on the blind signature construction of Fischlin as well as concretely based on the 2SDH assumption of Okamoto, thus demonstrating the feasibility and practicality of our approach. The latter construction yields the first practical UC blind signature that is secure against adaptive adversaries. We also present a new more general modeling of the ideal blind signature functionality.

Aggelos Kiayias, Hong-Sheng Zhou
P-signatures and Noninteractive Anonymous Credentials

In this paper, we introduce P-signatures. A P-signature scheme consists of a signature scheme, a commitment scheme, and (1) an interactive protocol for obtaining a signature on a committed value; (2) a non-interactive proof system for proving that the contents of a commitment have been signed; (3) a non-interactive proof system for proving that a pair of commitments are commitments to the same value. We give a definition of security for P-signatures and show how they can be realized under appropriate assumptions about groups with a bilinear map. We make extensive use of the powerful suite of non-interactive proof techniques due to Groth and Sahai. Our P-signatures enable, for the first time, the design of a practical non-interactive anonymous credential system whose security does not rely on the random oracle model. In addition, they may serve as a useful building block for other privacy-preserving authentication mechanisms.

Mira Belenkiy, Melissa Chase, Markulf Kohlweiss, Anna Lysyanskaya

Technical Session 8

Multi-property Preserving Combiners for Hash Functions

A robust combiner for hash functions takes two candidate implementations and constructs a hash function which is secure as long as at least one of the candidates is secure. So far, hash function combiners only aim at preserving a single property such as collision-resistance or pseudorandomness. However, when hash functions are used in protocols like TLS they are often required to provide several properties simultaneously. We therefore put forward the notion of multi-property preserving combiners, clarify some aspects of different definitions for such combiners, and propose a construction that provably preserves collision resistance, pseudorandomness, “random-oracle-ness”, target collision resistance and message authentication according to our strongest notion.
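
For contrast, the classical single-property combiner is simply concatenation: a collision for the combined function is a collision for both candidates, so collision resistance survives as long as one candidate is secure; the other properties need the paper's more careful construction. A short Python illustration with standard hashes as stand-in candidates:

    import hashlib

    def comb(x: bytes) -> bytes:
        # Concatenation combiner: collision-resistant if either candidate is.
        return hashlib.sha256(x).digest() + hashlib.sha3_256(x).digest()

    print(comb(b"tls-handshake-transcript").hex()[:32], "...")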

Marc Fischlin, Anja Lehmann
OT-Combiners via Secure Computation

An OT-combiner implements a secure oblivious transfer (OT) protocol using oracle access to n OT-candidates, of which at most t may be faulty. We introduce a new general approach for combining OTs by making a simple and modular use of protocols for secure computation. Specifically, we obtain an OT-combiner from any instantiation of the following two ingredients: (1) a t-secure n-party protocol for the OT functionality, in a network consisting of secure point-to-point channels and a broadcast primitive; and (2) a secure two-party protocol for a functionality determined by the former multiparty protocol, in a network consisting of a single OT-channel. Our approach applies both to the “semi-honest” and the “malicious” models of secure computation, yielding the corresponding types of OT-combiners.
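
One classical share-based ingredient (shown here for intuition; it is not the paper's general construction) protects the receiver when at most one of two candidates is faulty: the choice bit is secret-shared across the two OTs, so either candidate alone sees a uniformly random share. A Python sketch over bits:

    import random

    def ideal_ot(inputs, choice):
        # Ideal 1-out-of-2 OT on bits: receiver learns inputs[choice] only.
        return inputs[choice]

    def combined_ot(m0, m1, c, rng=random):
        c1 = rng.randrange(2); c2 = c ^ c1   # share the choice bit
        r = rng.randrange(2)
        a = ideal_ot((r, r ^ m0 ^ m1), c1)   # candidate 1
        b = ideal_ot((m0 ^ r, m1 ^ r), c2)   # candidate 2
        return a ^ b                         # equals m_c for all share patterns

    for m0 in (0, 1):
        for m1 in (0, 1):
            for c in (0, 1):
                assert combined_ot(m0, m1, c) == (m1 if c else m0)
    print("ok")

Protecting the sender against a faulty candidate needs a complementary step; the paper's framework yields combiners secure in both directions, and at constant rate.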

Instantiating our general approach with secure computation protocols from the literature, we conceptually simplify, strengthen the security, and improve the efficiency of previous OT-combiners. In particular, we obtain the first constant-rate OT-combiners, in which the number of secure OTs being produced is a constant fraction of the total number of calls to the OT-candidates, while still tolerating a constant fraction of faulty candidates (t = Ω(n)). Previous OT-combiners required either ω(n) or poly(k) calls to the n candidates, where k is a security parameter, and produced only a single secure OT.

We demonstrate the usefulness of the latter result by presenting several applications that are of independent interest. These include:

Constant-rate OTs from a noisy channel. We implement n instances of a standard ${2\choose 1}$-OT by communicating just O(n) bits over a noisy channel (binary symmetric channel). Our reduction provides unconditional security in the semi-honest model. Previous reductions of this type required the use of Ω(kn) noisy bits.

Better amortized generation of OTs. We show that, following an initial “seed” of O(k) OTs, each additional OT can be generated by only computing and communicating a constant number of outputs of a cryptographic hash function. This improves over a protocol of Ishai et al. (Crypto 2003), which obtained similar efficiency in the semi-honest model but required Ω(k) applications of the hash function for generating each OT in the malicious model.

Danny Harnik, Yuval Ishai, Eyal Kushilevitz, Jesper Buus Nielsen
Semi-honest to Malicious Oblivious Transfer—The Black-Box Way

Until recently, all known constructions of oblivious transfer protocols based on general hardness assumptions had the following form. First, the hardness assumption is used in a black-box manner (i.e., the construction uses only the input/output behavior of the primitive guaranteed by the assumption) to construct a semi-honest oblivious transfer, a protocol whose security is guaranteed to hold only against adversaries that follow the prescribed protocol. Then, the latter protocol is “compiled” into a (malicious) oblivious transfer using non-black-box techniques (a Karp reduction is carried out in order to prove an NP statement in zero-knowledge).

In their recent breakthrough result, Ishai, Kushilevitz, Lindell and Petrank (STOC ’06) deviated from the above paradigm, presenting a black-box reduction from oblivious transfer to enhanced trapdoor permutations and to homomorphic encryption. Here we generalize their result, presenting a black-box reduction from oblivious transfer to semi-honest oblivious transfer. Consequently, oblivious transfer can be black-box reduced to each of the hardness assumptions known to imply a semi-honest oblivious transfer in a black-box manner. This list currently includes, besides the hardness assumptions used by Ishai et al., the existence of families of dense trapdoor permutations and of non-trivial single-server private information retrieval.

Iftach Haitner
Black-Box Construction of a Non-malleable Encryption Scheme from Any Semantically Secure One

We show how to transform any semantically secure encryption scheme into a non-malleable one, with a black-box construction that achieves a quasi-linear blow-up in the size of the ciphertext. This improves upon the previous non-black-box construction of Pass, Shelat and Vaikuntanathan (Crypto ’06). Our construction also extends readily to guarantee non-malleability under a bounded-CCA2 attack, thereby simultaneously improving on both results in the work of Cramer et al. (Asiacrypt ’07).

Our construction departs from the oft-used paradigm of re-encrypting the same message with different keys and then proving consistency of encryptions; instead, we encrypt an encoding of the message with certain locally testable and self-correcting properties. We exploit the fact that low-degree polynomials are simultaneously good error-correcting codes and a secret-sharing scheme.
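
The encoding ingredient can be seen concretely: evaluations of a random low-degree polynomial whose constant term is the message form both a Reed-Solomon codeword (decodable despite errors) and a Shamir secret sharing (few shares reveal nothing). A Python sketch over a small prime field, with parameters chosen for illustration only:

    import random

    p, t = 101, 3   # prime field; degree bound t: any t+1 shares reconstruct

    def encode(msg: int, n: int = 10):
        # Random polynomial f with f(0) = msg; the shares are (i, f(i)).
        coeffs = [msg] + [random.randrange(p) for _ in range(t)]
        f = lambda x: sum(c * pow(x, j, p) for j, c in enumerate(coeffs)) % p
        return [(i, f(i)) for i in range(1, n + 1)]

    def decode(shares):
        # Lagrange interpolation at 0 from any t+1 shares.
        pts = shares[: t + 1]
        msg = 0
        for xi, yi in pts:
            num = den = 1
            for xj, _ in pts:
                if xj != xi:
                    num = num * (-xj) % p
                    den = den * (xi - xj) % p
            msg = (msg + yi * num * pow(den, p - 2, p)) % p
        return msg

    shares = encode(42)
    print(decode(shares), decode(shares[3:]))  # 42 42, from different subsets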

Seung Geol Choi, Dana Dachman-Soled, Tal Malkin, Hoeteck Wee

Technical Session 9

A Linear Lower Bound on the Communication Complexity of Single-Server Private Information Retrieval

We study the communication complexity of single-server Private Information Retrieval (PIR) protocols that are based on fundamental cryptographic primitives in a black-box manner. In this setting, we establish a tight lower bound on the number of bits communicated by the server in any polynomially-preserving construction that relies on trapdoor permutations. More specifically, our main result states that in such constructions Ω(n) bits must be communicated by the server, where n is the size of the server’s database, and this improves the Ω(n / log n) lower bound due to Haitner, Hoch, Reingold and Segev (FOCS ’07). Therefore, in the setting under consideration, the naive solution in which the user downloads the entire database turns out to be optimal up to constant multiplicative factors. We note that the lower bound we establish holds for the most generic form of trapdoor permutations, including in particular enhanced trapdoor permutations.

Technically speaking, this paper consists of two main contributions from which our lower bound is obtained. First, we derive a tight lower bound on the number of bits communicated by the sender during the commit stage of any black-box construction of a statistically-hiding bit-commitment scheme from a family of trapdoor permutations. This lower bound asymptotically matches the upper bound provided by the scheme of Naor, Ostrovsky, Venkatesan and Yung (CRYPTO ’92). Second, we improve the efficiency of the reduction of statistically-hiding commitment schemes to low-communication single-server PIR, due to Beimel, Ishai, Kushilevitz and Malkin (STOC ’99). In particular, we present a reduction that essentially preserves the communication complexity of the underlying single-server PIR protocol.

Iftach Haitner, Jonathan J. Hoch, Gil Segev
Randomness Extraction Via δ-Biased Masking in the Presence of a Quantum Attacker

Randomness extraction is of fundamental importance for information-theoretic cryptography. It allows one to transform a raw key about which an attacker has some limited knowledge into a fully secure random key, on which the attacker has essentially no information. To date, only very few randomness-extraction techniques are known to work against an attacker holding quantum information on the raw key. This is very much in contrast to the classical (non-quantum) setting, which is much better understood and for which a vast number of different techniques are known and proven to work.

We prove a new randomness-extraction technique, which is known to work in the classical setting, to be secure against a quantum attacker as well. Randomness extraction is done by xor’ing a so-called δ-biased mask to the raw key. Our result allows us to extend the classical applications of this extractor to the quantum setting. We discuss the following two applications. We show how to encrypt a long message with a short key, information-theoretically secure against a quantum attacker, provided that the attacker has enough quantum uncertainty on the message. This generalizes the concept of entropically secure encryption to the case of a quantum attacker. As a second application, we show how to do error-correction without leaking partial information to a quantum attacker. Such a technique is useful in settings where the raw key may contain errors, since standard error-correction techniques may provide the attacker with information on, say, a secret key that was used to obtain the raw key.
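
In its classical form the technique is one-time-pad-style masking with an imperfect pad: K = X ⊕ A, where X is the raw key and A is sampled from a δ-biased distribution. The classical security statement has roughly the following shape (a paraphrase of the known bound in the Dodis-Smith line of work, stated here under the assumption that collision entropy is the right measure; the paper establishes the quantum analogue):

    % Masking a raw key X over {0,1}^n with a delta-biased mask A:
    K = X \oplus A, \qquad
    d\bigl(K \mid \text{adversary's view}\bigr)
      \;\lesssim\; \delta \cdot 2^{\,(n - H_2(X))/2},
    % i.e., K is close to uniform whenever X has enough (collision)
    % entropy relative to n and the bias delta is small.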

Serge Fehr, Christian Schaffner

Technical Session 10

An Equivalence Between Zero Knowledge and Commitments

We show that a language in NP has a zero-knowledge protocol if and only if the language has an “instance-dependent” commitment scheme. An instance-dependent commitment scheme for a given language is a commitment scheme that can depend on an instance of the language, and where the hiding and binding properties are required to hold only on the yes and no instances of the language, respectively.

The novel direction is the only if direction. Thus, we confirm the widely held belief that commitments are not only sufficient for zero-knowledge protocols, but necessary as well. Previous results of this type either held only for restricted types of protocols or languages, or used nonstandard relaxations of (instance-dependent) commitment schemes.

Shien Jin Ong, Salil Vadhan
Interactive and Noninteractive Zero Knowledge are Equivalent in the Help Model

We show that interactive and noninteractive zero-knowledge are equivalent in the ‘help model’ of Ben-Or and Gutfreund (J. Cryptology, 2003). In this model, the shared reference string is generated by a probabilistic polynomial-time dealer who is given access to the statement to be proven. Our results do not rely on any unproven complexity assumptions and hold for statistical zero knowledge, for computational zero knowledge restricted to AM, and for quantum zero knowledge when the help is a pure quantum state.

André Chailloux, Dragos Florin Ciocan, Iordanis Kerenidis, Salil Vadhan

Technical Session 11

The Round-Complexity of Black-Box Zero-Knowledge: A Combinatorial Characterization

The round-complexity of black-box zero-knowledge has for years been a topic of much interest. Results in this area generally focus on either proving lower bounds in various settings (e.g., Canetti, Kilian, Petrank, and Rosen [3] prove that concurrent zero-knowledge ($c\mathcal{ZK}$) requires Ω(log n / loglog n) rounds and Barak and Lindell [2] show no constant-round single-session protocol can be zero-knowledge with strict poly-time simulators), or giving upper bounds (e.g., Prabhakaran, Rosen, and Sahai [15] give a $c\mathcal{ZK}$ protocol with ω(log n) rounds). In this paper we show that though proving upper bounds seems to be quite different from demonstrating lower bounds, underlying both tasks there is a single, simple combinatorial game between two players: a rewinder and a scheduler. We give two theorems relating the success of rewinders in the game to both upper and lower bounds for black-box zero-knowledge in various settings (sequential composition, concurrent composition, etc.). Our game and theorems unify the previous results in the area, simplify the task of proving upper and lower bounds, and should be useful in showing future results in the area.

Daniele Micciancio, Scott Yilek
On Constant-Round Concurrent Zero-Knowledge

Loosely speaking, an interactive proof is said to be zero-knowledge if the view of every “efficient” verifier can be “efficiently” simulated. An outstanding open question regarding zero-knowledge is whether constant-round concurrent zero-knowledge proofs exist for non-trivial languages. We answer this question in the affirmative when modeling “efficient adversaries” as probabilistic quasi-polynomial time machines (instead of the traditional notion of probabilistic polynomial-time machines).

Rafael Pass, Muthuramakrishnan Venkitasubramaniam

Technical Session 12

Concurrent Non-malleable Commitments from Any One-Way Function

We show the existence of concurrent non-malleable commitments based on the existence of one-way functions. Our proof of security only requires the use of black-box techniques, and additionally provides an arguably simplified proof of the existence of even stand-alone secure non-malleable commitments.

Huijia Lin, Rafael Pass, Muthuramakrishnan Venkitasubramaniam
Faster and Shorter Password-Authenticated Key Exchange

This paper presents an improved password-based authenticated key exchange protocol in the common reference string model. Its security proof requires no idealized assumption (such as random oracles).

The protocol is based on the GL framework introduced by Gennaro and Lindell, which generalizes the KOY key exchange protocol of Katz et al. Both the KOY and the GL protocols use (one-time) signatures as a non-malleability tool in order to prevent a man-in-the-middle attack against the protocol. The efficiency of the resulting protocol is negatively affected, since if we use regular signatures, they require a large amount of computation (almost as much as the rest of the protocol) and further computational assumptions. If one-time signatures are used, they substantially increase the bandwidth requirement.

Our improvement avoids using digital signatures altogether, replacing them with faster and shorter message authentication codes. The crucial idea is to leverage as much as possible the non-malleability of the encryption scheme used in the protocol, by including various values into the ciphertexts as labels. As in the case of the GL framework, our protocol can be efficiently instantiated using either the DDH, Quadratic Residuosity or N-Residuosity Assumptions.

For typical security parameters, our solution saves as much as 12 Kbytes of bandwidth if one-time signatures are implemented in GL with fast symmetric primitives. If we use number-theoretic signatures in the GL framework, our solution saves several large exponentiations (almost a third of the exponentiations computed in the GL protocol). The end result is that we bring provable security in the realm of password-authenticated key exchange one step closer to practical.

Rosario Gennaro

Technical Session 13

Saving Private Randomness in One-Way Functions and Pseudorandom Generators

Can a one-way function f on n input bits be used with fewer than n bits while retaining comparable hardness of inversion? We show that the answer to this fundamental question is negative, if one is limited to black-box reductions.

Instead, we ask whether one can save on secret random bits at the expense of more public random bits. Using a shorter secret input is highly desirable, not only because it saves resources, but also because it can yield tighter reductions from higher-level primitives to one-way functions. Our first main result shows that if the number of output elements of f is at most $2^k$, then a simple construction using pairwise-independent hash functions results in a new one-way function that uses only k secret bits. We also demonstrate that it is not the knowledge of security of f, but rather of its structure, that enables the savings: a black-box reduction cannot, for a general f, reduce the secret-input length, even given the knowledge that the security of f is only $2^{-k}$; nor can a black-box reduction use fewer than k secret input bits when f has $2^k$ distinct outputs.
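
One plausible reading of the pairwise-independent-hashing construction (the exact construction and analysis are in the paper; names and parameters below are illustrative) keeps only the k-bit hash input secret and publishes the hash itself:

    import random, hashlib

    P = (1 << 61) - 1   # Mersenne prime: h(x) = (a*x + b) mod P is a
                        # pairwise-independent family over [0, P)

    def sample_hash(rng):
        return rng.randrange(1, P), rng.randrange(P)   # (a, b): PUBLIC

    def h(ab, s):
        a, b = ab
        return (a * s + b) % P

    def f(x: int) -> bytes:
        # Placeholder one-way function on ~61-bit inputs.
        return hashlib.sha256(x.to_bytes(8, "big")).digest()

    def g(ab, s):
        # New function: only the k-bit s is secret; (a, b) is public
        # randomness. Intuition: if f has at most 2^k outputs, a k-bit
        # secret hashed into f's domain should preserve one-wayness.
        return f(h(ab, s))

    rng = random.Random(0)
    ab = sample_hash(rng)          # published
    s = rng.randrange(1 << 20)     # k = 20 secret bits, for illustration
    print(g(ab, s).hex()[:16], "...")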

Our second main result is an application of the public-randomness approach: we show a construction of a pseudorandom generator based on any regular one-way function with output range of known size $2^k$. The construction requires a seed of only $2n+{\mathcal O}(k \log k)$ bits (as opposed to ${\mathcal O}(n \log n)$ in previous constructions); the savings come from the reusability of public randomness. The secret part of the seed is of length only k (as opposed to n in previous constructions), less than the length of the one-way function input.

Nenad Dedić, Danny Harnik, Leonid Reyzin
Degradation and Amplification of Computational Hardness

What happens when you use a partially defective bit-commitment protocol to commit to the same bit many times? For example, suppose that the protocol allows the receiver to guess the committed bit with advantage ε, and that you used that protocol to commit to the same bit more than 1/ε times. Or suppose that you encrypted some message many times (to many people), only to discover later that the encryption scheme that you were using is partially defective, and an eavesdropper has some noticeable advantage in guessing the encrypted message from the ciphertext. Can we at least show that even after many such encryptions, the eavesdropper could not have learned the message with certainty?

In this work we take another look at amplification and degradation of computational hardness. We describe a rather generic setting where one can argue about amplification or degradation of computational hardness via sequential repetition of interactive protocols, and prove that in all the cases that we consider, it behaves as one would expect from the corresponding information-theoretic bounds. In particular, for the example above we can prove that after committing to the same bit n times, the receiver’s advantage in guessing the committed bit is negligibly close to $\frac{1}{2}\left(1-(1-2\epsilon)^n\right)$.
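
The information-theoretic bound this matches follows in one line from how statistical distance behaves under independent repetition (a standard fact, used here under the assumption that the n transcripts are independent given the bit): with δ = 2ε the per-transcript distance,

    \mathrm{SD}\bigl(P^{\otimes n}, Q^{\otimes n}\bigr)
      \;\le\; 1 - \bigl(1 - \mathrm{SD}(P, Q)\bigr)^{n}
      \;=\; 1 - (1 - 2\epsilon)^{n},
    % so the n-fold guessing advantage is at most (1 - (1-2\epsilon)^n)/2,
    % strictly below 1/2: e.g., \epsilon = 0.01 and n = 100 give ~0.434.

so even after many repetitions the bit is never learned with certainty.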

Our results for hardness amplification follow just by observing that some of the known proofs for Yao’s lemmas can be easily extended also to handle interactive protocols. On the other hand, the question of hardness degradation was never considered before as far as we know, and we prove these results from scratch.

Shai Halevi, Tal Rabin
Backmatter
Metadata
Title
Theory of Cryptography
Editor
Ran Canetti
Copyright Year
2008
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-78524-8
Print ISBN
978-3-540-78523-1
DOI
https://doi.org/10.1007/978-3-540-78524-8
